aug exact approaches travelling thief problem junhua markus wagner sergey polyakovskiy frank neumann optimisation logistics school computer science university adelaide adelaide australia abstract many evolutionary constructive heuristic approaches introduced order solve traveling thief problem ttp however accuracy approaches unknown due inability find global optima paper propose three exact algorithms hybrid approach ttp compare approaches gather comprehensive overview accuracy heuristic methods solving small ttp instances introduction travelling thief problem ttp recent academic problem two combinatorial optimisation problems interact namely travelling salesperson problem tsp knapsack problem reflects complexity applications contain one problem commonly observed areas planning scheduling routing example delivery problems usually consist routing part vehicle packing part goods onto vehicle thus far many approximate approaches introduced addressing ttp evolutionary heuristic initially polyakovskiy bonyadi wagner michalewicz neumann proposed two iterative heuristics namely random local search rls based general approach solves problem two steps one tsp one bonyadi introduced similar twophased algorithm named heuristic method inspired approaches named cosolver mei also investigated interdependency proposed cooperative coevolution based approach similar cosolver memetic algorithm called matls attempts solve problem whole faulkner outperformed existing approaches new operators corresponding series heuristics named recently wagner investigated ant system mmas ttp yafrani ahiod proposed memetic algorithm simulated annealing algorithm results show new algorithms competitive different range ttp instances wagner found study involving approximate ttp algorithms small subset actually necessary form algorithm portfolio however due lack exact methods approximate approaches evaluated respect accuracy even small ttp instances address issue propose three exact techniques additional benchmark instances help build comprehensive review approximate approaches remainder revisit definition ttp section introduce exact approaches section section elaborate setup experiments compare exact hybrid approaches best approximate ones conclusions drawn section problem statement section outline problem formulation comprehensive description refer interested reader given set cities set items city contains set items item positioned city characterised profit pik weight wik thief must visit cities exactly starting first city return back end distance dij pair cities known item may selected long total weight collected items exceed capacity renting rate paid per time unit taken complete tour denote maximal minimum speeds thief assume binary variable yik yik iff item chosen city goal find tour along packing plan ynmn combination maximises reward given form following objective function dxi dxn pik yik max constant value defined input parameters minuend sum packed items profits subtrahend amount thief pays knapsack rent equal total traveling time along multiplied fact actual travel distance pspeed palong dxi depends accumulated weight wxi wjk yjk items collected preceding cities slows thief impact overall benefit exact approaches ttp section propose three exact approaches ttp simplified version ttp polyakovskiy neumann recently introduced packing travelling problem pwt tour predefined packing plan variable furthermore neumann prove pwt solved time dynamic programming taking account fact weights integer dynamic programming algorithm 
maps every possible weight packing plan guarantees certain profit optimal packing plan selected among plans obtained adopt findings derive two exact algorithms ttp let denote possible weights given ttp instance let designate best solutions instance tour obtained via dynamic programming pwt variable optimum objective value ttp arg yields basis two approaches dynamic programming branch bound search bnb following sections describe two approaches well constraint programming technique adopted ttp dynamic programming approach based algorithm tsp dynamic programming pwt algorithm depicts pseudocode approach let subset cities refer particular city tour starting city visiting cities exactly ending city optimal solution ttp therefore described fxn wxn wxn total weight knapsack leaving last city fxn results dynamic programming algorithm pwt considering tour following statement valid respect ttp statement fxn wxn wxn dxn vmax total weight total profit items picked city clearly wxn optimal tour furthermore relationship exists every pair algorithm dynamic programming ttp procedure dynamic programming store mapping calculate store max calculate wxn fact optimal solution given ttp instance one compute optimal solution instance excludes last city solution original problem following idea build ttp costly terms memory consumption reaches reduce cost let define upper bound value feasible solution built partial solution follows max pij vmax estimates maximal profit thief may obtain passing remaining part tour maximal speed generating minimal possible cost obviously guarantees complete optimal solution exceed bound therefore incumbent solution known valid eliminate partial solution objective value incumbent practice one obtain incumbent solution compute two stages first feasible solution tsp part problem computed solvers concorde algorithm second dynamic programming applied contributes packing plan branch bound search introduce branch bound search ttp employing upper bound defined section algorithm depicts pseudocode denotes cities visited mapping calculated dynamic programming pwt algorithm branch bound search ttp procedure bnb search create initial solution gain benefit best tour permutation create empty mapping set search best function search best calculate return max max best else swap cities set calculate max best best max best search best swap cities return best way tighten upper bound providing better estimation remaining distance current city last city tour currently shortest distance used following two ways improve estimation use distance city city farthest unvisited city use distance shortest path distance passed far achieve city tour two ideas joined together using max enhance result constraint programming present third exact approach adopting existing constraint programming paradigm model employs simple permutation based representation tour allows use alldifferent filtering algorithm similarly section vector used refer total weights accumulated cities tour specifically weight knapsack thief departs city model bases search two types decision variables denotes particular positions cities tour variable takes value indicate ith city visited initial variable domain subsequently visited city signals selection item packing plan variable yik binary therefore yik furthermore vector used express distance matrix element equals distance dxi two consecutive cities model relies alldifferent constraint ensures values distinct also involves element expression returns hth variable list variables total model cpttp 
consists following objective function constraints max pij yij element element vmax vmax alldifferent wij yij expression calculates objective value according function constraint verifies cities assigned different positions thus visited exactly elimination constraint equation calculates weight items collected cities equation capacity constraint performance model depends solver specifically filtering algorithms search strategies applies use ibm ilog optimizer searching algorithm set restart mode mode adopts general purpose search strategy inspired integer programming techniques based concept impact variable impact measures importance variable reducing search space impacts learned observation domains reduction search help restart mode dramatically improve performance search within search cities assigned positions first items decided therefore solver instantiates prior ynmn variables applying default selection strategy extensive study shows order gives best results fast computational experiments section first compare performance exact approaches ttp order find best one setting baseline subsequent comparison approximate approaches experiments run cpu cluster phoenix hpc university adelaide contains intel xeon cpu cores memory allocate one cpu core memory individual experiment computational set instance uncorr uncorr uncorr uncorr uncorr uncorr uncorr uncorr uncorr uncorr uncorr uncorr uncorr uncorr uncorr uncorr running time seconds bnb table columns denote number cities number items respectively running times given seconds bnb different numbers cities items denotes case approach failed achieve optimal solution given time limit run experiments generate additional set instances following way proposed use single instance original tsp library starting point new subset entitled contains cities cities select uniformly random cities removed order obtain smaller test problems cities set knapsack component problem adopt approach given use corresponding problem generator available one input parameters generator asks range coefficients set total create knapsack test problems containing items characterised knapsack capacity category experiments focus uncorrelated uncorr uncorrelated similar weights multiple instances available online http php strongly correlated types instances stage assigning items knapsack instance particular cities given tsp tour sort items descending order profits second city obtains items largest profits third city next items instances use ceil distances means euclidean distances rounded nearest integer set tables illustrate results experiments test instances names read follows first stays name original tsp problem values succeeding denote actual number cities total number items respectively followed generation type knapsack problem finally postfixes instances names describe knapsack capacity comparison exact approaches compare three exact algorithms allocating instance generous time limit aim analyse running time approaches influenced increasing number cities table shows running time approaches comparison approximate approaches exact approaches introduced approximate approaches evaluated respect accuracy optima case ttp approximate approaches evolutionary algorithms local searches memetic algorithm simulated annealing cosolverbased hybrid approaches addition existing heuristics introduce enhanced approaches hybrids two one dynamic programming pwt original work follows first single tsp tour computed using chained fast packing heuristic applied performs two steps order repeats time 
budget exhausted hybrids equivalent two algorithms however use exact dynamic programming pwt packing solver provides better results compute optimal packing sampled tsp tours results start showing performance summary algorithms instances table addition table shows detailed results subset best gap avg stdev opt table performance summary heuristic ttp solvers across instances optimal result obtained opt number times average independent repetitions equal optimum show number times averages within approaches subset instances figure shows results entire comparison include trend two selected approaches explain following would like highlight following observations performs badly across wide range instances restart variant performs better however lack local search becomes apart relatively bad performance compared approaches small instances performs better likely due local searches differentiate still see hump trend line smaller instances flattens quickly larger instances dynamic programming variants perform slightly better shows difference quality packing strategy however times balanced faster packing allows tsp tours sampled small instances lacks local search tours gap optimum relatively large shown respective trend lines dominates field outstanding performance across instances independent number cities number items remarkable high reliability reaches global optimum interestingly approaches seem difficulties solving instances knapsack configuration see table compared two knapsack types takes longest solve strongly correlated ones also tend instances heuristics rarely find optimal solutions fitted polynomials degree six used visualisation purposes instance uncorr uncorr uncorr uncorr uncorr uncorr uncorr uncorr uncorr uncorr uncorr uncorr uncorr uncorr uncorr uncorr uncorr uncorr opt gap std gap std gap std table comparison approximate approaches running minutes limits approximate algorithm runs times instance use average objective obj gap measured opop runtime second results obtained reach time limit minutes per instance highlighted blue best approximate results runs memory instances without results figure showing gap optimal solution one obtained exact approach left right instances first sorted number cities total number items conclusion traveling thief problem ttp attracted significant attention recent years within evolutionary computation community paper presented evaluated exact approaches ttp based dynamic programming branch bound constraint programming used exact solutions provided approach evaluate performance current ttp solvers investigations show obtaining cases close optimal solutions however small fraction tested instances obverse gap optimal solution acknowledgements work supported australian research councils grants supercomputing resources provided phoenix hpc service university adelaide references applegate bixby chvatal cook concorde tsp solver http benchimol hoeve rousseau rueher improved filtering weighted circuit constraints constraints jul issn doi bonyadi michalewicz barone travelling thief problem first step transition theoretical problems realistic problems evolutionary computation cec ieee congress pages bonyadi michalewicz przybylek wierzbicki socially inspired algorithms travelling thief problem proceedings annual conference genetic evolutionary computation gecco pages acm yafrani ahiod efficient local search heuristic travelling thief problem computer systems applications aiccsa international conference pages ieee yafrani ahiod heuristics travelling thief problem 
proceedings genetic evolutionary computation conference gecco pages acm faulkner polyakovskiy schultz wagner approximate approaches traveling thief problem proceedings annual conference genetic evolutionary computation gecco pages acm held karp dynamic programming approach sequencing problems proceedings acm national meeting acm pages acm hooker logic optimization constraint programming informs journal computing lin kernighan effective heuristic algorithm problem operations research mei yao improving efficiency heuristics large scale traveling thief problem pages springer international publishing cham isbn doi mei salim yao heuristic evolution genetic programming traveling thief problem ieee congress evolutionary computation cec pages may doi mei yao investigation interdependence travelling thief problem soft computing neumann polyakovskiy skutella stougie fully polynomial time approximation scheme packing traveling arxiv pisinger hard knapsack problems comput oper issn doi polyakovskiy neumann packing traveling problem european journal operational research polyakovskiy bonyadi wagner michalewicz neumann comprehensive benchmark set heuristics traveling thief problem proceedings annual conference genetic evolutionary computation gecco pages acm refalo principles practice constraint programming chapter search strategies constraint programming pages springer reinelt traveling salesman problem library orsa journal computing hoos max min ant system future generation computer systems wagner stealing items efficiently ants swarm intelligence approach travelling thief problem dorigo birattari ohkura pinciroli editors swarm intelligence international conference ants brussels belgium september proceedings pages springer wagner lindauer nallaperuma hutter case study algorithm selection traveling thief problem journal heuristics pages
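To make the pruning rule used by the branch and bound search for the TTP described above (and shared by the dynamic programming variant) concrete, the following is a minimal Python sketch, not the authors' implementation: the bound optimistically adds every profit still reachable on the unvisited part of the tour and charges the remaining distance at the maximal speed, so any branch whose bound cannot beat the incumbent is eliminated. The function names and data layout (upper_bound, items[c], remaining_dist_lb) are illustrative assumptions.

```python
# Hedged sketch of the upper bound used to prune partial TTP solutions
# (names and data layout are illustrative, not the authors' code).
def upper_bound(partial_objective, remaining_cities, items, remaining_dist_lb, R, v_max):
    """partial_objective : objective value of the partial solution built so far
    remaining_cities     : cities not yet fixed on the tour
    items[c]             : list of (profit, weight) pairs available at city c
    remaining_dist_lb    : lower bound on the distance still to be travelled,
                           e.g. max(distance to the farthest unvisited city,
                                    shortest-path estimate)
    R, v_max             : renting rate and maximal speed of the thief
    """
    optimistic_profit = sum(p for c in remaining_cities for (p, _w) in items[c])
    # Optimistic continuation: every remaining item is collected and the thief
    # never slows down, so the remaining rent is at least R * d / v_max.
    return partial_objective + optimistic_profit - R * remaining_dist_lb / v_max


def can_prune(partial_objective, remaining_cities, items, remaining_dist_lb, R, v_max, incumbent):
    # A branch is eliminated as soon as even this optimistic bound cannot
    # beat the incumbent (best complete solution found so far).
    return upper_bound(partial_objective, remaining_cities, items,
                       remaining_dist_lb, R, v_max) <= incumbent
```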
nov attention models visual question answering idan schwartz department computer science technion idansc alexander schwing department electrical computer engineering university illinois aschwing tamir hazan department industrial engineering management technion abstract quest algorithms enable cognitive abilities important part machine learning common trait many recently investigated tasks take account different data modalities visual textual input paper propose novel generally applicable form attention mechanism learns correlations various data modalities show correlations effectively direct appropriate attention relevant elements different data modalities required solve joint task demonstrate effectiveness attention mechanism task visual question answering vqa achieve performance standard vqa dataset introduction quest algorithms enable cognitive abilities important part machine learning appears many facets visual question answering tasks image captioning visual question generation machine comprehension common trait recent tasks take account different data modalities example visual textual data address tasks recently attention mechanisms emerged powerful common theme provides form interpretability applied deep net models also often improves performance latter effect attributed expressive yet concise forms various data modalities present day attention mechanisms like example however often lacking two main aspects first systems generally extract abstract representations data entangled manner second present day attention mechanisms often geared towards specific form input therefore particular task address issues propose novel generally applicable form attention mechanism learns correlations various data modalities example second order correlations model interactions two data modalities image question generally order correlations model interactions modalities learning correlations effectively directs appropriate attention relevant elements different data modalities required solve joint task demonstrate effectiveness novel attention mechanism task visual question answering vqa achieve performance vqa dataset original image unary potentials pairwise potentials final attention man head man head man head man head many cars picture many cars picture many cars picture many cars picture figure results attention one image two different questions column unary image attention identical construction pairwise potentials differ questions images since modalities taken account column final attention illustrated column results visualized fig show visual attention correlates textual attention begin reviewing related work subsequently provide details proposed technique focusing nature attention models conclude presenting application attention mechanism vqa compare related work attention mechanisms investigated image textual data following review mechanisms image attention mechanisms past years single image embeddings extracted deep net extended variety image attention modules considering vqa example textual long short term memory net lstm may augmented spatial attention similarly andreas employ language parser together series neural net modules one attends regions image language parser suggests neural net module use stacking attention units also investigated yang stacked attention network predicts answer successively dynamic memory network modules capture contextual information neighboring image regions considered xiong shih use object proposals rank regions according relevance attention scheme proposed 
extract details joint attention mechanism discussed fukui suggest efficient outer product mechanism combine visual representation text representation applying attention combined representation additionally suggested use glimpses recently kazemi showed similar approach using concatenation instead outer product importantly approaches model attention single network fact multiple modalities involved often considered explicitly contrasts aforementioned approaches technique present recently kim presented technique also interprets attention probabilistic model incorporate structural dependencies deep net recent techniques work nam dual attention mechanisms work kim bilinear models contrast latter two models approach easy extend number data modalities textual attention mechanisms also want provide brief review textual attention address challenges long sentences faced translation models hermann proposed rnnsearch address challenges arise fixing latent dimension neural nets processing text data bahdanau first encode document query via bidirectional lstm used compute attentions mechanism later refined word based technique reasons sentence representations joint attention two cnn hierarchies discussed yin among attention mechanisms relevant approach work approach presented discuss attention mechanisms operate jointly two modalities use pairwise interactions form similarity matrix ignore attentions individual data modalities suggest alternating model directly combines features modalities attending additionally suggested parallel model uses similarity matrix map features one modality hard extend approach two modalities contrast model develops probabilistic model based high order potentials performs inference obtain marginal probabilities permits trivial extension model number modalities additionally jabri propose model answers also used inputs approach questions need attention mechanisms develops alternative solution based binary classification contrast approach captures attention correlations found improve performance significantly overall early work propose combination language image attention vqa attention mechanism several potentials discussed detail yet following present approach joint attention number modalities higher order attention models attention modules crucial component present day decision making systems particularly taking account data different modalities attention mechanisms able provide insights inner workings oftentimes abstract automatically extracted representations systems example system captured lot research efforts recent years visual question answering vqa considering vqa example immediately note dependence two even three different data modalities visual input question answer get processed simultaneously formally let rnv rnq rna denote representation visual input question answer respectively hereby number pixels number words question number possible answers use denote dimensionality data simplicity exposition assume identical across data modalities due dependence multiple data modalities present day decision making systems decomposed three major parts data embedding attention mechanisms iii decision making vqa system one developed three parts immediately apparent considering system architecture outlined fig data embedding attention modules deliver decision making component succinct representation relevant data modalities performance depends represent data modalities oftentimes attention module tends use expressive yet concise data embedding algorithms better capture correlations 
consequently improve decision making performance example data embeddings based convolutional deep nets constitute many visual recognition scene understanding tasks language embeddings heavily rely lstm able capture context sequential data words phrases sentences give detailed account data embedding architectures vqa sec yes mcb mcb unary potential decision sec mcb ternary potential pairwise potential concatenate lstm lstm word embedding unary potential softmax softmax softmax pairwise potential unary potential pairwise potential word embedding resnet yes yellow food dog trying catch frisbee attention sec data embedding sec figure vqa system attention apparent aforementioned description attention crucial component connecting data embeddings decision making modules subsequently denote attention words question via word index similarly attention image referred via attention possible answers denoted consider attention mechanism probability model attention mechanism computing first unary potentials denote importance feature question word representations multiple choice answers representations image patch features vqa task second pairwise potentials express correlations two modalities last potential captures dependencies three modalities obtain marginal probabilities potentials model performs inference combine unary potential marginalized pairwise potential marginalized third order potential linearly including bias term smax smax smax hereby learnable parameters smax refers operation respectively converts combined potentials probability distributions corresponds single iteration linear combination potentials provides extra flexibility model since learn reliability potential data instance observe question attention relies unary question potential pairwise question answer potentials contrast image attention relies pairwise question image potential given aforementioned probabilities attended image question answer vectors denoted attended modalities calculated weighted sum image features vnv rnv question features qnq rnq answer features ana rna viv qiq aia tanh conv tanh conv tanh conv tanh tanh conv conv conv conv unary potential conv conv conv conv conv tanh ternary potential unary potential tanh conv conv conv tanh pairwise potential tanh conv unary potential conv tanh conv conv tanh conv tanh conv tanh tanh attended modalities effectively focus data relevant task passed classifier decision making ones discussed sec following describe attention mechanisms unary pairwise ternary potentials detail conv conv module visual question marginalized two data modalities ternary attention module visual question answer marginalized three data modalities conv tanh unary pairwise pairwisepotential ternary unary threeway threeway potential potential potential potential figure illustrationpotential attention unary attention module visual potential pairwise attention conv tanh conv conv ternary potential conv tanh conv pairwise potential tanh conv conv pairwise potential unary potentials illustrate unary attention schematically fig input unary attention module data representation either visual representation question representation answer representation using representations obtain unary potentials using convolution operation kernel size data representation additional embedding step followed tanh case followed another convolution operation kernel size reduce embedding dimensionality since convolutions kernel size identical matrix multiplies formally obtain unary potentials via tanh tanh tanh trainable parameters 
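As a concrete illustration of the unary module just described (a convolution with kernel size 1 that embeds each element, a tanh, and a second kernel-size-1 convolution that reduces it to a scalar potential), here is a minimal PyTorch-style sketch. The hidden size, class name and the final softmax combination are assumptions for illustration, not the authors' Torch code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UnaryPotential(nn.Module):
    """Minimal sketch of a unary attention potential: a convolution with kernel
    size 1 is an identical matrix multiply applied to every element, so each
    image region / question word / answer receives its own scalar potential."""
    def __init__(self, d, hidden=512):
        super().__init__()
        self.embed = nn.Conv1d(d, hidden, kernel_size=1)   # per-element embedding
        self.score = nn.Conv1d(hidden, 1, kernel_size=1)   # per-element scalar potential

    def forward(self, x):
        # x: (batch, d, n) with n regions, words or answers
        return self.score(torch.tanh(self.embed(x))).squeeze(1)  # (batch, n)

# The attention probabilities are then a softmax over a learned linear
# combination of the unary, pairwise (and ternary) potentials, e.g.:
#   a = F.softmax(w1 * theta_unary + w2 * theta_pairwise + b, dim=-1)
```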
pairwise potentials besides mentioned mechanisms generate unary potentials specifically aim taking advantage pairwise attention modules able capture correlation representation different modalities approach illustrated fig use similarity matrix image question modalities qwq alternatively entry correlation column qwq column qwq trainable parameters consider pairwise potential represents correlation word question patch image therefore retrieve attention specific word convolve matrix along visual dimension using dimensional kernel specifically tanh wiv tanh wiq similarly obtain omit due space limitations potentials used compute attention probabilities defined ternary potentials capture dependencies three modalities consider correlations qwq awa threeway potential unary potential pairwise potential threeway potential outer product space outer product space outer product space mcb mcb mct mct pairwise potential unary potential figure illustration correlation units used decision making mcb unit approximately sample outer product space two attention vectors mct unit approximately sample outer product space three attention vectors trainable parameters similarly pairwise potentials use tensor obtain correlated attention modality tanh wiv wiq tanh wiq potentials used compute attention probabilities defined decision making decision making component receives input attended modalities predicts desired output attended modality vector consists relevant data making decision decision making component consider modalities independently nature task usually requires take account correlations attended modalities correlation set attended modalities represented outer product respective vectors correlation two attended modalities represented matrix correlation modalities represented tensor ideally attended modalities correlation tensors fed deep net produces final decision number parameters network grows exponentially number modalities seen fig overcome computational bottleneck follow tensor sketch algorithm pham pagh recently applied attention models fukui via multimodal compact bilinear pooling mcb pairwise setting multimodal compact trilinear pooling mct extension mcb pools data three modalities tensor sketch algorithm enables reduce dimension tensor referring implicitly relies count sketch technique randomly embeds attended vector another euclidean space tensor sketch algorithm projects tensor consists attention correlations order using convolution example two attention modalities correlation matrix convolution randomly projected attended modalities correlations fed fully connected neural net complete decision making visual question answering following evaluate approach qualitatively quantitatively describe data embeddings data embedding attention module requires question representation rnq image representation rnv answer representation rna computed follows image embedding embed image use convolutional deep nets resnet extract last layer fully connected units dimension vgg net case dimension resnet case hence obtain table comparison results vqa dataset variety methods observe combination three unary pairwise ternary potentials yield best result method num hiecoatt vgg hiecoatt resnet rau resnet mcb resnet dan vgg dan resnet mlb resnet resnet resnet unary pairwise ternary vgg unary pairwise ternary resnet embed resnet features dimensional space obtain image representation question embedding obtain question representation rnq first map encoding word question embedding space using linear transformation plus 
corresponding bias terms obtain richer representation accounts neighboring words use temporal convolution filter size combination multiple sized filters suggested literature find benefit using approach subsequently capture dependencies used long short term memory lstm layer reduce overfitting caused lstm units used two lstm layers hidden dimension one uses input word embedding representation one operates conv layer output output concatenated obtain also note constant hyperparameter questions words cut questions less words answer embedding embed possible answers use regular word embedding vocabulary specified taking frequent answers training set answers included top answers embedded vector answers containing multiple words embedded single vector assume real dependency answers therefore need using additional conv lstm layers decision making vqa example investigate two techniques combine vectors three modalities first attended feature representation modality combined using mct unit feature element form first solution general cases like vqa experiments show better use second approach mcb unit combination permits greater expressiveness employ features form therefore also allowing image features interact note terms parameters approaches identical neither mcb mct parametric modules beyond mcb tested several techniques suggested literature including multiplication addition concatenation optionally followed another hidden fully connected layer tensor sketching units consistently performed best results experimental setup use rmsprop optimizer base learning rate well batch size set dimension hidden layers set mcb unit feature dimension set apply dropout rate word embeddings lstm layer first conv layer unary potential units additionally last fully connected layer use dropout rate use top frequent many glasses table many glasses table many glasses table anyone scene wearing blue anyone scene wearing blue anyone scene wearing blue kind flooring bathroom kind flooring bathroom kind flooring bathroom room room room figure image column show attention generated two different questions columns columns respectively attentions ordered unary attention pairwise attention combined attention image question observe combined attention significantly depend question animal drinking water kind animal red yes white forks tomatoes presidential blue green fila animal drinking water blue red cutting cake green bear white objazd elephant giraffe yes reject cow spain attention kind animal attention white boy red yes aspro pimp player blue pain green image yes next blue green parka pirates gadzoom picture picture clock white photo red wall light wall attention light attention figure attention generated two different questions three modalities find attention multiple choice answers emphasis unusual answers answers possible outputs covers answers train set implemented models using torch comparison attention mechanism use approach technique fukui methods based hierarchical attention mechanism compact bilinear mcb pooling contrast approach demonstrate relatively simple technique based probabilistic intuition grounded potentials comparative reasons visualized attention based two modalities image question evaluate attention modules vqa datasets dataset consists training images test set images image comes questions along multiple choice answers quantitative evaluation first evaluate overall performance model compare variety baselines tab shows performance model baselines datasets multiple choice questions obtain multiple choice 
results follow common practice use highest scoring answer among provided ones approach fig multiple choice answering task achieved reported result iterations requires hours training dataset using titanx gpu despite fact model million parameters techniques like use million parameters observe behavior additionally employ model similar experimental setup observe significant improvement model shows importance attention models due fact use lower embedding dimension similar compared existing models model achieves inferior performance believe higher embedding dimension proper tuning improve starting point additionally compared proposed decision units mct generic extension mcb mcb greater expressiveness sec evaluating val dataset training train part using vgg features mct setup yields https using device boy girl using device using battery device yes yes boy girl boy girl girl boy girl girl figure comparison attention results column attention provided column column fourth column provides question answer different techniques color table brown blue color table color table color table color umbrella blue blue color umbrella color umbrella color umbrella figure failure cases unary pairwise combined attention approach system focuses colorful umbrella opposed table first row mcb yields also tested different ordering input mcb found yield inferior results qualitative evaluation next evaluate technique qualitatively fig illustrate unary pairwise combined attention approach based two modality architecture without multiple choice input image show multiple questions observe unary attention usually attends strong features image pairwise potentials emphasize areas correlate question words importantly combined result dependent provided question instance first row observe question many glasses table pairwise potential reacts image area depicting glass contrast question anyone scene wearing blue pairwise potentials reacts guy blue shirt fig illustrate attention model find attention multiple choice answers favor unusual results fig compare final attention obtained approach results obtained techniques discussed observe approach attends reasonable pixel question locations example considering first row fig question refers battery operated device compared existing approaches technique attends laptop seems help choosing correct answer second row question wonders boy girl correct answers produced attention focuses hair fig illustrate failure case attention approach identical despite two different input questions system focuses colorful umbrella opposed object queried question conclusion paper investigated series techniques design attention multimodal input data beyond demonstrating performance using relatively simple models hope work inspires researchers work direction acknowledgments research supported part israel science foundation grant material based upon work supported part national science foundation grant thank nvidia providing gpus used research references jacob andreas marcus rohrbach trevor darrell dan klein learning compose neural networks question answering arxiv preprint stanislaw antol aishwarya agrawal jiasen margaret mitchell dhruv batra lawrence zitnick devi parikh vqa visual question answering iccv dzmitry bahdanau kyunghyun cho yoshua bengio neural machine translation jointly learning align translate arxiv preprint moses charikar kevin chen martin finding frequent items data streams icalp springer ronan collobert koray kavukcuoglu farabet environment machine learning biglearn nips workshop number 
abhishek das harsh agrawal lawrence zitnick devi parikh dhruv batra human attention visual question answering humans deep networks look regions arxiv preprint akira fukui dong huk park daylen yang anna rohrbach trevor darrell marcus rohrbach multimodal compact bilinear pooling visual question answering visual grounding arxiv preprint karl moritz hermann tomas kocisky edward grefenstette lasse espeholt kay mustafa suleyman phil blunsom teaching machines read comprehend nips pages allan jabri armand joulin laurens van der maaten revisiting visual question answering baselines eccv springer schwing creativity generating diverse questions using variational autoencoders cvpr equal contribution vahid kazemi ali elqursh show ask attend answer strong baseline visual question answering arxiv preprint kim lee donghyun kwak heo jeonghee kim byoungtak zhang multimodal residual learning visual nips kim jeonghee kim zhang hadamard product bilinear pooling arxiv preprint yoon kim carl denton luong hoang alexander rush structured attention networks arxiv preprint jiasen jianwei yang dhruv batra devi parikh hierarchical visual question answering nips lin zhengdong hang learning answer questions image using convolutional neural network arxiv preprint mateusz malinowski marcus rohrbach mario fritz ask neurons approach answering questions images iccv nasrin mostafazadeh ishan misra jacob devlin margaret mitchell xiaodong lucy vanderwende generating natural questions image arxiv preprint hyeonseob nam jeonghee kim dual attention networks multimodal reasoning matching arxiv preprint hyeonwoo noh bohyung han training recurrent answering units joint loss minimization vqa arxiv preprint ninh pham rasmus pagh fast scalable polynomial kernels via explicit feature maps sigkdd acm tim edward grefenstette moritz hermann karl phil blunsom reasoning entailment neural attention iclr kevin shih saurabh singh derek hoiem look focus regions visual question answering cvpr caiming xiong stephen merity richard socher dynamic memory networks visual textual question answering arxiv preprint huijuan kate saenko ask attend answer exploring spatial attention visual question answering eccv pages springer kelvin jimmy ryan kiros kyunghyun cho aaron courville ruslan salakhudinov rich zemel yoshua bengio show attend tell neural image caption generation visual attention icml zichao yang xiaodong jianfeng gao deng alex smola stacked attention networks image question answering cvpr wenpeng yin hinrich bing xiang bowen zhou abcnn convolutional neural network modeling sentence pairs arxiv preprint yuke zhu oliver groth michael bernstein grounded question answering images cvpr
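Because the decision-making component above relies on Multimodal Compact Bilinear (MCB) pooling, the following NumPy sketch shows the underlying count-sketch / FFT construction of Pham and Pagh that approximates the outer product of two attended vectors without materialising it. The output dimension and the way the random hashes are drawn here are illustrative assumptions; in a real model the hash and sign vectors are sampled once and kept fixed across all examples.

```python
import numpy as np

def count_sketch(x, h, s, d_out):
    """Randomly project x into R^{d_out} using fixed hash indices h and signs s."""
    y = np.zeros(d_out)
    for i, xi in enumerate(x):
        y[h[i]] += s[i] * xi
    return y

def mcb(x1, x2, d_out=16000, seed=0):
    """Sketch of compact bilinear pooling: convolving (via FFT) the count
    sketches of the two attended vectors approximates a count sketch of their
    outer product, i.e. of all pairwise feature correlations."""
    rng = np.random.default_rng(seed)            # in practice: sample once, keep fixed
    h1, h2 = rng.integers(0, d_out, x1.size), rng.integers(0, d_out, x2.size)
    s1, s2 = rng.choice([-1, 1], x1.size), rng.choice([-1, 1], x2.size)
    y1 = count_sketch(x1, h1, s1, d_out)
    y2 = count_sketch(x2, h2, s2, d_out)
    return np.real(np.fft.ifft(np.fft.fft(y1) * np.fft.fft(y2)))

# e.g. phi = mcb(attended_image_vec, attended_question_vec) feeds the classifier
```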
double deep machine learning moshe benbassat arison school business interdisciplinary center idc herzliya israel abstract important breakthroughs machine learning algorithms led impressive performance transactional point applications detecting anger speech alerts face recognition system ekg interpretation nontransactional applications medical diagnosis beyond ekg results require algorithms integrate deeper broader knowledge capabilities integrating knowledge anatomy physiology heart ekg results additional patient findings similarly military aerial interpretation knowledge enemy doctrines force composition spread helps immensely situation assessment beyond image recognition individual objects initiative proposed build wikipedia smart machines meaning target readers human rather smart machines named rekopedia goal develop methodologies tools automatic algorithms convert humanity knowledge learn schools universities professional life reusable knowledge structures smart machines use inference algorithms ideally rekopedia would open source shared knowledge repository similar shared open source software code repositories double deep learning approach advocates integrating machine techniques machineteaching techniques leverage power overcome corresponding limitations illustration outline project described produce reko knowledge modules medical diagnosis disorders applications based solely machine learning algorithms typically point solutions transactional tasks lend automatic generalization beyond scope data sets based today industry fragmented establishing broad deep enough foundations enable build higher level generic universal intelligence let alone must find ways create synergies fragments connect external knowledge sources wish scale faster industry examples article based inspired systems deployed decades career benefit hundreds millions people around globe second spring long winter avoid sliding winter essential rebalance roles data knowledge data important deep equally important introduction recent years deep learning algorithms achieved important breakthroughs outstanding results primarily image speech natural language understanding leading impressive performance transactional tasks verification detecting anger alerts face recognition system ekg interpretation algorithms use data automatically train neural networks make intelligent inferences tasks covered training data based models means behavior predictable trained certain level likely perform within statistical error cases within scope training data another key advantage algorithms human touch feature engineering pattern recognition tasks meaning need lengthy research projects requiring domain experts fingerprints experts ekg experts good differentiating features classification decisions impressive achievements universal early part career could certainly benefited many pattern recognition projects including object recognition ballistic missile defense system cold war time ultrasound wave recognition autonomous machine digging coal moon ekg waves radar signals handwritten character recognition view remarkable progress achievements key development future mathematician music ears however due appreciation algorithms limitations discussed specifically nontransactional applications require broader deeper reasoning possibly involving multiple deep knowledge sources optimizing manufacturing service operations medical diagnosis equipment troubleshooting military situation assessment mission planning early successes led overstating applicability 
reaching extreme claims sufficient amount data solve problems examples bellow inspired real life challenges presented long career illustrate radically religious claims limits progress article deep learning algorithms architecture track record speaks favor enriching amplifying general using wealth knowledge mankind developed many fields thousands years following discussion deep learning current limitations certain needs proceed presenting double deep learning approach idea wikipedia smart machines directions overcome limitations double use word deep refers first deep learning know second teaching computers deep knowledge like difference teaching physicians versus paramedics teaching engineers versus technicians computer teaching process could potentially involve automatic learning publications documented sources soon program computers train like dogs https title june wired magazine article focusing key difference classic software programming provides explicit step step instructions solve problem machine learning software provide sample cases generic training algorithm keeps iterating software learns solve problem desired performance level dog analogy inspired classic behaviorism studies triggered dog salivation deep understanding hunger simply repeating sequence events provided data code rewrote paraphrasing title describe double deep approach would say soon program computers train like train human university students universal deep knowledge sample cases medical students let teach computers fundamental knowledge equip generic inference algorithms leverage knowledge modules solve specific case medical schools teach students anatomy physiology characteristics specific means results along sample cases execute good diagnostic process generically note knowledge modules independent generic inference process practice patient arrives announce appendicitis presents initial physician uses generic inference engine accessing knowledge modules drive diagnostic process built team medical diagnostic systems endocrinology emergency critical care arthritis space medicine toxicology inference based bayesian inference applied knowledge modules different medical fields anatomy physiology part exist way would today see last sections article double deep learning approach could serve build next generation solutions integrate knowledge modules wealth humanity knowledge repository thus amplify expand applicability likely improve sides spectrum increase quality scope breadth good decisions less important reduce amount glaring mistakes see contemporary personal assistants like siri cortana alexa algorithm make mistake twilight zone human professional may also err totally unacceptable algorithm make glaring mistakes even human beginner would make research effort devoted eliminating glaring mistakes solutions raise fundamental questions intelligence algorithm risk credibility also lot said risks machines taking world one way start dealing integrate output points within neural net sanity checks based external knowledge monitor output needed interfere ensuing actions executed mistakenly shutting nuclear station production line several examples given next two sections article based lessons learned decades career examples based inspired systems deployed teams benefit hundreds millions people around globe example clicksoftware products schedule daily close field engineers many world largest service providers assuming engineer delivers average jobs per day works roughly field days per year means year period products touch 
life million people roughly world population limitations overview sufficient volume data exist requires data massive amounts data thousands speech recording hours needed build speech understanding system many business scenarios volumes data simply exist notwithstanding big data see example troubleshooting new equipment consider building solution support field service technicians new complex medical imaging equipment came take several years large rich enough fault data become available training solution guide service technicians efficient fault isolation subsequent repair actions nothing sufficient data available solution fact time sufficient data accumulated current equipment model replaced newer model would algorithm know data partially applicable longer applicable big data internet things iot connected vehicles intense technologies around producing vast amounts data serving excellent source building solutions factory floor optimization traffic capacity management two solutions actively involved machine learning technologies perfect fit many cases recognizing potential also aware though big data sometimes big enough following story illustrates recent business plan presentation young entrepreneur proposed using recognize patterns certain business situations referencing recent successes face recognition animal classification evidence power asked data plans use replied proudly confidently years good quality comprehensive daily data variables indeed quite nice business environment cases small number cases algorithm produce useful results explainable today solutions operate like even developers fully explain reasoning applications business medicine military explaining reasoning mandatory least highly desirable darpa recent initiative explainable important transactional tasks used earlier terms transactional tasks rather going formal definitions let use ekg interpretation clarify difference example ekg classification medical diagnosis thousands ekg training data signals algorithm excellent job classifying ekg signal shape producing output class elevation see based ekg records user asks algorithm elaborate meaning output patient diagnosis unlikely receive meaningful answer physician hand explain elevation represents ventricular contraction may indicate artery clog may damage heart muscle myocardial infarction difference algorithm physician level understanding ekg findings physician answer based layers top layers anatomy physiology knowledge including electrical impulses relationship ekg findings medical system ekg classification narrow point solution transactional small part problem system beyond signal analysis also connect given ekg shape way heart functions fails integrate patient findings algorithm learn patient data anatomy physiology human heart learn four chambers structure arteries valves conduction system pacemaker walls function module overall blood flow doubt simply patient data contain information enable learning appreciate complexity check https excellent heart simulation connects human heart ekg even fully documented equipment semiconductor equipment medical imaging challenge machine equipment structure function process flow enormous medical diagnosis challenge higher still looking engineering design documentation human body scientist favor research push spectrum approaches learn also understanding boundaries business executive practitioner believe producing today working system tasks like diagnosis cardiology emergency medicine promising approach teach computers explicitly 
knowledge like teach human medical students rather wait algorithm learn zero comparable level several articles report successful medical diagnosis cancer fairer description situation would algorithms support narrow aspects medical diagnosis process providing point solutions classifying ekg signal detecting tumors medical image searching past patients similar given patient thinking versus calculating newton physics data billions things thousands apples fall every day algorithms human touch certainly come model calculate time given falling object touch ground today come newton laws mean produce models represent deeper understanding earth forces along compact formula time hit hight opposed gigantic black box neural net interested building calculators smart machines probably would ask back need compact formula get right result neural network possibly even accurate data also includes air resistance answer importance deep newton laws mid goes way beyond calculating tool falling objects example beautiful law inertia calculation object rest motion remains rest motion unless acted upon external abstraction generalization analytic formula newton laws basis future physicists refine expand discover new ones earth forces similar processes led deeper understanding heat energy discovery thermodynamics laws electricity continuing way plank einstein quantum physics relativity theory apollo project today time also fairly good understanding gravity forces space far away earth cases science discoveries based sparks brilliant human theoretical thinking little data learn sir isaac newton put great discovery ever made without bold guess relied already known quoting newton seen standing shoulders giants opposed learning zero typical algorithm relevant physics one cornerstones engineering turn basis equipment around enable life modern world agriculture equipment food production equipment medical equipment cars airplanes course computers mobile phones systems support equipment maintenance troubleshooting improve substantially equipment uptime thereby offering great value led development aitest troubleshooting software deployed dozens complex equipment around globe confident design knowledge well universal engineering knowledge greatly improve performance smart machines data tell equipment fails design knowledge universal engineering knowledge tell works types knowledge required reach high performance diagnostic repair decisions aitest system illustrates first steps direction automatic conversion engineering diagrams topology test paths bayesian inference network massive scale summary one thing learn data recognize visual objects acoustic signals totally different challenge derive data full understanding newton laws build bridge equipment human organs operate dynamic process flows glaring mistakes every day hear jokes glaring mistakes personal assistants siri cortana alexa quickly lead recognize limited understanding scope depth software developers typically focus maximizing overall percentage accuracy protect smart machine glaring mistakes even human beginners would make push algorithm overall higher percentage accuracy higher likelihood glaring mistakes sneak one manifestation overfitting applications glaring mistakes may something joke others military could catastrophic imagine shooting passenger airplane mistakenly classified threatening object similarly potential misclassification mistakes autonomous land vehicles drones takes one glaring mistakes make users question true intelligence software point solution 
loses credibility soon costly catastrophic event makes late arguing overall accuracy performance within say promised error boundaries help much business software providers specifically note recovering unforgivable events could take long time costly business client may activate liability clauses contract worse competition may run whole marketing campaign around destroy product reputation always guide teams following principle glaring mistake protection principle addition working towards high overall accuracy also always include sanity checks protect glaring mistakes individual cases displaying output algorithm human user take automatic action run sanity checks extra protection developers autonomous vehicles investing multiple sensors cameras lidar light radar ultrasound overcome limitations individual sensor technology one direction overcome limitations add intelligent algorithms based humanity knowledge including common sense knowledge well deeper universal world knowledge environmental physics engineering behavior beyond ordinary glaring mistakes algorithms could also contribute overcoming mistakes due intentional adversarial images designed fool systems see https context confusing turtle rifle external knowledge good way equip solutions sanity checks second opinion fight glaring mistakes example next paragraph also illustrates point external knowledge data set could helpful model produce good enough results radically religious human touch philosophy adopt closed garden doctrine limit options improve performance within world add data change neural net architecture augment data operators run using external knowledge alternative inference algorithms taboo following example provide good reasons avoid doctrine example aerial image interpretation twilight zone case david top notch experienced aerial image interpretation analysts faced one challenging cases decide whether object image equipment type agriculture military tried options enhance image uncertainty still high sam security specialist stops say hello arrives peak heated debate waiting looks picture calmly says guys coming farmer family judging terrain vegetation strongly doubt equipment farmer would use situation short pause abi says major development david replies absolutely going wake jim boss sam using knowledge picture data adopting data doctrine contemporary practitioners like david abi ignoring sam input limits progress well fan would suggest collecting data cover sam farming knowledge good theoretical exercise far equipment shows twice week short every time slightly different silhouette meaning pictures full year sufficient indeed data augmentation also used fight low volume data yes separate networks built learn contexts objects appear terrain vegetation time year simply asking sam mean explicitly embedding farmers knowledge solution learn data humanity already know table summarizes key messages limitations rows table based inspired byreal life systems deployed provide evidence knowledge plus data likely yield higher performance data see also shoham table data tells knowledge tells object data tells mri medical equipment fails frigate looks like ekg signal arrhythmia type apples falling ground calculate time hit ground knowledge tells works capabilities relationship heart anatomy physiology deeper understanding earth forces double deep learning maximum knowledge deep learning algorithm learn encapsulated data set given task typically substantially less full humanity knowledge task double deep learning approach advocates 
integrating machine techniques techniques leverage power overcome corresponding limitations first deep deep learning second deep machine teachers knowledge stands extra focus teaching deep foundations first principles reasoning task domain addition shallow prescriptive knowledge facts whenever needed deep teaching mean going beyond teaching shallow experiential knowledge algorithm also achieve data available main drawback early days expert systems analogy like difference teaching physicians versus teaching paramedics teaching engineers versus teaching technicians quoting aristotle knowledge fact differs knowledge reason fact discussed example knowing ekg signal changes different understanding reasons finding actions take dietterich horvitz also called attention made surprisingly little progress date building kinds general intelligence experts lay public envision think artificial intelligence darpa important initiative explainable also likely contribute double deep learning approach wikipedia smart machines rekopedia shared knowledge repository rekopedia concept think repository contains software representation structures humanity science technology knowledge various disciplines call rekopedia repository reko stands reusable knowledge representation structures could whatever agree state transition graphs neural nets bayesian nets logic fuzzy logic frames rules long reusable smart machines proposing initiative establish shared knowledge repository people contribute knowledge structures compatible protocols enable others use ideas service oriented architecture soa software initiatives serve basis learn modules need know inside technical details course concept web services soap corba rest modularity key together mechanisms combine modules higher level knowledge modules applied iteratively create layers top layers humanity knowledge today shared economy spirit almost every algorithm found libraries like python reko repository significant complement energize industry way agreed share open source software code agree share reko knowledge structures note knowledge sharing via textbooks school teaching already hallmark mankind generations let smart machines early google set goal scan humanity published books era let embark similar audacious goal create wikipedia smart machines target readers human rather smart machines goal humanity knowledge software structures develop methodologies tools automatic algorithms convert humanity documented knowledge software structures smart machines use inference algorithms talking monolithic centrally managed initiative rather distributed initiative managed mutually agreed upon governance rekopedia content content contribution rekopedia repository made order come everywhere subject covenant rules people build smart machines variety applications contribute knowledge modules repository also envision different disciplines medicine agriculture environment military engineering manufacturing field service finance insurance marketing sales coming plan priorities content populate domain rekopedia knowledge repository reviewing syllabuses schools universities different areas teach content teach humans good starting point content teach machines cyc project initiated lenat initial focus commonsense knowledge led commercial product offers extensive set reusable knowledge modules available company reko representations algorithms today answers build rekopedia smart machines terms knowledge problem representations corresponding algorithms good foundations mathematics computer 
science classic simulation theory management science operations research related fields key principle opinion maximize separation knowledge modules inference algorithms operate example illustrates medical diagnosis example rekopedia modules medical diagnosis representation practice indication bayesian network structures figure developed knowledge representation medas system emergency critical care disorders inference algorithms figure also used space medicine arthritis toxicology situation assessment applications beyond medicine years experience learned average human expert hours needed put bayesian network templates knowledge diagnosing single medical disorder means hours complete disorders likely involve tens thousands symptoms signs syndromes test results findings assuming working part time support staff years budget would take care expenses build reko modules part medicine fact medas approach advocates hierarchical structures disorders means completing base set average time per disorder come hours per disorder medas reached convincing performance early stages reached agreement gold standard long moved areas probability values links nodes bayesian nets start known values medical publications data bases expert subjective values patient data accumulated apply machine learning algorithms update bayesian nets complemented connected reko modules capture knowledge anatomy dna knowledge sources improve inferences made bayesian inference algorithms looking far future large parts world patient data automated starting birth date including genome map data every individual systems take intelligent healthcare automation new heights terms early warning prevention diagnosis treatment reducing cost improving quality automatic learning reko structures humanity documented knowledge developing tools automate conversion process natural language material including diagrams pictures reko structures accelerate considerably requires taking natural language understanding nlu higher level yet get appreciation challenge consider task automatic summary generation documents state art field tells nlu software still far away truly understanding reads let alone extracting knowledge compared college student reading chapter economy textbook able solve homework exercises stop however starting manually build reko repository whichever field science technology desire figure hierarchical structure medical disorders figure cycle diagnostic assessment benbassat summary today applications typically point solutions transactional tasks lend automatic generalization beyond scope data sets based industry fragmented establishing broad deep enough foundations enable build higher level generic universal intelligence let alone superintelligence must find ways create synergies fragments connect external knowledge sources wish scale faster industry second spring long winter avoid sliding winter essential rebalance roles data knowledge data important deep equally important indeed driver next economic social revolution like electricity better establish solid foundations infrastructure develop disseminate preferably standards fair economics personal note acknowledgments article based decades career blend scientist usc university ucla business entrepreneur years ceo clicksoftware nasdaq cksw public company researching practicing educating artificial intelligence first spring winter early years renaissance century academic research supported nih nsf darpa nasa bmd ballistic missile defense agency ari army research institute israel defense 
forces others founder ceo clicksoftware inventor service chain patent awarded acquired private equity firm plataine leader solutions manufacturing optimization leverage technologies solve large scale business problems developed innovative products benefit hundreds millions people around globe details publication list see http grateful israel beniaminy son avner colleagues deep useful discussions well comments early drafts article references brynjolfsson mcafee business artificial intelligence harvard business review july carlson puri davenport schriver latif smith portigal lipnick weii interactive diagnosis multiple disorders medas system ieee transactions pattern analysis machine intelligence vol march expert systems clinical diagnosis approximate reasoning expert systems gupta kandel bandler kiszka eds north holland use diagnostic expert systems aircarft maintenance real life examples proceedings aircraft maintenance engineering conference singapore beniaminy joseph combining expert systems research perspectives case studies system test diagnosis dietterich horvitz rise concerns reflections directions communications acm volume oct gunning explainable artificial intelligence xai http gunning georgakis trace evens statistical evaluation diagnostic performance medasthe medical emergency decision assistance system proc annu symp comput appl med care nov knight dark secret heart mit technology review april lecun bengio hinton deep learning nature may lenat cyc investment knowledge infrastructure communications acm volume rajpurkar hannun haghpanahi bourn arrhythmia detection convolutional neural networks https july shoham knowledge representation matters comm acm jan appendix key messages machines learn many things data data source machines learn maximum knowledge deep learning algorithm learn encapsulated data set given task typically substantially less full humanity knowledge task instance using field service data algorithm learn equipment fails learn data works developing deploying solutions data yet available possible substantial business value directly embedding explicit humanity knowledge one thing train calculate time falling apple hit ground using data falling apples totally different challenge train algorithm data come newton earth gravity laws humans learn explicit teaching machines machine learning technologies still limitations learning wait someone trains algorithm learn data humanity already learned documented textbooks publications anatomy physiology electrical conduction human heart merit makes sense today world data knowledge two solutions benefit considerably hand adopting doctrine give many options improve performance expand applicability double deep learning approach advocates integrating machine machine teachers second deep machine teachers extra focus teaching deep foundational first principles knowledge aiming higher level intelligence like difference teaching physicians versus paramedics teaching engineers versus technicians radically religious specific algorithm aiming push applicability envelope far possible good even desirable science perspective business perspective however may may even prevent deploying today business solutions deliver tremendous value wikipedia smart machines grow faster establishing shared repository reusable knowledge modules coined rekopedia covering humanity science technology various disciplines illustration project proposed produce rekopedia modules medical diagnosis disorders
| 2 |
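The row ending above argues for a "glaring mistake protection principle": pair a learned classifier's statistical output with knowledge-based sanity checks before any automatic action. Below is a minimal Python sketch of that pattern, assuming a model whose `predict_proba` returns a label-to-probability mapping; the rule table, context keys, and function names are hypothetical illustrations, not drawn from any system mentioned in the row.

```python
# Illustrative only: wrap a learned classifier with simple knowledge-based
# sanity checks, flagging a prediction for human review instead of acting on
# it automatically when a check fails.

RULES = {
    # domain/common-sense constraints; each returns True when the label is plausible
    "threatening_object": lambda ctx: ctx.get("transponder") != "civilian",
    "rifle": lambda ctx: ctx.get("scene") != "marine_life_exhibit",
}

def classify_with_guardrails(model, x, context, threshold=0.9):
    """Return (label, needs_human): needs_human=True routes the case to a person."""
    probs = model.predict_proba(x)          # assumed to return {label: probability}
    label = max(probs, key=probs.get)
    confident = probs[label] >= threshold
    plausible = RULES.get(label, lambda c: True)(context)
    # never take an automatic action on a low-confidence or implausible output
    return label, not (confident and plausible)
```

The design choice is simply that the rule table encodes knowledge the training data may not cover, so a confident but implausible prediction is escalated rather than executed.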
logarithmic integrality gap bound directed steiner tree graphs zachary jochen mohammad apr department computing science university alberta edmonton canada zacharyf department combinatorics optimization university waterloo waterloo canada jochen department industrial engineering operations research columbia university new york usa april abstract demonstrate integrality gap natural relaxation directed steiner tree problem log graphs terminals instances seen generalize set cover integrality gap analysis tight constant factor novel aspect approach use method technique rarely used designing approximation algorithms network design problems directed graphs introduction instance directed steiner tree dst problem given directed graph costs terminal nodes root remaining nodes steiner nodes goal find cheapest collection edges every terminal using edges throughout let denote denote problem simply arborescence problem solved efficiently however general case fact problem seen generalize group steiner tree problems latter approximated within constant unless dtime npolylog dst instance let denote value optimum solution instance say instance dst terminals partitioned every edge zelikovsky showed dst instance integer compute dst instance poly time dst solution efficiently mapped dst solution cost work part supported nserc discovery grant program second author greatfully acknowledges support hausdorff institute institute discrete mathematics bonn germany charikar exploited fact presented log running time poly integer particular used obtain time constant finding polylogarithmic approximation remains important open problem set nodes let set edges entering following natural linear programming relaxation directed steiner tree min called relaxation natural correspondence feasible solutions dst instance feasible solutions corresponding thus let tlp denote value optimum possibly fractional solution tlp particular instance say integrality gap tlp interested placing smallest possible upper bound quantity interestingly shortest path problem arborescence problem extreme points integral integrality gap respectively however general case zosin khuller showed useful finding polylog algorithms dst authors showed integrality gap relaxation unfortunately bad even instances graph examples number nodes exponential integrality gap may still logc constant hand rothvoss recently showed applying rounds semidefinite programming lasserre hierarchy extended formulation yields sdp integrality gap log instances subsequently friggstad showed similar results weaker linear programming hierarchies paper consider class dst instances instance dst quasibipartite steiner nodes form independent set directed edge endpoints instances still capture set cover problem thus admit kapproximation constant unless furthermore straightforward adapt known integrality gap constructions set cover show integrality gap bad instances hibi fujito give log instances dst provide integrality gap bounds instances context undirected steiner trees class graphs first introduced rajagopalan vazirani studied integrality gap bidirected map given undirected steiner tree instances currently best approximation instances undirected steiner tree goemans also bound integrality gap bidirected cut relaxation quantity relaxation applied directed graph obtained replacing undirected edge two directed edges slight improvement prior constant byrka best approximation general instances undirected steiner tree constant however best known upper bound integrality gap bidirected cut 
relaxation instances open problem determine integrality gap better contributions main result following let log nth harmonic number theorem integrality gap log graphs terminals furthermore steiner tree cost tlp constructed polynomial time noted theorem asymptotically tight since log integrality gap constructions set cover instances items translate directly integrality gap lower bound using usual reduction set cover instances directed steiner tree integrality gap bound asymptotically matches approximation guarantee proven hibi fujito dst instaces remark approach unlikely give integrality gap bounds iteratively choose full steiner trees spirit give log finding optimum dst solution contain path steiner nodes particular approach also find log approximation optimum dst solution graphs know integrality gap instances prove theorem constructing directed steiner tree iterative manner iteration starts partial steiner tree see definition consists multiple directed components containing terminals set arcs purchased augment partial solution one fewer directed components arcs discovered moat growing procedure feasible solution dual constructed cost purchased arcs bounded using dual solution technique successful undirected network design problems see far fewer success stories known directed domains examples include interpretation dijkstra shortest path algorithm see chapter edmonds algorithm arborescences cases special structure problem instrumental construction one issue arising implementation approaches directed network design problems appears certain overlap moat structure maintained algorithms able handle difficulty exploiting nature instances integrality gap bound preliminaries definitions present algorithmic proof theorem follow strategy first present dual max sums range sets nodes algorithm builds partial solutions defined follows definition partial steiner tree tuple subset nodes subset edges endpoints following hold sets form partition subset steiner nodes every every contains say set free steiner nodes head edges denoted simply say components root component components figure illustrates partial steiner tree note partial steiner tree components fact feasible dst solution finally subset edges let cost figure partial steiner tree components root pictured top edges shown white circles heads various sets black circles terminals heads components squares outside components free steiner nodes note particular head reach every node respective component require minimal set edges property approach algorithm builds partial steiner trees iterative manner ensuring cost increase significant amount iterations specifically prove following lemma section recall tlp refers optimum solution value lemma given partial steiner tree components algorithm finds partial steiner tree components cost cost tlp theorem follows lemma standard way proof theorem initialize partial steiner tree components follows let set steiner nodes furthermore label terminals let note cost iterate lemma obtain sequence partial steiner trees components cost cost tlp return final steiner tree found efficiently follows simply iterating efficient algorithm lemma times cost steiner tree bounded follows cost tlp tlp tlp tlp tlp idea presented resembles one proposed guha bounding integrality gap natural relaxation undirected steiner tree log like approach guha also build solution incrementally phase algorithm authors reduce number connected components partial solution adding vertices whose cost charged carefully value dual solution algorithm constructs 
simultaneously proof lemma consider given partial steiner tree components lemma promises partial steiner tree components cost cost tlp section present algorithm augments forest sense computes set edges add proof presented constructive design algorithm maintains feasible dual solution uses structure solution guide process adding edges algorithm two nodes let cost cheapest generally subset node let assume every otherwise could merge adding usual conventions algorithms adopted think algorithm continuous process increases value dual variables time time dual variables initialized value point time exactly dual variables raised rate one unit per time unit use time algorithm terminates customary say edge goes tight dual constraint becomes tight dual variables increased edge goes tight perform updates various sets maintained algorithm standard convention applies multiple edges tight time process order algorithm describes main subroutine augments partial steiner tree one fewer components maintains collection moats edges ensuring dual solution grows remains feasible mainly aid notation algorithm maintain called virtual body ensure mate edge cost notational convenience let virtual body root component algorithm grow moat around root since dual variables exist sets containing root algorithm ensure moats pairwise fact ensure two moats may intersect together structure input graph allow charge cost arcs added augmentation process duals grown intuitive overview process following time moats consist nodes moats grown time least one pair tight path connecting point algorithm stops adds carefully chosen collection tight arcs partial steiner tree merges potentially components crucially cost added arcs charged value dual solution grown around merged components due structure graphs able ensure step algorithm active moats pay one arc ultimately bought form also components arc paid moats around different heads total cost purchased arcs finally total dual grown tlp due feasibility cost edges bought bounded tlp algorithm invariants precise procedure presented algorithm following invariants maintained time execution algorithm variable ymi dual distinct furthermore mate cuv feasible value exactly concepts illustrated figure figure moats around two partial steiner trees depicted gray circles dashed edges bought first moat solid edges bought second moat note moats intersect particular lying moats also lies virtual body left partial steiner tree dashed arc entering coming mate edges original partial steiner trees shown observe edge entering goes tight must either terminal would allow merge least one partial steiner tree body another algorithm dual growing procedure raise uniformly moat edge goes tight return partial steiner tree described lemma else let unique moat step proposition invariant analysis lemma invariants maintained algorithm condition statement step true furthermore algorithm terminates iterations proof clearly invariants true initialization steps time given see algorithm terminates polynomial number iterations note iteration increases size moat decrease size moats iterations moat grow include virtual body another moat point algorithm stops assume invariants true point step executed condition step false step finishes show invariants continue hold next iteration starts let denote edge went tight considered step also let denote total time algorithm executed grown moats point proceeding proof exhibit following useful fact follows let mjt moat around time algorithm proposition demonstrates control overlap moats 
exploiting structure proposition mjt proof suppose sake contradiction mjt since mjt subset moat containing time must invariant implies since therefore since terminating condition step would satisfied contradiction following proposition let unique index step invariant first note never loses vertices algorithm execution therefore always contains head vertex also vertex part otherwise algorithm would terminated step hence also contain root node invariant reinterpretation dijkstra algorithm framework chapter coupled fact edge considered step iteration crosses one moat given time proposition invariant suppose implies hence thus termination condition step satisfied algorithm terminated contradiction added thus remains unchanged continues hold also must otherwise would violate termination condition step suppose added still otherwise contradicts fact invariant holds start iteration also otherwise would mean well proposition established however contradicts fact invariant clear simply add nodes sets suppose added case start claim also part since otherwise contradicting invariant hence note implies hence proposition finally implies moats crossed moats around since algorithm grows one moat around time cuv completes proof invariant invariant step stops first time constraint becomes tight feasibility maintained step algorithm grows precisely moats simultaneously objective function simply sum dual variables value dual times total time spent growing dual variables augmenting complete final detail description algorithm show construct partial steiner tree step reached lemma shows invariants hold step final iteration say final iteration executes time units edge goes tight considered step lemma step reached algorithm efficiently find partial steiner tree components cost cost tlp proof let unique index time exactly one ensured invariants next let note consists indices except perhaps termination condition vertex lies definition let mate defined invariant otherwise let notational convenience let path consisting single edge trivial path edges either case say cost invariant let shortest invariant implies observe also tightness time definition imply cuv fact precisely dual variables contribute cuv contribution variables cuv construct partial steiner tree obtained algorithm follows sets head unchanged replace components component head edges component free steiner nodes steiner nodes contained components namely consists nodes contained path show steiner tree constructed satisfies conditions stated lemma first verify constructed indeed valid partial steiner tree clearly new sets partition subset steiner nodes note construction moat contains thus replaced constructed head new component next consider reached follows follow path cross edge follow reach finally follow finally lies path case reached similar way also clear number components also cost cost cost paths plus cuv easily follows cost cuv cuv tlp last bound follows feasible dual grown value tlp let number nonroot components conclude observing wrap things executing algorithm constructing partial steiner tree lemma yields partial steiner tree promised lemma conclusion shown integrality gap relaxation log stances directed steiner tree gap known instances log instances since graphs generalization instances natural ask generalization instances log even integrality gap one possible generalization graphs would subgraph induced steiner nodes node positive indegree positive outdegree none known results directed steiner tree suggest instances bad gap even restricted graphs 
straightforward adaptation algorithm grow moats around partial steiner tree heads partial steiner trees absorbs another fails grow sufficiently large dual pay augmentation within reasonable factor new idea needed references byrka grandoni rothvoss sanita steiner tree approximation via iterative randomized rounding journal acm calinescu zelikovsky polymatroid steiner problems combinatorial optimization charikar chekuri cheung dai goel guha approximation algorithms directed steiner problems algorithms edmonds optimum branchings res natl bur feige threshold approximating journal acm friggstad louis shadravan tulsiani linear programming hierarchies suffice directed steiner tree proceedings ipco guha moss naor schieber efficient recovery power outage extended abstract proceedings stoc goemans olver rothvoss zenklusen matroids integrality gaps hypergraphic steiner tree relaxations proceedings stoc goemans williamson general approximation technique constrained forest problems siam journal computing halperin krauthgamer polylogarithmic inapproximability proceedings stoc hibi fujito greedy approximation directed steiner trees applications proceedings papadimitriou steiglitz combinatorial optimization algorithms complexity rajagopalan vazirani bidirected cut relaxation metric steiner tree problem proceedings soda rothvoss directed steiner tree lasserre hierarchy corr steurer dinur analytical approach parallel repetition corr vazirani approximation algorithms zelikovsky series approximation algorithms acyclic directed steiner tree problem algorithmica zosin khuller directed steiner trees proceedings soda
| 8 |
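The row above works throughout with the natural cut-based LP relaxation of directed Steiner tree and its dual, whose displayed formulas did not survive extraction. The following LaTeX block is a standard rendering of that primal–dual pair, consistent with the row's description (here δ⁻(S) is the set of edges entering S, r the root, T the terminal set); it is a reconstruction, not a verbatim copy of the paper's display.

```latex
\begin{aligned}
\textbf{(Primal)}\quad
&\min \sum_{e \in E} c_e\, x_e
&&\text{s.t. } \sum_{e \in \delta^{-}(S)} x_e \;\ge\; 1
  \quad \forall\, S \subseteq V \setminus \{r\},\; S \cap T \neq \emptyset,
  \qquad x \ge 0,\\[4pt]
\textbf{(Dual)}\quad
&\max \sum_{S} y_S
&&\text{s.t. } \sum_{S \,:\, e \in \delta^{-}(S)} y_S \;\le\; c_e
  \quad \forall\, e \in E,
  \qquad y \ge 0 .
\end{aligned}
```

The moat-growing procedure described in the row raises dual variables y_S until edge constraints go tight, and feasibility of the dual is what lets the cost of purchased edges be charged against the LP optimum.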
transfer learning recognition neural networks young mit jjylee franck mit francky may abstract recent approaches based artificial neural networks anns shown promising results recognition ner order achieve high performances anns need trained large labeled dataset however labels might difficult obtain dataset user wants perform ner label scarcity particularly pronounced patient note instance ner work analyze extent transfer learning may address issue particular demonstrate transferring ann model trained large labeled dataset another dataset limited number labels improves upon results two different datasets patient note introduction electronic health records ehrs widely adopted countries united states represent gold mines information medical research majority ehr data exist unstructured form patient notes murdoch detsky applying natural language processing patient notes improve phenotyping patients ananthakrishnan pivovarov elhadad halpern many downstream applications understanding diseases liao however patient notes shared medical investigators types information referred protected health information phi must removed order preserve authors contributed equally work peter szolovits mit psz patient confidentiality united states health insurance portability accountability act hipaa office civil rights defines different types phi ranging patient names numbers addresses phone numbers task removing phi patient note referred essence recognizing phi patient notes form recognition ner existing systems often approaches machine learning approaches however techniques require additional lead time developing rules features specific new dataset meanwhile recent work using anns yielded performances without using manual features dernoncourt compared previous systems anns competitive advantage model new dataset without overhead manual feature development long labels dataset available however may still inefficient mass deploy system practical settings since creating annotations patient notes especially difficult due fact restricted set individuals authorized access original patient notes annotation task making slow expensive obtain large annotated corpus medical professionals therefore wary explore patient notes deidentification barrier considerably hampers medical research paper analyze extent transfer learning may improve performances datasets limited number labels training ann model large dataset mimic transferring smaller datasets demonstrate transfer learning allows outperform results related work transfer learning studied long time standard definition transfer learning literature follow definition pan yang transfer learning aims performing task target dataset using knowledge learned source dataset idea applied many fields speech recognition wang zheng finance stamate successes anns many applications last years escalated interest studying transfer learning anns particular much work done computer vision yosinski oquab zeiler fergus studies parameters learned source dataset used initialize corresponding parameters anns target dataset fewer studies performed transfer learning models field natural language processing example mou focused transfer learning convolutional neural networks sentence classification best knowledge study analyzed transfer learning models context ner nating outputs token embedding layer character lstm layer outputs sequence vectors fully connected layer takes output token lstm layer input outputs vectors containing scores label corresponding tokens sequence optimization layer takes 
sequence vectors output fully connected layer outputs likely sequence predicted labels optimizing sum unigram label scores well bigram label transition scores figure shows six components interconnected form model layers learned jointly using stochastic gradient descent regularization dropout applied token lstm layer early stopping used development set patience epochs labels token sentence sequence optimization fully connected token lstm model model use transfer learning experiments based type recurrent neural networks called long memory lstm hochreiter schmidhuber utilizes token embeddings character embeddings comprises six major components concatenate character lstm concatanate token embeddings token embedding layer maps token token embedding character embeddings character embedding layer maps character character embedding character lstm layer takes input character embeddings outputs single vector summarizes information sequence characters corresponding token token lstm layer takes input sequence token vectors formed jth token sentence clj characters jth token figure ann model ner transfer learning experiments train parameters model source dataset transfer parameters initialize model training target dataset experiments datasets use three datasets transfer learning experiments mimic mimic dataset introduced dernoncourt subset dataset johnson goldberger saeed datasets released part shared task track stubbs cegs shared task respectively table presents datasets sizes mimic vocabulary size number notes number tokens number phi instances number phi tokens table overview mimic datasets phi stands protected health information transfer learning goal transfer learning leverage information present source dataset improve performance algorithm target dataset setting apply transfer learning training parameters ann model source dataset mimic using ann retrain target dataset use mimic source dataset since dataset labels perform two sets experiments gain insights effective transfer learning parameters ann important experiment quantifying impact transfer learning various train set sizes target dataset primary purpose experiment assess extent transfer learning improves performances target dataset experiment different train set sizes understand many labels needed target dataset code extension ner library neuroner dernoncourt committed neuroner repository https achieve reasonable performances without transfer learning experiment analyzing importance parameter ann transfer learning instead transferring parameters experiment transferring different combinations parameters goal understand components ann important transfer lowest layers ann tend represent features whereas topmost layers result try transferring parameters starting bottommost layer topmost layer adding one layer time results experiment figure compares ann trained target dataset ann trained source dataset followed target dataset transfer learning improves training target dataset though improvement diminishes number training samples used target dataset increases implies representations learned source dataset efficiently transferred exploited target dataset therefore transfer learning adopted fewer annotations needed achieve level performance source dataset unused example dataset performing transfer learning using train set leads similar performance using transfer learning using train set transfer learning thus allows cut half number labels needed target dataset case datasets performance gains transfer learning greater train set size target dataset 
small largest improvement observed using dataset train set consisting around phi tokens tokens transfer learning increases around percent point even train set used improves using transfer learning albeit percent point target train set size baseline transfer target train set size figure impact transfer learning baseline corresponds training ann model target dataset transfer learning corresponds training source dataset followed training target dataset target train set size percentage train set whole dataset corresponds full official train set tra nsfe emb lst ken lly ted tra nsfe emb lst ken lly ted figure impact transferring parameters layer ann model using various train set sizes target dataset official train set experiment figure shows importance layer ann transfer learning observe transferring lower layers almost efficient transferring layers transferring token lstm shows great improvements layer less improvement added layer beyond larger improvements observed character lstm less beyond layer parameters lower layers therefore seems contain information relevant task general supports common hypothesis higher layers ann architectures contain parameters specific task well dataset used training despite observation transferring lower layers may sufficient efficient transfer learning interesting see adding topmost layers transfer learning hurt performance retraining model target dataset ann able adapt target dataset quite well despite higher layers initialized parameters likely specific source dataset conclusion work studied transfer learning anns ner specifically patient note transferring ann parameters trained large labeled dataset another dataset limited human annotations demonstrated transfer learning improves performance results two datasets transfer learning may especially beneficial target dataset small number labels references ashwin ananthakrishnan tianxi cai guergana savova cheng pei chen raul guzman perez vivian gainer shawn murphy peter szolovits zongqi xia improving case definition crohn disease ulcerative colitis electronic medical records using natural language processing novel informatics approach inflammatory bowel diseases franck dernoncourt young lee peter szolovits neuroner program recognition based neural networks lili mou zhao meng rui yan yan zhang zhi jin transferable neural networks nlp applications arxiv preprint travis murdoch allan detsky inevitable application big data health care jama hhs office civil rights standards privacy individually identifiable health information final rule federal register maxime oquab leon bottou ivan laptev josef sivic learning transferring image representations using convolutional neural networks proceedings ieee conference computer vision pattern recognition pages sinno jialin pan qiang yang survey transfer learning ieee transactions knowledge data engineering franck dernoncourt young lee ozlem uzuner peter szolovits patient notes recurrent neural networks journal american medical informatics association page rimma pivovarov elhadad automated methods summarization electronic health records journal american medical informatics association ary goldberger luis amaral leon glass jeffrey hausdorff plamen ivanov roger mark joseph mietus george moody chungkang peng eugene stanley physiobank physiotoolkit physionet components new research resource complex physiologic signals circulation mohammed saeed mauricio villarroel andrew reisner gari clifford lehman george moody thomas heldt tin kyaw benjamin moody roger mark multiparameter intelligent 
monitoring intensive care mimicii intensive care unit database critical care medicine yoni halpern steven horng youngduck choi david sontag electronic medical record phenotyping using anchor learn framework journal american medical informatics association page cosmin stamate george magoulas michael thomas transfer learning approach financial applications arxiv preprint sepp hochreiter schmidhuber long memory neural computation alistair johnson tom pollard shen wei lehman mengling feng mohammad ghassemi benjamin moody peter szolovits leo anthony celi roger mark freely accessible critical care database scientific data amber stubbs christopher kotfila uzuner automated systems longitudinal clinical narratives overview shared task track journal biomedical informatics dong wang thomas fang zheng transfer learning speech language processing signal information processing association annual summit conference apsipa asiapacific ieee pages literature survey domain adaptation algorithms natural language processing department computer science graduate center city university new york pages jason yosinski jeff clune yoshua bengio hod lipson transferable features deep neural networks advances neural information processing systems pages katherine liao tianxi cai guergana savova shawn murphy elizabeth karlson ashwin ananthakrishnan vivian gainer stanley shaw zongqi xia peter szolovits development phenotype algorithms using electronic medical records incorporating natural language processing bmj matthew zeiler rob fergus visualizing understanding convolutional networks european conference computer vision springer pages
| 2 |
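The row ending above transfers the lower layers of an NER model (character embeddings, character LSTM, token embeddings, token LSTM) trained on a large source corpus into a model for a smaller target corpus before fine-tuning. A minimal PyTorch-style sketch of that parameter copy is given below; the module prefixes and function name are hypothetical placeholders, not the authors' code, and both models are assumed to be `torch.nn.Module` instances with the same architecture.

```python
# Illustrative layer-wise transfer: copy selected source parameters into a
# target model of identical architecture, then fine-tune on the target data.

LOWER_LAYERS = ("char_emb", "char_lstm", "token_emb", "token_lstm")

def transfer_lower_layers(source_model, target_model, prefixes=LOWER_LAYERS):
    src, tgt = source_model.state_dict(), target_model.state_dict()
    for name, tensor in src.items():
        # keep only parameters whose top-level module is one of the chosen prefixes
        if name.split(".")[0] in prefixes and name in tgt and tgt[name].shape == tensor.shape:
            tgt[name] = tensor.clone()
    target_model.load_state_dict(tgt)
    return target_model

# usage sketch: model = transfer_lower_layers(pretrained_on_source, fresh_target_model),
# followed by ordinary training (SGD, dropout, early stopping) on the target train set.
```

Restricting the copy to lower layers mirrors the row's finding that the bottommost parameters carry most of the transferable, task-general information.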
apr cbmm memo april bridging gaps residual learning recurrent neural networks visual cortex qianli liao tomaso poggio center brains minds machines mcgovern institute mit abstract discuss relations residual networks resnet recurrent neural networks rnns primate visual cortex begin observation shallow rnn exactly equivalent deep resnet weight sharing among layers direct implementation rnn although orders magnitude fewer parameters leads performance similar corresponding resnet propose generalization rnn resnet architectures conjecture class moderately deep rnns model ventral stream visual cortex demonstrate effectiveness architectures testing dataset work supported center brains minds machines cbmm funded nsf stc award ccf introduction residual learning novel deep learning scheme characterized architectures recently achieved performance several popular vision benchmarks recent incarnation idea hundreds layers demonstrate consistent performance improvement shallower networks error achieved residual networks imagenet test set arguably rivals human performance recent claims networks alexnet type successfully predict properties neurons visual cortex one natural question arises similar residual network primate cortex notable difference depth residual network many layers biological systems seem two orders magnitude less make customary assumption layer architecture corresponds cortical area fact half dozen areas ventral stream visual cortex retina inferior temporal cortex notice takes order neural activity propagate one area another one remember spiking activity cortical neurons usually well evolutionary advantage fewer layers apparent supports rapid image onset meaningful information neural population visual recognition key ability human primates intriguingly possible account discrepancy taking account recurrent connections within visual area areas visual cortex comprise six different layers lateral feedback connections believed mediate attentional effects even learning backpropagation unrolling time recurrent computations carried visual cortex provides equivalent feedforward network might represent appropriate comparison computer vision models addition conjecture effectiveness recent neural networks primarily come fact efficiently model recurrent computations required recognition task show compelling evidences conjecture demonstrating deep residual network formally equivalent shallow rnn rnn weight sharing thus orders magnitude less parameters depending unrolling depth retain performance corresponding deep residual network furthermore generalize rnn class models models cortex show effectiveness equivalence resnet rnn intuition discuss simple observation residual network resnet approximates specific standard recurrent neural network rnn implementing discrete dynamical system described activity neural layer time nonlinear operator dynamical systems corresponds feedback system figure figure shows unrolling discrete time feedback system gives deep residual network shared weights among layers number layers unrolled network corresponds discrete time iterations dynamical system identity shortcut mapping characterizes residual learning appears figure thus resnets shared weights reformulated form recurrent system section show experimentally resnet shared weights retains performance comparison plain rnn resnet shared weights appendix figure unfold fold resnet shared weights resnet recurrent form figure formal equivalence resnet weight sharing rnn identity operator operator denoting nonlinear transformation 
called main text value input time kronecker delta function formulation terms dynamical systems feedback frame recurrent residual neural networks language dynamical systems consider dynamical systems discrete time though definitions carry continuous time neural network assume simplicity single layer neurons dynamical system dynamics defined activity neurons layer time continuous bounded function parametrized vector weights typical neural network synthesized following relation activity single neuron inputs nonlinear function linear rectifier standard classification dynamical systems defines system homogeneous alternatively equation reads inital condition time invariant residual networks weight sharing thus correspond homogeneous systems turn correspond feedback system see figure input time normal residual networks correspond homogeneous systems analysis corresponding inhomogeneous system provided appendix generalized rnn fully recurrent processing shown previous section recurrent form resnet actually shallow ignore possible depth operator section generalize moderately deep rnn reflects processing primate visual cortex graph propose general formulation capture computations performed processing hierarchy full recurrent connections hierarchy characterized directed cyclic graph vertices edges vertices set contains processing stages also call states take ventral stream visual cortex example lgn note retina listed since known feedback primate cortex retina edges set contains connections transition functions etc one example graph figure fully recurrent system receive raw inputs rather deep neural network serve preprocesser call preprocesser shown figure hand one also needs postprocessor provide supervisory signals recurrent system recurrent system trained fashion backpropagation models paper unless stated otherwise simple convolutional layer pipeline batch normalization relu global average pooling fully connected layer convolution use terms interchangeably take primate visual system instance retina part receive feedback cortex thus separated recurrent system simplicity section also tried layers convolutions might similar retina observed slightly better performance transition matrix set edges represented matrix element represents transition function state state one also extend representation matrix third dimension time element represents transition function state state time formulation transition functions retina loss input fully recurrent neural network full model pool loss loss conv input input optional simulating model time unrolling conv example resnet comparison figure modeling ventral stream visual cortex using fully recurrent neural network vary time blocked time time increased expressive power formulation allows design system multiple locally recurrent systems connected sequentially downstream recurrent system receives inputs upstream recurrent system finishes similar recurrent convolutional neural networks system weights also represent exactly resnet see figure nevertheless dynamical systems recurrent networks real neurons synapses offer interesting questions flexibility controlling time dependent parameters example transition matrices used paper shown figure multiple transition functions state outputs summed together shared weights weight sharing described level unrolled network thus possible unshared weights transition matrix even transitions stable time weights could given unrolled network weight sharing configuration described set whose element set tied pool conv pool size size loss 
size size input input resnet without changing spacial feature sizes conv loss size conv connection available specified time conv resnet changes spacial feature sizes recurrent form subsample increase features subsample increase features recurrent form figure show two types resnet corresponding rnns single state resnet spacial featural sizes fixed resnet spacial size reduced factor featural size doubled time meta parameter type corresponds ones proposed weights wim wim denotes weight transition functions state time requires weights wim initial values actual gradients used updating element sum gradients elements used original training objective rnns weights usually shared across time one could unshare weights share across states perform complicated sharing using framework notations unrolling depth readout time meaning unrolling depth may vary different rnn models since unrolling cyclic graph well defined paper adopt definition simulate time onset visual stimuli assuming transition function takes constant time use term readout time refer time reads data last state definition principle allows one quantitive comparisons biological systems model readout time paper wall clock time estimated considering latency single layer biological neurons regarding initial values states empty except first state data received inf connection available specified time inf inf conv example transition matrix fully recurrent example transition matrix fully recurrent example transition matrix fully recurrent modeling visual cortex inf inf conv transition matrix resnet transition matrix resnet figure transition matrices used paper denotes batch normalization conv denotes convolution deconvolution layer denoted deconv used transition function spacially small state spacially large one denotes pipeline similar residual module always nearby states stride convolution upsampling deconvolution used transition functions match spacial sizes input output states intermediate feature sizes transition function chosen average feature size input output states denotes identity shortcut mapping design transition functions could interesting topic future research start simulate transition function input state populated sequential static rnn model supports sequential data processing principle tasks supported traditional rnns see figure illustrations however batch normalization model use normalization described section might feasible tasks batch normalizations rnns additional observation found generally hurts performance normalization statistics average standard deviation learnable scaling shifting parameters batch normalization shared across time may consistent observations however good performance restored apply procedure call normalization mean standard deviation calculated independently every using training set learnable scaling shifting parameters models use learnable parameters since tend affect performance much expect procedure benefit rnns trained batch normalization however use procedure one needs initial enumerate possible feasible visual processing needs modifications tasks input recurrence output example sequential data processing figure model supports sequential includes mappings related work deep recurrent neural networks final model deep similar stacked rnn several main differences model feedback transitions hidden layers hidden layer model identity shortcut mappings inspired residual learning transition functions deep convolutional suggested term depth rnn could also refer connections model deep senses see section recursive 
neural networks convolutional recurrent neural networks unfolding rnn feedforward network weights many layers tied reminiscent recursive neural networks recursive first proposed recursive characterized applying operations recursively structure convolutional version first studied subsequent related work includes one characteristic distinguishes model residual learning recursive convolutional recurrent whether identity shortcut mappings discrepancy seems account superior performance residual learning model latters recent report became aware finished work discusses idea imitating cortical feedback introducing loops neural networks highway network feedforward network inspired long short term memory featuring general shortcut mappings instead hardwired identity mappings used resnet experiments dataset training details test models standard dataset images pixels color data augmentation performed way momentum used hyperparameter experiments run epochs batchsize unless stated otherwise learning rates first epochs epoch last epochs experiments used loss function softmax classification batch normalization used experiments learnable scaling shifting parameters used except last layer network weights initialized method described although expect initialization matter long batch normalization used implementations based matconvnet experiment resnet shared weights sharing weights across time conjecture effectiveness resnet mainly comes fact efficiently models recurrent computations required recognition task case one able reinterpret resnet rnn weight sharing achieve comparable performance original version demonstrate various incarnations idea show indeed case tested resnets described figure results shown figure epoch epoch param shared param epoch resnet epoch fully recurrent param shared param param shared param validation error param shared param resnet validation error fully recurrent param shared param training error training error param shared param training error resnet validation error resnet epoch epoch figure models robust sharing weights across time supports conjecture deep networks well approximated rnns transition matrices models shown figure param denotes number parameters resnet single state size height width features trained tested readout time resnet states size transition via simple convolution time state unrolled times fully recurrent states size trained tested readout time resnet generalization directly comparable resnet showing benefit states fully recurrent shared weights fewer parameters outperforms resnet weights sharing weights across convolutional layers less pure engineering interests one could push limit weight sharing sharing across time also across states show two resnets use single set convolutional weights across convolutional layers achieve reasonable performance parameters figure parameters feat param feat param validation error training error feat param feat param epoch epoch figure single set convolutional weights shared across convolutional layers resnet transition time nearby states max pooling stride means state unrolled times denotes number feature maps across states learning rates experiments except epochs used experiment recurrent neural networks shared weights although rnn usually implemented shared weights across time however possible unshare weights use independent set weights every time practical applications whenever one initial enumerate possible rnn weights feasible similar batch normalization described results fully recurrent neural networks shared weights 
shown figure effect readout time visual cortex useful information increases time proceeds onset visual stimuli suggests recurrent system might better representational power time allowed tried training testing fully recurrent network various readout time unrolling depth see section observe similar effects see figure larger models states shown effectiveness fully recurrent network comparing resnet discuss several observations regarding networks first models seem generally outperform ones expected since parameters introduced models minimum engineering able get validation error fully recurrent different readout time param param param param validation error training error param param param param epoch epoch figure fully recurrent network readout time see section definition readout time consistent performance improvement increases number parameters changes since recurrent connections contributing output thus number parameters subtracted total next computational efficiency tried allowing state transitions adjacent states disabling bypass connections case number transitions scales linearly number states increases instead quadratically setting performs well networks slightly less well networks perhaps result small sizes adjacent connections models longer fully recurrent finally fully recurrent networks models tend become overly computationally heavy train large large number feature maps small feature maps achieved better performance networks reducing computational cost training densely recurrent networks would important future work experiments subsection choose moderately deep three convolutional layers model layers retina essential outperforms shallow slightly within validation error results shown figure generalization across readout time rnn model supports training testing different readout time based theoretical analyses section representation usually guaranteed converge running model time nevertheless model exhibits good generalization time results shown figure minor detail model experiment adjacent connections expect affect conclusion recurrent neural networks full adjacent epoch training error full param adjacent param validation error training error full param adjacent param epoch full param adjacent param validation error recurrent neural networks small model epoch full param adjacent param epoch figure performance models state sizes model state sizes model models small since computationally heavy readout time models models systems weights shared across time discussion dark secret deep networks trying imitate recurrent shallow networks radical conjecture would effectiveness deep feedforward neural networks including limited resnet attributed ability approximate recurrent computations prevalent tasks larger shallow feedforward networks may offer new perspective theoretical pursuit question deep better shallow equivalence recurrent networks turing machines dynamical systems particular discrete time systems difference equations turing universal game life cellular automata demonstrated turing universal thus dynamical systems feedback systems discussed equivalent turing machine offers possibility representing computation complex single instance boolean function number learnable parameters consider instance powercase learning mapping input vector output vector belong space output thought asymptotic states discrete dynamical system obtained iterating map expect many cases dynamical system asymptotically performs mapping may much simpler structure direct mapping words expect mapping appropriate 
possibly large much simpler mapping means iterate map empirical finding recurrent network residual networks weight sharing work well key finding recurrent networks seem perform well deep residual networks state corresponds cortical area without shared weights one hand surprising number parameters much reduced hand recurrent network fixed parameters equivalent turing machine maximally powerful conjecture cortex recurrent computations cortical areas models cortex led deep convolutional architectures followed neocognitron hmax recent models neglected layering cortical area feedforward recurrent connections within area also neglected time evolution selectivity invariance areas conjecture propose paper changes picture quite drastically makes several interesting predictions area corresponds recurrent network thus model trained readout time error training set error test set error train test readout time figure training testing different readout time recurrent network adjacent connections trained readout time test system temporal dynamics even flashed inputs increasing time one expects asymptotically better performance masking mask input image flashed briefly disrupt recurrent computations area performance increase time even without mask briefly flashed images finally remark proposal unlike relatively shallow feedforward models implies cortex fact component areas computationally powerful universal turing machines acknowledgments work supported center brains minds machines cbmm funded nsf stc award ccf references references using deep learning models understand sensory cortex nature neuroscience christian friston modulation connectivity visual pathways attention cortical interactions evaluated structural equation modelling fmri cerebral cortex isaac caswell chuanqi shen lisa wang loopy neural nets imitating feedback loops human brain report stanford http google scholar time stamp march david eigen jason rolfe rob fergus yann lecun understanding deep architectures using recursive convolutional network arxiv preprint salah hihi yoshua bengio hierarchical recurrent neural networks dependencies citeseer kunihiko fukushima neocognitron neural network model mechanism pattern recognition unaffected shift position biological cybernetics april alex graves generating sequences recurrent neural networks arxiv preprint kaiming xiangyu zhang shaoqing ren jian sun deep residual learning image recognition arxiv preprint kaiming xiangyu zhang shaoqing ren jian sun delving deep rectifiers surpassing performance imagenet classification proceedings ieee international conference computer vision pages kaiming xiangyu zhang shaoqing ren jian sun identity mappings deep residual networks arxiv preprint sepp hochreiter schmidhuber long memory neural computation hupe james payne lomber girard bullier cortical feedback improves discrimination figure background neurons nature sergey ioffe christian szegedy batch normalization accelerating deep network training reducing internal covariate shift arxiv preprint minami ito charles gilbert attention modulates contextual influences primary visual cortex alert monkeys neuron alex krizhevsky learning multiple layers features tiny images alex krizhevsky ilya sutskever geoffrey hinton imagenet classification deep convolutional neural networks advances neural information processing systems pages victor lamme hans super henk spekreijse feedforward horizontal feedback processing visual cortex current opinion neurobiology laurent gabriel pereyra brakel ying zhang yoshua bengio batch 
normalized recurrent neural networks arxiv preprint ming liang xiaolin recurrent convolutional neural network object recognition proceedings ieee conference computer vision pattern recognition pages qianli liao joel leibo tomaso poggio important weight symmetry backpropagation arxiv preprint hrushikesh mhaskar qianli liao tomaso poggio learning real boolean functions deep better shallow arxiv preprint guido montufar razvan pascanu kyunghyun cho yoshua bengio number linear regions deep neural networks advances neural information processing systems pages razvan pascanu caglar gulcehre kyunghyun cho yoshua bengio construct deep recurrent neural networks arxiv preprint pedro pinheiro ronan collobert recurrent convolutional neural networks scene parsing arxiv preprint rajesh rao dana ballard predictive coding visual cortex functional interpretation effects nature neuroscience riesenhuber poggio hierarchical models object recognition cortex nature neuroscience november schmidhuber learning complex extended sequences using principle history compression neural computation thomas serre aude oliva tomaso poggio feedforward architecture accounts rapid categorization proceedings national academy sciences united states america richard socher cliff lin chris manning andrew parsing natural scenes natural language recursive neural networks proceedings international conference machine learning pages rupesh kumar srivastava klaus greff schmidhuber highway networks arxiv preprint simon thorpe denis fize catherine marlot speed processing human visual system nature andrea vedaldi karel lenc matconvnet convolutional neural networks matlab proceedings annual acm conference multimedia conference pages acm yamins dicarlo using deep learning models understand sensory cortex matthew zeiler dilip krishnan graham taylor rob fergus deconvolutional networks computer vision pattern recognition cvpr ieee conference pages ieee illustrative comparison plain rnn resnet plain rnn resnet recurrent form unfold unrolled rnn resnet weight sharing figure resnet reformulated recurrent form almost identical conventional rnn inhomogeneous resnet inhomogeneous version resnet shown figure let asymptotically power series expansion equation inhomogeneous version resnet corresponds standard resnet shared weights shortcut connections input every layer model one state experimentally observed shortcuts undesirably add raw inputs final representations degrade performance however unfold fold resnet shared weights shortcuts input layers folded figure inhomogeneous resnet model multiple states like visual cortex might first state receive constant inputs retina lgn figure shows performance inhomogeneous recurrent network comparison homogeneous ones recurrent neural networks full param adjacent param adjacent inhomogeneous param input homogeneous input inhomogeneous training error epoch full param adjacent param adjacent inhomogeneous param validation error epoch figure inhomogeneous models settings figure models
| 9 |
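The row above (label 9) argues that a deep residual network whose blocks share weights is equivalent to a shallow recurrent network unrolled for a chosen "readout time", and that increasing the readout time reuses the same parameters while increasing effective depth. Below is a minimal sketch of that reformulation. It is an illustration only, not the authors' code: the use of PyTorch, the channel count, and the particular convolutional transition function f are assumptions made for the example.

```python
# Minimal sketch (assumed PyTorch; sizes illustrative): a residual block whose
# weights are shared across "readout time" t, so unrolling it t times gives the
# recurrent reformulation of a t-block ResNet described in the row above.
import torch
import torch.nn as nn

class SharedWeightRecurrentBlock(nn.Module):
    def __init__(self, channels: int = 16):
        super().__init__()
        # A single transition function f; reusing it at every step is the weight
        # sharing that turns a deep residual stack into a shallow recurrent system.
        self.f = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x: torch.Tensor, readout_time: int = 5) -> torch.Tensor:
        h = x
        for _ in range(readout_time):      # unrolling depth = readout time
            h = torch.relu(h + self.f(h))  # residual update h_{t+1} = h_t + f(h_t)
        return h

if __name__ == "__main__":
    block = SharedWeightRecurrentBlock(channels=16)
    x = torch.randn(2, 16, 32, 32)
    # The parameter count does not grow with readout time, matching the row's
    # remark that recurrent connections add no parameters to the total.
    print(block(x, readout_time=3).shape, block(x, readout_time=10).shape)
```

Training and testing at different readout times, as reported in the row, simply amounts to calling the same module with different values of readout_time.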
existence slices tame context sophie marques jan contents keywords acknowledgment introduction hypotheses basic concepts notation definition slices slice theorem actions finite smooth group scheme unramified case definitions global slice theorem local slice theorem linearly reductive group scheme definition cohomological properties liftings linearly reductive group schemes tame quotient stack coarse moduli spaces exactness functor invariants definition tame quotient stack local definition tameness existence torsor slice theorem tame quotient stacks tame action affine group scheme tame quotient stack algebraic interlude tame actions exactness functor invariants tame actions relationship two notions tameness references abstract study ramification theory actions involving group schemes focusing tame ramification consider notion tame quotient stack introduced aov one tame action introduced cept establish local slice theorem unramified actions proving interesting lifting properties linearly reductive group schemes establish slice theorem actions commutative group schemes inducing tame quotient stacks roughly speaking show actions induced action extension inertia group finitely presented flat neighborhood finally consider notion tame action determine notion related one tame quotient stack previously considered date submitted cejm keywords group schemes ramification unramified freeness tameness slice theorem linearly reductive group schemes lifting actions quotient stack trivial cohomology acknowledgment would like thanks supervisors boas erez marco garuti also angelo vistoli yuri tschinkel help advise introduction locally topology actions constant group schemes slightly generally etale group schemes induced actions inertia groups one reconstruct original action action inertia group point theorem direct extension classic result decomposition finite extensions valuation field moving completion corollaire proposition statement instance slice theorem see theorem paper establish general slice theorem tameness hypothesis one motivation studying slices generality theory tame covers sense grothendieck murre admit slices see abhyankar lemma another motivation fundamental theorem luna states actions linearly reductive algebraic groups affine varieties tame see admit slices simple case action trivial inertia groups slice theorem simply statement freeness local proposition general tame quotient stacks introduced abramovich olsson vistoli study certain ramification issues arising moduli theory characterize tameness quotient stacks actions finite commutative group schemes via existence finitely presented flat slices linearly reductive slice groups theorem roughly speaking show actions induced action extension inertia group finitely presented flat neighborhood furthermore lifting inertia group constructed also case subgroup initial group flat linearly reductive group scheme theorem moreover theorem shows tameness characterized property inertia groups topological points linearly reductive suffices require geometric points one could expected definition tameness fact let consider dedekind rings abstract finite group invariant ring action prime ideal write kppq well known tame prime inertia group ppq order prime characteristic kppq last condition equivalent requiring group algebra ppqs semisimple constant group scheme ppq attached ppq linearly reductive ppq exactly inertia group action constant group scheme attached specpbq iii example independently chinburg pappas erez taylor introduced notion tame actions thus 
natural anticipate relationship two notions tameness prove quite general hypotheses tame actions define tame quotient stacks theorem fact additional hypotheses finiteness two notions equivalent theorem thus previous results apply notion tameness answer precisely question hypotheses basic concepts notation write fppf faithfully flat finitely presented throughout fix following notation let commutative noetherian unitary base ring modules algebras algebras commutative let specprq corresponding affine base scheme schemes scheme sscheme let resp base change resp particular scheme write resp instead resp base change resp given schemes fiber product together natural projections let flat finitely presented commutative hopf algebra denotes comultiplicaton unit map antipode affine flat group scheme associated specpbq affine scheme remark simplicity consider actions involving affine schemes even though following results true actions involving general schemes glueing definition tameness always generalized actions involving schemes see action denoted write structure map write structure map giving structure see use sigma notation hopf algebra literature specifically write paq presentation purely symbolic terms stand particular elements comultiplication takes values know elements integer sigma notation way separate words notation stands generic notation stands generic similarly right comodule respectively left comodule write pmq spectively denote ppq inertia group fiber product ppq specpkppqq kppq residue field specpkppqq morphism induced canonical morphism kppq galois map sending let pbq ring invariants action specpcq morphism induced inclusion remark also consider scheme data morphism defining action equivalent data morphism defining action denote xyy base change considered following say together fppf galois map isomorphism choose work fppf topology reasonable topology work assumptions write quotient stack associated action recall defined gsptq together morphism get canonical sending trivial together morphism present hypothesis fppf artin criterium see artin stack denote classifying stack quotient stack associated trivial action moreover stands categorical quotient category algebraic spaces algebraic space morphism algebraic space morphism factorizes via morphism making following diagram commute finite flat theorem remark quotient stack always defined soon flat group scheme instead exist necessarily even exist quotient stack gives information action categorical quotient precisely difference two holds existence automorphisms points stack automorphisms correspond inertia groups action slice theorem reducing fppf locally action action lifting inertia group would imply data quotient stack associated enough rebuild locally action case action finite group scheme see theorem denote resp category right resp left resp category right resp left let two right denote coma morphisms recall map called morphism ida denote category objects also right left structural map comodules pbnq pnq morphisms morphisms simultaneously acomodule morphisms write bima set morphisms submodule invariants pmqa defined exact sequence pmqa mba defines functor invariants left exact definition denote qcohg pxq quasicoherent sheaves see qcohg pxq artin stack denote qcohpxq sheaves qcohprx gsq qcohg pxq see mar chapter definition slices recall notion slices introduced definition definition say action admits respectively finitely presented flat slices categorical quotient category algebraic spaces exist resp finitely presented flat maps via 
morphism closed subgroup stabilizes point action induced subgroup called slice group remark roughly speaking respectively fppf topology action admits slices described action lifting inertia group point action thought neighborhood orbit neighborhood induced action lifting inertia group point called tubular neighborhood slice theorem actions finite smooth group scheme since finite smooth group scheme group scheme locally constant topology enough consider action constant group scheme let abstract finite group denote constant group scheme associated associated hopf algebra form recall first important facts data action equivalent data action see see theorem particular case also permits make transition algebra classic number theory algebraic geometry context understand later definitions operates transitively prime ideals see chap theorem inertia group scheme point constant group scheme associated algebraic inertia group ppq kppqu action induced ideal see iii example order alleviate writing use morphism integral see chap proposition prime ideal write csh strict henselization base change bbc also integral since want result locally topology without loss generality suppose following base local strictly henselian maximal ideal product local component pbm runs though set prime maximal ideals maximal ideal let prime ideal define ppq set maps ppq carries via notations obtain easily following isomorphism ppq ppq previous notations one obtains action rebuilt locally thanks action inertia group lemma chapitre canonical isomorphism compatible actions defined ppq bqp ppq moreover proof since ibp ppq let prime ideals put since action transitive moreover since suppose strictly henselian integral know bpi consequence written uniquely acts representation left translation permuting term direct sum particular obtain bqp proves isomorphism finally composite map ppq ppq induces isomorphism rewriting previous theorem terms gives exactly slices action finite group scheme theorem action finite group scheme admits slices proof first categorical quotient category algebraic space see theorem rest proof direct consequence previous theorem since already mentioned assume without loss generality constant form using previous notation take write chs strict henselization definition since finite take subextension specpchs containing image local ring slice group constant group scheme associated ppq specppb prime ideal maximal ideal local ring corresponding unramified case definitions definition say action unramified inertia group ppq trivial say action unramified unramified say action free galois morphism closed immersion lemma iii following assertions equivalent free unramified global slice theorem already following theorem theorem iii suppose finite flat following assertions equivalent unramified local slice theorem lemma suppose local maximal ideal following assertions equivalent free action ppq trivial proof first implication follows definition let prove let kpqq residue field quotient morphism since finite finite set specpkpqqq pqq primes since inertia trivial prime trivial prime since prime ideals conjugate finitely presented flat finite base change finitely presented flat finite base change action free words closed immersion see iii proposition surjection since bbr bbr finite nakayama lemma bbr bbr surjection hence action free lemma deduce easily following theorem local slice theorem free action theorem suppose finite flat let image via morphism following assertions equivalent inertia group scheme trivial finite base change 
finite flat morphism containing image action free words induced action denotes trivial group scheme proof previous lemma finite flat morphism containing image free since finite see theorem hence fppf morphism isomorphism left hand side acts factor translation right hand side acts first factor trivial linearly reductive group scheme definition notion tameness define class group schemes nice properties important following definition say linearly reductive group scheme exact cohomological properties cohomology linearly reductive group schemes interesting cohomological vanishing properties useful deformation involving group schemes even characterize linearly reductivity vanishing cohomology lemma see proposition suppose flat finite finitely presented specpkq field following assertion equivalent linearly reductive following lemma part proof lemma lemma linearly reductive exti plbk coherent sheaf suppose also smooth exti plbk coherent sheaf proof lemma cotangent complex lbk structural morphism field belongs dcoh pobk since field coherent sheaf locally free therefore coherent sheaf rhomplbk dcoh pobk since global section functor exact category cohpobk since linearly reductive obtain exti plbk coherent sheaf smooth lbk dcoh pobk exti plbk coherent sheaf liftings linearly reductive group schemes proposition know linearly reductive group scheme point lifted linearly reductive group neighborhood thanks result able prove lift group fppf locally subgroup following sense theorem let point finite flat group scheme finite linearly reductive closed subgroup scheme gkppq specpkppqq exists flat finitely presented morphism point mapping flat linearly reductive closed subgroup scheme whose pullback hkpqq isomorphic pullback kpqq proof let proposition exists morphism point mapping linearly reductive group scheme hkpqq kpqq set specpr one subgroup scheme gkppq defines representable morphism algebraic stacks bkpqq bkpqq gkpqq kpqq want prove existence representable morphism algebraic stacks filling following diagram bkpqq bkpqq gkpqq prove thanks grothendieck existence theorem algebraic stacks see theorem artin approximation theorem see existence depends existence formal deformation morphisms bun hun bun gun filling following diagram bkpqq kpqq gkpqq specpkpqqq theorem obstruction extend morphism lies lbkpqq gkpqq kpqq bkpqq obkpqq gkpqq trivial lemma follows exists arrow filling previous diagram leads existence representable morphism stacks let image via trivial furthermore functor induces homomorphism auts phq automorphism group scheme image since image pullback canonically trivial torsor automorphism group defines group morphism fppf morphism since representable morphism injective finally since proper separated closed tame quotient stack coarse moduli spaces abramovich olsson vistoli introduced notion tame stack recalling definition need additional terminology definition coarse moduli space quotient stack couple algebraic space morphism stacks morphism algebraic space factor algebraically closed field set isomorphism classes geometric points taking value remark notion coarse moduli spaces generalized general artin stack coarse moduli space quotient stack particular categorical quotient category algebraic spaces action precisely let algebraic space datum morphism equivalent datum morphism spaces particular canonical map induces map geometric quotient categorical category algebraic spaces coarse moduli space details see mar finite flat coarse moduli space see theorem suppose group scheme scheme finitely 
presented inertia group schemes finite quotient stack admits coarse moduli space proper moreover induces functor qcohprx gsq qcohpmq proof since finitely presented since surjective flat finite presentation also finite presentation finite presentation theorem hypotheses insure quotient stack admits coarse moduli space denote proper particular thus lem induced morphism sheaves qcohprx gsq qcohpmq well defined exactness functor invariants lemma map defined remark induces functor qcohprx gsq qcohpyq functor exact functor invariants exact proof mar lemma know map quasiseparated thus lem induced morphism sheaves qcohprx gsq qcohpmq well defined defined making diagram bellow commute thus induces following commutating diagram functors qcohprx gsq qcohpyq qqq qqq qcohg pxq functor equivalence categories see mar proposition moreover following commutative diagram qcohg pxq qcohpyq functors global section equivalences categories proves lemma theorem suppose noetherian finitely presented finitely presented inertia groups finite functor invariants exact map coarse moduli space proper proof theorem quotient categorical category algebraic spaces theorem insures quotient stack admits coarse moduli space denote proper since categorical quotients category algebraic spaces obtain unicity coarse moduli space definition tame quotient stack finally define tame quotient stack since need existence coarse moduli space end section suppose flat finitely presented finitely presented inertia groups geometric points action finite moreover denote coarse moduli space proper map see theorem definition say quotient stack tame functor qcohrx qcohpmq exact remark notion tameness defined similarly general stacks see definition finite flat tame functor invariants exact consequence lemma since coarse moduli space finite flat finite flat linearly reductive classifying stack tame tameness local fact lemma proposition morphism consider following diagram suppose resp coarse moduli space resp faithfully flat quotient stack tame quotient stack tame quotient stack tame quotient stack also tame proof diagram deduce functors isomorphic since flat flat well exact also exact assumption composite exact hence since faithfully flat also exact required first suppose open immersion let exact sequence orx set exact moreover since adjunction morphism isomorphism since exact assumption also surjective surjective well since open immersion surjective finally since isomorphic functors surjective consider morphism schemes since tameness property stack zariski local assume affine also affine functor exact assumption exact therefore exact functor property sequence exact exact follows exact required local definition tameness theorem theorem following assertions equivalent quotient stack tame inertia groups specpkq linearly reductive groups specpkq field inertia groups specpkq linearly reductive groups geometric point specpkq algebraically closed field inertia groups ppq specpkppqq linearly reductive groups point denote image via morphism exist fppf also chosen surjective morphism containing image linearly reductive group scheme hkpqq ppq acting finite finitely presented scheme isomorphism algebraic stacks proof let specpkq field inertia group quotient stack rgk scheme denote since square specpkq specpkq specpkq affine since specpkq affine let consider following commutative diagram specpkpqqq since seen affine qcohpbkig qcohprx gsq exact functor qcohprx gsq qcohpyq exact definition tameness since exact sequence considered exact sequence quasi coherent 
sheaves following exact sequence qig qig qig moreover left exact implies qig qig qig exact linearly reductive immediate theorem enough prove trivial algebraically closed field denoting composite specpkppqq subgroup trivial assumption thus trivial see theorem see lemma obtain following corollary corollary corollary stack tame morphism specpkq algebraically closed field geometric fiber specpkq specpkq tame definition say tame inertia group ppq linearly reductive remark previous theorem tame tame existence torsor state interesting consequence previous theorem permits define torsor tame quotient stack actions finite commutative group schemes proposition suppose finite commutative flat quotient stack tame point denoting image morphism exist finitely presented flat morphism containing image subgroup lifting inertia group proof notice first assumption theorem finite affine group scheme previous theorem know inertia linearly reductive lemma finitely presented flat morphism containing image linearly reductive group lifting inertia group subgroup moreover inertia group image quotient morphism action equal ppq ppq pig ppq hkppq teu theorem passing finitely presented flat neighborhood action free thus since finite see theorem remark would useful establish previous proposition general commutative case necessarily group scheme even define notion action torsor taking normal closure instead establish result slice theorem tame quotient stacks manage prove slice theorem actions finite commutative group scheme using following lemma lemma proposition let subgroup quotient natural translation action exists universal let morphism schemes preserving let defined fibered product two maps inclusion assume balanced product exists universal quotient isomorphism finally get following slice theorem extends theorem see theorem suppose commutative finite quotient stack tame action admits finitely presented flat slices slice group linearly reductive proof commutative since proposition passing finite finitely presented flat neighborhood subgroup defines torsor gives morphism fppf base change induced action using notation previous lemma converse due theorem since action admits finitely presented flat slices slice group linearly reductive ppq linearly reductive tame action affine group scheme tame quotient stack algebraic interlude following lemma quite important sense relates functor invariants functor coma consequence relates particular exactness functors important order compare two notions tameness see lemma lemma suppose hopf algebra flat let finitely presented natural functorial isomorphism coma proof commutativity right square diagram insures existence isomorphism well coma homr homr idb aqpmq indeed map isomorphism composite isomorphism defined inverse map defined canonical isomorphism homr pmqsq pbq pmq pmq pbq pmq tame actions chinburg erez pappas taylor defined article notion tame action recall definition useful properties definition say action tame unitary morphism means map ida morphism called total integral tame actions stable base change lemma action tame affine base change action also tame proof since action tame map induces naturally comodule map next lemma allow assume base equal quotient structural map properties quotient morphism lemma following assertions equivalent action cgq tame action tame action tame proof let total integral tame action cgq since via composite maps unitary map total integral action follows base change lemma denote total integral action cgq recall idc via idc map via idc consider 
composite comes algebra multiplication ida qppidc qpc bqq ida pbq since cqq since algebra thus map also compositions maps exactness functor invariants tame actions case constant group scheme lemma tameness action equivalent surjectivity trace map generalizes characterization tameness number theory see chapter theorem justifies choice terminology general case define projector plays role trace map case action constant group scheme see section lemma total integral map define projector called reynold operator prm prm pmq prm pmq proof aqp thus obtain pby since antimorphism definition antipodeq properties counity prm pmq pprm pmqq prm pmq pmqa prm prm moreover pmqa pmq hence prm pmq since proves prm reynold operator existence projector insures functor invariants exact tame actions following lemma permit relate previous two notions tameness lemma lemma action tame functor invariants exact proof let exact sequence bma using notation previous lemma following commutative diagram left exactness automatic right exactness follows previous diagram adding finite hypothesis able prove exactness functor invariant equivalent tameness action proposition suppose locally noetherian flat finite locally free following assertions equivalent action tame functor pnqa exact action cgq tame functor pnqca exact proof follows lemma follows mac follows lemma suppose exact since finite base change finite finite finite moreover locally noetherian also finite presentation algebras particular lemma since suppose flat following isomorphism comc comc since section surjective thus exactness obtain surjectivity finally isomorphisms imply surjectivity natural map comc comc insures existence map previous lemma permits conclude proof relationship two notions tameness first also prove easily tame actions defines always tame quotient stacks theorem suppose noetherian finitely presented finitely presented inertia groups finite action tame quotient stack tame proof follows directly lemma corollary manage prove equivalence two notions tameness defined paper actions involving finite group schemes flat theorem suppose finite locally free action tame quotient stack tame moreover noetherian flat converse true proof know coarse moduli space moreover qcohprx gsq qcohpyq exact lemma thus tame converse follows proposition remark replace hypothesis noetherian finite type fact theorem finite type also noetherian since supposed noetherian obtain easily following corollary corollary suppose noetherian finite flat linearly reductive trivial action tame remark say hopf algebra relatively cosemisimple submodules direct summands also direct summands hopf algebra relatively cosemisimple trivial action pspecprq tame see particular finite flat also equivalent linearly reductive following result seen analogue trace surjectivity proved constant case corollary suppose finite locally free locally noetherian flat following assertions equivalent action tame quotient stack tame functor pnqa exact reynold operator prm proof follows previous theorem follows lemma functor left exact enough prove exactness right let bima epimorphism induces morphism surjectivity moreover prm pmq pmqq prn therefore surjective references jarod alper good moduli spaces artin stacks proquest llc ann arbor thesis university abramovich olsson vistoli tame stacks positive characteristic ann inst fourier grenoble artin algebraic approximation structures complete local rings inst hautes sci publ bourbaki lecture notes mathematics vol masson paris chapitres algebra chapters brzezinski 
wisbauer corings comodules volume london mathematical society lecture note series cambridge university press cambridge algebraic number theory proceedings instructional conference organized london mathematical society nato advanced study institute support inter national mathematical union edited cassels academic press london chinburg erez equivariant characteristics tameness geneva chinburg erez pappas taylor tame actions group schemes integrals slices duke math conrad theorem via stacks preprint university stanford available http demazure gabriel groupes tome groupes commutatifs masson cie paris avec appendice corps classes local par michiel hazewinkel doi hopf extensions algebras maschke type theorems israel hopf algebras grothendieck murre tame fundamental group formal neighbourhood divisor normal crossings scheme lecture notes mathematics vol springerverlag berlin hartshorne algebraic geometry new york graduate texts mathematics kemper characterization linearly reductive groups invariants transform groups laumon champs volume ergebnisse der mathematik und ihrer grenzgebiete folge series modern surveys mathematics results mathematics related areas series series modern surveys mathematics berlin luna slices sur les groupes pages bull soc math france paris soc math france paris mar sophie marques tameness actions affine group schemes quotient stacks preprint available https olsson deformation theory representable morphisms algebraic stacks math olsson sheaves artin stacks reine angew raynaud anneaux locaux lecture notes mathematics vol springerverlag berlin serre corps locaux hermann paris publications nancago viii marques sophie visiting assistant professor new york university courant institute mathematical sciences mercer new york usa address marques
| 0 |
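The row above (label 0) repeatedly invokes a "total integral" and a "Reynolds operator" whose formulas were destroyed by the extraction. The LaTeX below is a hedged reconstruction of those two definitions in the row's own notation (G = Spec(B) a flat affine group scheme acting on X = Spec(A), M a relative Hopf module with invariants M^A); the explicit antipode formula for the projector is the standard Doi-type expression and is an assumption about the exact form used in the source.

```latex
% Hedged reconstruction of definitions garbled in the row above.
\begin{itemize}
  \item \textbf{Total integral / tame action.} The action of $G=\operatorname{Spec}(B)$
        on $X=\operatorname{Spec}(A)$ is \emph{tame} if there exists a morphism of
        $B$-comodules $\alpha\colon B \to A$ that is unitary, i.e.\ $\alpha(1_B)=1_A$;
        such an $\alpha$ is called a \emph{total integral}.
  \item \textbf{Reynolds operator.} A total integral $\alpha$ induces a projector
        $\mathrm{pr}_M\colon M \to M^{A}$ with $\mathrm{pr}_M(m)=m$ for $m\in M^{A}$
        (in Doi-type theory one may take
        $\mathrm{pr}_M(m)=\sum m_{(0)}\,\alpha\!\bigl(S(m_{(1)})\bigr)$, $S$ the antipode;
        the precise form in the source is not recoverable from the row).
        Existence of these projectors forces the invariants functor
        $M \mapsto M^{A}$ to be exact.
\end{itemize}
```

As the row states for $G$ finite and locally free over a suitable base, these notions line up: the action is tame if and only if the quotient stack $[X/G]$ is tame (all inertia groups linearly reductive), if and only if the invariants functor is exact.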
nov multivariate intensity estimation via hyperbolic wavelet selection nathalie akakpo laboratoire lpma umr pierre marie curie upmc paris centre recherches crm umi udem abstract propose new statistical procedure able way overcome curse dimensionality without structural assumptions function estimate relies type penalized criterion new collection models built hyperbolic biorthogonal wavelet bases study properties unifying intensity estimation framework inequality adaptation mixed smoothness shown hold besides describe algorithm implementing estimator quite reasonable complexity keywords hyperbolic wavelets biorthogonal wavelets mixed smoothness model selection density copula poisson process process contents introduction framework examples general framework examples estimation given pyramidal wavelet model wavelets hyperbolic wavelet basis pyramidal models type estimator pyramidal model quadratic risk pyramidal model wavelet pyramid model selection penalized pyramid selection combinatorial complexity choice penalty function back examples adaptivity mixed smoothness function spaces dominating mixed smoothness link structural assumptions approximation qualities minimax rate implementing wavelet pyramid selection algorithm computational complexity illustrative examples proofs proof proposition proof proposition address date november multivariate intensity estimation via hyperbolic wavelet selection proof proposition proof theorem proofs corollaries proof proposition proof theorem references introduction last decades many wavelet procedures developed various statistical frameworks yet multivariate settings based isotropic wavelet bases indeed advantage easily tractable univariate counterparts since isotropic wavelet tensor product univariate wavelets coming resolution level notable counterexamples underline usefulness hyperbolic wavelet bases coordinatewise varying resolution levels allowed recover wider range functions particular functions anisotropic smoothness much attention also paid curse dimensionality common way overcome problem statistics impose structural assumptions function estimate regression framework beyond additive models may cite work propose method additive model unknown link function use decompositions besides two landmark papers consider general framework composite functions encompassing several classical structural assumptions propose procedure white noise framework whereas propose general model selection procedure wide scope applications finally lepski see also consider density estimation adaptation possibly multiplicative structure density meanwhile field approximation theory numerical analysis renewed interest function spaces dominating mixed smoothness growing see instance due tractability multivariate integration instance spaces impose structure highest order derivative mixed derivative surprisingly statistical literature seems procedures deal spaces either white noise framework functional deconvolution model order fill gap paper devoted new statistical procedure based wavelet selection hyperbolic biorthogonal bases underline universality studying general intensity estimation framework encompassing many examples interest density copula density poisson intensity jump intensity estimation first define whole collection linear subspaces called models generated subsets dual hyperbolic basis type criterion adapted norm induced primal hyperbolic basis describe procedure choose best model data using penalized approach similar procedure satisfies inequality provided intensity 
estimate bounded besides reaches minimax rate constant factor logarithmic factor wide range spaces dominating mixed smoothness rate akin one would obtain univariate framework notice contrary allow greater variety spaces sobolev besov type smoothness also spatially nonhomogeneous smoothness purpose prove key result nonlinear approximation theory spirit may interest types model selection procedures see instance depending kind intensity estimate different structural assumptions might make sense considered explain respect structural assumptions fall within scope estimation dominating mixed smoothness yet emphasize need impose structural assumptions multivariate intensity estimation via hyperbolic wavelet selection target function thus way method adaptive time many structures besides implemented computational complexity linear sample size logarithmic factors plan paper follows section describe general intensity estimation framework several examples interest section define pyramidal wavelet models type criterion provide detailed account estimation given model section devoted choice adequate penalty perform model selection optimality resulting procedure minimax point view discussed section mixed smoothness assumptions algorithm implementing wavelet procedure illustrative example given section proofs postponed section let end remark notation throughout paper stand numerical constants positive reals depend values allowed change line line framework examples general framework let given hyperrectangle equipped borel lebesgue measure denote space square integrable functions equipped usual norm ktk scalar product article interested nonnegative measure admits bounded density respect lebesgue measure aim estimate function given probability space assume exists random measure defined values set borel measures classical convergence theorems condition implies nonnegative bounded measurable functions tdm close enough sense assume observe random measure made precise later observed set course examples general framework encompasses several special frameworks interest shall show example density estimation given observe identically distributed random variables common density respect lebesgue measure observed empirical measure given obviously satisfies example copula density estimation given observe independent identically distributed random variables values coordinate xij continuous distribution function recall sklar theorem see also instance exists unique distribution function uniform marginals xid multivariate intensity estimation via hyperbolic wavelet selection function called copula xid assume admits density respect lebesgue measure since joint distribution function random measure satisfying given xid marginal distributions usually unknown replace empirical distribution functions define xid example poisson intensity estimation let denote vold lebesgue meaqd sure observe poisson process whose mean measure intensity vold otherwise said finite family disjoint measurable subsets independent poisson random variables respective parameters vold vold therefore empirical measure vold satisfy assume constant throughout poisson process may nonhomogeneous example jump intensity estimation continuous time let fixed positive real observe process values otherwise said process starting stationary independent increments continuous probability trajectories see instance process may jumps whose sizes ruled jump intensity measure measure important example process compound poisson process univariate homogeneous poisson process 
values distribution mass independent case also measure assume measure admits density respect lebesgue measure given compact hyperrectangle aim estimate restriction purpose use observed empirical measure property processes states random measure defined poisson process mean measure dtdx satisfies multivariate intensity estimation via hyperbolic wavelet selection example jump intensity estimation discrete time framework example except observed given time step disposal random variables order estimate consider random measure unobserved replaced estimation purpose estimation given pyramidal wavelet model first step estimation procedure relies definition finite dimensional linear subspaces called models generated finite families biorthogonal wavelets describe models general hyperrectangle adequate models deduced translation scaling introduce type contrast allows define estimator within given wavelet model wavelets shall first introduce multiresolution analysis wavelet basis satisfying general assumptions concrete examples wavelet bases satisfying assumptions may found instance sequel denote positive constant depends choice bases fix coarsest resolution level one hand assume scaling spaces vect vect satisfy following hypotheses riesz bases linearly independent functions form riesz bases dimension exists nonnegative integer dim dim nesting density biorthogonality let localization let almost disjoint supports max supp supp supp supp norms max polynomial reproducibility primal scaling spaces exact order set polynomial functions degree hand wavelet spaces vect vect fulfill following conditions riesz bases functions linearly independent together form riesz basis holds multivariate intensity estimation via hyperbolic wavelet selection orthogonality biorthogonality let localization let almost disjoint supports max supp supp supp supp norms max fast wavelet transform let holds remarks properties imply function may decomposed properties imply dim property means particular resolution level wavelet represented linear combination scaling functions resolution level number components bounded independently level well amplitude coefficients well known contrary orthogonal bases biorthogonal bases allow symmetric smooth wavelets besides properties dual biorthogonal bases usually usually decomposition analysis wavelets one null moments whereas synthesis wavelets one greatest smoothness yet may sometimes need following smoothness assumptions analysis wavelets restrictive practice bound residual terms due replacement assumption lipschitz functions lipschitz norms satisfying still refer examples wavelet bases satisfying additional assumption hyperbolic wavelet basis sequel ease notation set given biorthogonal basis chosen according deduce biorthogonal wavelets tensor product precisely set define contrary statistical works based wavelets thus allow tensor products univariate wavelets coming different resolution levels writing families define biorthogonal bases called biorthogonal hyperbolic bases indeed wkd multivariate intensity estimation via hyperbolic wavelet selection way besides induce norms equivalent equality wavelet basis orthogonal noticed scalar product derived instance ihu pyramidal models wavelet basis dimension natural pyramidal structure wavelets grouped according resolution level hyperbolic basis provided define proper notion resolution level takes account anisotropy wavelet define global resolution level thus supports wavelets corresponding given global resolution level volume roughly exhibit different 
shapes define index set wavelets resolution level given maximal resolution level define family sets form may subset elements typically chosen impose sparsity expected smaller total number wavelets level decrease resolution level increases adequate choice proposed proposition thus choosing set amounts keep hyperbolic wavelets level deeper levels set define pyramidal model finite dimensional subspace form vect denote dimension setting see pyramidal models included type estimator pyramidal model let fix model random measure observed build type estimator values associated norm defined indeed setting deduce minimizes ihs introduce argmin multivariate intensity estimation via hyperbolic wavelet selection sequences reals hence consider contrast since observe random measure define best estimator within argmin quadratic risk pyramidal model let introduce orthogonal projection norm follows unbiased estimator unbiased estimator thanks pythagoras equality recover usual decomposition var first term bias term approximation error second term variance observed combining triangle inequality basic term estimation error inequality easily provides least akin residual term proposition var taken equal equality holds examples introduced section shall verify quadratic risks satisfies describes amount available data residual term weigh much upon estimation rate multivariate intensity estimation via hyperbolic wavelet selection example density estimation continued framework empirical coefficients form wavelets normalized bounded var hence satisfied instance example copula density estimation continued case xid xid example var besides prove section following residual terms proposition assumption log hence choosing log log yields example poisson intensity estimation continued case vold campbell formula var vold vold example jump intensity estimation continuous time observations continued case campbell formula var dtdx multivariate intensity estimation via hyperbolic wavelet selection example jump intensity estimation discrete time observations continued case empirical coefficients approximate counterparts form deduce previously var besides bound residual term thanks following proposition proved section proposition assumption provided small enough assuming stays bounded choosing deduce satisfied assumption log notice assumptions classical framework observations remark proposition extends multivariate model complex structure due use hyperbolic wavelets instead isotropic ones yet extension straightforward give detailed proof section wavelet pyramid model selection risk one pyramidal model suggests good model large enough approximation error small small enough estimation error small without prior knowledge function estimate choosing best pyramidal model thus impossible section describe procedure selects best pyramidal model data without using smoothness assumption provide theoretical results guarantee performance procedure underline properties linked structure collection models penalized pyramid selection observed deduce var following work introduce penalty function pen choose best pyramidal model data defined argmin pen order choose pyramidal model smallest quadratic risk penalty pen expected behave roughly estimation error within model provide penalty following section final estimator combinatorial complexity choice penalty function widely examplified instance choice adequate penalty depends combinatorial complexity collection models measured index log max common dimension pyramidal models ideally index independently 
sample size resulting model selection procedure reach optimal estimation rate following proposition describes combinatorial complexity collection pyramidal models multivariate intensity estimation via hyperbolic wavelet selection proposition let let common dimension models exists positive reals log remind defined section assumption possible values given proof postponed section way could prove matching log large enough whole family contains order models typically choose power sample size contains least exponential number models number models per dimension moderate enough combinatorial index bounded assume satisfied well following hypotheses let subfamily sup assumption conc exist positive reals countable subfamily satisfying sup positive constant exp assumption var exist nonnegative constant collection estimators max besides exist nonnegative constant nonnegative function measurable event var max assumption rem function assumption var nonnegative constant max log multivariate intensity estimation via hyperbolic wavelet selection assumption conc describes random measure concentrates around measure estimate assumption var ensures estimate variance terms last assumption rem describes close theorem assume assumptions conc var rem satisfied max choose log penalty form pen positive large enough max log min may depend may depend practice penalty constants calibrated simulation study may also replace penalty max extend theorem random using arguments similar back examples first two general remarks order let kft countable subfamily sup assumption conc usually proceeds talagrand type concentration inequality besides seen section general thus whenever known assumption var satisfied one may also estimate variance term propose following results proved section max var corollary density estimation framework see let max log pen positive large enough max min may depend may depend corollary copula density estimation framework see let max min log log define xid xjd multivariate intensity estimation via hyperbolic wavelet selection let pen assumption positive large enough log max min may depend may depend corollary poisson intensity estimation framework see let max vold log vold vold pen vold positive large enough max min vold vold may depend may depend corollary jump intensity estimation framework continuous time observations see let max log pen positive large enough max min may depend may depend corollary jump intensity estimation framework discrete time observations see let max min log pen assumption satisfied stays bounded positive large enough min max may depend may depend corollaries extend respectively works multivariate framework complex family models allowing nonhomogeneous smoothness refined penalty multivariate intensity estimation via hyperbolic wavelet selection adaptivity mixed smoothness remains compare performance procedure estimators purpose derive estimation rate smoothness assumptions induce sparsity hyperbolic wavelet coefficients compare minimax rate function spaces dominating mixed smoothness mixed sobolev space smoothness measured defined kswp swp classical sobolev space kwp former contains functions whose highest order derivative mixed derivative latter contains derivatives global order spaces coincide dimension otherwise obvious continuous embeddings swp besov spaces mixed dominating smoothness may defined thanks mixed differences generally order univariate difference operator univariate modulus continuity order defined sup denote univariate difference operator applied coordinate 
keeping ones fixed subset order mixed difference operator given set define mixed modulus continuity wre sup mixed space shp space functions kshp sup multivariate intensity estimation via hyperbolic wavelet selection finite convention term associated generally mixed besov space sbp space functions ksbp replaced case sbp shp comparison usual besov space may defined space functions kbp sup finite extending recent results confirm continuous embeddings sbp hold fairly general assumptions hand given define sup way replacing denote set functions appropriate conditions smoothness assume satisfied sequel sets may interpreted balls radius besov spaces dominating mixed smoothness sbp see instance mixed sobolev spaces easily characterized terms wavelet coefficients satisfy compact embeddings sbp min swp sbp max see section without loss generality shall mostly turn attention spaces sequel link structural assumptions following property collects examples composite functions mixed dominating smoothness built lower dimensional functions classical sobolev besov smoothness proof norms composite functions given section analogous property mixed sobolev smoothness instead mixed besov smoothness proved straightforwardly proposition let sbp let partition min iii let swp sbp sbp multivariate intensity estimation via hyperbolic wavelet selection sbp either product function sbp notice resp iii assumptions component functions enough ensure resp remark believe generalization iii besov fractional sobolev smoothness holds yet generalization would require refined arguments approximation theory spirit beyond scope paper structural assumption may satisfied multivariate density estimation framework whenever split independent coordinates recently considered case generalization iii may directly use multivariate intensity framework allow draw comparison combining iii interest copula density estimation mind wide nonparametric family copulas archimedean copulas see chapter densities form provided generator smooth enough see instance combining iii may interest intensity estimation indeed popular way build multivariate intensities based copulas studied see also chapter resulting intensities form copula besides common form appropriate smoothness assumptions last let emphasize linear tion mixtures instance functions sbp inherits smoothness consequently mixed dominating smoothness may thought fully nonparametric surrogate wide range structural assumptions approximation qualities minimax rate provide section constructive proof following nonlinear approximation result spirit theorem let max exists model approximation max remark kind result still holds assumption really useful smoothness case first term linear approximation error highest dimensional model collection order deduce section first term optimal sbp least max instance second term nonlinear approximation error within model dimension order deduce theorem second term order log also optimal constant factor sbp least max notice classical besov smoothness assumption best possible approximation rate linear subspaces would order thus mixed smoothness order dimension recover multivariate intensity estimation via hyperbolic wavelet selection approximation rate classical smoothness order dimension logarithmic factor let define sequel use notation exist positive reals corollary assume large enough max log sup proof order minimize approximately choose max instance max log yields announced remember similar result holds replacing equivalent though unusual corollary indeed related minimax 
rate proposition density estimation framework assume either min min inf sup log estimator proof one may derive theorem proof theorem link entropy number kolmogorov entropy kolmogorov log according proposition density estimation framework minimax risk order yields announced rate consequently density estimation framework penalized pyramid selection procedure minimax constant factor logarithmic factor otherwise let end comments estimation rates first remind imax rate assumption order thus mixed smoothness assumption order recover logarithmic factor rate smoothness order dimension obtained smoothness order classical smoothness assumption dimension besides multiplicative constraint proposition recover rate logarithmic factor generalized additive constraint iii proposition recover rate section logarithmic factor regarding neumann seminal work estimation mixed smoothness see section first adaptive wavelet thresholding proved optimal logarithmic factor another nonadaptive one proved optimal constant positive integer procedure thus outperforms time adaptive minimax optimal constant two classes many ones multivariate intensity estimation via hyperbolic wavelet selection implementing wavelet pyramid selection end paper quick overview practical issues related wavelet pyramid selection perform selection within large collection models typically number models exponential sample size must guarantee estimator still computed reasonable time besides provide simulation based examples illustrating interest new method algorithm computational complexity theorem supports choice additive penalty form pen detailed expressions several statistical frameworks given section penalized selection procedure amounts choose argmax crit crit since roughly estimate variance method though different thresholding procedure mainly retain empirical wavelet coefficients significantly larger variance remarkable thing due structure collection models penalty function penalized estimator determined without computing preliminary estimators makes computation feasible practice indeed proceed follows step determine argmax purpose enough compute sort decreasing order coefficients keep indices yield greatest coefficients step determine integer argmax crit global computational complexity thus log typically choose order resulting computational complexity order log log logd illustrative examples section study two examples dimension using haar wavelets first density estimation framework consider example coordinates independent conditionally categorical variable density may written probability vector characterizing distribution compact interval let denote beta density parameters shifted rescaled support uniform density example take multivariate intensity estimation via hyperbolic wavelet selection resulting mixture density shown figure choose log first compute estimator model provides estimator max use penalty pen sample size figure illustrates procedure first selects rough model figure add details wherever needed figure summing two yields pyramid selection estimator figure way comparison also represent figure widely used estimator bivariate gaussian kernel estimator known support option implemented matlab ksdensity function observe contrary kernel density estimator pyramid selection estimator recovers indeed main three modes particular sharp peak figure pyramid selection standard kernel example mixture multiplicative densities copula density estimation framework consider example copula either frank copula clayton copula conditionally binary 
variable precisely consider mixture copula density frank copula parameter density clayton copula parameter two examples archimedean copula densities shown figure resulting mixture figure use penalty previous example adapted course copula density estimation framework illustrate figure pyramid selection procedure sample size though theoretical conditions fully satisfied pyramid selection procedure still provides reliable estimator conclusion examples suggest haar pyramid selection already provides useful new estimation procedure encouraging pyramid selection based higher order wavelets whose full calibration based extensive simulation study framework subject another work multivariate intensity estimation via hyperbolic wavelet selection figure left frank copula density parameter right clayton copula density parameter figure pyramid selection example mixture copula density proofs shall use repeatedly classical inequality positive proof proposition prove indeed pyramidal model subset common residual terms assumption thanks assumptions according massart version inequality see positive exists event exp setting thus multivariate intensity estimation via hyperbolic wavelet selection hence exp finally order see proposition choosing log log proof rproposition bounded measurable function let denote var var shall bound using decomposition process big jump compound poisson process independent small jump process let fix small enough kxk denote characteristic triplet stands drift measure density respect lebesgue measure see section distributed independent processes following characteristics first process characteristic triplet drift measure process compound poisson process homogeneous poisson process intensity kxk density independent conditioning using aforementioned independence properties yields yields conditioning using independence kxk writing using leads kxk compact support let denote coordinate maximal distance reached instance deduce proof lemma see also equation exists exp log log multivariate intensity estimation via hyperbolic wavelet selection assumption lipschitz besides compact bounded away origin exists measure compactly supported satisfies finite since measure see instance theorem deduce theorem hence markov inequality finally fixing min inf kxk min max max combining yields proof proposition due hypotheses hence let fix number equal number partititions integer nonnegative integers hence last two displays classical binomial coefficient see instance proposition yield let fix model satisfies obviously multivariate intensity estimation via hyperbolic wavelet selection besides choice proposition holds number subsets satisfies let function log increasing deduce log setting log log one may take instance log log log proposition proof theorem notation preliminary results hyperbolic wavelet bases inherit underlying univariate wavelet bases localization property stated follows lemma let sequence max max instance proof using assumptions section get max max deduce proof proposition allows conclude define set sup sup multivariate intensity estimation via hyperbolic wavelet selection lemma let proof proof follows linearity inequality lemma let exists measurable event max exp let set proof observe consider countable dense subset thanks localization property lemma sup assumption conc ensures exists exp hence obtain convexity lemma var given lemma satisfies proof follows assumption var proof theorem let fix definition get pen pen pen pen using triangle inequality inequality get hence pen pen multivariate 
intensity estimation via hyperbolic wavelet selection let fix set log deduce lemma max besides given proposition choice leads log exp choosing instance pen integrating respect deduce assumption var assumption conc may depend order bound first notice triangle inequality lemma hence setting inequality entails let applying assumption conc get exp min setting exp proposition yields log may depend besides deduce assumption conc lemma exp nonnegative random variable fubini inequality implies max may depend log remembering conclude max log multivariate intensity estimation via hyperbolic wavelet selection may depend may depend may depend proofs corollaries proof corollary assumption conc straightforward consequence talagrand inequality stated instance inequality satisfied whatever unbiased estimator var besides existence follows lemma thus assumptions var rem satisfied taking depends proof corollary setting xid recover previous density estimation framework assumption conc still satisfied setting xid observe max using arguments proof proposition get log log except set probability smaller log building proof corollary conclude assumptions var rem satisfied depend proof corollary assumption conc straightforward consequence talagrand inequality poisson processes proved corollary satisfied whatever vold unbiased estimator vold var besides existence follows lemma thus assumptions var rem satisfied taking depends proof corollary proof similar corollary multivariate intensity estimation via hyperbolic wavelet selection proof corollary regarding assumption conc proof similar corollary let bounded measurable function let gdm gdm gdm gdm defined proof proposition notice course proof proposition shown bounded lipschitz functions max kgkl provided small enough besides satisfy bernstein inequalities bernstein inequality stated proposition former bernstein inequality stated proposition latter combining arguments yields corollary proof proposition set easy see thus wre soon contains least two elements therefore sbp kbp sake readability shall detail two special cases let first deal case ksbp kbp let assume set min easily besides deduce kgkp operators commute min multivariate intensity estimation via hyperbolic wavelet selection inequality arithmetic geometric means entails way consequently ksbp iii proof follows chain rule higher order derivatives composite function notice bounded proof follows extension theorem inequality see also chapter theorem see theorem proof theorem recall finite sequence besides proved course proof proposition hyperbolic basis admits unique decomposition form defining finite using aforementioned reminders case treated way multivariate intensity estimation via hyperbolic wavelet selection let fix define subset largest elements among consider approximation given set let first assume using lemma get besides follows therefore max case kind follows last completes proof references autin claeskens freyermuth hyperbolic wavelet thresholding methods curse dimensionality maxiset approach applied computational harmonic analysis florent autin gerda claeskens freyermuth asymptotic performance projection estimators standard hyperbolic wavelet bases electron nathalie akakpo claire lacour inhomogeneous anisotropic conditional density estimation dependent data electronic journal statistics yannick baraud estimator selection respect risks probability theory related fields yannick baraud lucien estimating composite functions model selection ann inst probab yannick baraud lucien revisited general theory 
applications working paper preprint june multivariate intensity estimation via hyperbolic wavelet selection andrew barron lucien pascal massart risk bounds model selection via penalization probab theory related fields jean bertoin processes volume cambridge tracts mathematics cambridge university press cambridge yannick baraud christophe giraud sylvie huet gaussian model selection unknown variance ann lucien model selection via testing alternative penalized maximum likelihood estimators annales statistiques massart adaptive compression algorithm besov spaces constructive approximation rida benhaddou marianna pensky dominique picard anisotropic functional deconvolution model convergence rates electron bourdaud winfried sickel composition operators function spaces fractional order smoothness rims kokyuroku bessatsu albert cohen ingrid daubechies pierre vial wavelets interval fast wavelet transforms appl comput harmon rama cont peter tankov financial modelling jump processes chapman financial mathematics series chapman boca raton arnak dalalyan yuri ingster alexandre tsybakov statistical inference compound functional models probability theory related fields wolfgang dahmen angela kunoth karsten urban biorthogonal spline wavelets moment conditions appl comput harmon ronald devore george lorentz constructive approximation volume grundlehren der mathematischen wissenschaften fundamental principles mathematical sciences berlin donoho cart connection ann dinh dung vladimir temlyakov tino ullrich hyperbolic cross approximation arxiv preprint dinh dung approximations using sets finite cardinality finite pseudodimension journal complexity nonparametric estimation models based discretesampling lecture series pages christian expansions transition distributions processes stochastic process wang heping representation approximation multivariate functions mixed smoothness hyperbolic wavelets math anal joel horowitz enno mammen estimation general class nonparametric regression models unknown link functions ann reinhard hochmuth approximation anisotropic function spaces math reinhard hochmuth wavelet characterizations anisotropic besov spaces appl comput harmon ingster suslina estimation detection functions space mathematical methods statistics multivariate intensity estimation via hyperbolic wavelet selection anatoli juditsky oleg lepski alexandre tsybakov nonparametric estimation composite functions ann jan kallsen peter tankov characterization dependence multidimensional processes using copulas journal multivariate analysis oleg lepski multivariate density estimation loss oracle approach adaptation independence structure ann massart tight constant inequality ann massart concentration inequalities model selection volume lecture notes mathematics springer berlin lectures summer school probability theory held july foreword jean picard millar path behavior processes stationary independent increments wahrscheinlichkeitstheorie und verw gebiete alexander mcneil johanna multivariate archimedean copulas functions symmetric distributions ann madani moussai composition multidimensional spaces mathematische nachrichten roger nelsen introduction copulas springer series statistics springer new york second edition michael neumann multivariate wavelet thresholding anisotropic function spaces statist sinica van kien nguyen winfried sickel isotropic dominating mixed besov spaces comparison arxiv preprint van kien nguyen winfried sickel pointwise multipliers sobolev besov spaces dominating mixed smoothness arxiv preprint 
michael neumann rainer von sachs wavelet thresholding anisotropic function classes application adaptive estimation evolutionary spectra ann potapov simonov tikhonov mixed moduli smoothness survey surveys approximation theory patricia adaptive estimation intensity inhomogeneous poisson processes via concentration inequalities probab theory related fields patricia vincent rivoirard near optimal thresholding estimation poisson intensity real line electron patricia vincent rivoirard christine adaptive density estimation curse support statist plann inference gilles rebelles adaptive estimation anisotropic density independence hypothesis electron gilles rebelles pointwise adaptive estimation multivariate density independence hypothesis bernoulli ludger jeannette woerner expansion transition distributions processes small time bernoulli sato processes infinitely divisible distributions volume cambridge studies advanced mathematics cambridge university press cambridge translated japanese original revised author sklar fonctions dimensions leurs marges publ inst statist univ paris schmeisser hans triebel topics fourier analysis function spaces publication john wiley sons chichester multivariate intensity estimation via hyperbolic wavelet selection florian claudia oracle inequality penalised projection estimation densities observations journal nonparametric statistics yuhong yang andrew barron determination minimax rates convergence ann
| 10 |
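A minimal sketch of the two-step penalized pyramid selection described in the article above: sort the empirical hyperbolic-wavelet coefficients once, let the candidate model of size m keep the m largest squared coefficients, and then pick the m that maximizes the penalized criterion crit(m). Everything here is illustrative rather than a reproduction of the paper's procedure: the empirical coefficients are taken as one plain array (ignoring the pyramid/resolution restrictions on which indices may enter a model), the data-fit term is assumed to be the sum of squared retained coefficients as for projection estimators, and the penalty pen(m) is supplied by the caller because the paper's calibrated penalty expression is not preserved in this extraction; the variance-scaled penalty in the toy run below is hypothetical.

import numpy as np

def pyramid_select(coeffs, pen):
    """Two-step penalized selection over nested 'keep the m largest' models.

    coeffs : 1-D array of empirical wavelet coefficients.
    pen    : callable; pen(m) is the additive penalty for a model of size m.
    Returns (indices of retained coefficients, selected size m_hat, crit values).
    """
    sq = coeffs ** 2
    # Step 1: a single descending sort gives, for every size m, the best model,
    # namely the indices of the m largest squared coefficients.
    order = np.argsort(sq)[::-1]
    data_fit = np.cumsum(sq[order])               # sum of the m largest squares
    sizes = np.arange(1, coeffs.size + 1)
    crit = data_fit - np.array([pen(m) for m in sizes])
    # Step 2: maximize the penalized criterion over the model size m.
    m_hat = int(sizes[np.argmax(crit)])
    return order[:m_hat], m_hat, crit

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, sigma = 512, 0.5
    beta = np.zeros(n); beta[:10] = 3.0                  # a few large true coefficients
    beta_hat = beta + rng.normal(scale=sigma, size=n)    # noisy empirical coefficients
    keep, m_hat, _ = pyramid_select(beta_hat, pen=lambda m: 2.0 * m * sigma**2 * np.log(n))
    print("selected size:", m_hat, "indices:", sorted(keep.tolist()))

Because the whole path of candidate models comes from one sort followed by a cumulative sum, the cost stays of order n log n even though the underlying collection of models is exponentially large, which is the point made about computational complexity above.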
dec generalized entropy concentration counts kostas oikonomou labs research middletown email november abstract phenomenon entropy concentration provides strong support maximum entropy method maxent inferring probability vector information form constraints extend phenomenon discrete setting integral vectors necessarily summing show linear constraints simply bound allowable sums suffice concentration occur even setting requires new generalized entropy measure sum vector plays role measure concentration terms deviation maximum generalized entropy value terms distance maximum generalized entropy vector provide bounds concentration terms various parameters including tolerance constraints ensures always satisfied integral vector generalized entropy maximization compatible ordinary maxent also considered extension allows address problems formulated maxent problems keywords maximum generalized entropy counts concentration linear constraints inequalities norms tolerances contents introduction generalized entropy basic properties monotonicity concavity properties lower bounds maximization connection constraints scaling sensitivity optimal count vector constraints tolerances effect tolerances optimality scaling data bounds allowable sums optimal count vector concentration respect entropy difference realizations optimal count vector realizations sets smaller entropy scaling factor needed concentration bounds concentration threshold examples concentration respect distance maxgent vector realizations sets far maxgent vector scaling concentration around maxgent count vector examples conclusion proofs introduction maximum entropy method principle originally proposed jaynes appears standard textbooks engineering probability information theory commonly referred maxent principle essentially states information available probability vector form linear constraints elements among others preferred probability vector one maximizes shannon entropy constraints besides great wealth diversity applications maxent justified variety theoretical grounds axiomatic formulations concentration phenomenon interpretations references therein unification bayesian inference among justifications discrete setting appeal concentration lies conceptual simplicity essentially combinatorial argument first presented jaynes called concentration distributions entropy maxima concentration viewpoint developed presented generalizations improved results eliminated asymptotics studied additional aspects paper adopt discrete finite combinatorial approach show concentration phenomenon arises new setting vectors necessarily density among things requires introducing new generalized entropy measure new concentration phenomenon lends support extension maxent method call maximum generalized entropy maxgent basics entropy concentration easiest explain terms abstract balls bins paradigm labelled distinguishable bins indistinguishable balls allocated final content bins described count vector sums corresponding frequency vector summing suppose thep frequency vector must satisfy set linear equalities inequalities aij aij aij concentration phenomenon becomes large overwhelming majority allocations accord constraints frequency vectors close maximizes shannon entropy subject constraints extension longer given number balls therefore define unique frequeny vector must deal directly count vectors whose sums unknown example makes clear linear constraints placed counts coefficients assumption constraints limit sums count vectors lie finite range assumption show 
counts allowed become larger larger process scaling problem explained vast majority allocations satisfy constraints fact count vectors close maximizes generalized entropy precise statement concentration phenomenon needs additional preliminaries given end section main results new generalized entropy function defined arbitrary vectors reduces shannon entropy vectors summing properties studied scaling process also introduced demonstrate new concentration pheonomenon respect deviations maximum generalized entropy value theorem gives lower bound ratio number realizations maxgent vector set count vectors whose generalized entropies far maximum value theorem completes picture deriving large problem must ratio suitably large establish concentration respect norm distance count vectors maxgent vector present theorems analogous also theorem optimized version theorem theorems far large etc defined terms parameters introduced table none results involve asymptotic considerations give number numerical illustrations following example demonstrates basic issues referred simple call density frequency vectors would called discrete probability distributions possibly empirical operating probabilistic setting setting highlights differences usual frequency vector case proceed precise statement generalized entropy concentration example number indistinguishable placed three bins red green blue final content bins must satisfy thus total number balls may put bins small large assignment balls bins described sequence made letters corresponding count vector sequence length consistent constraints table lists count vectors satisfy constraints sums number realizations sequences result counts given multinomial coefficient terminology theory types size type class said likely final content bins table count vectors satisfying sum number realizations additional constraint section table would apply would reduce maxent problem example makes two points first seem possible find single frequency vector naturally associated problem without one think maximizing usual second one may think starting largest possible number balls case would lead greatest number realizations count vector realizations sums even vectors summing realizations one summing balls indistinguishable ignore distinguishing characteristics however modelling situations example indistinguishability essential ordinary entropy problem single distinction count frequency vectors really matter correspondence true next give precise statement generalized entropy concentration need define generalized entropy describe find vector maximizes specify derive bounds constraints describe ensure existence integral solutions count vectors constraints introduce parameters define concentration find vector largest number realizations problem like example first assume problem admit arbitrarily large solutions made precise necessary condition element appears next relax integrality requirement counts set continuous maximization problem generalized entropy real vector constraints expressed via real matrices vectors assume constraints satisfiable bound possible sums equivalent assuming bounded thus polytope concave maximization problem see solution refer maxgent problem maxgent vector optimal relaxed count vector since function concave strictly concave see fig immediate solution unique however show case boundedness assumption lies finite numbers determined solving linear programs min max technicality constraints may force elements reasons explained convenient eliminate elements end elements 
assumed positive reals finally derive integral vector refer optimal maxgent count vector procedure explained end interested vectors set introduce explained tolerances satisfaction constraints governed parameter turn describe concentration need two parameters specifying strength concentration describing size region occurs parameters summarized table lastly ordinary entropy frequency vectors concentration occurs increasing number balls count vectors replaced increasing sufficient consider relative tolerance satisfying constraints concentration tolerance number realizations relative tolerance deviation maximum generalized entropy value absolute tolerance deviation distance optimal relaxed count vector table parameters concentration results values constraints increase consider consists multiplying vectors scalar process call scaling scaling results larger larger count vectors admissible described detail give precise statement concentration phenomenon count vectors theorems compute number respectively called concentration threshold problem data scaled factor number result optimal count vector least times greater number assignments result count vectors entropy less farther norm significance problem available information embodied constraints otherwise admits large number probability vectors solutions concentration phenomenon provides powerful argument maxent method selects particular solution one maximum entropy preference likewise concentration results paper support maximization generalized entropy problems involving general vectors believe maxgent considered compatible extension maxent compatibility maxent problem reals constraints formulated apmaxgent problem form constraints plus constraint problems solution maximum tropy equal maximum generalized entropy also constraints maxgent problem either explicitly implicitly fix value problem reduced maxent problem reals extension consists fact maxgent addresses problems involving vectors formulated maxent problems saw example examples problems given maxent solves inference problem decision problem claim maximum entropy object one use matter use one mind related work term generalized entropy neither imaginative distinctive many generalized entropy measures general related relationship remains investigated function form log multinomial coefficient variable numerator appeared problem inferring real vector information form linear equalities considered skilling vectors termed positive additive distributions authors gave axiomatic justifications involve probabilities minimizing generalization relative entropy vectors generalization divergences discuss connection generalized entropy respect concentration recent developments discrete normalized case given continuous normalized case relative entropy examined viewpoint information geometry countable spaces also treated references provide explicit bounds ones knowledge concentration vectors studied structure presentation paper similar similar subject matter entropy concentration combinatorial viewpoint many results appear similar section iii generalizations results insofar generalization however main theorems actually subsume corresponding theorems cases theorems include optimizations specific count frequency vectors respectively generalized entropy section introduce generalized entropy function study properties relationships functions maximization linear constraints given real vector generalized entropy form extended vectors necessarily density vectors density normalized probability vector corresponding gives 
two ways look extended entropy plus sum times log sum times ordinary entropy normalized already normalized coincides fig plot figure note destroys strict concavity basic properties list important properties function second stirling log multinomial coefficient order using first two terms find interpretation given used derive likely matrices largest number realizations incomplete information related ordinary entropy density vectors extended entropy arbitrary vectors two ways specified unlike entropy normalized vectors bounded generalized entropy increases without bound elements become larger shown proposition one consequence close norm bounded expression involving positive unless one element case follows second form given count vector probability written prp divergence relative entropy two probability vectors frequency vector corresponding substituting obtain expression probability terms ordinary entropy frequency vector like ordinary extended concave domain unlike strictly concave see proposition maximum subject constraint density vector reduces maximum relationship maximizing maximizing extended consider maximizing first formpin subject imposing additional constraint treating parameter taking values given unique maximum since strictly subject achieve maxs maxx maximum value equal using second form see similar relationship maximizing maximizing function scaling homogeneity property easily seen second form important scaling property maximizes maximizes show proposition monotonicity concavity properties noted property increasing function sense proposition inequality strict places use property turn concavity extended ordinary entropy strictly concave addition strongly concave modulus defined generalized entropy also concave neither strictly concave strongly concave modulus however sublinear whereas properties collected following proposition proposition function concave one may also maximize without constraint would result mean strictly concave strongly concave modulus definition extended setting last property stronger implies concavity since required sum absence strict concavity means care needed maximization address lower bounds given point point close sense much smaller need answer proposition implies hypercube centered say attains maximum upper corner hypercube minimum lower corner specifically let denote let seen proposition using observation show lemma given coefficient positive unless equal case becomes lower bound depend restriction applies reference point variable see also remark lastly since lemma holds also norm replaced norm use lemma bound far maximum value close also comment remark bound compares bounds obtainable relationship ordinary entropy difference norm maximization let denote subset defined constraints point point solving occupies special location set consequently unique optimal solution maximization problem despite fact strictly concave function recall proposition part proposition set contain least one strict inequality point unique optimal solution problem figure illustrates first statement proposition figure polytope proposition lie heavy black line finally look form solution terms lagrange multipliers lagrangean problem vectors lagrange multipliers corresponding equality inequality constraints solution satisfy inequality constraints equality called binding active strict inequality known multipliers corresponding inequalities rest see vii thus denoting corresponding binding inequalities corresponding abi follows written expression determines elements density vector 
multipliers determine vector reason seen discuss tolerances constraints terms remark clear form express elements multipliers finite avoid introducing special cases sequel handle zeros assume convenience elements solution problem forced exactly constraints eliminated consideration either solution found already alluded thus whenever speak follows assume elements positive see example detailed discussion issue example returning example possible maximize analytically given constraints introducing real variables corresponding letting constraints solution turns bounds possible sums see maxgent solution problem never trivial sense find compare table connection density vectors relationship ordinary entropy divergence xky well known uniform xky reduces within constant minimization equivalent maximization look whether analogous properties first xky take elements equal obtain example minimizing xky however merely formal relationship respect given interpretation minimizing xky respect given fixed prior even summed neither axiomatic concentration justifications minimization would apply second concentration properties establish support maximization method inference vectors limited information another method suggested based minimizing information divergence vectors ukv pointed example reduces ukv sum inference problem problem iii infer function necessarily summing integrating given belongs certain feasible set functions defined linear equality constraints default model shown solution problem minimizes pkq recently minimization generalizations divergences found many applications area known matrix factorization see relationship minimizing maximizing generalized entropy proposition let linear equality inequality constraints vector let solution maxgent problem constraints given prior let solution minimum problem constraints prior makes two solutions coincide prior follows fact minimum solution problem prior constraints set seen expression satisfies inference minimizing equality constraints axiomatic basis pointed combinatorial concentration rationale advocating seem apply proposition shows adoption particular prior furnishes rationale except prior properly viewed independent solution posterior dependence may shed light difficulty finding concentration rationale general illustration example solved minimization assuming constant prior analytical solution possible form maxgent solution function question becomes value adopt constraints scaling sensitivity optimal count vector discuss necessity introducing tolerances constraints defining maxgent problem effect tolerances maximization turn scaling problem multiplying data vector important properties scaling lastly discuss optimal maxgent count vector constructed real vector solving problem sense default absence constraints method infer constraints tolerances pointed necessity introducing tolerances linear constraints establishing concentration ordinary entropy constraints involved real coefficients solutions rational frequency vectors particular denominator solutions need integral count vectors equality constraints may integral solution satisfied likewise inequalities therefore define set real satisfy constraints relative accuracy tolerance identical except elements replaced appropriate small positive constants tolerances values constraints structure recall generalized entropy maximized assumed problem three main points concerning introduction first existence integral solutions elaborated proposition second related first ensures concentration statement holds 
scalings problem larger threshold analogous concentration frequency rational vectors hold denominators larger third effect maximization subject proposition gives fundamental facts existence count vectors given vector close enough small count vector obtained rounding words every real vector integral vector close enough small depend number proposition define min point given particular vector add constraints problem decrease best stay proposition used recall infinity norm matrix maximum norms rows example fig shows network consisting nodes links links subject certain impairment quantity associated link impairment additive value path consisting links figure data impairment network suppose measured paths also known access links contribute certain amount shown fig structure matrices data vectors problem infer impairment vector measurement vector clearly values depend chosen units change various conditions whereas elements constants defining structure network independent units suppose take proposition vector satisfies constraints exactly rounded vector set defined effect tolerances optimality constraints polytope point however introducing tolerance turns equalities inequalities becomes apart change dimension also contains point assumes value greater maximum must taken account since concentration refers vectors following lemma shows amount value exceed due widening domain bounded linear function generalizes prop ordinary entropy lemma let vector lagrange multipliers corresponding solution maximization problem define frequency vector corresponding density vector corresponding upper bound least lemma says simply maximum term positive equals iff leaving aside possible special lemma says resulting allowable limited also even one equality constraint limits size allowable even scaling data bounds allowable sums establish fundamental property maximizing generalized entropy problem data scaled factor aspects solution scale factor maximizes linear proposition suppose relaxed count vector constraints also imply bounds let constant vector maximizes scaled constraints cbe cbi maximum value new bounds defined depend structure matrices data general problem bounding simple answer scaling variables linear program whose objective function positive linear combination variables converted one objective function simply sum variables special cases derive simple bounds proposition bounds sums equality constraints kbe bound increase also inequalities way density vectors equal vectors proportional suppose allp arei eeachi occurs least one constraint smallest element row respectively element otherwise recall occurs least one constraint necessary condition problem bounded proposition applies example find optimal count vector given relaxed optimal count vector construct count vector reasonable approximation integral vector solves problem sense sum close distance small norm properties needed require sum let vector obtained rounding elements nearest integer obtained process rounding adjusting definition defn given form density vector set construct adjusting follows let set otherwise add elements rounded subtract elements rounded resulting vector refer optimal count vector maxgent count vector even though unique sums differ much norm proposition optimal count vector definition approximations integral solution problem example simply achieves smaller norms point minimizes euclidean distance required sum another sophisticated definition pwould use solution integer linear subject linear program minz equivalent mina subject better 
definition would improve bound integral vector achieve norm smaller solution linear program ignores constraint minimizes term objective function individually concentration respect entropy difference clear concentration occur situation like one example fact global maximum enough section demonstrate concentration around indeed occur sense statement pertaining entropies done two stages theorem theorem consider count vectors sum satisfy constraints divide two sets according deviation generalized entropy given irrespective values discuss possible range assumed problem constraints imply bounds sum found solving linear programs integral vector satisfies constraints exactly must sum use slight modification definition defined may assume without loss generality otherwise count vectors sum known reduce case frequency vectors studied remark certain degree arbitrariness flexibility definitions setting says allowable sums count vectors belong say allowable vectors could argued introducing tolerance numbers allowed become functions however would introduce significant extra complexity definition makes concessions simplicity restricting somewhat allowable sums slightly adjusting value handle boundary case easily defined range allowable sums use disjoint unions sets irrespective note following relationship among numbers realizations optimal count vector sets realizations words single vector dominates set set dominates set likewise concentration statement says given number data scaled factor establish inequality finding lower bound upper bound theorem presents ratio bounds find concentration threshold ensures given theorem table describes notation process scaling problem data basic quantities derived quantities table data scaling process symbols left denote quantities scaling symbols right quantities derived scaled basic quantities realizations optimal count vector section find lower bound definition terms quantities related generalized entropy like number realizations frequency vector entropy number realizations count vector related generalized entropy given let elements follows immediately problem bounds hold even since elements remark take next want bound terms proposition assume lemma applies get returning remains find convenient lower bound since use obtain another simpler bound obtained noting maximum equal bound generally better become slightly worse exceptional situations putting form convenient scaling according table remark condition certainly possible formulate maxgent problems whose solutions elements smaller fact arbitrarily close thus invalidate however dealing large problems scaled concentration arise see theorem one way deal problem formulations take problem certain prescaling original one might say pathological problem nevertheless one wanted avoid issue entirely one could use weaker bound subject restriction see example remark remark compare bound derived count vectors one adapted bound density vectors proof proposition derived bound binary entropy function place based bound see problem improved version using norms multiplying sides using fact obtain one way compare bounds ask sides apart term behave scaling problem see increases tends goes realizations sets smaller entropy derive upper bounds number realizations sets combining lower bound establish first main result theorem going inequality ignored constraints using proceeding proof lemma dxk show appendix sum last line bounded better bound sum obtained proof lemma asymptotically tight fixed using improved bound turn set defined bounding 
sum line integral first line widened interval integration recall definition therefore sums defined combining arrive first main result lower bound ratio number realizations optimal count vector set count vectors generalized entropy theorem given structure matrices data vectors let optimal solution problem assume recall remark constants exp one use theorem problem already large enough require scaling one may substitute appropriate values see kind concentration achieved note concentration tolerance appear theorem scaling factor needed concentration happens lower bound theorem size problem increases section establish theorem first concentration result shows bound exceed given introducing bound theorem scaling factor facilitate scaling develop bounds functions appearing first since invariant scaling first product increases next writing shown function multiplying decreases maximum occurs thus finally cxi cxi since putting constant algebra derivative shown negative scaling factor applied original problem must also belongs first requirements translates largest two solutions equality version inequality hold second requirement really two parts first part need proposition ensured since proposition hold second part need proposition ensured cxi last implication follows need largest solution quadratic equation version given tolerances established compute lower bound concentration threshold scaling factor required concentration occur around point set extent specified second main result establishes statement concerning deviation value theorem conditions theorem define concentration threshold max defined data scaled factor count vector definition belongs set sets defined equation type generally two roots one small one large example roots note constraint information appears implicitly via various sets figuring theorem depicted figure figure outer ellipse set count vectors satisfy constraints within tolerance partitioned shown gray inner white ellipse relationship shown one possible likewise bounds concentration threshold useful know something threshold depends solution maxgent problem parameters without solve equations derive bounds regard convenience maxi maxi hence lower bound max since must bigger first term equals second intuitively expected bound says smaller scaling need looking expression see holds farther apart bounds possible sums accords intuition discuss example next maxi maxi max expressions upper bounds respectively shown upper bound says larger less scaling need likewise elements implications agree intuition illustrations bounds example bounds still require knowing solution maxgent problem concerning last expression recall assumption remark examples give two examples first continues example illustrates bounds concentration threshold points first sight surprising behavior threshold second example illustrates relationship concentration bounds example returning example find thus also table shows happens problem data scaled factor dictated given use special notation quantities appearing unscaled scaled problem whenever write etc scaling factor could implied table scaling problem example given respect discrete solution first row table example satisfies equality constraints tolerance kae min inequality constraints tolerance see scaling factor quite sensitive rather insensitive surmised one way interpret scaling change scale measurement data change units scaling larger factor means choosing refined units results show concentration increases intuitively expected respect bounds threshold first row 
table yield yield second row bounds give suppose problem data first row bounds say scaling needed second row theorem gives bounds give original problem threshold scaled threshold becomes apparently unlike rest problem proposition concentration threshold behave linearly scaling problem problem explanation first sight disconcerting behavior first theorem say minimum required scaling factor given problem second many approximations involved derivation many get better size problem increases example intuition says bounds possible sums admissible count vectors something concentration wide concentration difficult achieve suppose somehow maxgent vector derived remains fixed wider range allowed constraints larger scaling factor required dominate bound agrees due expression give simple situation difference increase remains fixed consider problem box constraints depicted fig maximum upper right corner box proposition reduce lower left corner box moves left upper right corner remains fixed shown figure thus widen bounds leaving unchanged figure reducing leaving unchanged problem new box constraints requires scaling original problem construction generalizes immediately dimensions see concentration respect distance maxgent vector section provide results analogous sets formulated terms distance elements optimal vector measured norm intuitive measure difference entropy three main results theorems analogues theorems theorem optimized version theorem require specifying various places reuse results methods presentation succinct given want consider count vectors lie whose distance norm lie farther norm situation less straightforward vectors first given two real norm difference never smaller difference norms make sense require norm second considering norms large numbers especially scaling problem consider region around reasons define min min complicated definition frequency vectors small number equal would say density vectors general says norm close bound consider disjoint unions sets given min min set count vectors sum two sets partition number lie definitions establish analogue given concentration threshold problem data scaled factor maxgent count vector set least times realizations vectors set one important difference tolerances chosen independently one another must obey certain restriction case frequency vectors lower bound see proposition details remark maximum domain tolerances constraints said tolerance widens domain may move vector maximizes may change maximum value looking concentration region size around point large expect region dominate count vectors number realizations since may even lie inside set proposition already lies boundary given concentration requires upper bound allowable see setting limitation magnitude respect perfectly fine set contains contain definition realizations sets far maxgent vector bound number realizations need show far sense far simplify notation section denote simply first need auxiliary relationship norm difference two real vectors norm difference normalized versions proposition let vector norm etc kxk kyk min kxk kyk want show follows taking lemma bounding divergence term terms norm using proposition norm min place lemma given notation lemma count vector sum min general max min bound divergence used pkq due closeness number thought measuring far away density vector also relevant authors study inf pkq subject refer lemma balance coefficient theorem provides exact value inf pkq function valid could used lemma expense additional condition notation also show qmax qmax 
largest element result incorporated lemma proceed find upper bound beginning min applying lemma similarly ignoring condition involving norm sum well intersection sum identical expression given beginning following development led compare consequently inequality implied last line derived appendix bound compared bound sense problem partition combining obtain lower bound ratio numbers realizations analogous theorem theorem given structure matrices data vectors let optimal solution problem assume recall remark constants defined lemmas respectively lower bound useful exponent positive elaborate also like theorem theorem says nothing large bound given problem job theorem scaling concentration around maxgent count vector investigate happens lower bound theorem problem data scaled factor end results concentration theorems table described scaling data affects quantities appearing bound except new scaling effect see lagrange multipliers remain definition lemma shows end result scaling imply scaling multiplies exponent theorem effect scaling given finally since conclusion data scaled factor theorem says also follows expression proof lemma terms multipliers defined compared recalling remark important consequence concentration occur tolerances must satisfy ensured choosing small enough given large enough given results paper immediately translate frequency vector case compared similar condition theorem concentration statement hold scaling factor inequality form hold greater larger two roots equality version also need set specifically must first ensured second definition need min proposition hold established desired analogue theorem proved statement terms distance maxgent vector theorem conditions theorem suppose tolerances satisfy defined lemmas let given let largest root equality version finally define concentration threshold max data scaled maxgent count vector definition belongs set specifically second inequality claim theorem follows first holds whether sets defined defined theorem constraint information appears implicitly theorem via bounds concentration threshold derived similarly finally fig depicts various sets involved definition threshold appearing theorem min min figure concentration around norm maxgent vector times realizations entire set shown gray relationship show one possible likewise definition theorem see increases constants behave opposite ways decreases increases one cares tolerances care specify particular opens possibility reducing choosing minimize largest theorem given suppose various quantities theorem equation root define max data scaled maxgent count vector definition belongs set situation simple lower bound concentration threshold max first expression used upper bound ratio small imbalanced distributions single dominant element case large approaches perfectly balanced ones bound says increases seen example examples first two examples illustrate theorems third illustrates removal solution mentioned boundary case maxgent vector sums maximum allowable example return example recall constraint means want small must correspondingly small commented table lists various values obtained theorem table scaling problem example given threshold behave smoothly max theorem example consider data table specified care particular long ensures chosen automatically theorem table shows concentration threshold significantly reduced table threshold given optimal selection compare table variation implied lower bound evident example fig shows four cities connected road segments assume vehicles 
travelling one city another follow direct route traffic city figure four cities connected bidirectional road segments arrows indicate constrained directions number vehicles city known puts upper bounds number leaves city also observations lower bounds lij number vehicles road segments information want infer many vehicles travel city city infer matrix counts suppose constraints vii vij last three reflect direct route assumption define vector maxgent method note knew vehicles city leave city could define frequency matrix dividing matrix thus formulate maxent problem maxgent solution matrix form sum maximum generalized entropy boundary case sum maximum possible problems involving matrices subject constraints type analytical solutions possible studied applying theorem optimal yields threshold using scaling factor results integral matrix sum matrix least times number realizations entire set defined gain appreciation means easy determine size set particular subset contains least comparison whole contains count vectors compute numbers barvinok software get lower bound using stronger constraint place hard express min conclusion demonstrated extension phenomenon entropy concentration hitherto known apply probability frequency vectors realm count vectors whose elements natural numbers required introducing new entropy function sum count vector plays role still like shannon entropy generalized entropy viewed combinatorially approximation log multinomial coefficient derivations carried fully discrete finite framework involve probabilities objects make claims fully constructible discrete combinatorial setting attempt reduce phenomenon entropy concentration essence believe concentration phenomenon supports viewing maximization generalized entropy compatible extension maxent method inference acknowledgments thanks peter comments previous version manuscript many useful discussions subject proofs proof proposition given reached sequence steps increases single coordinate value increases step partial derivatives positive derivatives points consist single element direct proof given case formal proof note directional derivative point direction move away direction increase precisely mean value theorem written line segment finally element strictly positive proof proposition establish concavity suffices show hessian negative find diag matrix whose entries given arbitrary must show first write diag define equivalent convex sum fixed function domain minimum constraint occurs least value function establishes given see exactly points iff fact hessian fails negative definite imply strictly concave negative definiteness sufficient necessary condition strict concavity seen strictly concave scaling homogeneity property consider distinct points strict concavity would require true proposition chapter says function strongly convex convex set modulus iff modified function convex applying function proof carried part would show given chosen modulus condition false point definition convex positively homogeneous function defined extended real numbers sublinear define setting negative statement applies finally sublinear function property proof lemma expand taylor series around since function open set two points set theorem set noting diag second equality proof proposition find know sum terms right negative chose expand around point sign terms known fixed define function function increasing see set becomes arithmetic mean use fundamental property power means weights summing see theorem desired result follows choosing show increases 
always power means technique similarly convex function therefore see min max follows establishes lemma coefficient equals iff elements equal theorem proof proposition suppose ball around contained ball contain points sufficiently small proposition maximizes words contain strict inequalities let another point part least one element must strictly less corresponding element proposition must proof proposition consider equality constraints first writing see satisfied maxi mini since thus kae kae rectangular matrix norm defined largest norms therefore ensure kae suffices require claimed turning inequality constraints write since inequality satisfied certainly hold maxi mini equivalent kai turn hold require types constraints final condition stronger necessary case inequalities finally part proposition follows part since rectangular matrixpa compatible vector holds maxi maxi maxi kai proof proposition write elements form expression involving vectors matrices abi elements determined substituting constraints thus kth equality constraint leads equation form expression involving similarly binding inequality constraint solution system equations form unchanged bbi multiplied constant establishes first claim claim maximum follows property list coming bounds fact scale property general linear programs solution linear program subject solution subject similarly maximum proof proposition part given kae kbe omitting superscript simplify notation hence kbe since simply sum part satisfying satisfy well divide inequality system smallest element element otherwise leave inequality since appears constraint add resulting inequalities pbyi sides defined proposition proof proposition first adjustment performed always possible must least elements rounded floors ceilings clear adjustment makes sum suppose density vector sums sum rounded version differs thus bound first show adjustment causes elements differ corresponding elements rest differ maxd next since sums lastly follows last statement fact sums finally bound follows proof lemma brevity proof denote simply given vector set therefore abi abi bbi since satisfies equalities binding inequalities substituting maximum generalized entropy expressed terms lagrange multipliers data bbi implies quantity least large claimed arbitrary sequence count vector probability therefore rest proof analogous proposition abi therefore noting positive negative bbi around removed using max finally count vector prp given expression property comparing using claim lemma follows proof inequality let sum found closed form noticing split even odd sums hypergeometric however resulting expression complicated purposes obtain tractable bound matches highest power sum need auxiliary fact relating gautschi inequality gamma function see follows applying recursively find line follows using denominator first line pulling last term sum reversing order terms applying term get going pthe line last ignored exponential factor term expansion last line substituted ratio sum last expression tends proofs inequality first term upper bound inequality type show inequality satisfied expression motivated method successive substitutions get satisfies inequality substituting inequality get therefore hold max assumed always true claim established turning case suppose otherwise fall case suffices find satisfies third term upper bound write hold taken guaranteed assumption upper bound proof proposition ease notation let kxk kyk first show min exchanging derivation also follows establishes implies kxk kyk min kxk kyk 
taking contrapositive min kxk kyk kxk kyk claim proposition follows proof inequality improvement bounding sum second line simply pulling bounding rest integral splitting sum around point since summand increasing function last line written desired result follows neglecting second exponential two summands proof theorem minimize max setting equal substituting defines get equation theorem let stand function function decreases condition theorem must hold decrease less root condition boils arrive condition theorem find simple lower bound therefore first line used fact product last two factors expression follows line shown first factor line increasing function minimum occurs follows condition existence root satisfied stated theorem since ensured take max theorem finally quite likely given seen references apostol mathematical analysis berend kontorovich minimum complements balls ieee transactions information theory also http boucheron lugosi massart concentration inequalities nonasymptotic theory independence oxford university press boyd vandenberghe convex optimization cambridge caticha entropic inference foundations physics brazilian meeting bayesian statistics also http cichocki cruces amari generalized divergences application robust nonnegative matrix factorization entropy information theory coding theorems discrete memoryless systems cambridge edition least squares maximum entropy axiomatic approach inference linear inverse problems annals statistics maxent mathematics information theory hanson silver editors maximum entropy bayesian methods int workshop santa new mexico kluwer academic cover thomas elements information theory wiley edition giffin caticha updating probabilities data moments knuth editor bayesian inference maximum entropy methods science engineering aip conf proc entropy concentration empirical coding game statistica neerlandica also http hardy littlewood inequalities cambridge university press convex analysis minimization algorithms jaynes concentration distributions entropy maxima rosenkrantz editor jaynes papers probability statistics statistical physics reidel jaynes probability theory logic science cambridge university press oikonomou explicit bounds entropy concentration linear constraints ieee transactions information theory march also http oikonomou analytical forms likely matrices derived incomplete information international journal systems science march also http olver lozier boisvert clark editors nist handbook mathematical functions cambridge university press oikonomou sinha network design cost analysis optical vpns proceedings ofc anaheim march optical society america ordentlich weinberger refinement pinsker inequality ieee transactions information theory may papoulis pillai probability random variables stochastic processes graw hill sason entropy bounds discrete random variables via maximal coupling ieee transactions information theory shore johnson axiomatic derivation principle maximum entropy principle minimum ieee transactions information theory skilling classic maximum entropy skilling editor maximum entropy bayesian methods kluwer academic verdoolaege woods bruynooghe cools computation manipulation enumerators integer projections parametric polytopes technical report report leuven march zhang estimating mutual information via kolmogorov distance ieee transactions information theory
| 10 |
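A small numerical companion to the preceding article on entropy concentration for counts. It assumes the generalized entropy in the homogeneous form G(nu) = sigma*log(sigma) - sum_i nu_i*log(nu_i) with sigma = sum_i nu_i, i.e. sigma times the Shannon entropy of nu/sigma, which is the Stirling approximation of the log-multinomial coefficient discussed there; this explicit form, the three-bin inequality constraint 2*nu1 + nu2 + 3*nu3 <= 60, and the 5% tolerance are assumptions made only for this illustration, since the example's actual numbers are not preserved in the extracted text. The script enumerates all admissible count vectors, finds the one of maximum generalized entropy, and reports what fraction of all admissible assignments (realizations) comes from count vectors whose G lies within the tolerance of the maximum.

import numpy as np
from itertools import product
from math import lgamma

def gen_entropy(nu):
    """G(nu) = sigma*log(sigma) - sum_i nu_i*log(nu_i), sigma = sum(nu).
    Equals sigma * H(nu/sigma); zero entries contribute nothing."""
    nu = np.asarray(nu, dtype=float)
    s = nu.sum()
    nz = nu[nu > 0]
    return s * np.log(s) - np.sum(nz * np.log(nz))

def log_realizations(nu):
    """Exact log of the multinomial coefficient sigma! / (nu_1! ... nu_n!)."""
    return lgamma(sum(nu) + 1) - sum(lgamma(k + 1) for k in nu)

# Hypothetical constraint 2*nu1 + nu2 + 3*nu3 <= 60; it bounds the allowable sums.
feasible = [nu for nu in product(range(31), range(61), range(21))
            if sum(nu) > 0 and 2 * nu[0] + nu[1] + 3 * nu[2] <= 60]

G = np.array([gen_entropy(nu) for nu in feasible])
logW = np.array([log_realizations(nu) for nu in feasible])
best = int(np.argmax(G))
print("maxgent count vector:", feasible[best], " G =", round(float(G[best]), 3))

eps = 0.05                                  # concentration tolerance on G
scale = logW.max()                          # stabilize the exponentials
total = np.exp(logW - scale).sum()
near = np.exp(logW[G >= (1 - eps) * G.max()] - scale).sum()
print("fraction of realizations with G within 5% of the maximum:", near / total)

The printed fraction is the kind of quantity the concentration theorems above control: for scalings of the constraint data beyond the computed concentration threshold, the count vectors whose generalized entropy deviates from the maximum account for an arbitrarily small share of the realizations.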
approach bioheat transfer problems magnetic hyperthermia kenya murase department medical physics engineering division medical technology science faculty health science graduate school medicine osaka university yamadaoka suita osaka japan short title approach magnetic hyperthermia address correspondence kenya murase med eng department medical physics engineering division medical technology science faculty health science graduate school medicine osaka university yamadaoka suita osaka japan tel fax murase abstract purpose study present approach analytical solutions pennes bioheat transfer equation apply calculation temperature distribution tissues hyperthermia magnetic nanoparticles magnetic hyperthermia validity method investigated comparison analytical solutions obtained green function method point shell heat sources numerical solutions obtained method sources good agreement radial profiles temperature calculated method obtained green function method also good agreement method method except central temperature source approximately difference also found equations describing solutions point shell sources obtained method agreed obtained green function method results appear indicate validity method conclusion presented approach bioheat transfer problems magnetic hyperthermia study demonstrated validity method analytical solutions presented study useful gaining insight heat diffusion process magnetic hyperthermia testing numerical codes complicated approaches performing sensitivity analysis optimization parameters affect thermal diffusion process magnetic hyperthermia keywords magnetic hyperthermia magnetic nanoparticle pennes bioheat transfer equation method green function method introduction hyperthermia one promising approaches cancer therapy commonly used heating method clinical setting capacitive heating use radiofrequency electric field however major technical problem hyperthermia difficulty heating targeted tumor desired temperature without damaging surrounding tissues electromagnetic energy must directed external source penetrate normal tissue hyperthermia modalities including ablation ultrasound hyperthermia reported efficacies modalities depend size depth tumor disadvantages include limited ability target tumor control exposure hyperthermia using magnetic nanoparticles mnps magnetic hyperthermia developed still development overcoming disadvantages mnps generate heat alternating magnetic field result hysteresis relaxational losses resulting heating tissue mnps accumulate development precise methods synthesizing functionalized mnps mnps functionalized surfaces high specificity tumor tissue developed heating elements magnetic hyperthermia furthermore renewed interest magnetic hyperthermia treatment modality cancer especially combined traditional therapeutic approaches anticancer drugs photodynamic therapy aspects magnetic hyperthermia received much recent attention bioheat transfer equation proposed pennes basis understanding kinetics tumor tissue heating solution equation important treatment planning design new clinical heating systems various investigations attempted obtain analytical solutions pennes bioheat transfer equation durkee antich solved cartesian spherical geometry based method separation variables green function method vyas rustgi obtained analytical solution using green function method describe temperature distribution due laser beam gaussian profile andra solved constant heat source embedded infinite medium without blood perfusion using laplace transform deng liu derived 
analytical solutions bioheat transfer problems generalized spatial transient heating skin surface inside biological bodies using green function method bagaria johnson modeled diseased healthy tissues two finite concentric spherical regions included blood perfusion effect regions obtained analytical solutions model separation variables recently giordano derived fundamental solutions pennes bioheat transfer equation rectangular cylindrical spherical coordinates although green function method convenient way describe thermal problems often applied solving bioheat transfer equation described rare handling become complicated besides green function method analytical solutions pennes bioheat transfer equation obtained use method considered easier implement green function method best knowledge however studies used approach purpose study present approach analytical solutions pennes bioheat transfer equation calculation temperature distribution tissues magnetic hyperthermia investigate validity comparison green function method method several heat source models materials methods pennes bioheat transfer equation estimate temperature distribution vivo solved pennes bioheat transfer equation given qmet temperature tissue thermal conductivity tissue density blood cpb specific heat blood blood perfusion rate temperature arterial blood qmet rate metabolic heat generation energy dissipation density specific heat tissue respectively study assumed volume flow blood per unit volume constant uniform throughout tissue means constant furthermore properties qmet assumed constant therefore reduced qmet considered temperature tissue steady state prior heating core body temperature maintained balance metabolic heat generation blood perfusion noted qmet assumed equal often seen literature describe spherical coordinates becomes method applying integral transform fourier sine transform yields see appendix sin given using formula lim sin reduced steady state reduced sin reduced illustrative examples considered four heat sources point shell sources point source case given dirac delta function point heating energy source given becomes substituting using formula sin obtain steady state sin reduced noted also obtained eqs singularity represented factor equations reveals highly localized effect point source shell source case given given becomes substituting sin sin sin cos cos using formulae cos obtain sin sin substituting yields sin steady state eqs become respectively note eqs also obtained eqs respectively source case given maximum value energy dissipation center radius associated far center heating affecting tissue source given becomes substituting yields steady state sin reduced sin eqs become respectively source case given given becomes sin cos substituting yields sin cos sin becomes sin cos green function method point source green function pennes bioheat transfer equation radial flow infinite domain spherical coordinates given giordano using function obtain temperature point source see appendix noted integral diverges infinity steady state analytical solution integral thus solution obtained greens function method point source becomes shell source using green function given giordano obtain temperature shell source see appendix obtain steady state solution integral analytical thus solution obtained greens function method shell source becomes similarly integral analytical solution thus solution shell source becomes method also solved using method scheme see appendix sources comparison used method appendix outer radius 
domain analysis taken spatial time intervals taken respectively energy dissipation magnetic nanoparticles rosensweig developed analytical relationships computations energy dissipation mnps subjected alternating magnetic field amf theory given permeability free space equilibrium susceptibility amplitude frequency amf respectively effective relaxation time given neel relaxation brownian relaxation time respectively given following relationships average relaxation time response thermal fluctuation viscosity medium boltzmann constant temperature kvm anisotropy constant mnp taken hydrodynamic volume mnp larger magnetic volume mnp diameter model assumed thickness sorbed surfactant layer actual equilibrium susceptibility dependent magnetic field assumed chord susceptibility corresponding langevin equation given coth hvm cos domain magnetization suspended particle volume fraction mnps study considered magnetite mnps parameters magnetite taken follows taken close typical magnetite dosage per gram tumor reported clinical studies figure shows relationship magnetite fixed varied khz khz interval khz whereas fig shows case fixed khz varied interval noted unit converted use relationship shown fig largely depends maximum value increases increasing illustrative example considered case khz case point source assumed mnps located within sphere radius relationship volume region mnps located taken shell source width shell assumed respectively resulting thus assumed sources used eqs numerical studies numerical studies performed following conditions values properties blood tissue assumed follows cpb study values eqs taken results figure shows comparison radial profiles temperature calculated method calculated green function method point source three time points shown fig good agreement figure shows comparison radial profiles temperature calculated method calculated green function method shell source four time points shown fig good agreement figure shows comparison radial profiles temperature calculated method calculated method source four time points shown fig good agreement figure shows comparison radial profiles temperature calculated method calculated method source four time points shown fig although difference approximately observed good agreement except central temperature discussion study presented approach bioheat transfer problems magnetic hyperthermia derived transient analytical solutions pennes bioheat transfer equation several heat source models using approach furthermore investigated validity approach comparison analytical solutions obtained green function method point shell sources numerical solutions obtained method sources best knowledge analytical solutions obtained approach reported previously largest difference observed central temperature obtained method obtained method source fig difference approximately excluding case good agreement method green function method method figs indicating validity method previously described solutions obtained greens function method point shell sources given eqs respectively eqs agree eqs derived method respectively furthermore solution shell source obtained greens function method also agrees obtained method results also appear indicate validity method shell source used study model consisting thin shell mnps outer surface spherical solid tumor whose outer region extends infinity represents normal tissue pointed giordano model realistic model distribution provides approximately constant therapeutic temperature inside tumor model also good agreement method green 
function method green function method convenient way solving differential equations pennes bioheat transfer equation mathematically green function solution differential equation instantaneous point source temperature distribution various heat sources calculated use green function method necessary compute integral product green function function describing heat source shown eqs general integral becomes double integral respect temporal spatial variables point shell sources described dirac delta function shown eqs relatively easy compute double integral however always easy compute double integral heat sources whose function described dirac delta function sources hand method presented study appears much easier implement green function method method presented study kernel integral transform taken sin general kernel chosen depending boundary conditions boundary condition first kind kernel sin whereas cos boundary condition second kind study parameter see appendix always zero boundary condition first kind thus used sin kernel integral transform study analytical solutions presented study based several assumptions first domain analysis assumed infinite although assumption considered valid deep tumors surrounded normal tissue method applied case relatively superficial tumors second properties blood tissue assumed tumor normal tissue third shape tumors distribution mnps assumed spherically symmetric although analytical solutions derived study applied cases complex geometries heterogeneous medium provide useful tools testing numerical codes complicated approaches performing sensitivity analysis parameters involved problem conclusion presented approach bioheat transfer problems magnetic hyperthermia study demonstrated validity method analytical solutions presented study useful gaining insight process magnetic hyperthermia testing numerical codes complicated approaches performing sensitivity analysis optimization parameters affect thermal diffusion process magnetic hyperthermia appendix following parameter introduced reduced furthermore perform following variable transformation becomes apply integral transform fourier sine transform obtain defined sin sin denotes variable assumed take values infinity continuously noted taken zero obtain solving respect yields value assume temperature equal obtain using following inverse fourier transformation sin obtain sin finally use eqs temperature obtained appendix green function infinite domain given point source point source model given case solution given using following relationship sin sin formula cos obtain thus obtain shell source case given case solution given thus obtain using formula lim sin yields thus obtain appendix solve used following method scheme first divide spatial time domains small intervals denote temperature nodal point time reduced denotes energy dissipation nodal point time thus computed used following rule avoid dividing zero lim lim obtain numerical stability following condition satisfied boundary conditions taken zero center outer boundary initial conditions temperature assumed references abe hiraoka takahashi egawa matsuda onoyama morita kakehi sugahara studies hyperthermia using radiofrequency capacitive heating device thermotron combination radiation cancer therapy cancer oura tamaki hirai yoshimasu ohta nakamura okamura radiofrequency ablation therapy patients breast cancers two centimeters less size breast cancer seip ebbini noninvasive estimation tissue temperature response heating fields using diagnostic ultrasound ieee trans 
biomed eng gilchrist medal shorey hanselman parrott taylor selective inductive heating lymph nodes ann surg jordan scholz johannsen wust nodobny schirra schmidt deger loening lanksch felix presentation new magnetic field therapy system treatment human solid tumors magnetic fluid hyperthermia magn magn mat murase oonoki takata song angraini ausanai matsushita simulation experimental studies magnetic hyperthermia use superparamagnetic iron oxide nanoparticles radiol phys technol rosensweig heating magnetic fluid alternating magnetic field magn magn mat neuberger schopf hofmann hofmann von rechenberga superparamagnetic nanoparticles biomedical applications possibilities limitations new drugdelivery system magn magn mat ito shinkai honda kobayashi medical applications functionalized magnetic nanoparticles biosci bioeng balivada rachakatla wang samarakoon dani pyle kroh walker leaym koper tamura chikan bossmann troyer magnetic hyperthermia melanoma mediated iron oxide magnetic nanoparticles mouse study bmc cancer pennes analysis tissue arterial blood temperatures resting human forearm appl physiol gao langer corry application green function fourier transforms solution bioheat equation int hyperthermia durkee antich lee exact solutions multiregion bioheat equation solution development phys med bio vyas rustgi green function solution tissue bioheat equation med phys andra ambly hergt hilger kaiser temperature distribution function time around small spherical heat source local magnetic hyperthermia magn magn mat deng liu analytical study bioheat transfer problems spatial transient heating skin surface inside biological bodies biomech eng bagaria johnson transient solution bioheat equation optimization magnetic fluid hyperthermia treatment int hyperthermia giordano gutierrez rinaldi fundamental solutions bioheat equation application magnetic fluid hyperthermia int hyperthermia ozisik value problems heat conduction international textbook company scranton pennsylvania carslaw jaeger heat solids oxford clarendon press oxford maenosono saita theoretical assessment fept nanoparticles heating elements magnetic hyperthermia ieee trans magn jordan scholz wust fahling krause wlodarczyk sander vogl felix effects magnetic fluid hyperthermia mfh mammary carcinoma vivo int hyperthermia spivak publish perish houston smith solution partial differential equations exercises worked solutions oxford university press london figure legends fig relationship energy dissipation diameter magnetic nanoparticles mnps magnetite amplitude alternating magnetic field fixed frequency varied khz khz interval khz relationship magnetite fixed khz varied interval unit converted use relationship fig comparison radial profiles temperature calculated method obtained green function method point source solid dashed dotted lines show results calculated method respectively whereas closed circles squares triangles show results obtained green function method respectively fig comparison radial profiles temperature calculated method obtained green function method shell source solid long dashed dashed dotted lines show results calculated method respectively whereas closed circles squares triangles diamonds show results obtained green function method respectively fig comparison radial profiles temperature calculated method obtained method source solid long dashed dashed dotted lines show results calculated method respectively whereas closed circles squares triangles diamonds show results obtained method respectively fig comparison radial profiles 
of temperature calculated with the proposed integral-transform method and obtained with the finite-difference method for the fourth heat-source model. Solid, long-dashed, dashed, and dotted lines show the results calculated with the proposed method, whereas closed circles, squares, triangles, and diamonds show the corresponding results obtained with the finite-difference method.
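As a numerical companion to the heating model described in Materials and Methods, the sketch below evaluates the Rosensweig-type volumetric power dissipation of magnetite nanoparticles in an alternating magnetic field, combining the Néel and Brownian relaxation times with the chord (Langevin) susceptibility. The default parameter values (anisotropy constant, domain magnetization, particle volume fraction, surfactant-layer thickness, carrier viscosity, attempt time) are illustrative placeholders and are not the values used in the study.

```python
import numpy as np

MU0 = 4e-7 * np.pi       # vacuum permeability [T m / A]
KB = 1.380649e-23        # Boltzmann constant [J / K]

def power_dissipation(d, f, H0, T=310.0, K=23e3, Md=446e3,
                      phi=2e-3, eta=1e-3, delta=2e-9, tau0=1e-9):
    """Volumetric power dissipation P [W/m^3] of magnetite MNPs of diameter d [m]
    in an AMF of frequency f [Hz] and amplitude H0 [A/m] (linear-response model).
    All default parameter values are placeholders for illustration only."""
    Vm = np.pi * d**3 / 6.0                        # magnetic core volume
    Vh = np.pi * (d + 2.0 * delta)**3 / 6.0        # hydrodynamic volume
    tau_N = tau0 * np.exp(K * Vm / (KB * T))       # Neel relaxation time
    tau_B = 3.0 * eta * Vh / (KB * T)              # Brownian relaxation time
    tau = tau_N * tau_B / (tau_N + tau_B)          # effective relaxation time
    xi = MU0 * Md * H0 * Vm / (KB * T)             # Langevin parameter
    chi_i = MU0 * phi * Md**2 * Vm / (3.0 * KB * T)             # initial susceptibility
    chi0 = chi_i * (3.0 / xi) * (1.0 / np.tanh(xi) - 1.0 / xi)  # chord susceptibility
    wt = 2.0 * np.pi * f * tau
    chi_im = chi0 * wt / (1.0 + wt**2)             # out-of-phase (loss) component
    return np.pi * MU0 * chi_im * H0**2 * f        # P = pi * mu0 * chi'' * H0^2 * f

# Example: 19 nm particles, f = 600 kHz, H0 = 3 kA/m (hypothetical operating point).
print(power_dissipation(19e-9, 600e3, 3e3))
```

Sweeping the diameter and frequency arguments of such a function gives the kind of size- and frequency-dependence of the energy dissipation discussed around the first figure.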
lasalle invariance principle dynamical systems concise tutorial oct wenjun mei francesco bullo center control computation university california santa barbara summary method establish lyapunov stability equations systems lasalle invariance principle originally proposed become fundamental mathematical tool area dynamical systems control theoretical research engineering practice dynamical systems least extensively studied systems example model predictive control typically studied via lyapunov methods however peculiar absence standard literature standard treatments lyapunov functions lasalle invariance principle nonlinear systems textbooks nonlinear dynamical systems focus systems example classic textbook khalil nonlinear systems relegates systems exercises end chapter textbook vidyasagar present lasalle invariance principle chapter book lasalle author establishes lasalle invariance principle difference equation systems however useful lemmas given form exercises proof provided document provide proofs lemmas proposed needed derive main theorem lasalle invariance principle dynamical systems organize materials manner first introduce basic concepts definitions section dynamical systems invariant sets limit sets section present prove useful lemmas properties invariant sets limit sets finally establish original lasalle invariance principle dynamical systems simple extension section section provide references extensions lasalle invariance principles reading document intended educational tutorial purposes contains lemmas might useful reference researchers key words lasalle invariance principle difference equations nonlinear dynamics limit set lyapunov function basic concepts dynamical system motion limit set invariant set reviewing basic concepts dynamical systems first introduce frequently used notations let set natural numbers denote set integers positive integers respectively set real numbers denoted euclidean space denoted let column vector let column vector use denote empty set sequence mean without causing confusion sometimes omit subscript refer sequence discrete dynamical system given map following equations system referred difference equations system equation together additional condition defines problem difference equations system sequence solution problem similarly map defines order difference equations system corresponding problem following form lemma equivalence system order difference equations system equivalent difference equations system proof let define thereby define obtain first order difference equations system concludes proof simplicity rest document whenever refer difference equations system assume unless specified addition matter whether emphasized map always assumed continuous document definition discrete system dynamical system map discrete system property continuous map discrete dynamical system holds group property remark property implies uniqueness solution equations initial condition forward direction group property leads uniqueness solution direction limit set invariant set invariant set limit set two important concepts dynamical systems limit set characterizes asymptotic behavior difference equations system limit behavior compact invariant set implies existence limit sets subsection present definitions invariant set limit set definition invariant sets given map set define set set positively invariant negatively invariant invariant set invariantly connected closed invariant union two disjoint closed invariant sets set largest invariant set set remark following statements 
inferred definition set countably many one isolated fixed points invariantly connected largest invariant set implies presenting definition limit set first introduce preliminary notions definition motion periodicity fixed point extension motion given map vector motion refers sequence denoted motion periodic exists least referred period motion point called fixed point motion period set referred extension motion remark extension motion unique map definition distance convergence point set define distance inf norm defined motion converges set lim interior closure boundary define open ball around radius set set interior point exists denote int set interior points define closure set define boundary set int present definitions limit point limit set definition limit point limit set given motion limit point motion exists subsequence ambiguity map also refer limit point set limit points referred limit set denoted exists sequence given set limit set set denoted defined exists sequence properties invariant sets limit sets section present prove important properties limit sets invariant sets difference equations systems properties used proof lasalle invariant principle properties invariant sets first present lemmas properties invariant sets lemma invariantly connected set periodic motion given continuous map suppose invariant set finite elements set invariantly connected periodic motion proof suppose periodic motion period definition invariant closed also straightforward check union two disjoint closed invariant set therefore invariantly connected suppose invariantly connected consequence fixed point map otherwise either union two disjoint closed invariant sets since fixed point invariant must exists least one positive integer denote least integer since set finite elements least common multiple finite positive integer denoted therefore implies periodic motion conclude proof lemma closures invariant sets continuous map following statements hold closure positively invariant set positively invariant closure bounded invariant set invariant proof suppose positively invariant set map due continuity exists int implies positively invariant leads contradiction therefore thus concludes proof statement according statement closure bounded invariant set positively invariant suppose bounded invariant leads following result exists since bounded bounded exists sequence exists obtain sequence since compact set exists sequence subsequence ynk converges moreover since map continuous lim ynk lim ynk lim xnk therefore exists obtained therefore concludes proof statement lemma invariant set extension motion continuous map set invariant set motion starting extension proof suppose invariant set since positively invariant leads hand since also negatively invariant exists let since exists following argument let construct extension motion extension suppose motion starting extension since leads implies addition since exists therefore invariant set concludes proof lemma properties largest invariant set continuous map set largest invariant set following statements hold union extensions motions remain exists extension motion compact compact proof extension motion staring denoted one easily check set invariant according remark implies hand since invariant exists exists following argument conclude extension motion concludes proof statement statement straightforward result statement proceed prove statement sequence since compact moreover since leads due continuity lim lim therefore motion satisfies moreover since invariant exists obtain 
sequence since compact exists subsequence ynk ynk due continuity map lim ynk lim ynk lim xnk let applying argument obtain continue argument get extension motion statement proved leads therefore compact set properties limit sets lemma closed forms limit sets given continuous map limit set given definition satisfies similarly set limit set given definition satisfies proof first prove equation suppose since exists letting implies since argument holds hand since inf let must exist let must exist following argument construct subsequence implies therefore according definition concludes proof equation similarly equation proved following argument inf lemma invariance asymptotic properties limit set continuous map following statements hold limit set closed positively invariant motion bounded set nonempty compact invariant invariantly connected smallest set approaches proof sketch tis proof found page let according lemma closed set since closure set countably many points therefore intersection countably many closed sets closed moreover definition exists let due continuity map implies also limit point therefore obtain concludes proof statement prove statement since bounded exists sequence converges definition therefore concludes proof statement definition bounded bounded turn implied bounded according statement closed therefore compact concludes proof statement exists let since bounded exists sequence converges due continuity lim lim lim nkr shown exists therefore already according statement therefore invariant concludes proof statement prove statement contradiction suppose union two disjoint sets closed invariant consequence due continuity exists satisfying adopt generalized definition open ball around set set similar argument exists moreover since must exist sequence enters infinite times must exist positive integer otherwise subset limit set however since therefore exists satisfying bounded closed set since infinitely many compact set compact set must contain least one limit point contradicts assumption since concludes proof statement prove statement first point approaches prove contradiction approaches suppose converges exists letting obtain subsequence however since bounded exists sequence nkr implies nkr nkr contradicts nkr therefore proceed prove smallest set approaches set approaches also approaches therefore need discuss case closed set suppose sequence addition exists sequence suppose closed sequence already obtain since contradicts therefore thus concludes proof statement following lemma presents important properties limit set set proof follows line argument proof lemma lemma invariance asymptotic properties limit set continuous map set following statements limit set hold closed positively invariant bounded compact invariant approaches smallest set approaches remark unlike limit set point necessarily invariantly connected even bounded example consider following difference equation system one easily check set fixed points union two disjoint closed invariant sets lemma limit set compact positively invariant set continuous map suppose compact positively invariant set following statements hold compact invariant largest invariant set proof since compact continuous compact since positively invariant therefore definition concludes proof statement since bounded according statement lemma compact invariant proves statement prove contradiction largest invariant set suppose exists set since addition meanwhile contradicts concludes proof statement lasalle invariance principle extension preparation work 
section section ready present main theorem original lasalle invariance principle dynamical systems proof found page theorem lasalle invariance principle let set consider difference equations system defined map continuous suppose exists scalar map satisfying solution following problem satisfies bounded exists largest invariant set proof let compact since continuous lower bounded moreover since exists since exists due continuity xnk therefore lim lim leads moreover since bounded according lemma invariant therefore implies thus obtain since largest invariant set finally since approaches classic lasalle invariance principle stated theorem requires present simple extension classic lasalle invariance principle extension establish converge solution defined uniformly bounded finite time theorem extension lasalle invariance principle consider following difference equations system continuous set suppose exists map satisfying continuous exists compact set solution equation satisfies exists largest invariant set proof since solution according lemma since continuous uniformly lower bounded addition since exists exists sequence since continuous moreover leads therefore according lemma invariant therefore turn implies thereby moreover since invariant therefore since approaches concludes proof advanced versions lasalle invariance principle section provide incomplete list references extensions advanced versions lasalle invariance principle interest reading section chapter author discusses vector lyapunov functions propose sufficient condition convergence systems fixed points hale extends lasalle invariance principles autonomous systems infinite dimensions shevitz paden discusses lasalle invariance principle systems extensions lasalle results switched systems provided hespanha bacciotti mancilla alberto consider invariance principle dynamical systems generalized lyapunov functions first difference positive bounded regions results lyapunov functions invariance principles difference inclusions systems found research articles kellett teel bullo see lemma well book bullo see theorem refer book goebel systematic treatment hybrid dynamical systems references alberto calliero martins invariance principle nonlinear discrete autonomous dynamical systems ieee transactions automatic control bacciotti mazzi invariance principle nonlinear switched systems systems control letters bullo carli frasca gossip coverage control robotic networks dynamical systems space partitions siam journal control optimization bullo distributed control robotic networks princeton university press url http karatas bullo coverage control mobile sensing networks ieee transactions robotics automation goebel sanfelice teel hybrid dynamical systems modeling stability robustness princeton university press hale dynamical systems stability journal mathematical analysis applications doi hespanha liberzon angeli sontag nonlinear notions stability switched systems ieee transactions automatic control kellett teel smooth lyapunov functions robustness stability difference inclusions systems control letters khalil nonlinear systems prentice hall edition lasalle stability dynamical systems siam extension lasalle invariance principle switched systems systems control letters shevitz paden lyapunov stability theory nonsmooth systems ieee transactions automatic control vidyasagar nonlinear systems analysis siam
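As a small numerical illustration of the discrete-time LaSalle invariance principle stated above, the sketch below iterates a toy difference-equation system x_{k+1} = f(x_k), checks along the orbit that a continuous function V satisfies V(f(x)) - V(x) <= 0, and observes convergence toward the largest invariant set contained in {x : V(f(x)) = V(x)} (here the fixed point at the origin). The map f and the candidate function V are invented for this example and are not taken from the tutorial.

```python
import numpy as np

def f(x):
    """A toy discrete-time system x_{k+1} = f(x_k); a contraction whose largest
    invariant set inside {x : V(f(x)) = V(x)} is the origin (illustrative only)."""
    return 0.5 * np.array([x[1], x[0]]) / (1.0 + x @ x)

def V(x):
    """Candidate Lyapunov function V(x) = ||x||^2 (continuous, bounded below)."""
    return float(x @ x)

def check_lasalle(x0, steps=200):
    """Empirically verify the LaSalle hypotheses along one orbit:
    V is non-increasing, hence the orbit is bounded, and it must approach the
    largest invariant set M contained in {x : V(f(x)) - V(x) = 0}."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x_next = f(x)
        assert V(x_next) - V(x) <= 1e-12, "V must not increase along the motion"
        x = x_next
    return x

print(check_lasalle([2.0, -1.5]))   # converges to (approximately) [0, 0]
```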
distributed fusion labeled densities via label spaces matching mar bailu wang wei suqi lingjiang kong xiaobo yang university electronic science technology china school electronic engineering chengdu city china email kussoyi paper address problem distributed tracking labeled set filters framework generalized covariance intersection gci analyses show label space mismatching phenomenon means realization drawn label spaces different sensors implication quite common practical scenarios may bring serious problems contributions firstly provide principled mathematical definition label spaces matching lsdm based information divergence also referred criterion handle propose novel distributed fusion algorithm named gci fusion via label spaces matching first step match label spaces different sensors end build ranked assignment problem design cost function consistent criterion seek optimal solution matching correspondence label spaces different sensors second step perform gci fusion matched label space also derive gci fusion generic labeled multiobject lmo densities based foundation labeled distributed fusion algorithms simulation results gaussian mixture implementation highlight performance proposed algorithm two different tracking scenarios ntroduction compared centralized tracking methods distributed tracking dmmt methods generally benefit lower communication cost higher fault tolerance increasingly attracted interest tracing community correlations estimates different sensors known devising dmmt solutions becomes particularly challenging optimal fusion problem developed computational cost calculating common information make solution intractable practical applications alternative use suboptimal fusion technique namely generalized covariance intersection gci exponential mixture densities emds pioneered mahler highlight gci capable fuse gaussian formed distributions different sensors completely unknown correlation based gci fusion rule distributed fusion probability hypothesis density phd phd filters explored however aforementioned filters one hand trackers target states indistinguishable hand almost solution optimal bayssian filter even though special observation model standard observation model assumed recently notion labeled random finite set rfs introduced address target trajectories uniqueness proposed class generalized labeled glmb densities conjugate prior also closed chapmankolmogorov equation standard observation model bayesian inference moreover relevant stronger results filter directly used tracking produce trajectories formally also outperform aforementioned filters except standard observation model labeled set filter also achieved good results generic observation model papi proposed density approximation labeled lmo density developed efficient filter generic observation model also provides detailed expression universal lmo density product joint existence probability label set joint probability density states conditional corresponding labels due advantages labeled set filters meaningful investigate generalization distributed environment fantacci derived solutions gci fusion marginalized labeled lmb posteriors highlight performance relevant dmmt algorithms based assumption different sensors share label space however analyses show assumption hard satisfied many real world applications word label spaces sensors always mismatch sense realization drawn label spaces different sensors implication practical scenarios referred label space mismatching happens direct fusion labeled posteriors 
different sensors exhibit counterintuitive behavior fusion performed objects different labels making fusion performance poor therefore lack robustness practice one perform fusion labeled posteriors different sensors directly glmb distribution also simply named distribution malher book first time letter abbreviation label space mismatching means double miss matching get rid bad influences two promising thoughts employed one perform gci fusion unlabeled version posteriors different sensors firstly proposed match label spaces different sensors perform gci fusion matched label space paper focuses latter contributions provide principled mathematical definition label space matching based information divergence definition also provides criterion judge whether label spaces matching moreover make criterion practicality derive specified expression set marginal density case proposed distributed fusion algorithm namely gci fusion lmo densities via label spaces matching short first step match label spaces different sensors end ranked assignment problem built seek optimal solution matching correspondence cost function based criterion perform gci fusion lmo densities matched label space addition derive gci fusion generic lmo density based assumption label spaces matching foundation many labeled dmmt numerical results performance proposed fusion algorithm gaussian mixture implementation verified background notation paper inhere convention states denoted small letter states denoted capital letter distinguish labeled states distributions unlabeled ones bold face letters adopted labeled ones observations generated states denoted small letter multitarget observations denoted capital letter moreover blackboard bold letters represent spaces state space represented label space observation space collection finite sets denoted denotes finite subsets elements labeled single target state constructed augmenting state label labels usually drawn form discrete label space distinct index space set positive integers state labeled state observation modelled finite set states finite set labeled states finite set observations generated singletarget states respectively use exponential notation function convention admit arbitrary arguments like sets vectors integers generalized kronecker delta function given otherwise inclusion function given ifx otherwise singleton notation used instead also notice labeled state rfs distinct labels set labels labeled rfs given projection defined distinct label indicator labeled density arbitrary labeled rfs lmo density represented expression given lemma lemma given labeled density positive integer define joint existence probability label set joint probability density states conditional corresponding labels thus lmo density expressed bayesian filter finite set statistics fisst proposed mahler provided rigorous elegant mathematical framework detection tracking classification problem unified bayesian paradigm fisst framework optimal bayesian filter propagates rfs based posterior density conditioned sets observations time time following recursion note bayesian filter also appropriate labeled set posterior labeled set integrals defined involving markov transition function andr likelihood function denotes set integral defined dxn gci fusion rule gci proposed mahler specifically extend fisst distributed environments consider two nodes sensor network time nodes maintain local posteriors rfs based densities gci proposed mahler fused distribution geometric mean exponential mixture local posteriors 
parameters determining relative fusion weight distributions derived following distribution minimizes weighted sum divergence kld respect given set distributions emd arg min dkl dkl dkl denotes kld dkl log note integral must interpreted set integral iii abel pace ismatching henomenon romising olutions gci formula generally computationally intractable set integrals need integrate joint state spaces considering cardinality number objects fortunately tractable derive closedform solutions gci fusion many simplistic labeled densities including lmb densities simplify formula largely however closedform solutions derived based assumption label spaces different local labeled set filters matching assumption really harsh practice making solutions restrictive realworld dmmt section firstly analyze causes label spaces mismatching phenomenon terms two popular birth procedures provide two novel methods solve challenge problem note gci fusion rule also appropriate labeled set posterior labeled set integrals defined involving phenomenon essential meaning realization drawn label spaces different sensors implication underlying implication posterior spatial distributions object different sensors tiny discrepancy phenomenon quite common labeled dmmt may originate time steps recursion filtering fusing last subsequent time steps many cases naturally birth procedure decisive influence matching label spaces different sensors hence following analyze causes terms two popular birth procedures adaptive birth procedure abp birth procedure widely used targets based observations associated persisting targets due randomness observations really difficult guarantee births different local set filters labeled using object label addition observation sets provided different sensors incorporating noisy observations objects stochastic missdetections stochastic clutters also contributed persisting objects instance sensor loses object due later sensor keep locking object always mismatching label object arise priori knowledge based birth procedure pbp birth procedure often used scenarios priori positions object births entrance marketplace airport etc generally object label two dimension time birth unique index distinguish objects priori born positions pbp provide reference object index contribute little birth time hence still exists chance mismatching births since easily effected uncertain measurement noise clutter variance prior born position addition due persisting objects may wrongly dominated clutters truncated due following time steps persisting objects also happen sometimes analyses suggest abp pbp difficult ensure different sensors share label space note pbp suffer less abp use prior information reference word ensure matching label spaces sensors ideal detecting environment sensor dose clutters estimate accuracy sensor enough high required promising solutions break away bad influence propose two solutions first method gci fusion performed unlabeled state space via transforming labeled rfs densities unlabeled versions therefore fusion method robustness glmb family proved unlabeled versions generalized multibernoulli gmb distributions gci fusion gmb distributions also proposed second method firstly match label spaces different sensors perform gci fusion labeled densities macthed labeled state space shown fig approach referred gci fusion label space matching paper mainly focuses fusion method labeled rfs density labeled rfs density label space matching share label space gci fusion fused density fig label spaces different sensors 
matched means time firstly perform gci fusion labeled state space gci usion via abel pace atching based second solution label spaces matching solution gci fusion lmos two key points need addressed clearly describe concept label space matching firstly give mathematical definition label spaces matching based information divergence also referred criterion foundation fusion method solving built ranked assignment problem matching relationship objects different sensors get matched label spaces finally condition different label spaces matched solution gci fusion universal lmo densities derived mathematical description label space matching order clearly describe label space matching formulate rigorous mathematical model shown definition definition also provides criterion judge whether label spaces different sensors matching call criterion criterion definition consider scenario sensor sensor observe spatial region suppose state space label space multiobject posteriors sensor sensor respectively rfs probability density represented union state space random finite subset said matching probability density given threshold describes distance two distributions condition demands cardinalities elements label spaces sensor sensor values note random finite subset related object label probability density object sensor hence condition demands densities object sensors slight difference ensures object label sensors matching incorporates statistical information object thus reasonable judge matching relationship label based probability densities word match label spaces object matching constrain satisfied different sensors remark parameter given threshold slighter values harsher criterion ideal case means objects different sensors share density distance usually chooses divergences including kld alisilvey measure different density key point using criterion compute probability density global density also called set marginal density respective preliminarily give concept set marginal density shown definition generalized computing method shown lemma specified computing method joint rfs set marginal density extened labeled density section derive specified expression set marginal density labeled random finite subset respective proposition proposition guarantees practicability criterion definition let rfs random finite subset denoted set density function called set marginal density respect lemma let rfs random finite subset denoted set marginal density respect denoted derived denotes set derivative remark lemma makes convenient get set marginal density random finite subset labeled rfs indeed local statistical properties labeled rfs learned set marginal density also relations label spaces correlations among different rfss densities known via analyzing relevances corresponding set marginal densities according proposition set marginal density single object space derived following proposition given multiobject posterior sensor set marginal density corresponding subset space labeled bernoulli distribution parameters shown dxn remark proposition indicates set marginal density labeled bernoulli distribution class bernoulli densities congenital advantage get tractable results information divergence generally making computation simplistic thus enhance practicability criterion largely label space matching via ranked assignment problem section showed phenomenon quite common practical scenarios actually different sensors observing spatial region tracks sensor one definite correspondence tracks another sensor ideal case consistent 
however due influence stochastic noise exist great uncertainty matching correspondence problem seeking solution matching correspondence essentially optimization problem section firstly provide mathematical representation matching correspondence different sensors using mapping function based criterion given definition build ranked assignment problem design principle cost function seek solution optimal matching correspondence information divergence employs divergence generalized form kld free parameter definition fusion map function implies set fusion maps called fusion map space denoted label space accomplished solving following ranked assignment problem enumerating fusion map represented assignment matrix consisting entries every row column summing either track sensor assigned track sensor row means track sensor false track corresponding track sensor misdetected column means track sensor false track corresponding track sensor misdetedted conversion assignment matrix given cost matrix optimal assignment problem matrix cost assigning track sensor track sensor according definition two label spaces matched distance arbitrary two single object densities bernoulli density indicating true object two sensors respectively tiny enough specifically equals hellinger affinity thus cost selection criterion becomes equality hellinger distance used describe distance two densities also consistent criterion proposition shows single object density follows labeled bernoulli distribution thus using formula renyi divergence two bernoulli distributions easily shown log log dxn log fusion map describes one possible hypothesis matching relationship different label spaces set marginal density provided proposition cost combined costs every true track sensor track sensor succinctly written frobenius inner product number fusion maps grows exponentially number objects due uncertainty need seek optimal estimation order perform gci fusion matching optimal assignment problem seeks assignment matrix minimizes cost function arg min denotes assignment matrix best matching hypothesis mapping case solving equation using murty algorithm true matching hypothesis specified consensual label space given tracks one sensor corresponding matched tracks another sensor leaved considering uncertainty remark optimal establishes optimal solution matching correspondence two label spaces hence consensual label space obtained makes assumption different sensors share label space come true gci fusion labeled density fusing lmo densities via gci rule main challenge gci formula computationally intractable due integrates joint however condition different label spaces matching hold problem gci fusion lmo densities great simplified need consider possible matching correspondence different label spaces proposition derived generic lmo density based matching label spaces proposition let labeled posterior sensor label spaces matching distributed fusion via gci rule given lmb respectively proposed fusion algorithm given algorithm algorithm proposed fusion inputs receive posteriors nodes step calculate set marginal density respect according proposition step perform fusion adopting iteration method initial obtain consensual label space according perform fusion according output fused posterior end return fused posterior form aussian ixture mplement detail computation cost function ranked assignment problem fusion special formed lmo densities lmb present work density conditional existence sensor represented form especially special formed lmo densities lmb 
densities solutions shown assumption different label spaces share birth space corresponding gci fusion referred gci fusion since calculation cost function involves exponentiation gms general provide preserve form suitable approximation exponentiation proposed adopted thus turns log det det moreover implementation gci fusion special formed lmo densities lmb assumption different sensors share label space refer denote identity zero matrices second sampling period standard deviation process noise probability target survival probability target detection sensor independent probability detection sensors observation model also linear gaussian standard deviation measurement noise number clutter reports scan poisson distributed clutter report sampled uniformly whole surveillance region parameters implementation chosen follows truncation threshold prune threshold merging threshold maximum number gaussian components nmax performance metrics given term optimal subpattern assignment ospa error born dies born dies communication line sensor sensor coordinate fig scenario simple distributed sensor network two sensors tracking two targets bernoulli birth distribution time depending measurement proportional probability assigned target updated time min max given expected number target birth time max maximum existence probability new born target ospa performance proposed fusion evaluated two tracking scenarios implemented using approach proposed section since paper focus problem weight selection choose metropolis weights convenience notice may impact fusion performance lmb filter adopted local filters efficiency lmb filter demonstrated targets travel straight paths different constant velocities number targets time varying due births deaths following target observation models used target state variable vector plannar position velocity denotes matrix transpose transition model linear gaussian specified coordinate erformance ssessment scenario demonstrate effectiveness fusion performance fusion compared assumption different sensors share label space two experiments abp pbp used respectively purpose simple scenario involving two sensors two objects considered shown fig duration scenario experiment preceding analyses show lsdm phenomenon arises frequently abp adopted local filters prove effectiveness proposed fusion fusion compared fusion abp situation adaptive birth procedure proposed employed scenario specifically existence probability time fig abp ospa errors fusion algorithms order adaptive birth runs fig illustrates performance fusion significant better fusion since method abp depends observations randomness labels birth targets time also randomness corresponding observations result fusion shows abp leads frequently necessity match label spaces different sensors ensure consensual performance gci fusion collapse result also evidences viewpoint seen removed perform really excellent exactly fusion outstanding performance gcilsm also gets benefit cost function ranked assignment problems coordinate experiment experiment analyzes problem pbp situation preceding analyses show pbp also suffers even though obtain priors births experiment performance fusion compared using pbp communication line sensor sensor sensor true estimated tracks sensor coordinate birth label fig scenario distributed sensor network three sensors tracking five targets born dies born dies born dies born dies born dies birth label coordinate coordinate scenario test performance proposed fusion challenging scenarios sensor network scenario involving 
five targets considered shown fig experiment proposed fusion compared fusion mentioned section gci fusion phd filter fusion use abp introduced scenario adaptive birth distribution introduced duration scenario true estimated tracks ospa coordinate time fig pbp procedure state estimation sensor single run state estimation sensor single run state estimation fusion single run ospa errors fusion algorithms order runs figs show estimations local filters fusion respectively single run seen run fusion fails perform fusing track local filters accurately estimate track due prior information births provide initial positions fail provide initial time object initialized different time step different local filters hence labels object different local filters mismatching obviously leading fusion algorithm completely lose object performance comparison fusion also shown fig expected performance fusion remarkable advantages towards fusion fusion getting worse target births deaths result consistent single monte carlo run results confirm fusion able handle fusion estimated cardinality ospa coordinate coordinate coordinate birth label birth label true tracks estimated tracks sensor true cardinality time time fig ospa distance order implementation adaptive birth cardinality estimation runs ospa distance cardinality estimation fig illustrate performance differences among three fusion methods seen performance gcilsm almost performances converge also performs slightly worse objects born explanation fusion considered possible matching correspondences label spaces different sensors jointly fusion utilizes optimal estimation matching correspondence moreover tiny performance loss fusion toward fusion also demonstrates superiority optimal estimation matching correspondence words ranked assignment problem built section match label spaces different sensors accurately consistently addition fig also reveals gcigmb fusion outperform fusion ospa error cardinality result also demonstrates effectiveness fusion vii onclusion paper investigates problem distributed multitarget tracking dmmt labeled density based generalized covariance intersection firstly provided principled mathematical definition label spaces matching based information divergence referred criterion proposed novel distributed fusion algorithm firstly match label spaces different sensors build ranked assignment problem seek optimal solution matching correspondence objects different sensors based criterion gci fusion performed matched label space moreover derive gci fusion generic labeled lmo densities gaussian mixture implementation proposed also given effectiveness better performance demonstrated numerical results present stage impact objects closely spaced fusion clearly thus work study fusion considering objects proximity eferences chong mori chang distributed multitarget multisensor tracking tracking advanced applications artech house chapter julier bailey uhlmann using exponential mixture models suboptimal distributed data fusion proc ieee nonlinear stat signal proc workshop mahler distributed data fusion unified approach proc spie defense sec gaussian mixture probability hypothesis density filter ieee trans signal vol ristic clark adaptive target birth intensity phd cphd filters ieee trans aerosp electron vol cantoni analytic implementations cardinalized probability hypothesis density filter ieee trans signal vol jul schmidt spooky action distanc cardinalized probability hypothesis density filter ieee trans aerosp electron vol cantoni cardinality balanced 
multitarget filter implementations ieee trans signal vol pham suter joint detection estimation multiple objects image observation ieee trans signal vol hoseinnezhad based road constraints proc ieee int fusion jul amirali hoseinnezhad alireza robust multibernoulli sensor selection tracking sensor networks ieee signal process vol amirali hoseinnezhad alireza sensor control via minimization expected estimation errors ieee trans aerosp electron vol jul hoseinnezhad suter bayesian integration audio visual information tracking using cbmember filter proc int conf speech signal process icassp prague czech republic may hoseinnezhad visual tracking background subtracted image sequences via filtering ieee trans signal vol clark julier mahler robust sensor fusion unknown correlations proc sens signal process defence sspd clark julier distributed fusion phd filters via exponential mixture densities ieee sel topics signal vol apr battistelli chisci fantacci farina graziano consensus cphd filter distributed multitarget tracking ieee sel topics signal vol mar guldogan consensus bernoulli filter distributed detection tracking using doppler shifts ieee signal process vol jun mahler statistical information fusion norwell usa artech house wang hoseinnezhad kong yang distributed fusion filter based generalized covariance intersection review ieee trans signal jun labeled random finite sets conjugate ieee trans signal vol jul phung labeled random finite sets bayes tracking filter ieee trans signal reuter dietmayer labeled multibernoulli filter ieee trans signal vol jun beard bayesian tracking merged measurements using labelled random finite sets ieee trans signal papi kim particle tracker superpositional measurements using labeled random finite sets arxiv preprint fantacci papi marginalized filter http accessed papi fantacci beard generalized labeled approximation densities arxiv preprint fantacci consensus labeled random finite set filtering distributed tracking http accessed wang kong yang distributed tracking via generalized random finite sets proc int conf inf fusion jul mahler advances statistical information fusion artech house wang kong joint random finite set scenario review ieee trans signal process kong enhanced approximation labeled density based correlation analysiss submmitted proc int conf inf fusion murty algorithm ranking assignments order increasing cost operations research vol schumacher consistent metric performance evaluation filters ieee trans signal
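The two computational steps of the proposed GCI fusion via label-space matching — matching track labels across sensors by solving a ranked/optimal assignment problem over pairwise track-to-track divergences, and then fusing the matched densities with the GCI (exponential-mixture) rule — can be sketched for single-Gaussian track densities as follows. The squared-Hellinger costs, the fusion weight w, and the example means and covariances are placeholders; a full implementation would operate on the Gaussian-mixture LMB components and Metropolis weights described in the paper, and would use Murty's algorithm rather than a single best assignment.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def hellinger_sq(m1, P1, m2, P2):
    """Squared Hellinger distance between two Gaussian track densities."""
    P = 0.5 * (P1 + P2)
    d = m1 - m2
    coeff = (np.linalg.det(P1) ** 0.25 * np.linalg.det(P2) ** 0.25
             / np.sqrt(np.linalg.det(P)))
    return 1.0 - coeff * np.exp(-0.125 * d @ np.linalg.solve(P, d))

def match_labels(tracks_a, tracks_b):
    """Step 1: build the track-to-track cost matrix and solve the assignment
    problem; returns index pairs (i, j) giving the matched label correspondence."""
    C = np.array([[hellinger_sq(ma, Pa, mb, Pb) for (mb, Pb) in tracks_b]
                  for (ma, Pa) in tracks_a])
    rows, cols = linear_sum_assignment(C)
    return list(zip(rows, cols))

def gci_fuse(m1, P1, m2, P2, w=0.5):
    """Step 2: GCI (exponential-mixture) fusion of two matched Gaussian
    densities, i.e. covariance intersection with fusion weight w."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    Pf = np.linalg.inv(w * I1 + (1.0 - w) * I2)
    mf = Pf @ (w * I1 @ m1 + (1.0 - w) * I2 @ m2)
    return mf, Pf

# Illustrative two-track example (all numbers made up).
tracks_a = [(np.array([0.0, 1.0]), np.eye(2)), (np.array([10.0, -2.0]), 2 * np.eye(2))]
tracks_b = [(np.array([10.5, -1.8]), np.eye(2)), (np.array([0.2, 0.9]), 1.5 * np.eye(2))]
for i, j in match_labels(tracks_a, tracks_b):
    mf, Pf = gci_fuse(*tracks_a[i], *tracks_b[j])
    print(f"track {i} of sensor a matched to track {j} of sensor b -> fused mean {mf}")
```

The design choice mirrors the criterion in the paper: because each single-object marginal is (labeled) Bernoulli with a Gaussian-mixture spatial density, a closed-form divergence between components is cheap to evaluate, so the assignment cost matrix can be built exactly and the label-matching step adds little overhead before the GCI fusion step.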
dec testing gorenstein property olgur celikbas sean abstract answer question celikbas dao takahashi establishing following characterization gorenstein rings commutative noetherian local ring gorenstein admits integrally closed ideal finite gorenstein dimension accomplished detailed study certain test complexes along way construct test complex detect finiteness gorenstein dimension projective dimension introduction throughout paper denotes commutative noetherian local ring unique maximal ideal residue field celebrated theorem auslander buchsbaum serre tells regular finite projective dimension burch corollary extended proving regular integrally closed ideal auslander bridger introduced generalization projective dimension see definition analogous regular setting finiteness characterizes gorensteinness local setting goto hayasaka studied gorenstein dimension integrally closed ideals analogous burch result established following see question yoshida stated discussion following let integrally closed ideal assume contains satisfies serre condition gorenstein aim paper remove hypothesis contains satisfies serre condition accomplish following result hence obtain complete generalization burch aforementioned result see also corollary generalization theorem let integrally closed ideal depth gorenstein date january mathematics subject classification key words phrases integrally closed ideals projective dimension semidualizing complexes test complexes sean supported part grant nsa olgur celikbas sean argument quite different goto hayasaka since uses complexes part introduction focus case modules defined next note modules case definition let denote either projective dimension let finitely generated module following condition holds finitely generated torr straightforward show also example shows converse statement fails general part proof theorem also answer following questions see corollaries question let module module must must gorenstein affirmative answers question additional hypotheses also majadas gives affirmative answer version question uses restrictive version test modules theorem follows next significantly stronger result theorem let extir gorenstein turn follows much general theorem corollary results detecting dualizing complexes conclude introduction summarizing contents paper section consists background material use throughout paper contains technical lemmas later use section develop foundational properties various objects answer question section contains theorems highlighted derived categories semidualizing complexes throughout paper work derived category whose objects chain complexes homological differential references include notation consistent particular rhomr derived derived tensor product two isomorphisms identified symbol projective dimension flat dimension denoted pdr fdr subcategory consisting homologically bounded complexes subcategory consisting homologically finite complexes finitely generated denoted dbf homologically finite semidualzing natural morphism rhomr isomorphism example semidualizing homr extir testing gorenstein property particular semidualizing dualizing semidualizing finite injective dimension homomorphic image local gorenstein ring dualizing complex converse holds work kawasaki particular cohen structure theorem shows completion dualizing complex dualizing complex semidualizing dual rhomr also semidualizing let flat local ring homomorphism let semidualizing semidualizing closed fibre gorenstein dualizing complex dualizing dualizing complexes introduced grothendieck 
harshorne general semidualizing complexes originated special cases general version premiering notion summarized next started work auslander bridger modules foxby yassemi recognized connection derived reflexivity general situation given let semidualizing dbf write derived rhomr natural morphism rhomr rhomr isomorphism case write instead complex dualizing every dbf derived particular gorenstein every dbf let flat local ring homormorphism given dbf one see auslander bass classes defined next arrived special cases general case described let semidualizing auslander class consists natural morphism rhomr isomorphism bass class consists rhomr natural morphism rhomr isomorphism dualizing complex given dbf one arhomr uses imply rhomr semidualizing rhomr rhomr let flat local ring homormorphism given one following two lemmas proved like respectively olgur celikbas sean lemma let dbf pdr let semidualizing following conditions equivalent iii rhomr following conditions equivalent iii rhomr lemma let flat local ring homomorphism gorenstein let dbf homology module finitely generated one one next result essentially theorem key theorem lemma let integrally closed ideal depth let finitely generated torr pdr particular module proof assume torr suppose pdr theorem states pdtest assume without loss generality hence theorem conclude claim one containment standard reverse containment let show suffices show integral since integrally closed end use determinantal trick suffices show whenever since desired fact implies desired completes proof claim fact depth implies element words contradicting claim let integrally closed ideal one assumes stronger assumption depth lemma one gets following strong conclusion given finitely generated torr pdr complexes section let semidualizing introduce main object study paper definition let dbf let denote either projective dimension complex following condition holds dbf torr testing gorenstein property let standard truncation argument shows module complex see proof examples modules given lemma note includes standard example see also appendix given dbf pdr finite thus complex also particular complex every finite finite projective dimension complexes complexes examples rings include regular rings cohenmacaulay rings minimal multiplicity see examples test modules test modules test modules mysterious see example example assume dualizing complex natural candidate complex rhomr indeed definition particular natural candidate complex see corollary however fail complex indeed jorgensen construct artinian local ring finitely generated module satisfies extir since local artinian dualizing complex namely injective hull residue field claim torr shows end recall following homr torr exti fact faithfully injective implies torr ring regular every dbf pdr hence trivial complex complex regular equivalently complex finite projective dimension similarly semidualizing complex dualizing equivalently complex finite projective dimension see particular complex gorenstein equivalently complex finite projective dimension continue discussion ascent descent test complexes theorem let flat local ring homomorphism let dbf complex show proof assume complex let dbf dbf flatness complexes moreover following isomorphisms olgur celikbas sean complex using conclude desired special case part argue part using place note conditions following three items hold natural henselization maps completion remark let flat local ring homomorphism assume closed fibre let dbf dbf let generating sequence set koszul complex follows 
following isomorphisms conclude note indeed already suffices show every homology module finitely generated know finitely generated moreover annihilated thus finitely generated since module finite finitely generated finitely generated theorem let flat local ring homomorphism let dbf assume closed fibre gorenstein proof one implication covered theorem reverse implication assume complex show complex let dbf let generating sequence set remark since complex one follows lemma deduce desired special case part one main results see also corollary theorem theorem let flat local ring homomorphism let dbf assume induced map finite field extension induced map finite complex complex proof one implication covered theorem reverse implication assume complex case complete show complex let let generating sequence set follows testing gorenstein property remark complex pdr follows pds implies pds case general case case implies complex complex complex theorem desired next example discussions ryo takahashi shows hypothesis necessary conclusion theorem example let field natural map finite free map since free let since regular module however module since regular see hand know whether regular closed fibre theorem sufficient note next question let flat local ring homomorphism let dbf assume regular complex must complex next corollary answers question corollary let set module gcb module module desired conclusions follow theoproof since rems next corollary answers question able improve result significantly next section see theorem subsequent paragraph corollary let module gorenstein module proof corollary says using conclude gorenstein hence end section building module see example proposition let flat local ring homomorphism set finitely generated torr torj let semidualizing set gorenstein complex gorenstein complex olgur celikbas sean proof first note since flat furthermore every isomorphisms particular one torr tori let finitely generated tora flat tora generally tori torj thus isomorphism previous paragraph implies desired corollary shows suffices show note induced map flat local complex also isomorphisms gorenstein closed fibre thus may replace induced map assume rest proof complete let dualizing see dualizing set rhoma rhomr noting proof let dbf need show see complex since dbf conclude fda follows special case hence part example let field consider local notice free hence flat also natural map local gorenstein closed fibre proposition let denote maximal ideal since assumptions proposition satisfied furthermore know since artinian local injective hull dualizing module thus dualizing conclude showing also module moreover note length type construct exact sequence following form testing gorenstein property indeed condition type says minimally generated elep ments let minimal presentation consider corresponding short exact sequence ker minimality presentation follows ker since conclude ker space need verify lena ker equality follows additivity length via condition lena len since flat apply functor sequence obtain next exact sequence associate long exact torr shows tori particular torr claim show let finitely generated torr display implies tori since desired claim show suppose way contradiction show contradicting let finitely generated torr display implies torr since pdr thus giving advertised contradiction establishing claim claim finitely generated torr torj shows almost since proposition follows claim end follow construction chapter build torr tori let minimal generating sequence homa instance work define formula 
since simple map monomorphism let coker long exact sequence exta associated sequence shows set make things concrete one uses specific functions suggested previous paragraph following minimal free presentation following minimal free presentation olgur celikbas sean flat implies thus homr torr extr fact faithfully injective implies torr since regular pda tori therefore conclude pda particular implies tora follows tor tora tora tora tora thus flat implies torr tora completes claim example detecting dualizing gorenstein properties next result yields theorems highlighted introduction note condition result assume priori dualizing complex however result shows condition implies dualizing complex theorem let semidualizing let rhomr assume one following conditions holds ring dualizing complex rhomr one dualizing proof assume dualizing complex set rhomr semidualizing let generating sequence consider koszul complex set rhomr injective hull note dbf indeed complex rhomr homologically finite since total homology module rhomr annihilated finite dimensional vector space matlis duality total homology module rhomr rhomr also finite dimensional vector space rhomr rhomr rhomr assumption rhomr implies rhomr rhomr rhomr conclude rhomr rhomr testing gorenstein property since complex implies conclude rhomr rhomr since faithfully injective argue proof conclude rhomr see also assumption dbf lemma shows assumption implies conclude isomorphic shift apply shift assume desired dualizing complex assume completion complexes semidualizing faithful flatness complex theorem also faithful flatness condition rhomr implies rhomrb rhomr rhomrb isomorphism addition assumption implies conclude thus follows condition satisfied note condition implies definition dualizing thus condition dbf implies dualizing implies fact dualizing desired give several consequences theorem compare next result corollary let let semidualizing rcomplex rhomr dualizing proof assume desired conclusion follows theorem note corollary let rhomr gorenstein proof use corollary theorem let gorenstein proof immediate corollary extir note hypotheses theorem weaker corollary indeed example exhibits module olgur celikbas sean module furthermore noted exist examples finitely generated modules extir corollary let semidualizing dualizing proof follows corollary since rhomr remark light corollary worth noting rings semidualizing complexes dualizing infinite projective dimension particular complexes neither corollary first examples constructed though published foxby see also also worth noting converse corollary fails general corollary let integrally closed ideal depth let semidualizing dualizing proof note lemma apply corollary recall next result initially obtained goto hayasaka extra conditions see also theorem let integrally closed ideal depth gorenstein proof apply corollary use theorem lemma finish section giving two examples show integrally closed depth hypotheses theorem necessary example let field let example since artinian proper ideal integrally closed see particular principal ideal generated integrally closed note gorenstein also fact form implies complete resolution example let field set let ideal generated set normal domain gorenstein see theorem let ideal generated integrally closed see furthermore pdr depth acknowledgments parts work completed celikbas visited north dakota state university november visited university connecticut april grateful kind hospitality generous support ndsu uconn mathematics departments also grateful jerzy weyman supporting visit 
irena swanson ryo takahashi helpful feedback work thank naoki taniguchi shiro goto pointing example also thank referee valuable corrections suggestions manuscript testing gorenstein property references auslander anneaux gorenstein torsion commutative commutative par pierre samuel vol paris auslander bridger stable module theory memoirs american mathematical society american mathematical society providence auslander buchsbaum homological dimension local rings trans amer math soc avramov foxby homological dimensions unbounded complexes pure appl algebra locally gorenstein homomorphisms amer math ring homomorphisms finite gorenstein dimension proc london math soc avramov iyengar miller homology local homomorphisms amer math burch ideals finite homological dimension local rings proc cambridge philos soc celikbas dao takahashi modules detect finite homological dimensions kyoto math celikbas gheibi sadeghi zargar homological dimensions rigid modules preprint christensen gorenstein dimensions lecture notes mathematics vol springerverlag berlin complexes auslander categories trans amer math soc electronic christensen foxby holm derived category methods commutative algebra preprint corso huneke katz vasconcelos integral closure ideals annihilators homology commutative algebra lect notes pure appl vol chapman boca raton evans griffith syzygies london mathematical society lecture note series vol cambridge university press cambridge foxby gorenstein modules related modules math scand frankild reflexivity ring homomorphisms finite flat dimension comm algebra frankild taylor relations semidualizing complexes commut algebra golod generalized perfect ideals trudy mat inst steklov algebraic geometry applications goto determinantal ideals define gorenstein rings sci tokyo kyoiku daigaku sect goto hayasaka finite homological dimension primes associated integrally closed ideals proc amer math soc electronic goto hayasaka finite homological dimension primes associated integrally closed ideals math kyoto univ hartshorne residues duality lecture notes seminar work grothendieck given harvard appendix deligne lecture notes mathematics berlin huneke swanson integral closure ideals rings modules london mathematical society lecture note series vol cambridge university press cambridge olgur celikbas sean iyengar local homomorphisms applications frobenius endomorphism illinois math david jorgensen liana independence total reflexivity conditions modules algebr represent theory takesi kawasaki macaulayfication noetherian schemes trans amer math soc majadas test modules flat dimension algebra appl matsumura commutative ring theory second cambridge studies advanced mathematics vol cambridge university press cambridge translated japanese reid semidualizing modules divisor class group illinois math lower bounds number semidualizing complexes local ring math scand wicklein adic foxby classes preparation serre sur dimension homologique des anneaux des modules proceedings international symposium algebraic number theory tokyo nikko tokyo science council japan takahashi local rings comm algebra vasconcelos divisor theory module categories publishing amsterdam mathematics studies notas notes mathematics verdier sga berlin lecture notes mathematics vol des des preface luc illusie edited note georges maltsiniotis yassemi math scand university connecticut department mathematics storrs usa address ndsu department mathematics box fargo usa current address department mathematical sciences clemson university martin hall box clemson usa 
E-mail address: ssather  URL: http
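For reference, the central definition and the main characterization discussed above can be stated compactly as follows. This is a reconstruction from the abstract and the theorem statement quoted in the text (the depth hypothesis on R/I is taken from that statement and should be checked against the original); the converse direction of the characterization is immediate, since every finitely generated module over a Gorenstein ring has finite Gorenstein dimension.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb,amsthm}
\newtheorem{theorem}{Theorem}
\theoremstyle{definition}
\newtheorem{definition}{Definition}
\DeclareMathOperator{\Tor}{Tor}
\DeclareMathOperator{\Gdim}{G\text{-}dim}
\DeclareMathOperator{\dep}{depth}
\begin{document}

% Test modules for a homological dimension H (projective or Gorenstein dimension):
% Tor-vanishing against M forces finiteness of H.
\begin{definition}
Let $H_R(-)$ denote either projective dimension or Gorenstein dimension over $R$.
A finitely generated $R$-module $M$ is an \emph{$H$-test module} if, for every
finitely generated $R$-module $N$,
\[
  \Tor_i^R(M,N)=0 \ \text{for } i\gg 0
  \quad\Longrightarrow\quad
  H_R(N)<\infty .
\]
\end{definition}

% Characterization of Gorenstein rings via integrally closed ideals of finite
% Gorenstein dimension; depth hypothesis as in the theorem quoted above.
\begin{theorem}
Let $(R,\mathfrak m,k)$ be a commutative noetherian local ring and let
$I\subsetneq R$ be an integrally closed ideal with $\dep(R/I)=0$
(for instance, an $\mathfrak m$-primary ideal).
If $\Gdim_R(R/I)<\infty$ (equivalently, $\Gdim_R(I)<\infty$), then $R$ is Gorenstein.
\end{theorem}

\end{document}
```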
| 0 |
sep deep learning based cryptographic primitive classification gregory hill xavier bellekens division computing mathematics abertay university dundee scotland email gregorydhill division computing mathematics abertay university dundee scotland email cryptovirological augmentations present immediate incomparable threat last decade substantial proliferation widespread consequences consumers organisations alike established preventive measures perform well however problem ceased reverse engineering potentially malicious software cumbersome task due platform eccentricities obfuscated transmutation mechanisms hence requiring smarter efficient detection strategies following manuscript presents novel approach classification cryptographic primitives compiled binary executables using deep learning model blueprint dynamic convolutional neural network dcnn fittingly configured learn control flow diagnostics output dynamic trace rival size variability contemporary data compendiums hence feeding model cognition methodology procedural generation synthetic cryptographic binaries defined utilising core primitives openssl multivariate obfuscation draw vastly scalable distribution library cryptoknight rendered algorithmic pool aes blowfish rsa synthesis combinable variants automatically fed core model converging accuracy cryptoknight successfully able classify sample algorithms minimal loss index learning convolutional neural network cryptovirology ransomware binary analysis ntroduction idea cryptovirology first introduced describe offensive nature cryptography security threats comprises set revolutionary attacks combine strong symmetric asymmetric cryptographic techniques unique viral technology fierce proliferation rather troubling number reasons designed infect encrypt available hosts category malware disastrous consequences many afford reclaim private data financial loss typically quite substantial despite fact guaranteed recovery ultimately without backup little done preventative frameworks proven effectively halt unusual activity closely monitoring file system input output administrators always likely follow best practices case unknown weaknesses still exploited attacker goal cryptovirological landscape evolved recent years distinct growth noted overall number targeted attacks variants study ransomware samples observed found distinct number variants cryptographic capabilities analysis instances found utilise standard customised cryptography generational enhancements specifically terms key generation management tailored cryptosystems particularly limit scale effective analysis infamous variants known employed documented algorithms number deviations significant instances malware variants employed custom cryptography algorithms symmetric block cipher gost common exclusively encoding mechanisms field malware analysis seeks determine potential impact malicious software examining controlled environment investigators find flaws otherwise unknown current identification technologies sourcing keys blocking infection reverse engineering potentially malicious executable several issues addressed possible problems include accuracy analysis quality application obfuscation lifetime findings analysis binary typically considered two viewpoints static dynamic static analysis performed environment therefore examination relatively safe however potential morphism restricts accuracy results alternatively dynamic analysis sequentially assesses binary throughout execution provide significantly accurate results contend 
obfuscatory measures properly handled samples could prove somewhat hazardous manuscript focuses latter methodology cryptographic algorithm identification facilitates malware analysis number ways case assessing ransomware strains yields starting point investigation essential analytical time restricted uncertainty surrounding application custom undocumented established cryptosystem analysts struggle maintain complete awareness field makes task ideal automation effectively model cryptographic execution previous research relied number assumptions observations features necessarily always depict cryptographic code provide baseline analysis instance cryptographic algorithms naturally involve use bitwise integer arithmetic logical operations activities frequently reside loops example block ciphers typically loop input buffer decrypt also postulated encrypted data likely higher information entropy decrypted data deep learning studies intricate artificial neural networks anns multiple hidden computational layers effectively model representations data multiple layers abstraction representations amplify aspects input important discrimination techniques used amongst others identify network threats encrypted traffic network convolutional neural network cnn specialised architecture ann employs convolution operation least one layers variety substantiated cnn architectures used great effect computer vision even natural language processing nlp empirically distinguished superiority semantic matching compared models cryptoknight developed coordination methodology introduce scalable learning system easily incorporate new samples scalable synthesis customisable cryptographic algorithms entirely automated core architecture aimed minimise human interaction thus allowing composition effective model tested framework number externally sourced applications utilising linked functionality experimental analysis indicates cryptoknight flexible solution quickly learn new cryptographic execution patterns classify unknown software manuscript presents following contributions unique convolutional neural network architecture fits data map application timeinvariant cryptographic execution complimented procedural synthesis address issue task disproportionate latent feature space realised framework cryptoknight demonstrably faster results compared previous methodologies extensively elated ork cryptovirological threat model rapidly evolved last decade number notable individuals research groups attempted address problem cryptographic primitive identification discuss consequences findings address intrinsic problems heuristics heuristical methods often utilised locate optimal strategy capturing appropriate solution measures previously shown great success cryptographic primitive identification joint project eth google detailed automated decryption encrypted network communication memory identify location time subject binary interacted decrypted input execution trace dynamically extracted memory access patterns control flow data able identify necessary factors required retrieve relevant data new process implementation successfully able identify location several decrypted webpages memory fetched using curl openssl successfully extracted decrypted output kraken malware binary entropy metric found negatively affect recognition simple substitution ciphers typically effect information entropy also found affect analysis gnupg utilised dynamic binary analysis generate control flow graph evaluation using three heuristics chains heuristic 
measured ordered concatenation mnemonics basic block comparing known signatures heuristic extent former method assessing combination instructions constants verifier heuristic confirmed relationship input output permutation block trialled curl tool detected rsa advanced encryption standard aes traced secure sockets layer ssl session tested malware sample gpcode operation successfully detected cryptosystem extracted keys however trace took fourteen hours extra eight hours analysis crypto intelligence system conglomerates number heuristical measures counter problem situational dependencies cryptography reduced rice theorem suggested different heuristics provide accurate readings evaluating detection methods used matenaar found presented least false positives tests however presenting comparisons problem heuristics shown always generalise suitably machine learning attempting address difficulty past methodologies one thesis studied suitability machine learning automated highly efficient thresholds often require manual adjustment manage identification new algorithmic samples hosfelt sought emphasise ease model retraining analysing performance support vector machine kernels naive bayes decision tree clustering study met varying success ultimately suffered limited sampling latent feature space preventing adequate scaling complex data applications may use cryptography addition functions may unintentionally obfuscate control flow elderan similarly assessed suitability automated dynamic analysis ransomware regularized logistic regression classifier utilised conjuncture dataset ransomware good applications thus producing area curve auc results case positive methodology required large number malicious samples run generate sufficient distribution obfuscation tool name cryptohunt recently developed identify cryptographic implementations binary code fig framework architecture despite advanced obfuscation implementation tracked dynamic execution reference binary instruction level identify transform loop bodies boolean formulas formula designed successfully abstract particular primitive remain compact describe emblematic features unlike sole verification performed semantic depth prominently revealed distinguishable features regardless obfuscation aligot also designed obfuscatory resilience instead chose focus tools performed well variety samples required reference implementations manual integration data flow analysis two papers studied representational patterns cryptographic data dynamic analysis closely monitoring application methodology aimed pinpoint cryptographic algorithm matched similar pattern alternatively assessed avalanche effect unique discriminatory feature small change input would dramatically alter output although effective none methods would likely adapt unique obfuscations unique fairly effective approach identification symmetric algorithms binary code based subgraph isomorphism static analysis lestringant resolved cryptographic algorithm data flow graph dfg normalising structure without breaking semantics proposed subgraph isomorphism step assessed signatures contained within normalised dfg targeted sample pool xtea message digest aes implementation reached accuracy unfortunately formula relied manual selection appropriate signatures distinguish applicable algorithms three instances generation elementary would realistically scale dimensions sought paper iii overview given subject application methodology aims automatically verify existence cryptographic signatures unknown binary code 
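As a point of reference for the entropy heuristic recalled in the related work above (encrypted output tends to exhibit higher information entropy than the corresponding plaintext), the following minimal Python sketch computes Shannon entropy over a byte buffer. It only illustrates the signal such heuristics exploit; it is not the instrumentation described later, which scores entropy of memory writes at basic-block granularity during a dynamic trace. The function name and the use of os.urandom as a stand-in for ciphertext are assumptions of the example.

```python
import math
from collections import Counter

def shannon_entropy(buf: bytes) -> float:
    """Shannon entropy H = -sum(p_i * log2 p_i) of a byte buffer, in bits per byte."""
    if not buf:
        return 0.0
    counts = Counter(buf)
    n = len(buf)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

if __name__ == "__main__":
    import os
    plaintext = b"the quick brown fox jumps over the lazy dog " * 64
    ciphertext_like = os.urandom(len(plaintext))   # stand-in for encrypted output
    print(f"plaintext   : {shannon_entropy(plaintext):.2f} bits/byte")
    print(f"random bytes: {shannon_entropy(ciphertext_like):.2f} bits/byte")
```

On representative inputs the plaintext scores well below 8 bits per byte while the random buffer approaches it, which is exactly the gap the heuristic relies on and also why simple substitution ciphers, which preserve the byte distribution, defeat it.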
intention provide solution easily generalise presented new conditions without manually adjusting number thresholds full system figure comprised three stages procedural generation guides synthesis unique cryptographic binaries variable obfuscation alternate compilation assumptions cryptographic code aid discrimination diagnostics dynamic analysis synthetic reference binaries build image execution dcnn fits matrices ease training immediate classification new samples ynthesis construct reasonably sized dataset enough variation satisfy abstraction cryptographic primitives enough simply small number applications little diversity terms operational outliers example extracting features execution manually implemented binary may give appropriate feature vector extraction process provide variation repeating labels outside environmental setup methodology leverages procedural generation include elements provide obfuscation without directly altering intended control flow three main algorithmic categories symmetric asymmetric hashing interpretation correlate related components dynamically construct unique executable artefacts openssl open source cryptographic library provides application programming interface api accessing algorithmic definitions review documentation revealed number similarities intended implementation function specificities either variable differ primitive constant true category approach exclusively examined experimentation also integrated via appropriate headers application first imports libraries provide expectant functionality later compiled case either openssl standard library primitive naturally requires contrastive functionality variable within scope application main body however symmetric algorithm requires specification key initialization vector asymmetric algorithms require certificate declaration whereas hashing definitions expect either rules categorically constant therefore definitions specified type next plaintext sequence loaded memory directly file ciphertext memory allocated also constant sample employ unique algorithm differing declarations reading plaintext key code dramatically fluctuate instances rarely compiled identically multivariate output provide assurance generalisation eature xtraction cryptographic execution therefore reference binary may employ associated functions point within trace unintentional obfuscation control flow negatively affect discriminatory performance granularity needs high underlying problem previous work following approach opts draw appropriate features reference binary using dynamic instrumentation via intel pin api disassembly runtime instruction data section outlined measurements principally assess activity importance relation assumptions cryptographic code obfuscation two primary transformation mechanisms highlighted first technique discusses abstraction relevant data groups decrease perceptible mapping example multidimensional array may concatenated single column either expanded accessed necessary second technique concerns splitting variables disguise representation colloquially known data aggregation data splitting methods partially obfuscate data flow without subtracting application distinct activity outside distinct obfuscation inclusion structured loops arithmetical bitwise operations create discriminatory irregularity otherwise translucent process analogously many training images computer vision contain noise suitably detract trivial classification subject process aims replicate uncertainty interpretation formatting respective 
variable artefact allow ease parsing similarly structured markup tree allow interpretation unique cryptographic applications alternate obfuscation stochastically generated keys ivs plaintext add additional variation image algorithm outlines procedure algorithm cryptographic synthesis input cryptographic constants variables output application code new sample select obfuscation aggregation split normal write file import statements abstracted keys encryption routines inject randomised arithmetic return relative location compiling resultant collection cryptographic applications data variability increased alternate compilers optimisation options resultant object basic blocks loops basic block bbl sequential series instructions executed particular order exclusively defined one branch entry one branch exit bbl ends one following conditions unconditional conditional branch direct indirect return caller instruction evaluated linear trace criteria met marked tail following instruction delimited head subsequent sequence similarly identified tail stack execution stores relevant data instruction two boolean expressions indicate predetermined blocks bbl dynamically revealed instructions unfortunately observed monitor indirect branches many high level languages share distinctly strict definition loop contrarily common interpretations amorphous code loose extending previous definitions hence delineate loop upon immediate bbl output algorithm algorithm instruction sequencing bbl detection input hooks output path execution callback head false last instruction tail head true end instruction branch call return tail true else tail false end write get entropy memory write end return stack instructions conventional architectures use common instruction format interpretable opcode operand zero three operands separated commas two specify destination source example aes performs single round encryption flow calls instruction disassembled aesenc directly operating first operand case performs round aes encryption using round key although interesting example advanced encryption standard new instructions architecture presents problem later generalisation cryptographic acceleration prevents detailed analysis alternate object code typically quite distinctive especially cryptographic code primitive may employ number operations order important dwell specificities exact semantics instruction carved linear trace weight ratio bitwise operations upon prominent operators pool cryptographic routines discriminatory emphasis blowfish rsa aes entropy characterized associated uncertainty finite discrete probability distribution measured using pnshannon entropy suppose distribution measured quantity hence defined upon detecting memory write respective location contents replicated casting value distribution allow immediate calculation related memory deleted prevent unnecessary exhaustion bbls absolute entropy increase decrease scored relation prior activations opposing registers summated figure bbl odel founded earlier research proposed definition dcnn treats input manner similar sentence word embeddings defined corresponds total weight particular operation multiplied entropic score feature vector therefore column sentence matrix let say exclusively assessing pure arithmetic impact would make would equal model combines number one wide convolutions dynamic pooling folding map variablelength input topologically presented figure model combines number one wide convolutions dynamic pooling folding map input mathematically convolution 
operation two functions argument case vector weights fig entropy scoring relative entropy scoring bbl within trace sample set algorithms vector inputs yield new sequence kernel convolved sequence onewide convolution takes form thus preserving number defined embeddings equation describes selection distinct subset relevantly depicts orders progression based total number convolutional layers current layer projected sentence length predefined final pooling parameter ktop particular layer selection delimited max ktop value sequence length base pooling operation selects subsequence pkmax active features completely unreactive positional variation preserves original perspective also distinguish repetitious features since identifiable cryptographic routines may execute point sequential trace operation fits perfectly another significant phase procedure simplifies way model perceives complex dependencies rows veritable feature independence removed summation every two rows shrinking nested convolutional layer dynamic pooling folding layer halves respective matrix model automatically scale new variants number may need accuracy justed better suit sample pool manual tuning inefficient costly process frequently offers advantage empirically show randomly chosen trials efficient optimization manual grid search domain calculating viable constants fraction time taking shuffled cartesian product subsets allows ambiguated selection distinct constants trial small number epochs epoch fig accuracy tanh activations pooling reduced feature space due number embeddings model utilised two folding operations simultaneously prior penultimate pooling layer convolution final convolution built two additional feature maps pooled stipulated topmost magnitude linear transformation output size applied softmax fully connect model filter widths specified first last convolutional layers respectfully remaining hidden layers shared width total model employed convolutions category symmetric asymmetric hashing convolution pooling folding fully connected fig dynamic convolutional neural network dcnn architecture dcnn illustrated case model intended seven word input sentence embedding two convolutional layers two dynamic pooling layers within two feature maps vii xperimentation table presents range popular algorithms selected experimental analysis primitives widely utilised legitimate fraudulent purposes cryptoknight configured build two feature maps first convolution dropout probability round pooling activation eleven hidden wide convolutions interspersed algorithm aes blowfish rsa table cryptographic algorithms based frequency analysis simplified opcodes within sample pool mnemonics add sub inc dec shr shl xor pxor test lea selected weighting therefore final design matrix embedding contained variable number vectors corresponding numeration associated weightings basic block subject binary distribution size drawn used training remained testing epochs model successfully converged accuracy minimal loss figure shows test accuracy epochs figure displays simultaneous loss additional collection drawn validation table diagnoses model associated confusion additional samples collected five open source implementations presented table iii github rivest cipher https aes blf rsa loss aes blf rsa table validation results predicted actual epoch representational rsa instances identified time testing aes overlooked due model trained cryptographically accelerated binaries classification rate varied optimisations distribution sizes able correctly classify 
blowfish samples tests algorithm fig loss accuracy blowfish source table iii open source implementations epoch fig entropy blowfish resultant collection binaries leveraged pure linked cryptographic functionality assess method analysing gnupg software directed encrypt empty text document using aes trace time minute seconds cryptoknight predicted utilisation rsa aes viii iscussion traditional cryptographic identification techniques inherently expensive heavily rely human intuition cryptoknight built reduce associated interaction refined sampling latent feature space procedurally synthesised distribution allowed dcnn map proportional linear sequences finer granularity conventional architectures without overfitting cryptoknight converged accuracy without extensive optimisation model ultimately fit synthetic distribution veritable ease performance par impediment dynamic binary instrumentation made clear highlighted extensive twenty two hour trace analysis time cryptoknight analysis time also varied quite extent hence sample binaries analysis took maximum around one minute consequently large collections saw exponential draw times indefinable length since manual analysis often takes invariably longer adequate draw time footprint arguably marginal trained proposed framework would beneficial part analyst toolkit quickly verify cryptographic instances dcnn intended map cryptographic execution despite control data flow obfuscation intrinsic problem initial work approach used successful formulation still preliminary issues address predefined operator embeddings explicitly define entire feature set therefore new samples perhaps deviate traditional operation may unidentifiable framework immediately generalise additional algorithm correlating predominant operators interchanging embeddings enlarging scope enhance cognition fundamental part cryptoknight supervised design classify known samples new cryptographic algorithms must added generation pool process framework strived simplify unavoidable limitation proposed architecture also makes custom cryptography difficult classify reference implementations would feasibly exist import integrating unsupervised component core model could facilitate detection signatures alternatively advanced synthesis could negate need procedural generation entirely reduce presently expensive time requirement aid classification customised cryptography element learns application invariant primitives would also prove beneficial cryptoknight exclusively trained functions similar model could decompile binaries accuracy traditional methods typically manage simplistic control flow entropy metric assumes cryptographic function associated uncertainty higher conventional interaction case negatively affected recognition simple substitution ciphers however unlikely affect cryptoknight way due scoring mechanics demonstrably high accuracy subjective tests without metric figure proves entropy metric impact classification rate converging difference problems cryptographic acceleration played important role detection native aes implementations intel extension proposed boosting relative speed encryption decryption microprocessors intel amd describe instruction set regard breakthrough performance increase six instructions prefixed aes directly perform cipher operations streaming simd extensions sse xmm registers however natural progression cipher could fully observed onclusion despite advanced countermeasures cryptovirological threat significantly increased last decade incentivising aforementioned 
research research demonstrated cryptographic primitive classification compiled binary executables could achieved successfully using dynamic convolutional neural network also demonstrated implementation achieved accuracy without extensive optimisation moreover implementation fundamentally flexible previous work marginalising error prone human element framework successfully detected every implementation blowfish including externally sourced native written compositions maintained distinctively high accuracy synthetic implementations future work includes detection cryptographic functions parallel execution subject binary eferences young yung cryptovirology security threats countermeasures proceedings ieee symposium security privacy may snow cryptxxx ransomware apr online available https chiu player entered game say hello wannacry may online available http kharraz robertson balzarotti bilge kirda cutting gordian knot look hood ransomware attacks international conference detection intrusions malware vulnerability assessment springer scaife carter traynor butler cryptolock drop stopping ransomware attacks user data distributed computing systems icdcs ieee international conference ieee nhs ransomware preventable may online available https beek mcafee labs threats report intel security sep online available https lutz towards revealing attackers intent automatically decrypting network traffic master thesis eth switzerland joint project eth zurich google willems holz automated identification cryptographic primitives binary programs berlin heidelberg springer berlin heidelberg online available http ibm bucbi ransomware online available https lestringant fouque automated identification cryptographic primitives binary code data flow graph isomorphism proceedings acm symposium information computer communications security ser asia ccs new york usa acm online available http moser kruegel kirda limits static analysis malware detection computer security applications conference acsac annual ieee luk cohn muth patil klauser lowney wallace reddi hazelwood pin building customized program analysis tools dynamic instrumentation acm sigplan notices vol acm ming cryptographic function detection obfuscated binaries via symbolic loop mapping proceedings ieee symposium security privacy wang chang cipherxray exposing cryptographic operations transient secrets monitored binary execution ieee transactions dependable secure computing vol lecun bengio hinton deep learning nature vol hodo bellekens hamilton dubouilh iorkyase tachtatzis atkinson threat analysis iot networks using artificial neural network intrusion detection system international symposium networks computers communications isncc may hodo bellekens iorkyase hamilton tachtatzis atkinson machine learning approach detection nontor traffic proceedings international conference availability reliability security ser ares new york usa acm online available http fruehwirt schrittwieser weippl using machine learning techniques traffic classification preliminary surveying attackers profile proc int conf privacy security risk trust lecun generalization network design strategies connectionism perspective hodo bellekens hamilton tachtatzis atkinson shallow deep networks intrusion detection system taxonomy survey corr vol online available http lecun kavukcuoglu farabet convolutional networks applications vision circuits systems iscas proceedings ieee international symposium ieee chen convolutional neural network architectures matching natural language sentences advances neural 
information processing systems pearl heuristics intelligent search strategies computer problem solving grbert automatic identification cryptographic primitives software matenaar wichmann leder cis crypto intelligence system automatic detection localization cryptographic functions current caballero poosankam kreibich song dispatcher enabling active botnet infiltration using automatic protocol reverseengineering hosfelt automated detection classification cryptographic algorithms binary programs machine learning corr vol online available http sgandurra mohsen lupu automated dynamic analysis ransomware benefits limitations use detection arxiv preprint calvet fernandez marion aligot cryptographic function identification obfuscated binary programs proceedings acm conference computer communications security acm puhan jianxiong xin zehui decrypted data detection algorithm based dynamic dataflow analysis international conference computer information telecommunication systems cits july zhao detection analysis cryptographic data inside software berlin heidelberg springer berlin heidelberg online available http drape intellectual property protection using obfuscation tubella gonzalez control speculation multithreaded processors dynamic loop detection proceedings fourth international symposium computer architecture feb moseley grunwald connors ramanujam tovinkere peri loopprof dynamic techniques loop detection profiling proceedings workshop binary instrumentation applications wbia measures entropy information proceedings fourth berkeley symposium mathematical statistics probability vol kalchbrenner grefenstette blunsom convolutional neural network modelling sentences corr vol online available http bergstra bengio random search optimization journal machine learning research vol feb akdemir dixon feghali fay gopal guilford ozturk wolrich zohar breakthrough aes performance intel aes new instructions white paper june
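To make the two operations that distinguish the model above concrete, the sketch below implements dynamic k-max pooling and folding with NumPy, following the DCNN formulation of Kalchbrenner et al. cited in the references: layer l of L keeps k_l = max(k_top, ceil((L - l)/L * s)) of the most active features per embedding row while preserving their left-to-right order, and folding sums every pair of adjacent rows. The dimensions, layer count, and function names are illustrative assumptions, not the exact CryptoKnight configuration.

```python
import math
import numpy as np

def dynamic_k(layer: int, total_layers: int, k_top: int, seq_len: int) -> int:
    """k for layer l (1-indexed): k_l = max(k_top, ceil((L - l) / L * s))."""
    return max(k_top, math.ceil((total_layers - layer) / total_layers * seq_len))

def k_max_pool(m: np.ndarray, k: int) -> np.ndarray:
    """Keep, per row, the k largest activations in their original order."""
    # m has shape (d, s): d embedding rows over s basic blocks (the 'sentence').
    idx = np.sort(np.argsort(m, axis=1)[:, -k:], axis=1)   # top-k positions, reordered
    return np.take_along_axis(m, idx, axis=1)

def fold(m: np.ndarray) -> np.ndarray:
    """Sum adjacent pairs of rows, halving the embedding dimension (drops a trailing odd row)."""
    d = m.shape[0] - (m.shape[0] % 2)
    return m[0:d:2] + m[1:d:2]

if __name__ == "__main__":
    d, s = 10, 37                       # e.g. 10 weighted-operator embeddings over 37 basic blocks
    m = np.random.randn(d, s)
    k1 = dynamic_k(layer=1, total_layers=3, k_top=4, seq_len=s)
    pooled = k_max_pool(fold(m), k1)
    print(pooled.shape)                 # (5, k1): rows folded, columns pooled down to k1
```

Because pooling keeps the surviving activations in trace order, repeated cryptographic routines remain distinguishable wherever they occur in the execution sequence, which is the property the text emphasises for variable-length traces.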
| 9 |
heuristic online goal recognition continuous domains sep mor vered bar ilan university israel veredm abstract goal recognition problem inferring goal agent based observed actions inspiring recognition planning prp planners dynamically generate plans given goals eliminating need traditional plan library however existing prp formulation inherently inefficient online recognition used motion planners continuous spaces paper utilize different prp formulation allows online goal recognition application continuous spaces present online recognition algorithm two heuristic decision points may used improve significantly existing work specify heuristics continuous domains prove guarantees use empirically evaluate algorithm hundreds experiments navigational environment cooperative robotic team task introduction goal recognition problem inferring unobserved goal agent based sequence observed actions hong blaylock allen baker lesh etzioni fundamental research problem artificial intelligence closely related plan activity intent recognition sukthankar traditional approach plan recognition use library known plans achieve known goals sukthankar ramirez geffner introduced seminal recognition approach avoids use plan library completely given set goals plan recognition planning prp approach uses planners blackbox fashion dynamically generate recognition hypotheses needed use prp continuous domains online fashion observations made incrementally raises new challenges original formulation relies synthesizing two optimal plans every goal plan reach goal manner compatible observations plan reach goal least partially deviating complying likelihood goal computed difference costs gal kaminka bar ilan university israel galk optimal solutions two plans overall planning problems solved two goal however online recognition set incrementally revealed changes thus two new planning problems solved every new observation total calls planner instead addition using planner generate plan may partially previous observations must currently impossible given state art present general heuristic algorithm online recognition continuous domains solves planning problems best algorithm relies alternative formulation use two key decision points appropriate heuristics reduce number calls planner new observation first decision whether generate solve new planning problem remain former calculated plans best case may reduce number overall calls planner calls second decision whether prune unlikely goal candidates incrementally reducing thus making fewer calls planner describe algorithm detail examine several heuristic variants utilizing continuousspace planners without modification evaluate different variants hundreds recognition problems two tasks standard motion planning benchmark simulated robots utilizing coordination related work sukthankar provide survey recent work goal plan recognition assuming library plans recognition goals though successful many applications methods limited recognizing known plans alternative methods sought geib sadeghipour offer methods utilize library planning planrecognition hong presents online method use library lacking ranking recognized goals baker present bayesian framework calculate goal likelihoods marginalizing possible actions keren keren investigate ways ease goal recognition modifying domain ramirez geffner proposed prp formulation plans twice offline recognition build earlier formulation probabilistically rank hypotheses allows efficiently compute likelihood different goals given incrementally revealed 
observations embed formulation definition plan recognition continuous spaces also varies original recognizer observes effects rather actions investigations prp exist masters sardina provided simpler formula achieving identical results half time still discrete environments sohrabi also observe effects though discrete environments also sought eliminate planner calls using planner offline manner sample plans explaining observations ramirez geffner extend model include pomdp settings partially observable states martin pereira refrain using planner instead using information cost estimates landmarks resp significantly speed recognition approaches complement vered kaminka present online recognizer prove special case algorithm present significant step beyond introducing heuristics significantly improve accuracy goal recognition continuous spaces begin giving general definition goal recognition problem continuous spaces section proceed develop efficient online recognizer utilize heuristics improve efficiency section discuss heuristics detail section problem formulation define online goal recognition problem continuous spaces quintuple world observed motion takes place defined standard motion planning lavalle initial pose agent set goals goal point discrete set observations observation specific subset work area point trajectory potentially infinite set plan trajectories beginning ending one goal positions goal exists least one plan end point intuitively given problem solution goal recognition problem specific goal best matches observations goal trajectories ending matched observations formally seek determine ramirez geffner thm shown necessarily goal solution goal recognition problem iff cost optimal plan achieve denoted ideal plan equal cost optimal plan achieves including observations plan refer vered kaminka build establish ranking goals define ratio cost score cost mgg rank goals higher score gets closer show experimentally ratio works well continuous domains thus use ignoring priors simplicity normalizing constant score next step compute plans ideal plan initial pose goal optimal plan includes observations described vered computing straightforward application planner synthesis bit complex candidates must minimize error matching observations take advantage opportunity afforded equal footing observations plans continuous environments observation trajectory point continuous space plan likewise trajectory space plans modeled effects thus generating plan perfectly matches observations done composing two parts plan prefix denoted built concatenating observations single trajectory masters sardina shown plan prefix may generated possible trajectories plan suffix denoted generated calling planner generate trajectory last observed point prefix ending point last observation goal using denote trajectory concatenation plan trajectory first observed point notice necessarily perfectly matches observations since incorporates given goal sequence observations planner called twice generate generate used construct cost contrasted using scoring procedure denoted match uses ratio described depend generated every goal needs resynthesized component parts incrementally revealed establishes baseline calls planner vered generalize procedure improve baseline heuristic online recognition algorithm identify two key decision points baseline recognition process described used improve efficiency recompute plans necessary new observation may change ranking captured recom function prune eliminate goals impossible extremely 
unlikely deviate much ideal plan captured function good recom heuristic reduces calls planner avoiding unnecessary computation new observations good heuristic reduces calls planner eliminating goals considered future observations using appropriate heuristics functions reduce number calls made planner consequently overall recognition section presents algorithm next section examine candidate heuristics algorithm begins lines computing ideal plan goals also sets default plan suffix suffix guarantees valid though necessarily optimal plans created even treme case computation ever done main loop begins line iterating observations made available reach first decision recompute suffix line begin giving general outline recom function takes current winning trajectory current goal latest observation matches observation heuristically determines see next section whether may cause change ranking top goal suffixes goals lines recomputed lines unless pruned lines otherwise lines current suffix goals modified based without calling planner algorithm nline oal ecognition planner goal planner default value plan suffix new available recom else planner else prefix score recomputing straightforward call planner made per discussion section generate optimal trajectory initial point last point might contain single point goal modifying recomputation suffix deemed necessary added prefix existing must updated continues leads towards baseline algorithm calls planner point step approximate planner call avoid cost done removing denoted parts inconsistent respect observation beginning old suffix old begins old without ended ideally begin last point new new new observation continue much possible old thus prefix old denoted pref line made redundant needs removed directly pref exactly trajectory beginning point general expect directly thus define ending point prefix geometrically closest point pruning intuitively newest observation leads away goal may want eliminate goal considered permanently removing risky decision mistake cause algorithm become unsound return correct result even given observations hand series correct decisions incrementally reduce singleton mean number calls planner best case approximate finally algorithm reaches line valid suffix available goals concatenates latest observation prefix line creates new plan concatenating prefix suffix line means new score used estimate lines potentially new goal selected line recognition heuristics algorithm generalization algorithm described vered varying heuristic functions used specialize behavior exactly thm change behavior different ways theorem recom algorithm generate exactly number planner calls algorithm vered proof sketch setting recom always true always false initially single call planner made calculate new call generate made goals every observation since calls skipped goals would pruned accordance behavior algorithm reported vered let turn examine avoid unnecessary planner calls ideal scenario would alg never compute new suffix goal line alg initial suffixes set ideal plans recom always false new planner calls made incrementally modified alg line accommodate observations approach offers significant savings thm best case observations closely match originally calculated paths produce good recognition results however realistically observations may contain certain amount noise observed agent may perfectly rational moreover could observed agent perfectly rational noise observations yet approach fail due cases multiple optimal plans differ exact optimal cost cases 
possible planner used recognizer generate ideal plan differs plan carried observed agent theorem recom exactly calls planner proof straightforward omitted space recompute realistically expect observations perfectly match predictions need heuristic evaluates false new observation alter top ranked goal saving redundant calls planner evaluates true otherwise suggestion heuristic continuous domains measure shortest distance dist plans dist shortest assume observed agent still heading towards goal need planner keeping current rankings prune finally introduce pruning heuristic observing rational agents continuous domains inspired studies human estimates intentionality intended action kaminka studies shown strong bias part humans prefer hypotheses interpret motions continuing straight lines without deviations corrections heading movements rational agent moving away past goal point considered unlikely target figure illustration goal angles used pruning heuristic capture heuristic takes geometric approach calculate angle created old newly received observation previously calculated plan calculated using cosine formula cos vector created previous new observation vector created previously calculated plan new observation figure presents illustration heuristic approach new observation measure angle created new observation previous observation ends previous plans shown dashed lines angle bigger given threshold deduce previous path heading wrong direction prune goal defining different sized threshold angles relax strengthen pruning process needed evaluation empirically evaluated online recognition approach suggested heuristics hundreds goal recognition problems measuring efficiency approach terms overall number calls planner performance approach terms convergence correct ranking chosen goal additionally implemented approach simulated rosenabled robots measuring efficiency algorithm compared two separate approaches one containing full knowledge observed agents intentions containing knowledge reasoning mechanism online goal recognition navigation domain implemented approach proposed heuristics recognize goals navigation worlds used trrt random trees planner guarantees asymptotic nearoptimality available part open motion planning library ompl along ompl cubicles environment default robot call planner given time limit sec cost measure length path pruning heuristic used threshold angle set points spread cubicles environments generated two observed paths point others total goal recognition problems observations obtained running rrt planner pair points time limit minutes per run rrt chosen optimized planner guarantees asymptotic optimality longer optimal path problem contained observed points performance measures use two measures recognition performance time measured number observations end recognizer converged correct hypothesis including failed higher values indicate earlier convergence therefore better number times ranked correct hypothesis top rank indicates general accuracy frequently recognizer ranked correct hypothesis top reliable hence larger value better efficiency measures order evaluate overall efficiency approach also used two separate measures number times planner called within recognition process overall time sec spent planning though two parameters closely linked wholly dependant reduction overall number calls planner also necessarily result reduction planner total amount time allowed planner run may vary according difficulty planning problem therefore create considerable differences effects 
different heuristic approaches ran trrt based recognizer problems comparing different approaches results displayed table columns baseline refers algorithm vered making planner calls second approach recomp refers method recomputation meaning planner utilized beginning process calculate ideal path goals third approach recompute measures effect recompute aims reduce overall number calls planner section fourth approach prune measures effect prune aims reduce overall number goals eliminating unlikely goal candidates section last approach measures effects utilizing combination pruning recompute heuristics efficiency table column displays average results approach mean total planner measured seconds calling planner recognition process recompute approach takes average sec baseline time average sec pruning heuristic reduces average time sec recompute heuristic reduces average time sec utilizing heuristics achieved reduction sec improvement substantial baseline approach second column displays average results terms number calls made recognizer planner recompute approach average extremely efficient calls number goals baseline average calls recomputation pruning heuristics similar success reduction calls using heuristics number calls reduced average calls reduction baseline approach conclusion see employing heuristics makes big impact successfully reduces overall number calls planner recomputation heuristic outperformed pruning heuristic overall number calls utilizing heuristics reduce number calls made planner baseline approach efficient method proved recompute approach calculating plans show improvement efficiency costs considerably performance performance table column measures average convergence correct result percent higher values better see reuse planner recompute produces convergence approach make use incrementally revealed observations within recognition process deviation initially calculated path considerable impact recognition results converting online baseline algorithm able double convergence percent incremental observation taken account reuse planner therefore greater weight ranking goals applying pruning recomputation heuristics improve overall convergence eliminating goals ranking process proved easier less goals compare furthermore early elimination goals pruning process able also eliminate noise goals might introduce ranking process paths deviated optimal recomputation heuristic increases pruning improvement baseline approach utilizing heuristics see high convergence level obtained pruning heuristic maintained column measures percent times correct goal ranked first higher value better reflect overall reliability ranking procedure results mostly agree convergence results planner reuse recompute performs poorly low baseline doubles success well recomputation heuristic achieves pruning heuristic increases results improvement baseline approach applying heuristics success level pruning method obtained employing heuristics made big impact overall performance successfully increasing convergence overall correct rankings pruning heuristic outperformed recomputation heuristic measures combination heuristics maintains high success rate leading improvement measures sensitivity recognition difficulty online continuous domains hardness recognition problem could possibly effect recognizer performance efficiency wanted evaluate sensitivity results shown hardness recognition problems therefore added another goal points potential goals recognition problem total recognition problems extra points 
specifically added close proximity preexisting points navigating towards one appears human eyes possible table columns examines efficiency different online recognition approaches harder clustered goals problems omitted recompute heuristic instances behavior heuristic straightforward results consistent results original scenario baseline approach least baseline recompute prune recompute goals efficiency plannercalls performance conv rank goals efficiency plannercalls performance conv rank table comparison approaches across scattered clustered goal scenarios efficient higher larger number calls planner rest efficient approach still approach utilizing pruning heuristic recompute heuristic together recompute heuristic still efficient pruning however measure number calls made planner see clustered goals scenarios pruning heuristic slightly outperforms recompute heuristic table columns examines performance different online recognition approaches harder clustered goals problems harder problems best performance achieved terms convergence pruning heuristic convergence end terms amount times correct goal ranked first approach combining pruning recompute heuristics slightly outperformed pruning approach worst performance achieved baseline approach terms criteria measured convergence ranked first congruence performance results scattered goal scenario table measures deterioration efficiency performance comparison scattered goal scenario deterioration measured terms deterioration percent hence deterioration means planner took twice long average harder problems therefore lower values better terms efficiency clearly see least deterioration number calls planner occurred baseline approach proving approach reliable deterioration respectively biggest deterioration terms occurred combination heuristics deterioration considerably caused substantial deterioration recompute approach deteriorated pruning heuristic deteriorated considerable less terms deterioration terms number calls made planner worst deterioration occurred approach deterioration deterioration heuristics considerably less pruning heuristic recomputation heuristic terms performance deterioration see resilient approach terms performance well efficiency proved baseline terms convergence ranked first deterioration convergence ranked first biggest deterioration convergence occurred approach efficiency results however terms ranked first biggest deterioration occurred pruning heuristic part due fact clustered goals make pruning process considerably less efficient goals close pruned online goal recognition robots final set experiments show applicability approach implemented alg cooperative robotic team task used ros quigley control simulated robots gazebo using default ros motion planner recognition process simulated soccer field two robots operating members team figure observed robot given initial goal travel proceeding execute plan straightforward manner observing robot strategically place position assist robot team member observed robot navigated goal strategic place assist offense would navigate goal vice versa likewise also goals observed robot always started initial point middle field experimented different starting points observing robot two points behind observed robots position parallel sides figure init points one point past observed robot middle field init point ran runs initial position goals total problems compared online goal recognizer ogr baseline form two different approaches giving full knowledge intended goal observing robot ahead time 
allowing observing robot navigate directly towards giving zero knowledge intended goal thus forcing observing robot wait team member reach desired goal navigate towards complementary location evaluate different approaches measured overall time seconds simulated robot ran reaching target goal lower time efficient robot results displayed table results show goal recognition approach substantially improves zero knowledge approach requiring precalculations needed plans generated via planner understandably approach falls short full knowledge approach generates hypotheses fly following observations leads deviations optimal direct route summary presented efficient heuristic online goal recognition approach utilizes planner recognition process baseline recompute prune deterioration efficiency performance plannercalls conv rank table deterioration performance efficiency scattered clustered goal scenarios figure experiment setup via rviz ogr table online goal recognizer full zero knowledge generate recognition hypotheses identified key decision points effect overall number calls made planner introduced generic online goal recognition algorithm along two heuristics improve planner performance efficiency navigation goal recognition evaluated approach challenging navigational goals domain hundreds experiments varying levels problem complexity results demonstrate power proposed heuristics show powerful combination leads reduction substantial calls recognizer makes planner planner comparison previous work showing increase recognition measures demonstrated algorithm realistic simulation simple robotic team task showed capable recognizing goals using standard robotics motion planners references baker chris baker rebecca saxe joshua tenenbaum bayesian models human action understanding advances neural information processing systems pages baker chris baker joshua tenenbaum rebecca saxe goal inference inverse planning proceedings annual meeting cognitive science society blaylock allen nate blaylock james allen fast hierarchical goal schema recognition proceedings national conference artificial intelligence pages kaminka elisheva bonchekdokow gal kaminka towards computational models intention detection intention prediction cognitive systems research geib christopher geib lexicalized reasoning proceedings third annual conference advances cognitive systems hong jun hong goal recognition goal graph analysis journal artificial intelligence research keren sarah keren avigdor gal erez karpas goal recognition design agents pages lavalle steven lavalle planning algorithms cambridge university press lesh etzioni neal lesh oren etzioni sound fast goal recognizer proceedings international joint conference artificial intelligence martin yolanda martin maria moreno david smith fast goal recognition technique based interaction estimates international joint conference artificial intelligence pages masters sardina peta masters sebastian sardina goal recognition proceedings conference autonomous agents multiagent systems pages international foundation autonomous agents multiagent systems pereira ramon fraga pereira nir oren felipe meneguzzi heuristics goal recognition quigley morgan quigley ken conley brian gerkey josh faust tully foote jeremy leibs rob wheeler andrew ros robot operating system icra workshop open source software volume page kobe japan geffner miquel hector geffner plan recognition planning international joint conference artifical intelligence pages geffner miquel hector geffner probabilistic plan recognition 
using classical planners international joint conference artificial intelligence geffner miquel hector geffner goal recognition pomdps inferring intention pomdp agent proceedings international joint conference artificial intelligence pages sadeghipour kopp amir sadeghipour stefan kopp embodied gesture processing integration perception action social artificial agents cognitive computation sohrabi shirin sohrabi anton riabov octavian udrea plan recognition planning revisited international joint conference artificial intelligence pages ioan mark moll lydia kavraki open motion planning library ieee robotics automation magazine december sukthankar gita sukthankar robert goldman christopher geib david pynadath hung bui editors plan activity intent recognition morgan kaufmann vered mor vered gal kaminka sivan biham online goal recognition mirroring humans agents proceedings annual conference advances cognitive systems slightly modified version appears proceedings ijcai workshop interaction design models haidm
tropical land use land cover mapping brazil using discriminative markov random fields data ron hagensiekera ribana roscherb johannes rosentretera benjamin jakimowc waskea freie berlin institute geographical sciences malteserstr berlin germany bonn institute geodesy geoinformation nussallee bonn germany berlin geography department unter den linden berlin germany sep rheinische abstract remote sensing satellite data offer unique possibility map land use land cover transformations providing spatially explicit information however detection processes land use patterns high variability challenging task present novel framework using data machine learning techniques namely discriminative markov random fields priors import vector machines order advance mapping land cover characterized changes study region covers current deforestation frontier brazilian state land cover dominated primary forests different types pasture land secondary vegetation land use dominated processes activities data set comprises imagery acquired course dry season well optical data rapideye landsat reference results show land use land cover reliably mapped resulting spatially adjusted overall accuracies five class setting yet limitations differentiation different pasture types remain proposed method applicable data sets constitutes feasible approach map land use land cover regions affected temporal changes keywords markov random fields mrf import vector machines ivm lulc mapping deforestation amazon sar corresponding author tel email addresses ron hagensieker ribana roscher johannes rosentreter benjamin jakimow waske preprint submitted international journal applied earth observation geoinformation september introduction brazilian amazon largest area tropical rain forest shared single country last decades become increasingly threatened large scale deforestation forest degradation expansion agriculture davidson lapola affect earth ecosystems ecosystem services far beyond boundaries original region influence climate directly local even regional scales foley vitousek thus detailed knowledge information land use land cover lulc offers valuable input decision support environmental monitoring systems remote sensing satellite data offers unique possibility generate consistent lulc maps large areas temporally high resolution mapping lulc change amazon predominantly achieved analyzing remote sensing data inpe wulder hansen however limitation analysis remote sensing data imposed dependency conditions rare tropical regions general met wet season rufin synthetic aperture radar sar data overcome problems various studies demonstrate potential mapping lulc changes pfeifer bovolo bruzzone also context deforestation related processes sarker reiche englhart mapping approaches become even attractive due recent missions increased repetition rates higher spatial resolution well better data availability virtue copernicus data policy aschbacher constellation guarantee cloud free coverage within days respectively repetition rate constellation days days might affected clouds although classification accuracy sar data limited direct comparison data various approaches exist increase mapping accuracy include integration interferometry schlund contextual spatial information derived texture parameters segmentation cutler sarker schlund waske van der linden utilization data reiche stefanski waske braun although limitations short wavelength sar data classification dense vegetation well documented kumar patnaik various studies highlighted potentials data lulc 
mapping schlund uhlmann kiranyaz khatami sonobe utilization temporal data modern classification algorithms spatial context data sets generally adequate classes characterized clearly defined temporal signatures caused differences phenology crops land use management seasonal cycles blaes mcnairn single classification multitemporal data set might useful study sites without changes waske braun stefanski might limited study sites temporally changes land cover activities arbitrary points time recent studies shown great potentials tackle problems time series analysis multispectral data zhu woodcock sar speckle quick succession processes still pose difficult challenges using methods especially long time series often available context data analysis main drawback often assumption non changing land cover investigation period consequently temporally dynamic lulc activities transitions clean shrubby pasture neglected various studies emphasize usage adequate classification approach ensure high mapping accuracy liu waske benediktsson waske braun especially integration spatial information means classification spatial features texture lead gain accuracy addition markov random fields mrfs promising approach integrate spatial context moser moser serpico liu mrfs employed model prior knowledge neighborhood relations within image called spatial relations also extended describe relations area different acquisition dates temporal relations since early approaches based mrfs utilized remote sensing various purposes bouman shapiro xie tran solberg liu use locally variant transition models account spatial heterogeneity applied model subsets two landsat scenes recently wehmann liu adapted integrated kernel proposed moser serpico used iterative conditional modes icm optimization technique spatiallyvariant transitions classifying landsat data hoberg apply conditional random fields regularize annual remote sensing imagery different high resolution scales ikonos rapideye landsat course five years emergence efficient probabilistic classifiers last decade standard mrfs extended discriminative mrfs kumar hebert turn increasingly useful optimize land cover classifications moser serpico tarabalka voisin liu highlight advantages utilizing probabilistic support vector machines svms platt maximum likelihood classifier however although many remote sensing studies highlight positive capabilities mrfs studies aim using mrfs mapping data sets cai wehmann liu olding example map forest cover change liu data sets available mrfs also used optimize corresponding maps considering predefined neighboring pixels stored transition matrices present novel framework classification time series using discriminative mrfs import vector machine ivm probabilistic discriminative classifier scene separately classified using ivm afterwards mrfs utilized independent step classification map chose ivms commonly used probabilistic svms since proven offer reliable probabilistic output zhu hastie roscher mrf optimization choose loopy belief propagation lbp icm method shown repeatedly yield higher accuracies szeliski andres studies utilized lbp field remote sensing novelty integrate lbp setting presented framework aims classification individual acquisition thus enables mapping high frequency lulc patterns contrast related studies use mrf model sar data detect transitions within one season loopy belief propagation lbp inference overall goal research focused two objectives map lulc tropical setting processes adapting recent mrf methods assess potential lulc 
mapping using image data short wavelength sar specific objective map lulc brazil transformations forest pasture major driver deforestation pasture management study region tends fall one two categories processes intensively managed pasture land pasto limpo processes episodically managed pasture land high degree successive dynamics pasto sujo pasture management general characterized processes resulting sudden changes lulc study area data study area study area lies northern part novo progresso municipality southern state brazil intersected highway southwest accompanied figure composite three acquisitions red june green september blue november true color etm september background shows diverse lulc properties table scenes utilized study scenes collected area using incidence angle date polarization fishbone structures indicative deforestation ahmed coy klingler major driver deforestation study area transformation forests pasture land climate study region characterized wet dry season dry season june september sees abrupt land cover changes form large scale burning clear cuts wet season defined gradual regrowth yet deforestation rates wet season rise figure photograph illustrating fluent transitions interactions different land cover types study region remote sensing data data base study consists five strip map scenes spatial resolution table images ordered complex format comprising different polarization incidence angle cover swath roughly pixels data calibrated processed according common procedures see section preprocessing context study includes necessary steps random sampling training test data performed random sampling training ivm mrf regularization taken land cover maps generated validated average measures calculated preprocessing scenes conducted using sentinel geospatial data abstraction library gdal development team scenes processed separately following order multilooking range looks azimuth looks yielding ground resolution terrain radiometric correction terrain correction srtm resampling pixel spacing data projected utm zone radiometric normalization applied using srtm texture measurements widely used increase mapping accuracy sar data sarker dekker cutler gray level matrices glcm calculated moving window size symmetric directions offset one probabilistic quantization conducted levels ten texture parameters separately derived available polarization available scene contrast dissimilarity homogeneity angular second moment asm energy maximum probability entropy glcm mean glcm variance glcm correlation included additional features improve ivm classification information texture parameters see haralick sarker respect findings sarker nyoungui experiments abstain combining texture metrics speckle filtering since use texture measures per layer total features per scene classification process reference data reference data includes multispectral rapideye landsat data situ data well land cover data various brazilian agencies prodes terraclass prodes programa table number sample points available training distinguished class extracted polygons date class burnt pasture clean pasture shrubby pasture water forest desflorestamento effort brazilian space agency inpe generate annual maps documenting deforestation primary forests inside legal amazon minimum mapping unit inpe targeting sites prodes considers deforested terraclass effort determine lulc classes affected areas almeida overall coverage available swaths constitutes study area figure sufficiently covered reference information forests well clean 
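The ten texture measures listed above can be derived per moving window from a quantized gray-level co-occurrence matrix. The sketch below uses scikit-image for the standard properties and computes the remaining ones directly from the direction-averaged, normalized GLCM; the window size, the number of quantization levels and the library choice are assumptions, not the study's exact settings.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(window, levels=32):
    # window: 2-D integer array already quantized to values in [0, levels)
    glcm = graycomatrix(window, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    # standard properties provided by scikit-image, averaged over directions
    feats = {p: float(graycoprops(glcm, p).mean())
             for p in ('contrast', 'dissimilarity', 'homogeneity',
                       'ASM', 'energy', 'correlation')}
    # remaining measures computed directly from the direction-averaged GLCM
    p = glcm.mean(axis=(2, 3))
    i, _ = np.indices(p.shape)
    feats['max_probability'] = float(p.max())
    feats['entropy'] = float(-np.sum(p[p > 0] * np.log(p[p > 0])))
    feats['glcm_mean'] = float(np.sum(i * p))
    feats['glcm_variance'] = float(np.sum((i - feats['glcm_mean']) ** 2 * p))
    return feats
```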
shrubby pasture present study area occurrence water burnt pasture overall scarce address issue polygons manually distributed entire area afterwards polygon assigned one class label date covered address changes lulc necessary polygons split avoid class ambiguity within different temporal instances coherent pasture area partially burnt polygon gets split generation reference data supported visual interpretation rapideye well landsat landsat oli data time period addition landsat rapideye imagery fire products derived modis also considered moreover photographs field campaign conducted august available sampling conducted two authors close cooperation harmonized classification schemes inpe instituto nacional pesquisas espaciais following lulc classes considered clean pasture shrubby pasture burnt pasture water forest clean pasture also called pasto limpo describes pasture land intensively worked includes regular tillage burning land support cattle ranching shrubby pasture also called pasto sujo intensively managed thus affected bush encroachment coarser appearence shrubby pasture generally allows visual separation clean pasture high resolution images burnt pasture includes clean well shrubby pasture areas recently burned characterized open soil vegetation residues areas easily identified using false color composites forest beside primary forests includes areas secondary vegetation regeneration usually sar forests characteristic appearance images high resolution multispectral imagery table gives figure classification scheme multispectral water forest pasto burnt pasto limpo pasto sujo class overview number available training samples class date underlined burning season usually starts around end july hence burned pasture areas could identified period water bodies also scarce two lakes entire study area included figure visualizes classes considered classification scheme considered lulc classes match comparable studies using data brazilian tropical settings respectively garcia schlund time period study falls dry season june september corresponding multispectral remote sensing data could interpreted sufficiently well yet challenges remain study region two dominating pasture types identified pasto sujo shrubby pasture pasto limpo clean pasture almeida vieira adami types generally used cattle ranching region pasto sujo characterized bushes occasional early stages succession however transition types gradual consequently hard interpret remote sensing imagery alone even ground resolution offered rapideye transitions pasto sujo early stages secondary vegetation hard distinguish due gradual nature process however relevant study site since significantly less areas affected allow solid separation classes consider time series identify pasture management addition include information offered terraclass reliably separate different types secondary vegetation pasture land primary secondary forests well secondary vegetation combined one class various studies preliminary tests indicate limitation separating two classes methods proposed framework consists four steps preprocessing random sampling iii classification single scene using ivm optimization mrf model final validation performed averages independent runs using random sampling spatially disjoint train test polygons pixels sampled training polygons solely used ivm parameterization grid search model training pixels sampled test polygons enable independent validation throughout paper use following notation let training set comprising feature vectors corresponding 
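One way to realize the spatially disjoint sampling described above — whole polygons assigned to either training or testing, then pixels drawn per polygon with a minimum spacing — is sketched below. The polygon data structure, split fraction, per-polygon count and spacing are placeholders of this illustration.

```python
import random

def polygon_split_and_sample(polygons, train_fraction=0.5,
                             n_per_polygon=25, min_dist=30.0, seed=0):
    # polygons: iterable of dicts with keys 'pixels' (list of (x, y) map
    # coordinates) and 'label'
    rng = random.Random(seed)
    polys = list(polygons)
    rng.shuffle(polys)
    cut = int(train_fraction * len(polys))
    train_polys, test_polys = polys[:cut], polys[cut:]

    def sample(poly):
        # greedy selection enforcing a minimum distance between chosen pixels
        chosen = []
        for px in rng.sample(poly['pixels'], len(poly['pixels'])):
            if all(((px[0] - q[0]) ** 2 + (px[1] - q[1]) ** 2) ** 0.5 >= min_dist
                   for q in chosen):
                chosen.append(px)
            if len(chosen) == n_per_polygon:
                break
        return chosen

    train = [(px, p['label']) for p in train_polys for px in sample(p)]
    test = [(px, p['label']) for p in test_polys for px in sample(p)]
    return train, test
```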
class labels distributed figure temporal green spatial yellow neighbors given pixel blue image lattice later address image samples given coordinate probability estimates pnk pnk pnk import vector machines ivm discriminative probabilistic classifier based kernel logistic regression first introduced zhu hastie roscher shown ivms provide reliable probabilities probabilistic svms since ivms probabilities balanced whereas svms generally overestimate maximum probabilities account complex decision boundaries classes ivm generally benefit integrating kernel function study utilize radial basis function rbf kernel parameterized kernel width standard remote sensing purposes parameterization achieved analogously standard svm practices using grid search estimate cost parameter encompassing description ivm see zhu hastie roscher markov random field study use post classification mrf neighborhood relations pixels illustrated figure parameterization achieved transition matrices matrices indicating spatial temporal transition probabilities five classes description mrf adapt terminology similar moser melgani serpico therefore denoting pixel features corresponding class label reformulate probabilities energy terms energy equivalent minimization identical maximization consider function spatial neighborhood usp applying two pixels direct spatial neighbors function assign weights neighboring classes usp case function yields matrix used favor certain neighboring constellations function generally defined potts model result identity matrix encourages generation homogeneous areas standard mrf model given summation weight parameter regulate importance spatial component case consider images temporal neighbors spatially congruent cells neighboring acquisition times pixel temporal successor applies pixel temporal predecessor applies temporal energy hence given analogous spatial case utemp matrices defining temporal transitions observed land cover trajectories opposition spatial weighting require multiple non symmetrical matrices respect trajectories regard future past overall energy function defined integrating temporal vicinity yields function combines weight parameters used adjust importance temporal spatial weights passing scheme transition matrices lbp inference algorithm utilizing message passing pearl shown approximate maximum values sufficiently well murphy choose lbp based methods figure passing schedule applied study one pass layers corresponds one iteration lbp action description step generation fallback copy current energy layer blue necessary future calculation messages passed red layer step messages passed previous fallback next energy layer current layer also factoring unaries upon receiving procedure performed using set moving windows step discarding previous fallback backing next energy layer see step iterate layer general applicability specifically defined symmetrical binary factors boykov applied environments kolmogorov zabin icm iterated conditional modes another algorithm commonly used achieve inference especially remote sensing using data sets liu wehmann liu low computational cost generally outperformed lbp terms accuracy szeliski andres reason formulate implementation lbp using moving windows applied image stacks arbitrarily large image stacks sufficiently well figure illustrates neighborhood one pixel factor graph analogous mrf neighborhood described section using potts function define common practice remote sensing literature moser since focus study lies examination mrf linking classifications 
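The energy expressions above did not survive extraction. The following is a hedged reconstruction, in notation of my own choosing, of the usual form such a model takes — unary terms from the IVM posteriors plus weighted spatial and temporal pairwise terms — consistent with the verbal description but not guaranteed to match the authors' exact formula:

```latex
U(\mathbf{x}\mid\mathbf{y}) \;=\;
\sum_{n}\bigl(-\log p(x_n \mid \mathbf{y}_n)\bigr)
\;+\; \beta_{\mathrm{sp}} \sum_{(n,m)\in\mathcal{N}_{\mathrm{sp}}} U_{\mathrm{sp}}(x_n,x_m)
\;+\; \beta_{\mathrm{t}} \sum_{(n,m)\in\mathcal{N}_{\mathrm{t}}} U_{\mathrm{t}}(x_n,x_m)
```

where U_sp is the Potts (identity-matrix) potential over spatial neighbours, U_t is read off the asymmetric temporal transition matrices, and the labelling is obtained by (approximately) minimizing U.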
follow practice potts function represented identity matrix supports assignment neighboring pixels class general sensible formulate asymmetric message passing two spatial dimensions pixel assume properties left right neighbor specifically potts model way reflect tobler assumption figure implemented study variable nodes illustrated yellow spatial neighbor green nodes temporal circles mark corresponding factor nodes unary energy spatial autocorrelation promoting idea close objects alike distant objects contrast spatial transitions utilization potts model temporal transitions lead serious distortions cause equalization subjected probability maps would prohibit land cover changes occuring assume spatially directional patterns area thus rely potts model pixel pass different messages temporal successor opposed predecessor adjustment possible assign probabilities possible types class transitions therefore express temporal transitions two asymmetric transition matrices matrix illustrates messages pixels pass scene time neighbor defines messages pixels pass differentiation important considering burnt pasture prohibit primary forests yet might endorse pasture integrate transition matrices interface inject expert knowledge regularization model empirically derived matrices based weak assumptions land cover trajectories assumptions include water forest regarded consistent classes yet pasture areas kind interaction especially concerns transition pasture land burnt pasture land explicitly tolerated furthermore forest explicitly prohibit predecessing areas due forest including secondary vegetation offer model tolerance regard misclassifications hence formalization land cover trajectories relatively straight forward necessarily based elaborate knowledge previous tests showed similar outcome concerning modification parameters yet using strict transitions could lead undesired results suppress dynamics entirely scope study utilize different transition matrices due varying time gap five acqusitions revisit rate eleven days available imagery shows one gap eleven days two gaps days well one gap days neighboring acquisitions hence linearly modify transitions adjust varying temporal resolution since increasing time change expected occur following summarizes relevant assumptions made specification transition matrices pasture areas potentially burnt burning likelihood high transition back clean pasture shrubby pasture transitions shrubby pasture clean pasture permitted clean pasture considered stable yet may transition shrubby pasture forest following observations terraclass study region assume slow shift clean pasture shrubby pasture overall forest consistent class get removed yet especially shrubby pasture develop forest class also includes secondary vegetation water used describe bodies water permanently filled within dry season class small tolerance evade counteract inconsistent transitions may caused misclassifications classification validation three different types classification compared baseline ivm classification spatialonly mrf referred iii mrf referred many studies rely supervised classification using svm random forest classifiers various studies show ivm perform least equally well terms accuracy roscher braun therefore original ivm classification considered adequate baseline classification reference polygons exclusively comprise either training test samples avoid spatial autocorrelation training purposes samples per polygon randomly selected using minimum sampling distance meters systematic sampling 
ensures adequate number training samples selected five classes clean pasture shrubby pasture burnt pasture water forest validation conducted considering current terms good practice laid olofsson samples clustered polygons improve spatial variability training test samples pixels assessment unit sampling strategy necessary ideal conditions independent random sampling difficulties obtaining large scale reference data challenging environment olofsson error matrices derived serve basis estimation overall accuracies user accuracies producer accuracies corresponding confidence intervals addition calculate area measures confidence intervals acquisition date estimate development burnt pasture land entire dry season classification validation conducted ten times using different training test sets results averaged results show benefit high repetition rate high ground resolution proposed framework outperforms common classification approaches terms area adjusted mapping accuracy olofsson table illustrates average area adjusted five scenes using three different methods irrespective acquisition date accuracy significantly improved mrf consistently outperforming ivm results consistently outperforming lesser degree weakest classification ivm could clearly improved percentage points compared classification results achieved average could improved percentage points using percentage points using compared ivm classification recommended olofsson additionally calculated variance measures results yet confidence intervals table area adjusted overall accuracies different dates shown values means iterations acquisition date polarization ivm ivm forest burnt clean shrub water figure user producer accuracy classes date generally falling well percentage point address measurements figure summarizes average three approaches ivm ivm yields lowest accuracies generally shows highest balanced accuracies approaches especially reliable concerning classification forest areas achieving especially high class given approach three pasture classes classified significantly lower accuracies forest areas shrubby pasture clean pasture overall underrepresented depending scene method clean pasture generally yields accuracies approximately also classification class particularly problematic concerning last scene around weak classification results different pasture classes generally caused confusion within different pasture types reflects findings comparable studies utilize sar data schlund general shows higher accuracies compared classification results achieved approaches capable mapping burnt pasture starting notable general concern regarding mrf smoothing effects could cause suppression sporadic events due low number burnt pasture areas end july able reliably calculate accuracies burnt pasture areas every date two burnt fields exist first two acquisitions allow adequate classification validation however also accordance typical land management region insofar activities usually start later season nevertheless class kept utilizes class scenes consistent classification scheme entire period although additional burnt pasture areas occur july ten areas entire study area classification accuracy remains low despite consisting samples possibly due temporal consistency distinct signature water mapped especially well water encompasses areas entire study site yet yields pas higher baseline approach weakness appears get enhanced approach yields remarkable drop water dates mrf appears ensure designation behavior underlines capabilities increase mapping 
accuracy temporally sporadic classes burnt pasture remarkably also proves value regarding mapping classes static yet spatially small scaled contrastingly using mrf classes tend get suppressed frequently regarding water mapped convincingly accuracies using mrf approaches yet ivm classification shows much less reliable accuracies figure comparison different classifications inside subsetted area figure water pas clean ivm pas burnt pas scrub forest reference area pasture types burnt clean shrubby date figure growth burnt areas dry season error bars indicative confidence interval visual assessment classification maps underlines positive effect approaches figures large number speckle induced misclassification attributed maps classified using ivm texture parameters effect suppressed extent yet individual clusters misclassification still located entirely homogeneous suppresses noise considerably yet maintaining general spatial patterns lulc fine spatial structures appear get suppressed despite conservative ivm estimates using land cover maps derived figure illustrates clear trends still derived using proposed data methods figure illustrates high percentage shrubby pasture land early dry season course dry season amount continuously shrinking burning pasture starts growing exponentially end july end dry season area clean pasture land comparable shrubby pasture figure final classification results using approach discussion main objective study adaptation recent methods mapping dynamic lulc tropical setting shown generally positive study proposed approach using spatialtemporal mrf expert knowledge generally able capture short term lulc dynamics challenging map using standard classification techniques although validation confirms limitations sar data differentiating different pasture types especially proposed approach enables generation meaningful time series homogeneous lulc maps using sar data consequently reliable prompt mapping lulc change achieved independent cloud cover atmospheric inference results show use proposed approach outperforms standard ivm classifications utilizing texture parameters well common spatial mrf terms classification accuracy visual inspection burnt pasture areas early dates shows bright overall heterogeneous backscatter within class similarity pasture classes images landsat rapideye images unambiguously indicate burnt pasture possible reasons could organic debris wet conditions contrary many burnt areas subsequent scenes occurrence large scale burning identified clearly areas low backscatter regarding data potential transfer approach wet season characterized higher saturation backscatter intensity another challenge separation pasture types already appears difficult dry season integration temporal context via mrf might allow reliable separation pasture forest areas wet season additional testing showed furthermore utilization temporal trajectories alone despite generally effective utilization spatial context used significantly elevate accuracies particular weak classification could benefit approach variance classification outcomes reduced different scenes findings accordance results recent studies able improve classification accuracy via implementation mrf wehmann liu liu wehmann liu use regionally optimized transition matrices state art integrated kernel based moser serpico achieve high classification accuracies long time periods proposed method aims detection short term land cover change sar imagery utilizes lbp inference well ivm classification visual assessment 
classification results confirms positive effect mrf classification accuracy although maps provided figure difference map classifications ivm light colors indicate agreement two maps dark colors indicate class ambiguities class final classification presented conventional ivm classification show general land use patterns results affected typical sarinherent noise even homogeneous areas appear noisy despite texture parameters included classification procedure boundaries individual land cover land use classes may appear blurred hard identify drawback significantly reduced methods lbp tries minimize transition energy homogenizing adjacent pixels areas become overall concentrated edges along different lulc classes clearly identified benefits also attributed classification interior areas application mrf suppresses outliers thus results confirm edge preserving capabilities mrf even challenging spatial class transitions forest shrubby pasture regard accuracies mrf offers preferable results ivm approach figure illustrates differences classification ivm approaches underlining potentials solving confusion forest shrubby pasture colorized highlight disagreements classifications pale colors signifying consenting classifications opaque colors indicating classes assigned legend see figure especially obvious increasing vegetation density confusion also rises clean pasture classified congruently approaches classification clean pasture shrubby pasture remains challenging data constitutes adequate data source forest mapping forest higher compared accuracies achieved classes accordance accuracies comparable studies schlund garcia perform specific analysis differences polarized data sets table shows internal polarized scenes especially benefit integration also neighboring scenes benefit disproportionately thus assume synergetic effects transferable mrf yielding promising outlook integration various data sources regarding low requirements concerning parameterization implementation moving windows consider introduced method transferable study regions adaptation transition matrices allows method fitted static environments also address multiannual time series data despite ambitious goals study perform land cover mapping densely vegetated dynamic tropical study region using data documented limitations concerning separability different pasture types able achieve improvements standard classifications method incorporates adjacency information potential shortcomings exist ground resolution coarse relative mapped land cover case fragmented structures might get suppressed adjustment would also required assumptions land cover trajectories variant setup example two scenes dry season carry different transition probability regard burning two scenes wet season easily solved different transition models within study included slight modifications transition matrices account different intervals acquisitions conclusion results show clearly integration mrfs advantageous baseline classification approach spatial mrf methods especially classification forest areas yields high accuracies able successfully implement lbp optimization regularization high resolution images tropical context furthermore able give adequate estimates pattern land use dynamics burned pastures importantly suggested approach able handle process small scale despite smoothing effects suppress fine structures separation different types pasture pasto sujo pasto limpo remains challenging task short wavelength classification burnt pasture early season highlights 
limitations model arise underlying classification accuracy already limited approach well suited regularize small classification errors using contextual information able sufficiently address misclassifications complex transitional environments weak classification accuracies sometimes relatively low class accuracies necessarily limitation proposed method rather caused data well characteristics particularly study sites characterized land use patterns high variability proposed approach using mrf expert knowledge appears feasible using expert knowledge land cover trajectories could positively influence model performance bypass computationally demanding techniques estimation mrf parameters derived multiple classifications change maps generally strongly affected weak initial classifications proposed method formalized transferable large possibly image stacks future studies aim integrate regularization dynamics dynamics deforestation agricultural trends using imagery acknowledgments study carried part sensecarbon research project funded sensecarbon part thank countless developers behind freely accessible software utilized study python gdal supporting efforts direct thanks niklas potthoff paul wagner rolf rissiek bernd melchers merry crowson christian lamparter well anonymous reviewers references references adami gomes coutinho esquerdo venturieri uso cobertura terra estado entre anos xvii brasileiro sensoriamento remoto ahmed souza riberio ewers jan temporal patterns road network development brazilian amazon reg environ change url http almeida vieira cobertura vegetal uso terra francisco brasil com uso sensoriamento remoto boletim museu paraense goeldi naturais almeida coutinho esquerdo adami venturieri diniz dessay durieux gomes sep high spatial resolution land use land cover mapping brazilian legal amazon using modis data acta amaz url http shimabukuro rosenqvist sanchez jul using alos palsar data detecting new fronts deforestation brazilian amaznia international journal remote sensing url http andres kappes hamprecht empirical comparison inference algorithms graphical models higher order factors using opengm pattern recognition url http andres kappes hamprecht empirical comparison inference algorithms graphical models higher order factors using opengm pattern recognition springer aschbacher may european earth monitoring gmes programme status perspectives remote sensing environment url http blaes vanhalle defourny efficiency crop identification based optical sar image time series remote sensing environment bouman shapiro multiscale random field model bayesian image segmentation image processing ieee transactions bovolo bruzzone dec approach change detection multitemporal sar images ieee transactions geoscience remote sensing url http boykov veksler zabih fast approximate energy minimization via graph cuts pattern analysis machine intelligence ieee transactions braun weidner hinz apr classification feature spaces assessment using svm ivm rvm focus simulated enmap data ieee journal selected topics applied earth observations remote sensing cai liu friedl may enhancing modis land cover product modeling algorithm remote sensing environment url http coy klingler frentes pioneiras eixo desafios socioambientais fronteiras cutler boyd foody vetrivel estimating tropical forest biomass combination sar image texture landsat data assessment predictions regions isprs journal photogrammetry remote sensing davidson artaxo balch brown bustamante coe defries keller longo amazon basin transition nature dekker 
texture analysis classification ers sar images map updating urban areas netherlands geoscience remote sensing ieee transactions englhart keuck siegert may aboveground biomass retrieval tropical forests potential combined sar data use remote sensing environment url http foley jul global consequences land use science url http garcia dos santos mura kux potencial imagem para mapeamento sudoeste brasileira acta amazonica gdal development team gdal geospatial data abstraction library version open source geospatial foundation url http hansen potapov moore hancher turubanova tyukavina thau stehman goetz loveland kommareddy egorov chini justice townshend global maps forest cover change science url http haralick shanmugam dinstein textural features image classification systems man cybernetics ieee transactions hoberg rottensteiner queiroz feitosa heipke conditional random fields multitemporal multiscale classification optical satellite imagery geoscience remote sensing ieee transactions inpe projeto prodes monitoramento floresta amazonica brasileira por satelite khatami mountrakis stehman may remote sensing research supervised pixelbased image classification processes general guidelines practitioners future research remote sensing environment url http kolmogorov zabin energy functions minimized via graph cuts pattern analysis machine intelligence ieee transactions kumar hebert discriminative random fields discriminative framework contextual interaction classification proceedings ninth ieee international conference computer vision url http kumar patnaik discrimination mangrove forests characterization adjoining land cover classes using temporal synthetic aperture radar data case study sundarbans international journal applied earth observation geoinformation lapola martinelli peres ometto ferreira nobre aguiar bustamante cardoso costa pervasive transition brazilian system nature climate change plaza feb spectral spatial classification hyperspectral data using loopy belief propagation active learning ieee transactions geoscience remote sensing liu kelly gong approach monitoring forest disease spread using high spatial resolution imagery remote sensing environment liu song townshend gong using local transition probability models markov random fields forest change detection remote sensing environment mcnairn champagne shang holmstrom reichert integration optical synthetic aperture radar sar imagery delivering operational annual crop inventories isprs journal photogrammetry remote sensing melgani serpico markov random field approach contextual image classification geoscience remote sensing ieee transactions moser serpico contextual image classification support vector machines markov random fields geoscience remote sensing symposium igarss ieee international ieee moser serpico combining support vector machines markov random fields integrated framework contextual image classification geoscience remote sensing ieee transactions moser serpico benediktsson mapping markov modeling information remote sensing images rufin griffiths barros siqueira hostert jan mining dense landsat time series separating cropland pasture heterogeneous brazilian savanna landscape remote sensing environment url http murphy weiss jordan loopy belief propagation approximate inference empirical study proceedings fifteenth conference uncertainty artificial intelligence morgan kaufmann publishers nyoungui tonye akono jan evaluation speckle filtering texture analysis methods land cover classification sar images international journal 
remote sensing url http olding olivier salmon jul markov random field model decision level fusion image segments ieee international geoscience remote sensing symposium igarss url http olofsson foody herold stehman woodcock wulder may good practices estimating area assessing accuracy land change remote sensing environment url http pearl reverend bayes inference engines distributed hierarchical approach aaai pfeifer kor nilus turner cusack lysenko khoo chey chung ewers apr mapping structure borneos tropical forests across degradation gradient remote sensing environment url http platt probabilistic outputs support vector machines comparisons regularized likelihood methods advances large margin classifiers citeseer yeh lin mar novel algorithm land use land cover classification using polarimetric sar data remote sensing environment url http yeh xian zhang jul monthly detection land development using polarimetric sar imagery remote sensing environment url http reiche souzax hoekman verbesselt persaud herold oct feature level fusion multitemporal alos palsar landsat data mapping monitoring tropical deforestation forest degradation ieee sel top appl earth observations remote sensing url http reiche verbesselt hoekman herold fusing landsat sar time series detect deforestation tropics remote sensing environment roscher waske incremental import vector machines image vision computing roscher waske incremental import vector machines classifying hyperspectral data geoscience remote sensing ieee transactions rufin pflugmacher hostert sep land use intensity trajectories amazonian pastures derived landsat time series international journal applied earth observation geoinformation url http sarker nichol ahmad busu rahman potential texture measurements dual polarization palsar data improvement forest biomass estimation isprs journal photogrammetry remote sensing sarker nichol ahmad rahman forest biomass estimation using texture measurements sar data schlund von poncet hoekman kuntz schmullius importance bistatic sar features forest mapping monitoring remote sensing environment solberg taxt jain markov random field model classification multisource satellite imagery geoscience remote sensing ieee transactions sonobe tani wang kobayashi shimamura feb random forest classification crop type using data remote sensing letters url http stefanski chaskovskyy waske mapping monitoring land use changes western ukraine using remote sensing data applied geography stefanski chaskovskyy waske dec mapping monitoring land use changes western ukraine using remote sensing data applied geography url http szeliski zabih scharstein veksler kolmogorov agarwala tappen rother comparative study energy minimization methods markov random fields lecture notes computer science url http tarabalka fauvel chanussot benediktsson method accurate classification hyperspectral images geoscience remote sensing letters ieee tran wehrens hoekman buydens initialization markov random field clustering large remote sensing images geoscience remote sensing ieee transactions uhlmann kiranyaz apr classification single polarized sar images incorporating visual features isprs journal photogrammetry remote sensing url http vitousek jul human domination earths ecosystems science url http voisin krylov moser serpico zerubia classification high resolution sar images urban areas using copulas texture hierarchical markov random field model geoscience remote sensing letters ieee waske benediktsson fusion support vector machines classification multisensor data 
geoscience remote sensing ieee transactions waske braun classifier ensembles land cover mapping using multitemporal sar imagery isprs journal photogrammetry remote sensing waske van der linden classifying multilevel imagery sar optical sensors decision fusion geoscience remote sensing ieee transactions wehmann liu contextual markovian kernel method land cover mapping isprs journal photogrammetry remote sensing wulder masek cohen loveland woodcock opening archive free data enabled science monitoring promise landsat remote sensing environment landsat legacy special issue url http xie pierce ulaby sar speckle reduction using wavelet denoising markov random field modeling geoscience remote sensing ieee transactions zhu hastie kernel logistic regression import vector machine journal computational graphical statistics zhu woodcock continuous change detection classification land cover using available landsat data remote sensing environment references
separators region intersection graphs jul james abstract undirected graphs say region intersection graph family connected subsets show excludes complete graph minor every region intersection graph edges balanced separator nodes constant depending additionally uniformly bounded vertex degrees separator found spectral partitioning string graph intersection graph continuous arcs plane string graphs precisely region intersection graphs planar graphs thus preceding result implies every string graph edges balanced separator size bound optimal generalizes planar separator theorem confirms conjecture fox pach improves log bound contents introduction balanced separators extremal spread eigenvalues spread additional applications preliminaries vertex separators conformal graph metrics conformal graphs padded partitions random separators congestion crossings duality conformal metrics crossing congestion excluded minors vertex congestion rigs careful minors random separators careful minors rigs chopping trees random separator construction diameter bound subgraphs applications discussion spectral bounds weighted separators embedding problems university washington partially supported nsf introduction consider undirected graph graph said region intersection graph rig vertices correspond connected subsets edge two vertices precisely subsets intersect concretely family connected subsets succinctness often refer rig let rig denote family finite rigs prominent examples graphs include intersection graphs regions surface intersection graphs graphs drawn surface instance string graphs intersection graphs continuous arcs plane easy see every finite string graph rig planar graph simple compactness argument may assume every two strings intersect finite number times consider planar graph whose vertices lie intersection points strings edges two vertices adjacent string see figure rig difficult see converse also true see lemma illustrate nature objects recall string graphs strings require intersections representation recognition problem string graphs decidability recognition problem established membership proved refer recent survey background history behind string graphs even planar rigs dense every complete graph rig planar graph particular every complete graph string graph conjectured fox pach every string graph balanced separator nodes fox pach proved graphs separators size log presented number applications separator theorem obtained bound log present work confirm conjecture fox pach generalize result include rigs graphs exclude fixed minor theorem rig excludes minor separator size number edges moreover one estimate log preceding statement separator subset induced graph every connected component contains vertices proof theorem constructive based solving rounding linear program yields algorithm constructing claimed separator case bound maximum degree one use spectral bisection algorithm see section planar graphs exclude minor theorem implies string graphs balanced separators since graphs drawn compact surface genus exclude minor theorem also applies string graphs fixed compact surface addition implies separator graphs excluding fixed minor following reason let define subdivision graph graph obtained subdividing every edge path length two every graph hard see minor minor rig theorem quantitatively weaker sense shows existence separators vertices since every graph log edges bound log figure string graph rig planar graph applications topological graph theory mention two applications theorem graph theory authors present 
applications separator theorems string graphs two cases tight bound separators leads tight bounds problems next two theorems confirm conjectures fox pach proved follow theorem results tight constant factor theorem constant every holds every string graph vertices cnt log edges topological graph graph drawn plane vertices represented points edges curves connecting corresponding pairs points theorem every topological graph vertices edges two disjoint sets cardinality log every edge one set crosses edges improves bound log proved authors also show bound tight conclude section let justify observation made earlier lemma finite string graphs precisely finite region intersection graphs planar graphs proof already argued string graphs planar rigs consider planar graph finite graph rig let representation rig since finite may assume region finite see let type set since finite finitely many types region let finite set vertices exhausts every type let finite spanning tree induced graph regions finite connected also form representation rig region finite may assume also finite take planar drawing edges drawn continuous arcs every let drawing spanning tree represented string simply trace tree using traversal begins ends fixed node thus string graph balanced separators extremal spread since complete graphs string graphs access topological methods based exclusion minors instead highlight delicate structural theory following fact exercise fact string graph planar generally recall minor obtained sequence edge contractions edge deletions vertex deletions obtained using edge contractions vertex deletions say strict minor following lemma appears section lemma rig strict minor minor topological structure forbidden strict minors interacts nicely conformal geometry explain consider family spaces arise finite graph assigning lengths edges taking induced shortest path distance certainly add edge family spaces grow since giving edge length equal diameter space effectively remove consideration particular complete graph vertices every metric space path metric phenomenon arise one instead considers path metrics conformal graph pair graph defines follows assign every length equal let induced shortest path distance refer conformal metric sometimes abuse terminology refer conformal metric well significant tool study extremal conformal metrics graph unlike case family path distances coming conformal metrics even contains arbitrarily large complete graph minors simple example let denote complete graph countably many vertices every countable metric space metric yet every distance arising conformal metric ultrametric max vertex expansion observable spread fix graph rig since family rig closed taking induced subgraphs standard reduction allows focus finding subset small isoperimetric ratio set edges vertices outside also define interior let define vertex expansion constant min shown quantity related concentration lipschitz functions extremal conformal metrics study properties rich history consider instance concentration function sense milman gromov observable diameter finite metric space dist define spread quantity dist dist define observable spread sobs dist sup remark remark terminology general difficult view large metric space holds conceptually algorithmic standpoint one thinks lipschitz maps observations observable spread captures much spread define observable spread sup sobs extremal quantities arise naturally study linear programming relaxations discrete optimization problems like finding smallest balanced vertex 
separator graph related extremal notions often employed conformal geometry discretizations see particular notions extremal length employed duffin cannon section recall proof following theorem relates expansion observable spread theorem every finite graph example subgraph lattice vertex set achieved taking defining light theorem prove theorem suffices give lower bound natural compare quantity spread max let examine two notions planar graphs using theory circle packings example circle packings suppose finite planar graph circle packing theorem asserts tangency graph family circles unit sphere let centers radii circles respectively argument spielman teng see also hersch analogous result conformal mappings shows one take define centers latter two distances geodesic distance euclidean distance respectively using fact yields moreover vol follows observe three coordinate projections lipschitz respect one contributes least fraction sum conclude combined theorem yields proof separator theorem similar proofs separator theorem based circle packings known see one new certainly known authors prove theorem two steps first giving lower bound establishing first step follow optimization linear program dual optimization maximum problem see section detailed discussion statement duality theory shows string graph small vertex congestion used construct related planar graph low vertex congestion sense element proof crucial ingenious reduction string graph planar graph preserve congestion standard sense work shows small congestion exist thus one concludes flow providing lower bound via duality section extend argument rigs graphs using flow crossing framework spread observable spread major departure comes second step rounding fractional separator integral separator establishing rig graph used following result holds finite metric space follows easily arguments see also theorem finite metric space holds sobs log particular graph vertices log instead using preceding result employ graph partitioning method klein plotkin rao authors present iterative process repeatedly partitioning metric graph diameter remaining components bounded partitioning process fails construct minor since rigs graphs necessarily exclude minors need construct different sort forbidden structure role lemma plays section order argument work essential construct induced partitions remove subset vertices induces partitioning remainder connected components constructing suitable random partition standard methods metric embedding theory allow conclude theorem rig graph eigenvalues spread section show methods presented used control eigenvalues discrete laplacian rigs consider linear space let symmetric positive linear operator given let denote spectrum define spread max spread used give upper bounds first eigenvalue graphs exclude fixed minor stronger property conformal metrics used bound higher eigenvalues well roughly speaking control kth eigenvalue one requires conformal metric spread every subset size large combining main theorems methods section section prove following theorem section theorem suppose rig excludes minor dmax maximum degree holds dmax log particular bound shows dmax recursive spectral partitioning see finds balanced separator additional applications treewidth approximations bounding rigs graphs leads additional applications combined rounding algorithm implicit theorem explicit yields algorithms vertex uniform sparsest cut problem particular follows rig excludes minor algorithm constructs tree decomposition treewidth treewidth result 
appears new even string graphs refer lipschitz extension padded decomposability result section combines lipschitz extension theory show following suppose conformal graph rig free graph every banach space subset mapping extension see applications flow cut sparsifiers graphs preliminaries use notation graphs appearing paper finite undirected unless stated otherwise graph use edge vertex sets respectively induced subgraph use notation set edges one endpoint let denote neighborhood write graph arises subdividing every edge path length two dist space write dist inf dist dist dist finally employ notation denote means exists universal unspecified constant vertex separators conformal graph metrics following result standard recall definition vertex expansion lemma suppose every induced subgraph satisfies separator size thus remainder section focus bounding remark one basic fact consider graph partition subset indeed suppose conformal graphs conformal graph pair connected graph associated define distance function follows assign length every induced metric define supu extremal linear programming relaxation optimization defining universal constant factors section establish following result theorem connected graph rig graph excludes minor log recall theorem completes first step program exhibiting small separators second step need relate restate proof theorem language proof presented also somewhat simpler employ menger theorem theorem restatement theorem connected graph holds proof inequality straightforward suppose witnesses let let partition let define two maps since separates maps map satisfies satisfies otherwise map either case shown inequality establish interesting bound suppose conformal metric mapping define three sets observe therefore since hold conclude therefore yields hand used notation next section prove following theorem though main technical arguments appear section theorem rig excludes minor combining lemma theorem theorem theorem yields proof theorem indeed suppose rig excludes minor let log completing proof light lemma padded partitions random separators let finite metric space define closed ball partition write set containing say partition diam definition random partition almost surely every following result essentially contained see also thm recall argument since exact statement need appeared lemma let finite metric space admits partition sobs proof breaks two cases whose conjunction yields lemma lemma sobs proof let define map moreover hand combining two preceding inequalities yields sobs lemma sobs proof let partition let map chosen uniformly random conditioned define note almost surely moreover observe therefore satisfy independence yields assumption sobs hence order produce padded partition construct auxiliary random object let conformal graph define skinny ball say random subset separator following two conditions hold almost surely every connected component diameter metric lemma admits separator admits partition proof random partition defined taking connected components along single sets fact almost surely immediate set observe every since connected set moreover see observe max thus follows every completing proof following result proved section see corollary theorem rig excludes minor every every conformal metric admits separator prove theorem proof theorem suppose rig excludes minor let conformal metric combining theorem lemma shows admits separator lemma shows sobs completing proof remark one advantage introducing auxiliary random separator used directly relate without going 
padded partitions indeed done using weaker property every stronger padding property number additional applications see section section present argument suppose conformal graph let vertex one apply lemma theorem obtain sobs suppose exists case every subset let separator every connected component vertices particular separator probability therefore linearity expectation congestion crossings let undirected graph let denote set paths note allow paths consisting single vertex vertices use pguv subcollection paths map use terms flow interchangeably define congestion map denote total flow sent undirected graph pair satisfies following conditions flow every map injective say proper say integral duality conformal metrics define congestion min minimum flows next theorem follows strong duality convex optimization see thm employs slater condition strong duality see theorem duality theorem every holds pair dual exponents require case except section case central crossing congestion excluded minors define crossing congestion flow denote provides lower bound congestion clearly define inf infimum define also min infimum integral next lemma offers nice property crossing congestion infimum always achieved integral flows lemma every graph holds proof given define random integral flow follows every edge independently choose path probability let equal number edges choose path paths selected manner independence linearity expectation yield next result relates topology graph crossing congestion appears lem lemma bipartite graph minimum degree preceding lemma allows one use standard crossing number machinery arrive following result see thm theorem every following holds excludes minor moreover constant log log vertex congestion rigs generalize argument prove following theorem theorem graph rig moving proof state main result section follows immediately conjunction theorem theorem one require lower bound theorem bound always holds corollary suppose connected graph rig graph excludes minor particular theorem yields log log proof theorem let set regions realizing every path specify path fix distinguished vertex suppose let path starts ends entire subpath contained possible connected implies share least one vertex describe path visiting regions order let let proper achieving path mapping sends possibly improper establishing following claim complete proof theorem claim holds prove claim follows intersect charge weight element visits regions visits regions meet vertex charge crossing otherwise charge crossing edge charged thus total weight charged similarly charged thus total weight charged since weight contributing charged yields desired claim careful minors random separators graphs one says minor connected subsets sometimes refer sets supernodes say strict minor stronger condition holds finally say careful minor strict minor next result explains significance careful minors region intersection graphs prove next section lemma rig careful state main result section proof occupies sections theorem following holds suppose excludes careful minor number conformal graph admits separator applying lemma immediately yields following corollary suppose excludes minor rig number conformal graph admits separator proof theorem based procedure iteratively removes random sets vertices graph rounds modeled argument based exposition latter argument one consult book careful minors rigs next lemma clarifies slightly structure careful minors lemma careful exist connected subsets distinct vertices independent set every holds proof direction 
straightforward argue direction let witness strict every exists simple path one endpoint one endpoint whose internal vertices satisfy vertex subdividing edge choose vertex removal breaks graph two connected components define property strictness minor fact vertices similarly properties follow form independent set strictness minor fact prove rig careful minors yield minors proof lemma let set regions realizing assume careful let sets guaranteed lemma define since connected regions connected follows connected let verify sets pairwise disjoint must regions would imply lemma asserts show exist pairwise paths connects yield desired fix lemma know connected set shares vertex also shares different vertex thus choose note lemma particular also yields verifying thus left verify sets pairwise also follows specifically fact independent set chopping trees observe trivial approximation argument suffices prove theorem conformal metric one satisfies let fix conformal metric number fix arbitrary ordering order break ties argument follows illustration fat sphere chopping graph subgraphs figure chopping procedure use denote collection connected induced subgraphs subgraph use disth denote induced distance coming conformal metric let define skinny ball fat ball fat sphere respectively disth disth see figure useful illustration one imagines vertex disk radius note connected component graph next fact requires assumption fact disth path emanating every holds let define collection connected components graph see figure next lemma straightforward lemma chosen uniformly random every tree rooted tree nodes triples refer center node depth define inductively depth follows root node let denote sequence centers encountered path root including children otherwise children distg words chosen point vhi furthest centers ancestors ambient metric distg concreteness maximum unique choose first vertex according ordering achieves maximum final definition say node chopping tree value maximum least distg note nodes level correspond connected components result removing subset nodes state following consequence lemma suppose tree consider integer let denote collection induced subgraphs occuring nodes unique connected component induced graph vhi state main technical lemma chopping trees proof appears section lemma consider assume following conditions hold tree exists node depth contains careful minor finally following analysis random chopping tree lemma following holds suppose chosen uniformly random let denote collection induced subgraphs occurring nodes vhi proof note since connected set edges lemma vhi vhi set experiences random chops probability gets removed one bounded lemma desired result follows observing satisfies random separator construction require additional tool proving theorem nodes need apply one operation vgk define subset follows define let collection connected components shards next two lemmas straightforward consequences construction lemma every satisfies min distg every holds diamg max lemma vgk chosen uniformly random proof theorem may assume indeed denote produce connected components taking union separators together yields may therefore assume assume excludes careful minor let tree chosen uniformly random let collection nodes let vhi construction graphs precisely connected components occur without repetition lemma define chosen uniformly random lemma vhi consider vhi case therefore together yield moreover collection induced subgraphs precisely set connected components thus left bound diamg every consider node since 
excludes careful minor lemma implies follows distg max distg based chosen therefore lemma implies every diamg conclude every connected component diamg combining shows separator yielding desired conclusion note establishing existence separator every implies existence separator every homogeneity diameter bound subgraphs goal prove lemma lemma restatement lemma consider assume following conditions hold tree node depth contains careful minor order enforce properties careful minor need way ensure edges certain vertices following simple fact primary mechanism lemma suppose satisfy disth proof since induced subgraph disth clearly proof lemma construct careful minor inductively recall careful minor strict minor subdivision use notation supernodes corresponding original vertices supernodes corresponding subdivision vertices single nodes denote let node denote sequence nodes path root write observe since also holds property becomes stronger children hence distg show induction contains careful minor sake induction need maintain additional properties describe first three properties simply ensure found strict minor let use numbers index vertices show exist sets vht vht vht following properties figure construction careful minor sets connected mutually disjoint set independent set holds every representative distg distg every holds disth base case take easily checked choices satisfy suppose objects satisfying establish existence objects satisfying may help consult figure inductive step recall construction note since holds distg therefore recalling let denote disth denote let unique element let denote unique element recall fact define lemma every following holds proof observe vht emanates implies disth thus note furthermore separates vht vht thus need prove disth vht max disth therefore lemma yields suffices prove let verify six properties order consider first sets lemma next consider sets distg thus hence disjoint lemma finally observe separates observe disjoint assumption follows separates thus need verify independent set end observe distg employed implies distg hence lemma implies indeed independent set follow immediately facts construction separator vertex path connecting left verify every argue using three cases follows vht since separates vht disth implies desired bound using lemma follows lemma distg moreover similarly distg one also distg first note using gives disth hand hence disth follows triangle inequality disth next disth also note disth disth follows disth thus verified holds disth fact disth follows similarly since left verify last case disth two facts follows completed verification inductive step thus induction exists careful minor completing proof applications discussion spectral bounds say conformal graph holds every subset one let smallest value next theorem appears thm theorem graph maximum degree dmax following holds satisfies admits partition dmax methods also give way producing weights consider graph let probability measure subsets flow called number let denote set supp vrg supported subsets size exactly following consequence duality theory convex programs see thm theorem every graph holds max min need extend notion weighted graphs suppose equipped weight edges pair satisfies properties property replaced every define crossing congestion infimum given measure let defined need following result immediate consequence corollary corollary theorem constant every log following holds excludes minor graph measure supported vrh holds log use preceding theorem combined method section reach 
conclusion rigs graphs corollary suppose rig excludes minor every log holds dmax log proof suppose let flow induced mapping described proof theorem claim holds theorem know position prove theorem theorem restatement theorem suppose rig excludes minor dmax maximum degree holds dmax log proof let may assume log since bound always holds conjunction corollary theorem know exists conformal metric dmax log theorem know admits partition every applying theorem yields claimed eigenvalue bound weighted separators throughout paper equipped graphs uniform measure vertices natural extensions setting graph equipped measure vertices corresponding definitions naturally replace weighted space methods section section extend straightforward way setting see particular section extensions general setting pairs weights illustration state weighted version theorem suppose probability measure separator subset nodes every connected component theorem rig excludes minor probability measure separator weight log one estimate embedding problems state two interesting open metric embedding problems state string graphs extension rigs graphs straightforward random embeddings planar graphs let graph consider random variable len random planar graph len assignment lengths edges use dist len denote induced distance question constant following holds every finite string graph every exists triple len almost surely every dist len lipschitz expectation every dist len positive answer would clarify geometry conformal metrics string graphs lower bound method generalized rule existence reductions topology graphs random embeddings form method relies initial family graphs closed property manifestly violated string graphs since particular string graphs closed subdivision embeddings open question whether every planar graph metric admits embedding distortion universal constant see discussion conjecture extension general families following generalization also natural question conformal string metrics admit embeddings precisely constant following holds every string graph every mapping note unlike case flows positive resolution imply theorem string graphs see discussion stronger types embeddings yield implication question positive resolution implies question equivalent question planar graphs acknowledgements author thanks noga alon nati linial laci helpful discussions janos pach emphasizing jirka bound separators string graphs organizers mathematics conference work initiated references noga alon paul seymour robin thomas separator theorem nonplanar graphs amer math punyashloka biswal james lee satish rao eigenvalue bounds spectral partitioning metrical deformations via flows acm art prelim version focs bourgain lipschitz embedding finite metric spaces hilbert space israel stephen boyd lieven vandenberghe convex optimization cambridge university press cambridge james cannon combinatorial riemann mapping theorem acta amit chakrabarti alexander jaffe james lee justin vincent embeddings topological graphs lossy invariants linearization ieee symposium foundations computer science duffin extremal length network math anal uriel feige mohammadtaghi hajiaghayi james lee improved approximation algorithms minimum weight vertex separators siam jacob fox pach separator theorem string graphs applications combin probab jacob fox pach applications new separator theorem string graphs combin probab jacob fox pach csaba bipartite strengthening crossing lemma combin theory ser fakcharoenphol talwar improved decomposition theorem graphs excluding fixed minor 
proceedings workshop approximation randomization combinatorial optimization volume lecture notes computer science pages springer anupam gupta ilan newman yuri rabinovich alistair sinclair cuts trees graphs combinatorica misha gromov metric structures riemannian spaces modern classics boston boston english edition based french original appendices katz pansu semmes translated french sean michael bates joseph hersch quatre membranes acad sci paris kelner lee price teng metric uniformization spectral bounds graphs geom funct prelim version stoc jan string graphs requiring exponential representations combin theory ser kostochka minimum hadwiger number graphs given mean degree vertices metody diskret philip klein serge plotkin satish rao excluded minors network decomposition multicommodity flow proceedings annual acm symposium theory computing pages jan string graphs recognizing string graphs combin theory ser james lee manor mendel mohammad moharrami theorem math ser james lee assaf naor extending lipschitz functions via random metric partitions invent tom leighton satish rao multicommodity theorems use designing approximation algorithms acm richard lipton robert endre tarjan separator theorem planar graphs siam appl lectures discrete geometry volume graduate texts mathematics new york separators string graphs combin probab string graphs separators geometry structure randomness combinatorics volume crm series pages pisa konstantin makarychev yury makarychev metric extension operators vertex sparsifiers lipschitz extendability israel vitali milman gideon schechtman asymptotic theory normed spaces volume lecture notes mathematics berlin appendix gromov gary miller teng william thurston stephen vavasis separators nearest neighbor graphs acm mikhail ostrovskii metric embeddings volume gruyter studies mathematics gruyter berlin bilipschitz coarse embeddings banach spaces yuri rabinovich average distortion embedding metrics line discrete comput marcus schaefer daniel decidability string graphs comput system marcus schaefer eric sedgwick daniel recognizing string graphs comput system special issue montreal daniel spielman teng spectral partitioning works planar graphs finite element meshes linear algebra applications special issue honor miroslav fiedler march andrew thomason extremal function contractions graphs math proc cambridge philos
| 8 |
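The separator result in the entry above remarks that, when vertex degrees are uniformly bounded, a balanced separator of a region intersection graph can be found by spectral partitioning. The sketch below is generic textbook spectral bisection (Fiedler-vector sweep cut, with the edge cut converted into a vertex separator by taking the boundary vertices), not the rounding procedure of that paper; the example graph, library choices, and the sparsity-style score are all illustrative assumptions.

```python
# A minimal sketch of spectral bisection: compute the Fiedler vector of the
# graph Laplacian, sweep over prefixes in Fiedler order, and return the vertex
# boundary of the best sweep cut as a (not necessarily optimal) separator.
# Generic spectral partitioning; the random geometric graph below is only an
# illustration, not data from the paper.
import numpy as np
import scipy.sparse as sp

def spectral_vertex_separator(adj):
    """adj: symmetric 0/1 scipy sparse adjacency matrix of a connected graph."""
    n = adj.shape[0]
    deg = np.asarray(adj.sum(axis=1)).ravel()
    lap = (sp.diags(deg) - adj).toarray()          # combinatorial Laplacian (dense; small n)
    vals, vecs = np.linalg.eigh(lap)
    fiedler = vecs[:, 1]                           # eigenvector of the second-smallest eigenvalue
    order = np.argsort(fiedler)                    # sweep vertices in Fiedler order
    rows, cols = adj.nonzero()
    in_s = np.zeros(n, dtype=bool)
    best_score, best_sep = np.inf, None
    for k in range(1, n - 1):                      # keep both sides of the cut non-empty
        in_s[order[k - 1]] = True
        crossing = in_s[rows] & ~in_s[cols]        # edges leaving the current prefix S
        sep = np.unique(cols[crossing])            # their outside endpoints form a vertex separator
        score = len(sep) / max(1, min(k, n - k))   # favour small and balanced separators
        if score < best_score:
            best_score, best_sep = score, sep
    return best_sep

# illustrative usage on a random geometric-style graph (an assumption, not from the paper)
rng = np.random.default_rng(0)
pts = rng.random((200, 2))
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
adj = sp.csr_matrix(((d2 < 0.02) & (d2 > 0)).astype(float))
print("separator size:", len(spectral_vertex_separator(adj)))
```

Removing the returned vertices disconnects the swept prefix from the rest of the graph; for bounded-degree graphs this kind of recursive spectral step is the algorithmic route the paper alludes to, although its guarantees come from the linear-programming/rounding argument rather than from this heuristic.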
feb note computation frobenius number numerical semigroup julio bstract note observe frobenius number therefore conductor numerical semigroup obtained maximal socle degree quotient corresponding semigroup algebra ideal generated biggest generator semigroup ntroduction eview numerical semigroups occur often many branches mathematics one challenging problems area computation frobenius number semigroup biggest integer element numerical semigroup paper describe method calculate based fundamental concepts commutative algebra details general reference numerical semigroups reader refer works rosales much notation use comes originally many respects seminal work herzog kunz let denote set nonnegative integers let arbitrary field let positive integer numbers gcd consider numerical semigroup nnd minimally generated existence element minimal number called conductor denote number biggest integer belonging called frobenius number let nonzero element set called set easily checked max let subset satisfying said fractional sometimes also called mathematics subject classification primary secondary key words phrases numerical semigroup frobenius problem graded polynomial ring author partially supported spanish government ministerio ciencia mec grant cooperation european union framework founds feder deutsche forschungsgemeinschaft dfg julio uniquely determined maximal ideal important sequel consider also note since one indeed inclusion holds precisely case cardinality set elements denoted note also max let resp polynomial ring graded deg every resp deg let graded homomorphism given every image semigroup ring associated denoted homogeneous prime ideal ker said presentation ideal let consider image epimorphism mapping onto stands projection onto first coordinates define quotient ring following ring isomorphisms easily checked denotes class modulo every furthermore ring local unique maximal graded ideal esult let define trivial submodule socle set elements annihilated homogeneous maximal ideal namely triv largest subspace structure vector space identified hom note set fact subset semigroup yields isomorphism say trivial submodule triv set formal power series whose elements indeed polynomials furthermore bijection sets given mapping every together isomorphism leads equality cardinality dimension socle dimk triv note computation frobenius number numerical semigroup means particular trivial submodule triv finite dimensional vector space field let choose basis take element deg max deg simple matter realise lemma degree deg independent choice basis trivial submodule ring thus led following result theorem deg proof proof straightforward bijection corollary max deg proof result follows straightforward equality beginning paper example let take monomial curve given corresponding numerical semigroup presentation ideal associated mod therefore get triv clearly seen deg hence one might also semigroup eferences herzog kunz die wertehalbgruppe eines lokalen rings der dimension sitz ber heidelberg akad wiss rosales numerical semigroups developments mathematics vol springer york diophantine frobenius problem oxford lect series math vol oxford new york selmer linear diophantine problem frobenius reine angew math villarreal monomial algebras marcel dekker new athematik nformatik ermany address jmoyano
| 0 |
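The note above obtains the Frobenius number of a numerical semigroup S = <a_1, ..., a_n> from the maximal socle degree of the quotient of the semigroup algebra by the ideal generated by the largest generator. A closely related elementary route to the same number goes through the Apéry set of S with respect to one generator m (the least element of S in each residue class mod m), since F(S) = max Ap(S, m) − m. The sketch below uses that route with a Bellman–Ford-style relaxation; it is a generic illustration, not the socle-degree computation of the note, and the example generators are an assumption.

```python
# A minimal sketch: compute the Frobenius number F(S) of S = <a_1, ..., a_n>
# (gcd of the generators equal to 1) via the Apery set with respect to m = min
# generator, using the identity F(S) = max(Ap(S, m)) - m.  Ap(S, m)[r] is the
# least element of S congruent to r mod m, found by relaxation over residues.
from math import gcd
from functools import reduce

def frobenius_number(gens):
    assert reduce(gcd, gens) == 1, "generators must have overall gcd 1"
    m = min(gens)
    INF = float("inf")
    ap = [INF] * m
    ap[0] = 0                               # 0 lies in S and is the least element in class 0
    changed = True
    while changed:                          # relax until the minimal representatives stabilise
        changed = False
        for r in range(m):
            if ap[r] == INF:
                continue
            for g in gens:
                cand = ap[r] + g            # another element of S, in class (r + g) mod m
                if cand < ap[(r + g) % m]:
                    ap[(r + g) % m] = cand
                    changed = True
    return max(ap) - m                      # Selmer's formula

print(frobenius_number([6, 9, 20]))         # -> 43 (the classical coin-problem example)
```

The monomials of degrees lying in the Apéry set index a basis of the quotient ring considered in the note, which is why the two computations are so closely connected.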
directional cell search delay analysis cellular networks static users sep yingzhe baccelli jeffrey andrews jianzhong charlie zhang abstract cell search process user detect neighboring base stations bss make cell selection decision due importance beamforming gain millimeter wave mmwave massive mimo cellular networks directional cell search delay performance investigated cellular network fixed user locations considered strong temporal correlations exist sinr experienced user poisson cellular networks rayleigh fading channels expression spatially averaged mean cell search delay users derived mean cell search delay network mmwave network proved infinite whenever nlos path loss exponent larger interferencelimited networks phase transition mean cell search delay shown exist terms number mean cell search delay infinite smaller threshold finite otherwise also demonstrated effective decreasing cell search delay especially cell edge users ntroduction cell search critical prerequisite establish initial connection cellular user cellular network specifically users detect neighboring bss make cell selection decision downlink cell search phase users acquire connections network initiating uplink random access phase transmissions receptions cell search performed lte unsuitable mmwave communication massive mimo due lack enough directivity gain contrast directional cell search schemes leverage andrews baccelli wireless networking communications group wncg university texas austin email yzli jandrews zhang samsung research email date revised september user achieve extra directivity gains ensure reasonable cell search performance paper leverage stochastic geometry develop analytical framework directional cell search delay performance fixed cellular network user locations fixed long period time several minutes believe analytical tools developed paper provide useful insights practical fixed cellular networks fixed mmwave massive mimo broadband networks mmwave backhauling networks related work useful method improve cell search performance compared conventional cell search mmwave massive mimo networks specifically mmwave links generally require high directionality large antenna gains overcome high isotropic path loss mmwave propagation result mmwave networks applying cell search provides sufficient ratio snr create viable communications also facilitates beam alignment users directional cell search delay performance mmwave systems investigated link level perspective system level perspective particular consider user mmwave locations fixed within initial access cycle independently reshuffled across cycles block coherent scenario fundamentally different fixed network massive mimo system bss achieve effective power gain scales number antennas channel state information csi known bss however since array gain unavailable cell search operations due lack csi new users may unable join system using traditional omnidirectional cell search order overcome issue proposed beamforming exhaustively sweep beams cell search design implemented verified ghz massive mimo prototype analytical directional cell search performance investigated fixed cellular networks system level perspective due analytical tractability cellular networks stochastic geometry natural candidate analyzing directional cell search delay fixed cellular networks particular stochastic geometry already widely used investigate fixed poisson network performance local delay metric characterizes number time slots needed sinr exceed certain sinr level local delay fixed hoc 
networks found infinite several standard scenarios rayleigh fading constant noise new phase transition identified case terms mean local delay latter finite certain parameters threshold infinite otherwise local delay fixed poisson networks also investigated shown power control efficient method ensure finite mean local delay previous works mainly focused communications contributions work analyze cell search delay fixed cellular networks directional cell search protocol consider duplex tdd cellular system system time divided different initial access cycles cycle starts cell search period wherein bss apply synchronous pattern broadcast synchronization signals mathematical framework developed derive exact expression mean cell search delay quantifies spatial average individual mean cell search delays perceived users main contributions paper summarized follows shown reduce number cycles needed succeed cell search arbitrary locations fading distribution mean number initial access cycles required succeed cell search proved decreasing number multiplied factor exact expression mean cell search delay derived poisson point process ppp distributed bss rayleigh fading channels expression given infinite series based following observations obtained noise limited scenario mmwave networks prove long path loss exponent nlos path larger mean cell search delay infinite irrespective transmit power number interference limited scenario massive mimo networks ghz bands exists phase transition mean cell search delay terms number specifically mean cell search delay infinite smaller critical value finite otherwise fact never observed literature best knowledge cell search delay distribution numerically evaluated conditional mean cell search delay typical user given nearest distance derived ppp distributed bss rayleigh fading channels distribution conditional mean cell search delay also numerically evaluated observe cell search delay distribution also show increasing number significantly reduce cell search delay cell edge users overall paper shown fixed networks mean cell search delay could large due temporal correlations induced common randomness result fixed cellular networks system parameters including number antennas intensity need carefully designed reasonable cell search delay performance achieved ystem odel work consider cellular system carrier frequency total system bandwidth transmit power denoted total thermal noise power denoted rest section present proposed directional cell search protocol location models propagation assumptions performance metrics directional cell search protocol consider tdd cellular system shown fig system time divided different initial access cycles period denotes ofdm symbol period initial access refers procedures establish initial connection user cellular network consists two main steps cell search downlink random access uplink specifically detecting synchronization signals broadcasted bss cell search user determine presence neighboring bss make cell selection decision user initiate random access process desired serving transmitting preamble shared random access channel successfully connected network decode preamble without collision main focus work cell search performance random access performance incorporated future work equipped large dimensional antenna array support highly directional communications analytical tractability actual antenna pattern approximated sectorized beam pattern antenna gain constant within main lobe addition assume side lobe gain reasonable approximation uses 
large dimensional antenna array narrow beams possibly ratio larger supports analog beamforming maximum possible vectors beamforming vector corresponds mainm lobe antenna gain covers sector area angle user assumed single antenna unit antenna gain cell search phase sweeps transmit beamforming directions broadcast synchronization signals user able detect sufficiently small miss detection probability ratio sinr synchronization signal exceeds bss transmit synchronously using beam direction every symbol cell search delay within cycle therefore tcs every transmits using direction typical user receive bss located inside sector define infinite sector domain centered sectors cell search sector say sector detected cell search typical user able detect provides smallest path loss closest inside sector path loss estimated beam reference signals cell search typical user selects smallest path loss among detected sectors serving simplicity neglect scenario providing smallest path loss inside sector deep fade unable detected bss detected sector scenario change fundamental trends regarding finiteness mean cell search detailed section iii theorem corresponding analysis significantly complicated initial access cycle initial access cycle time period period data transmission period period period beam pair beam pair beam pair data transmission period beam pair beam pair beam pair fig illustration two initial access cycles timing structure table notation simulation parameters symbol lcs lcs dcs definition ppp intensity user ppp intensity user transmit power carrier frequency system bandwidth total thermal noise power number antennas directions supported path loss exponents model path loss reference distance model critical distance path loss model sinr threshold detect synchronization signal preamble ofdm symbol period initial access cycle period sector providing smallest path loss typical user inside distance typical user nearest number cycles succeed cell search mean number cycles succeed cell search conditionally cell search delay closed open ball center radius simulation value dbm dbm ghz ghz spatial locations propagation models locations assumed realization stationary point process intensity user locations modeled realization homogeneous ppp intensity denoted paper fixed network scenario investigated locations fixed users either fixed move slow speed pedestrian speed less result user locations appear fixed across different initial access cycles fundamentally different high mobility scenario investigated assumes user ppps independently shuffled across every initial access cycles without loss generality analyze performance typical user located origin guaranteed slivnyak theorem states property observed typical point ppp observed point origin process path loss function adopted path loss link distance given dual slope path loss model captures dependency path loss exponent link distance various network scenarios mmwave networks particular referred los ball blockage model mmwave networks wherein represent los nlos path loss exponents represent path loss reference distance meter focus scenario max dual slope path loss model reverts standard path loss model due adopted antenna pattern bss directivity gain user beam aligned user otherwise fading effect every bsuser link modeled random variable whose complementary cumulative distribution function ccdf decreasing function support addition assume cycle length fading random variables given link also across different cycles performance metrics main performance metrics 
investigated work number cycles corresponding cell search delay typical user discover neighboring bss determine potential serving without loss generality cycle fig represents first cycle typical user denote success indicator cell search cycle number cycles typical user succeed cell search therefore lcs inf since analog beamforming adopted cell search delay defined follows dcs lcs finally table summarizes notation definitions system parameters used rest iii nalysis ean ell earch elay section mean cell search delay performance typical user investigated corresponds cell search delay palm expectation respect user ppp dcs fact palm expectation also understood ergodic interpretation states user cell search delay dcs following relation true dcs dcs lim therefore mean cell search delay typical user also understood spatial average individual cell search delays among users notational simplicity use rest paper denote palm expectation user ppp cell search delay general deployment fading assumptions part first investigate cell search delay general location model necessarily ppp fading distribution according section user locations fixed fading variables every link across cycles therefore given process cell search success indicators different cycles form bernoulli sequence random variables cell search success probability denoted since sector independently detected given cell search successful least one sector detected conditionally cell search success probability symbols two simulation values first one noise limited scenario second one interference limited scenario detailed section every cycle therefore denotes indicator providing smallest path loss inside sector detected specifically denote sector providing smallest path loss typical user fji fading random variables bss typical user fji kxij xij fji kxij xij expectation taken respect fading random variables fji following theorem derive mean number cycles typical user succeed cell search palm expectation user process theorem mean number cycles needed typical user succeed cell search given lcs lcs proof first part proved fact given lcs geometric distribution success probability second part follows taking expectation respect remark since according conditional mean cell search delay lcs finite almost surely however overall spatial averaged mean cell search delay respect ppp lcs could infinite certain network settings detailed next subsection lower bound upper bound lcs immediately obtained provided following remarks remark applying jensen inequality positive random variable function get thus lcs equality holds ppp independently across different cycles typical user perspective coincides high mobility scenario considered remark denote providing smallest path loss typical user index sector contains therefore upper bound lcs given lcs based theorem prove following relation number mean cell search delay lemma given realization locations mean number cycles succeed cell search lcs lcs integer larger proof since know denote providing smallest path loss typical user inside assume due facts since decreasing function get also note according hence thus cell search success probability typical cycle satisfies finally proof concluded applying theorem lemma shows location models fading distributions conditional number cycles cell search succeed decreases number multiplied integer equivalently beamwidth divided result also implies lcs lcs remark fact lemma extended integer always exist special constructions deployments lcs lcs rest section investigate mean cell search delay 
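To make the conditional result above concrete: given the BS locations, the number of cycles needed to succeed is geometric, so the conditional mean is 1 / (1 − Π_j (1 − P_j)), with P_j the probability that sector j is detected in one cycle. Under Rayleigh fading, the detection probability of a sector's min-path-loss BS has the standard closed form used below (an exponential noise term times a product over the other BSs of the sector). The sketch evaluates this for one fixed BS realisation; the density, threshold, power, and single-slope path-loss law are illustrative assumptions, not the values of the paper.

```python
# Conditional mean number of cell-search cycles for a fixed BS layout, using
# E[L_cs | Phi] = 1 / (1 - prod_j (1 - P_j)) and the Rayleigh-fading closed form
# for the per-sector detection probability P_j.  All numeric parameters are
# placeholders chosen for illustration only.
import numpy as np

rng = np.random.default_rng(1)

def pathloss(r, alpha=4.0):
    return np.maximum(r, 1.0) ** alpha            # assumed single-slope law, 1 m reference

def mean_cycles_given_bs(bs_xy, n_sectors=8, theta=1.0, p_tx=1.0, noise=1e-9):
    """Conditional mean number of cycles for a user at the origin, given BS layout bs_xy."""
    g_bf = n_sectors                              # main-lobe beamforming gain taken ~ number of sectors
    r = np.hypot(bs_xy[:, 0], bs_xy[:, 1])
    ang = np.mod(np.arctan2(bs_xy[:, 1], bs_xy[:, 0]), 2 * np.pi)
    sector = (ang // (2 * np.pi / n_sectors)).astype(int)
    ell = pathloss(r)
    prob_miss_all = 1.0
    for j in range(n_sectors):
        ells = np.sort(ell[sector == j])
        if len(ells) == 0:
            continue                              # an empty sector is never detected
        l_star, interferers = ells[0], ells[1:]   # min-path-loss BS vs. the rest of the sector
        # Rayleigh fading: P(SINR of the min-path-loss BS >= theta | Phi)
        p_j = np.exp(-theta * noise * l_star / (p_tx * g_bf)) \
              * np.prod(1.0 / (1.0 + theta * l_star / interferers))
        prob_miss_all *= 1.0 - p_j
    return np.inf if prob_miss_all >= 1.0 else 1.0 / (1.0 - prob_miss_all)

# one realisation of BSs in a 2 km x 2 km window around the user (assumed density)
lam = 50e-6                                       # BS per m^2
n_bs = rng.poisson(lam * 2000.0 * 2000.0)
bs = rng.uniform(-1000.0, 1000.0, size=(n_bs, 2))
print("conditional mean number of cycles:", mean_cycles_given_bs(bs))
```

Averaging this conditional mean over many independent BS realisations estimates the spatially averaged mean studied next, which — as the analysis below shows — can diverge even though each conditional mean is finite.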
several specific network scenarios mean cell search delay poisson networks rayleigh fading part locations assumed form homogeneous ppp intensity fading random variables exponentially distributed unit mean exp due high analytical tractability network setting widely adopted obtain fundamental design insights conventional macro cellular networks cellular networks even mmwave cellular due ppp assumption bss fact different sectors every sector therefore detected independently probability since path loss function provides minimum path loss typical user inside sector closest origin angle uniformly distributed within ccdf norm derived follows min kxk exp second equality follows void probability ppps therefore probability sinr rate trends mmwave networks rayleigh fading ppp configured bss shown close realistic fading assumptions nakagami fading shadowing distribution function pdf given exp applying ppp exp conditional detection probability sector given exp fji kxij xij exp exp exp xij xij fji kxij kxij step obtained taking expectation fading random variables theorem ppp fading variables exponentially distributed unit mean mean number cycles cell search succeed lcs given exp rdr exp exp proof substituting theorem obtain lcs step derived fact step follows monotone convergence theorem step events sectors detected ppp distributed bss furthermore compute follows exp exp exp kxij exp denotes expectation palm distribution step derived slivnyak theorem finally proof concluded applying probability generating functional pgfl ppps remark theorem interpreted lcs lcs representing probability sectors detected within cycles lcs theorem provides series representation expected number cycles succeed cell search however unclear theorem whether lcs finite following investigate finiteness lcs two representative network scenarios namely noise limited scenario interference limited scenario noise limited scenario noise limited scenario assume noise power dominates interference power interference power perfectly canceled noise power needs taken account compared conventional cellular networks operate ghz bands mmwave networks much higher noise power due wider bandwidth interference power much smaller due high isotropic path loss mmwave result mmwave cellular networks typically noise limited especially carrier frequency system bandwidth high enough ghz carrier frequency ghz bandwidth since interference power zero noise limited scenario theorem becomes lcs exp exp change variable becomes exp exp lcs shows lcs intensity increases network densification helps reducing number cycles succeed cell search next two lemmas prove finiteness lcs depends nlos path loss exponent phase transition lcs happens theorem noise limited scenario finite number intensity lcs whenever nlos path loss exponent proof given number arbitrarily large positive value obtain following lower bound lcs exp exp exp exp exp exp exp exp exp exp exp step step follows fact thus note since goes infinity goes infinity completes proof according lemma expected cell search delay infinity whenever alleviated densification increase using higher number antennas increase reason explained shows due deployment typical user could located cell edge closest inside every sector farther arbitrarily large distance exp fraction cell edge users corresponding number cycles required succeed cell search least exp therefore expected cell search delay averaged users ultimately infinite system level perspective indicates noise limited networks always significant fraction cell edge users 
requiring large number cycle succeed cell search spatial averaged cell search delay perceived users determined largely cell edge users explains infinite mean cell search delay observed theorem noise limited scenario nlos path loss exponent expected number cycles succeed cell search lcs density number satisfy lcs phase transition lcs happens proof clear lcs addition simplify upper bound lcs remark noise limited scenario given follows lcs exp exp exp exp exp exp exp exp exp exp obtained applying noise limited assumption noting providing smallest path loss among bss closest origin since observed lcs guaranteed finite mean observe proof lemma arbitrarily large distance fraction exp cell edge users whose nearest bss farther number cycles edge users succeed cell search scales exp result deployment sparse number cell search delay averaged users becomes infinity due cell edge users contrast network densification fraction cell edge users poor signal power reduced average cell search delay reduced finite mean value whenever similar behavior happens bss using antennas increase snr cell edge users summarize noise limited scenario mmwave network mean cell search delay infinite whenever nlos path loss exponent typically case however special case nlos path loss exponent mean cell search delay could switch infinity finite value careful network design densification adopting antennas interference limited scenario interference limited scenario noise power dominated interference power assume example massive mimo network operates ghz bands typically interference limited part investigate cell search delay network standard single slope path loss function suitable networks sparsely deployed bss opposed networks first prove theorem greatly simplified interference limited scenario lemma interference limited scenario expected number initial access cycles required succeed cell search given lcs rdr proof substituting defined simplified follows exp rdr exp exp rdr exp rdr completes proof remark observe lemma lcs depend intensity interference limited scenario increase decrease signal power perfectly corresponding increase decrease interference power another immediate observation lemma independent number antennas since according definition theorem lcs therefore monotonically respect stronger observation lemma remark path loss exponent proved lemma lcs mainly interference power dominate signal power coverage probability sinr threshold prove may exist phase transition lcs terms beam number order show first apply remark obtain sufficient condition guarantee finiteness lcs lemma interference limited scenario path loss exponent expected number cycles succeed cell search lcs number beams denotes detection threshold particular lcs finite proof denote closest origin among sector containing obtain upper bound lcs substituting remark follows lcs kxj kxj exp exp exp exp obtained noting closest origin follows pgfl derived change variables observed finite whenever sufficient condition finiteness lcs particular equality holds first step result lcs finite according lemma lemma number cycles succeed cell search lcs may phase transition terms number beams depending relation path loss exponent detection threshold detailed following theorem theorem number cycles succeed cell search interference limited networks satisfy following lcs antenna case monotonicity lcs respect lcs guaranteed finite lcs lcs therefore according monotonicity lcs exists phase transition lcs lcs particular lcs means path loss exponent depends propagation environment 
corresponds free space los scenario increases environment becomes relatively lossy urban suburban areas addition sinr detection threshold note theorem directly apply pgfl calculation since larger however use dominated convergence theorem prove ppp pgfl result intensity measure still holds function satisfies exp depends receiver decoding capability typically within theorem shows lossy environment typical user detect nearby finite number cycles average mainly relative strength useful signal respect interfering signals strong enough however lcs could infinite due significant fraction cell edge users poor sir coverage therefore require high number cycles succeed cell search specifically small edge user subject many strong nearby interferers inside every sector corresponding cell search delay averaged users becomes infinity however increases beam sweeping create enough angular separation nearby bss edge user could locate different sectors result lcs significantly decreased cell edge users increases therefore phase transition lcs happens summary network always ensure network desirable condition finite mean cell search delay tuning number appropriately cell search delay distribution poisson networks rayleigh fading previous part mainly focused mean number cycles succeed cell search lcs equivalently mean cell search delay however shown theorem theorem theorem lcs could infinite various settings large variations performance cell edge user cell center user therefore also important analyze cell search delay distribution system design since cell search delay dcs depends spatial point process model bss fading random variables cycle distribution intractable general section evaluate distribution conditional mean cell search delay given distance typical user closest random variable pdf exp specifically first derive expected number cycles succeed cell search given lcs function random variable mean lcs notation simplicity denote lcs lcs rest paper according evaluate distribution following conditional mean cell search delay dcs lcs main reason investigate cell search delay conditionally captures location therefore signal quality typical user particular corresponds cell center user corresponds cell edge user represents mean distance typical user nearest ppp order derive lcs first derive lcs denotes distance typical user closest sector lemma given distances typical user nearest bss inside every sector mean number cycles cell search lcs denotes probability detected first cycles exp rdr exp proof first prove lcs lcs due tower property conditional expectations rest proof follows steps similar theorem therefore omit details next prove following corollary derive lcs lcs corollary random variables ccdf functions symmetric following relation holds true min proof denote min obtain follows lim lim lim proof completed noting symmetric taking lcs corollary lcs directly obtained follows lemma given distance typical user nearest mean number cycles succeed cell search lcs exp exp function defined lemma lemma provides method evaluate cell search delay distribution general setting noise limited networks interference limited networks obtain following simplified results corollary noise limited network lcs given exp exp exp exp lcs exp wpcmn exp exp exp wpcml exp exp corollary easily proved lemma fact interference power corollary interference limited network standard path loss model path loss exponent lcs given lcs exp exp rdr proof since lemma simplified exp therefore obtain exp exp proof completed substituting lemma umerical 
valuations section distribution conditional mean cell search delay numerically evaluated noise limited scenario interference limited scenario specifically noise limited scenario consider cellular network operating mmwave band carrier frequency ghz bandwidth ghz intensity path loss exponents los nlos links respectively critical distance addition ofdm symbol period cycle length chosen interference limited scenario consider cellular network carrier frequency ghz intensity standard single slope path loss model path loss exponent ofdm symbol period cycle length conditional expected number cycles succeed cell search order evaluate distribution conditional mean cell search delay first illustrate lemma specifically simulated cellular network directional cell search protocol proposed section given distance user nearest shown lemma remark cell edge users require large number cycles succeed cell search therefore set upper bound number cycles user try cell search equal cycles noise limited scenario ghz ghz lcs lines theory markers simulation lines theory markers simulation noise limited networks interference limited networks fig conditional expected number cycles succeed cell search cycles interference limited scenario specifically infinite summation lemma computed term simulation treat user outage connected within cycles fig shows close match analytical results simulation results noise interference limited scenarios line lemma addition also observe fig conditional expected number cycles succeed cell search monotonically decreasing number increases distance nearest decreases cell search delay distribution noise limited networks cell search delay distribution noise limited networks numerically evaluated part fig plots ccdf conditional mean cell search delay dcs obtained generating realizations computing corresponding dcs corollary observe fig scale tail distribution function dcs dcs decreases almost linearly respect indicates cell search delay actually pareto type also observed fig tail distribution function satisfies log dcs log therefore expected cell search delay always infinite line lemma fig also shows number antennas increases tail dcs becomes lighter thus cell search delay edge users significantly reduced example cell search delay percentile user almost times smaller increases fact increasing increase snr cell edge users number cycles required edge users succeed cell search lcs shortened since dcs lcs cycle length much larger ofdm symbol period tail distribution dcs therefore becomes lighter increases despite higher overhead within every cycle ccdf ghz ghz increases cell search delay fig cell search delay distribution noise limited networks due nature cell search delay distribution fig shows exists extremely large variation cell search delay performance cell center users cell edge users fig plots cell search delay percentile users number antennas increases since percentile users located cell center typically los serving bss sufficiently high isotropic snr thus succeed cell search first cycle initiates therefore fig shows increases cell search delay percentile users increases almost linearly due increase overhead cell search delay performance percentile users median users plotted fig observe contrast mean cell search delay infinite median delay less various antenna number small median users high enough snr thus need cycles succeed percentile cell search delay ghz ghz number beams fig percentile cell search delay noise limited network cell search increases cell search delay median users first 
decreases due improved snr cell search success probability median users could succeed cell search first cycle initiates cell search delay increase increased beam sweeping overhead becomes dominant optimal antenna number beamwidth fig corresponds cell search delay percentile cell search delay ghz ghz number beams fig percentile cell search delay noise limited network cell search delay distribution interference limited networks similar noise limited scenario evaluated ccdf cell search delay interference limited scenario fig generating realizations computing corresponding dcs corollary fig shows tail distribution function dcs decreases almost linearly scale means distribution dcs also interference limited scenario however contrast noise limited scenario overall mean cell search delay always infinite phase transition mean cell search delay interference limited scenario observed fig specifically cell search performed fig shows decay rate tail satisfies log dcs log indicates infinite mean cell search delay increases fig shows log dcs log leads finite mean cell search delay observation consistent theorem shows considered interference limited scenario path loss exponent sinr detection threshold mean cell search delay infinite finite long also observed fig significantly reduce cell search delay median users edge users interference limited networks example number corresponding cell search delay percentile user respectively corresponding cell search delay percentile user respectively main reason performance gain interferencelimited network increases creates angular separations nearby bss user number cycles succeed cell search effectively reduced especially edge users ccdf omni cell search delay fig cell search delay distribution interference limited network onclusions paper proposed mathematical framework analyze directional cell search delay fixed cellular networks user locations static conditioned locations first derived conditional expected cell search delay palm distribution user process utilizing taylor series expansion derived exact expression overall mean cell search delay poisson cellular network rayleigh fading channels based expression expected cell search delay noiselimited network proved infinite nlos path loss exponent larger contrast phase transition expected cell search delay network identified delay finite number greater threshold infinite otherwise finally investigating distribution conditional cell search delay given distance nearest cell search delay edge user shown significantly reduced number increases holds true noise interference limited networks framework developed paper provides tractable approach handle spatial temporal correlations user sinr process cellular networks fixed user locations future work leverage proposed framework derive random access phase performance overall expected initial access delay well downlink throughput performance fixed cellular networks addition also extend framework incorporate user beamforming power control acknowledgments work supported part national science foundation grant award simons foundation university texas austin eferences dahlman parkvall skold mobile broadband elsevier science khan introduction mobile broadband systems ieee communications magazine vol jun rappaport sun mayzus zhao azar wang wong schulz samimi gutierrez millimeter wave mobile communications cellular work ieee access vol may roh seol park lee lee kim cho cheun aryanfar beamforming enabling technology cellular communications theoretical feasibility prototype results ieee 
communications magazine vol ghosh thomas cudak ratasuk moorut vook rappaport maccartney sun nie enhanced local area systems approach future wireless networks ieee journal selected areas communications vol jul marzetta noncooperative cellular wireless unlimited numbers base station antennas ieee transactions wireless communications vol larsson edfors tufvesson marzetta massive mimo next generation wireless systems ieee communications magazine vol rusek persson lau larsson marzetta edfors tufvesson scaling mimo opportunities challenges large arrays ieee signal processing magazine vol jan bjornson larsson marzetta massive mimo ten myths one critical question ieee communications magazine vol andrews buzzi choi hanly lozano soong zhang ieee journal selected areas communications vol jun andrews bai kulkarni alkhateeb gupta heath modeling analyzing millimeter wave cellular systems ieee transactions communications vol barati hosseini mezzavilla korakis panwar rangan zorzi initial access millimeter wave cellular systems ieee transactions wireless communications vol andrews baccelli novlan zhang design analysis initial access millimeter wave cellular networks ieee transactions wireless communications vol karlsson larsson operation massive mimo without transmitter csi ieee international workshop signal processing advances wireless communications spawc jun shepard javed zhong control channel design proceedings annual international conference mobile computing networking baccelli blaszczyszyn stochastic geometry wireless networks volume publishers inc andrews baccelli ganti tractable approach coverage rate cellular networks ieee transactions communications vol haenggi andrews baccelli dousse franceschetti stochastic geometry random graphs analysis design wireless networks ieee journal selected areas communications vol verizon radio access physical layer procedures jun choi heath gigabit broadband evolution toward fixed access backhaul ieee communications magazine vol apr hur kim love krogmeier thomas ghosh millimeter wave beamforming wireless backhaul access small cell networks ieee transactions communications vol giordani mezzavilla zorzi initial access mmwave cellular networks ieee communications magazine vol giordani mezzavilla barati rangan zorzi comparative analysis initial access techniques mmwave cellular networks annual conference information science systems mar andrews baccelli novlan zhang performance analysis cellular networks beamforming initial access protocols asilomar conference signals systems computers baccelli blaszczyszyn new phase transition local delays manets infocom proceedings ieee apr haenggi local delay poisson networks ieee transactions information theory vol mar zhang haenggi power control policies ieee transactions wireless communications vol iyer vaze achieving information velocity wireless networks international symposium modeling optimization mobile hoc wireless networks wiopt may waterhouse novak nirmalathas lim broadband printed sectorized coverage antennas millimeterwave wireless applications ieee transactions antennas propagation vol alkhateeb nam rahman zhang heath initial beam association millimeter wave cellular systems analysis design insights ieee transactions wireless communications vol may hussain michelusi throughput optimal beam alignment millimeter wave networks arxiv preprint chiu stoyan kendall mecke stochastic geometry applications john wiley sons baccelli blaszczyszyn stochastic geometry wireless networks volume theory publishers inc zhang andrews 
downlink cellular network analysis path loss models ieee transactions communications vol mar bai heath coverage rate analysis cellular networks ieee transactions wireless communications vol renzo stochastic geometry modeling analysis millimeter wave cellular networks ieee transactions wireless communications vol singh kulkarni ghosh andrews tractable model rate millimeter wave cellular networks ieee journal selected areas communications vol haenggi stochastic geometry wireless networks cambridge university press
| 7 |
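The cell-search analysis in the row above pairs nearest-base-station association with a sweep of M beams per synchronization cycle, and evaluates the delay distribution by Monte Carlo with a cap on the number of cycles, treating users that never connect as outage. A minimal noise-limited sketch of that experiment is given below; the intensity, path-loss exponent, reference SNR, detection threshold and cycle length are illustrative assumptions rather than the row's exact parameters, and interference, the LOS/NLOS distinction and sectorized antenna patterns are deliberately omitted.

```python
# Minimal Monte Carlo sketch of a directional cell-search experiment of the kind
# described in the row above.  All numerical values and the simplified noise-limited
# propagation model (single-slope path loss, i.i.d. Rayleigh fading per cycle,
# beamforming gain proportional to the number of beams) are assumptions made for
# illustration only.
import numpy as np

rng = np.random.default_rng(0)

LAMBDA = 5e-6        # base-station intensity per m^2 (~5 BSs per km^2, assumed)
RADIUS = 2000.0      # simulation disc radius in metres
ALPHA = 4.0          # path-loss exponent (assumed NLOS-like, > 2)
SNR_1M = 2e8         # SNR at 1 m reference distance, constants absorbed (assumed)
THETA = 1.0          # SNR detection threshold, about 0 dB (assumed)
M = 16               # beams swept per synchronisation cycle
T_SYMBOL = 14.3e-6   # OFDM symbol period in seconds (assumed)
T_CYCLE = M * T_SYMBOL
MAX_CYCLES = 1000    # users not detected within this many cycles count as outage

def one_user_delay():
    """Cell-search delay (seconds) of one user in one PPP realisation."""
    n_bs = rng.poisson(LAMBDA * np.pi * RADIUS ** 2)
    if n_bs == 0:
        return MAX_CYCLES * T_CYCLE
    r1 = (RADIUS * np.sqrt(rng.uniform(size=n_bs))).min()   # nearest-BS distance
    gain = M                                                # gain grows with the number of beams
    for k in range(1, MAX_CYCLES + 1):
        h = rng.exponential()                               # Rayleigh fading, fresh each cycle
        if SNR_1M * gain * h * r1 ** (-ALPHA) > THETA:      # detected during this sweep
            return k * T_CYCLE
    return MAX_CYCLES * T_CYCLE                             # outage

delays = np.array([one_user_delay() for _ in range(10000)])
print("median delay   :", np.median(delays))
print("95th percentile:", np.quantile(delays, 0.95))
# The sample mean keeps growing as MAX_CYCLES is raised: for ALPHA > 2 the conditional
# mean number of cycles exp(THETA * r^ALPHA / (SNR_1M * gain)) is not integrable
# against the nearest-BS distance density 2*pi*LAMBDA*r*exp(-pi*LAMBDA*r^2), which is
# the infinite-mean behaviour the row above discusses.
print("sample mean    :", delays.mean())
```

Raising M in this toy model shortens the cycle count for far users through the gain term while lengthening each cycle through T_CYCLE, which mirrors the overhead-versus-separation trade-off the row describes for median and edge users.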
gene expression time course clustering countably infinite hidden markov models matthew beal praveen krishnamurthy department computer science engineering state university new york suny buffalo buffalo mbeal abstract learn causal relationships may help elucidate well existing approaches clustering gene expression time course data treat different time points independent dimensions invariant permutations reversal experimental time course approaches utilizing hmms shown helpful regard hampered choose model architectures appropriate complexities propose clustering application hmm countably infinite state space inference model possible recasting hierarchical dirichlet process hdp framework teh hence call show infinite model outperforms model selection methods finite models traditional methods measured variety external internal indices clustering two large publicly available data sets moreover show infinite models utilize hidden states employ richer architectures transitions without damaging effects overfitting two problematic issues hamper practical methods clustering gene expression time course data first deriving clustering metric often unclear appropriate model complexity second current clustering algorithms available handle therefore disregard temporal information usually occurs constructing metric distance two genes common practice experiment measurements gene expression time consider expression positioned space perform worse spherical metric clustering space result clustering algorithm invariant arbitrary permutations time points highly undesirable since would like take account correlations genes expression nearby adjacent time points introduction large number popular techniques clustering gene expression data goal elucidate many different functional roles genes players important biological processes said genes cluster similar similar functional roles process see example eisen bioinformaticians recently access sets measurements genes expression duration experiment desired therefore data sets model complexity issue recently successfully tackled using dirichlet process mixture models particular countably infinite gaussian mixture models examples found rasmussen wild medvedovic sivaganesan dubey medvedovic however models applicable time series data unless ones uses unsavory concatenation time points mentioned example work medvedovic sivaganesan infinite mixture gaussians applied spellman time series data address second issue parametric models models differential equation models employed modelcomplexity issue still needs tackled either heuristic model selection criteria iyer approximate bayesian methods bic ramoni variational bayes beal continuous time series modeling spline ters popular heard discrete time series consider paper random walk models wakefield proposed advanced analysis thus far mixture hmms approach schliep wherein bic schwarz entropic criterion used model selection approach outlined paper also uses hmm question model selection using flexible nonparametric bayesian mixture modeling framework allows model countably infinite number hidden states countably infinite countably infinite transition matrix present simple powerful temporal extension recently introduced hierarchical dirichlet process mixture model hdp teh jordan beal blei form model coined hierarchical dirichlet process hidden markov model describe extension allows address issues noted moreover still provide measure similarity two genes time courses examining probabilistic degree overlap hidden state trajectories possible 
despite state spaces countably infinite paper arranged follows section briefly review hdp framework show countably infinite hmm particular though nontrivial instantiation within framework describe straightforward similarity measure pairs sequences section present results timecourse clustering experiments two publicly available gene data sets ground truth labels provided measuring performance respect variety external internal indices conclude section suggesting directions future work expansion infinite section present hidden markov model countably infinite state space call hierarchical dirichlet process hidden markov model hdphmm way relationship hdp framework teh begin overview hdp according work recast infinite hmm framework previous research first author infinite hmms beal provided approximate sampling scheme inference learning unable prove correctness recasting infinite hmm constrained hdp show functioning sampling scheme disposal explained hierarchical dirichlet processes hdp considers problems involving groups data observation within group draw mixture model desirable share mixture components groups consider first single group data number mixture components unknown priori inferred data natural consider dirichlet process mixture model depicted figure exposition mixtures see neal well known clustering property dirichlet process provides nonparametric prior number mixture components within group following generative model base measure concentration parameter dirichlet process parameter drawn datum drawn distribution parameterized models experiments hereafter gaussian density ferguson showed draws discrete probability one therefore sufficiently large data set several identical gives rise natural clustering phenomenon giving mixtures name consider several groups data denoted xji setting natural consider sets dirichlet processes one group still desire tie mixture models various groups therefore consider hierarchical model specifically one child dirichlet processes set base measure distributed according dirichlet process base measure discrete probability one child dirichlet processes necessarily share atoms thus ensuring mixture models different groups necessarily share mixture components generative model given xji figure shows graphical model plate data also groups data model number mixture components unknown priori group also data whole xji zji xji hdp hdp figure graphical model descriptions compares hdp mixture model original hdp model interpretation original hdp mixing proportions jth group drawn common weights drawn process item xji drawn mixture model mixing proportions unraveled hdp graphical model wherein distribution mixture model item drawn mixture model mixture indicator mixing proportions determined previous hidden state recasting hidden markov model straightforward route understanding connection hidden markov model hdp described first realize stickbreaking characterization sethuraman hdp given depicted figure stick zji xji zji construction stick gives rise weights beta advantage representation makes explicit generation one countably infinite set parameters jth group access various parameters need model data xji depending sampled mixing proportion recall hidden markov model hmm doubly stochastic markov chain sequence multinomial state variables linked via state transition matrix element sequence observations drawn independently observations conditional rabiner essentially dynamic variant finite mixture model one mixture component corresponding value multinomial state note hmm 
involves single mixture model rather set mixture models one value current state current state indexes specific row transition matrix probabilities row serving mixing proportions choice next state given next state observation drawn mixture component indexed thus consider nonparametric variant hmm allows unbounded set states must consider set dps one value current state moreover dps must linked want set next states reachable current amounts requirement atoms associated dps framework hierarchical thus simply replace set conditional finite mixture models underlying classical hmm hdp resulting model provides alternative methods place explicit parametric prior number states make use model selection methods select fixed number states stolcke omohundro exist two gibbs sampling methods based extension chinese restaurant process crp aldous called chinese restaurant franchise crf based auxiliary variable method described teh auxiliary variable method straightforward implement used experiments shown fact work served inspiration hdphmm beal sampler presented resembles crf scheme necessarily proximate reduce time complexity urn model presented earlier work related hdp framework describing latter using formalism particular consider unraveled hierarchical dirichlet process representation shown figure parameters representation following distributions stick assume simplicity distinguished initial state consider crf representation model turns result equivalent coupled urn model beal advantage representation use auxiliary variable sampling scheme designed hdp described current instantiation importantly order state variables defines grouping data groups indexed given grouping settings mixtures sampled independently sampling indicators change hence grouping data changes thus given access countably infinite set hidden states thought hdp countably infinite number groups metaphor countably infinite tables countably infinite restaurants sharing choices dishes last hdp hyperpriors hyperparameters gamma distributed shape inverse scale like gamma gamma sample auxiliary variable sampling scheme integrated model analysis experiments data sequence similarity measure used two publicly available sets analysis iyer gene expression time course data iyer consisting genes expressions across time points expressions bookkeeping crf representation difficult adopting well used crp metaphor customers entering restaurant crf representation multiple restaurants sampling table customer sits influence restaurant following customer must dine resulting highly coupled system standardized log expression time genes gene labeled belonging cluster outlier cluster denoted cluster assume labels result biological expert modification following preliminary eisen simple correlation dimensions analysis eisen cho second data set expression data described cho consisting genes expression across time points similarly normalized genes one clusters compare model standard hmm referred finite hmm finite hmm ran experiments different seed values averaged various scores explained order minimize effects initialization define probabilistic measure dissimilarity two genes time courses finite hmm element matrix size iyer probability two time courses gene identical hidden state trajectories computed straightforwardly step algorithm denoting posterior hidden state time cth gene sequence current parameters hmm log pcd straightforwardly given log therefore pcd pdc measures probability two genes traversed similar entire hidden trajectories use log pcd measure 
divergence thought clustering distance genes analogous measure divergence dissimilarity computed infinite model posterior distribution hidden state trajectories represented set samples quantity calculate simply empirical computation samples trajectories taken long mcmc runs since posterior samples always consist represented hidden states suffer countably infinite state space similar method used clustering using infinite mixture gaussians work rasmussen wild extended measure similarity sequences thoroughly one compute similarity involves pairwise marginals time well would require dynamic programming computation similarity current research however found approximation sufficient experiments note marginals used still obtained using agreement provided labels used common external internal indices assess quality clustering obtained various methods refer appendix details definition metrics note score smaller values better also use recently introduced index called purity defined wild metric given measure dissimilarity provided simple eisen correlation finite hmms construct dendrogram use average linkage based representation fix number clusters severing tree point force putative labeling genes possible labels results compare simple correlation eisen time depedence analysis finite hmms sizes varying several settings auxiliary variable gibbs sampling consisted burnin samples collecting posterior samples thereafter spacing samples tables display subset indices vary iyer data set denotes setting hdphmm denotes number hidden states used finite hmm run noting sensitivity specificity also indifference decided comparative purposes would fix iyer cho results shown space used similar reasons table shows comparison eisen finite hmm various results external internal indices better visualization given figure iyer results cho results impressive omitted due space figure clarity highlight entries column perform best reiterate index lower better consider result finite hmm since degenerates case genes class log pcd note several trends first degree variation indices different settings hyperhyperparameter small suggesting level bayesian hierarchy setting priors influence learning model moreover index variation finite hmm much larger second clear considering time infinite table choice number clusters iyer data rand crand jacc spec sens advantageous compared simple correlation analysis eisen already established schliep third evidence finite hmm overfitting iyer data set according sil indices cho according several external indices fit one particular model integrates countably infinite set models fourth clear table vast majority highlighted winners last rows shows dramatic success finite hmm eisen time independent analyses inferred architecture generally speaking finite models show improvement performance beyond around hidden states interesting therefore figure find wide range settings uses excess represented classes reason architectures countably infinite finite models quite different shown transition matrices figures almost three times sparse connectivity finite counterpart many states one two possible destination states conclusion directions described infinite hmm framework hdp auxiliary variable gibbs sampling scheme feasible fact crf countably infinite number tables countably infinite number restaurants potentially sharing common dishes shown two time course gene expression data sets performs similarly scenarios better best finite hmms found model selection find hmms outperform standard eisen analysis based simple 
correlation sequence vector treats time points independent used common measures external internal indices including table effect varying complexity finite models varying dataset iyer cho index rand crand jacc sens spec sil dunn puri rand crand jacc sens spec sil dunn puri eisen finite eisen finite eisen spec sens crand finite eisen finite eisen finite eisen purity finite eisen silhouette figure relative performance finite horizontal axis infinite horizontal solid dashed lines standard eisen correlation horizontal thick line algorithms subset indices given table cluster number iyer data set dex purity also find models learning quite different architectures hidden state dynamics current work examining closely prevalent paths hidden states may elucidate interesting regulatory networks play report biological significance forthcoming article paper considered terms classification terms density single gene using model trained remainder partly sampling calculate test likelihoods however previous preliminary work teh simpler case learning represented classes transitions finite hmm transitions figure analysis hidden states iyer data distribution number represented classes models shown stacked values various values hyperparameter demonstrating mass number represented classes shift dramatically even orders magnitude high level hyperparameter equivalent transition matrix entries row source state entries transition probabilities sum brighter squares denote higher probability transition matrix size finite less sparse entries sequences letters forming sentences alice adventures wonderland showed perplexity test sentences minimized using compared maximum likelihood trained hmms maximum posteriori trained hmms variational bayesian hmms beal also medvedovic sivaganesan show robustness infinite mixtures gaussians model compared finite mixtures another reason expect time series analysis perform well analyses finally host exciting variants nested group models may useful capturing ontological information working countably infinite switching model well variants mixture model formalism terms processes may attractive properties domain modeling acknowledgements acknowledge support nsf award yee whye teh hdp code discussion helpful comments anonymous reviewers references aldous exchangeability related topics pages springer berlin gerber gifford jaakkola simon continuous representations time series gene expression data journal computational biology beal variational algorithms approximate bayesian inference phd thesis gatsby computational neuroscience unit university college london beal ghahramani rasmussen infinite hidden markov model advances neural formation processing systems cambridge mit press beal falciani ghahramani rangel wild bayesian approach reconstructing genetic regulatory networks hidden factors bioinformatics february cho campbell winzeler steinmetz conway wodicka wolfsberg gabrielian landsman lockhart davis transcriptional analysis mitotic cell cycle molecular cell dubey hwang rangel rasmussen ghahramani wild clustering protein sequence structure space infinite gaussian mixture models altman dunker hunter klein editors pacific symposium biocomputing pages world scientific publishing singapore eisen spellman brown botstein cluster analysis display expression patterns proceedings national academy sciences usa december ferguson bayesian analysis nonparametric problems annals statistics heard holmes stephens quantitative study gene regulation involved immune response anopheline mosquitoes application 
bayesian hierarchical clustering curves journal american statistical association iyer eisen ross schuler moore lee trent staudt hudson boguski lashkari shalon botstein brown transcriptional program response human fibroblasts serum science medvedovic sivaganesan bayesian infinite mixture model based clustering gene expression profiles bioinformatics medvedovic yeung bumgarner bayesian mixture model based clustering replicated microarray data bioinformatics neal markov chain sampling methods dirichlet process mixture models technical report department statistics university toronto rand index rand rabiner tutorial hidden markov models selected applications speech recognition proceedings ieee crand index orrected chance assignments crand nij ramoni sebastiani kohane cluster analysis gene expression dynamics proc national academy sciences usa rasmussen infinite gaussian mixture model advances neural information processing systems cambridge mit press schliep costa steinhoff sch onhuth analyzing gene expression ieee trans comp biology bioinformatics jaccard coefficient jaccard nij number points number points number points usual definitions sensitivity specificity sens spec schwarz estimating dimension model annals statistics sethuraman constructive definition dirichlet priors statistica sinica internal indices computed quantitatively assess clustering absence provided labels attempt evaluate cohesion similar points clusters separation dissimilar points different clusters usually indices computed euclidean distance metric preserve integrity analysis used log pcd dissimilarity given stolcke omohundro hidden markov model induction bayesian model merging hanson cowan giles editors advances neural information processing systems pages san francisco morgan kaufmann teh jordan beal blei sharing clusters among related groups hierarchical dirichlet processes saul weiss bottou editors advances neural information processing systems cambridge mit press teh jordan beal blei hierarchical dirichlet processes journal american statistical society appear wakefield zhou self modelling gene expression time curve clustering informative prior distributions bernardo bayarri berger dawid heckerman smith west editors bayesian statistics proc valencia international meeting pages oup wild rasmussen ghahramani cregg cruz kan scanlon bayesian approach modelling uncertainty gene expression clusters extended conference abstract int conf systems biology stockholm sweden appendices cluster validation usually done computation indices signiify quality clustering based either comparison labels external without comparison relying inherent qualities dendrogram putative labels internal external indices let number data points possible labels let clustering obtained clustering algorithm define two incidence matrices fij ith point point belong cluster otherwise cij ith point point belong cluster otherwise defining following categories cij fij cij fij cij fij cij fij use following indices internal indices silhouette given cluster method assigns sample quality measure known silhouette width silhouette width confidence indicator membership ith sample cluster defined max average distance ith sample samples minimum average distance ith sample samples given cluster possible calculate cluster silhouette characterizes heterogeneity isolation properties cluster global silhouette value gsu dunn index identifies sets clusters compact well separated partition produced clustering algorithm let represent ith cluster dunn validation index defined min 
min defines distance clusters intercluster distance represents intracluster distance cluster number clusters partition main goal measure maximize intercluster distances whilst minimizing distances large values correspond good clusters index defined max defined equation small values correspond clusters compact whose centers far away therefore smaller preferred
| 5 |
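The clustering pipeline in the row above turns posterior hidden-state marginals into a pairwise gene dissimilarity -log p_cd, cuts an average-linkage dendrogram at a fixed number of clusters, and scores the putative labelling with external indices such as Rand and Jaccard. The sketch below assumes the marginals gamma_c[t, k] = p(s_t = k | gene c) are already available from the forward-backward recursions of a trained (finite or truncated infinite) HMM; the SciPy calls and the toy data are illustrative choices, not the authors' implementation.

```python
# Minimal sketch of the similarity-then-cluster pipeline described in the row above,
# assuming posterior state marginals are available for every gene.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def neg_log_pcd(gammas):
    """gammas: array (n_genes, T, K) of posterior state marginals.
    Returns the symmetric matrix D with D[c, d] = -log p_cd, where
    log p_cd = sum_t log sum_k p(s_t = k | gene c) p(s_t = k | gene d)."""
    n = gammas.shape[0]
    D = np.zeros((n, n))
    for c in range(n):
        for d in range(c + 1, n):
            overlap = np.einsum('tk,tk->t', gammas[c], gammas[d])  # per-time agreement
            D[c, d] = D[d, c] = -np.sum(np.log(overlap + 1e-300))
    return D

def cluster_and_score(D, n_clusters, true_labels):
    """Average-linkage clustering on the dissimilarity matrix, then Rand/Jaccard."""
    Z = linkage(squareform(D, checks=False), method='average')
    pred = fcluster(Z, t=n_clusters, criterion='maxclust')
    same_pred = pred[:, None] == pred[None, :]
    same_true = true_labels[:, None] == true_labels[None, :]
    iu = np.triu_indices(len(pred), k=1)
    a = np.sum(same_pred[iu] & same_true[iu])      # pairs together in both labellings
    b = np.sum(~same_pred[iu] & ~same_true[iu])    # pairs apart in both labellings
    c_ = np.sum(same_pred[iu] ^ same_true[iu])     # disagreeing pairs
    rand = (a + b) / (a + b + c_)
    jaccard = a / (a + c_) if (a + c_) else 0.0
    return pred, rand, jaccard

# Toy usage: 30 "genes", 12 time points, 8 hidden states, 3 hypothetical true classes.
g = np.random.default_rng(1).dirichlet(np.ones(8), size=(30, 12))
labels = np.repeat(np.arange(3), 10)
_, r, j = cluster_and_score(neg_log_pcd(g), 3, labels)
print(f"Rand index {r:.3f}, Jaccard {j:.3f}")
```

For the infinite model the same -log p_cd matrix can be estimated empirically by averaging state-trajectory agreement over posterior samples from the Gibbs sampler, in place of the analytic marginals used here.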
polish models sofic entropy dec ben hayes abstract deduce properties koopman representation positive entropy probability measurepreserving action countable discrete sofic group main result may regarded representationtheoretic version factor theorem show probability actions completely positive entropy infinite sofic group must mixing group nonamenable spectral gap implies nonamenable group probability measurepreserving action strongly ergodic action orbit equivalent completely positive entropy crucial results formula entropy presence polish priori noncompact model contents introduction preliminaries notational remarks preliminaries sofic groups definition entropy presence polish model spectral consequences positive entropy representation theoretic preliminaries proofs main applications references introduction paper concerned structural consequences positive entropy probability measurepreserving actions groups entropy actions classical goes back work kolmogorov roughly speaking measures randomness action realized kieffer one could replace weaker condition amenability group amenability requires sequence finite subsets group one average approximately translation invariant way abelian groups nilpotent groups solvable groups amenable whereas free group letters entropy amenable groups well established useful quantity ergodic theory computed many interesting cases although easy general positive reveals interesting structure action useful general intuitive properties fundamental examples entropy theory bernoulli shifts countable discrete group standard probability space bernoulli shift probability action defined bernoulli shifts amenable groups completely classified entropy infinite amenable group free ergodic probability action positive entropy factors onto bernoulli shift fact situation factors onto bernoulli shift entropy known factor theorem proved general amenable groups factor theorem date january mathematics subject classification key words phrases sofic entropy factor theorem noncommutative harmonic analysis ben hayes fundamental result entropy theory shows bernoulli factors capture entropy probability action amenable group sense shows entropy simply measure amount behavior action groundbreaking work bowen defined entropy probability actions sofic groups assuming existence finite generating partition see assumption removed also defined entropy actions sofic groups compact metrizable spaces see refer reader see section precise definition sofic groups form vastly larger class groups amenable groups known amenable groups free groups residually finite groups linear groups sofic soficity closed free products amalgamation amenable subgroups see thus sofic entropy considerable extension entropy actions amenable group defined kieffer roughly group sofic almost actions finite sets almost sequence almost actions called sofic approximation entropy probability action sofic group defined exponential growth rate number finitary models action compatible fixed sofic approximation using sofic entropy bowen showed two bernoulli shifts sofic group isomorphic base entropies reproved result direct proof base space infinite entropy since subject fairly young relatively known structural consequences positive measure entropy actions arbitrary sofic groups example factor theorem known sofic groups previous consequences entropy actions sofic groups either topological actions specific groups prove actions positive topological entropy must exhibit chaotic behavior example must chaotic acting group free group one 
consider another form measure entropy defined bowen called entropy entropy roughly randomized version sofic entropy interesting consequences given seward case entropy apply group free group appearance preprint meyerovitch showed positive sofic entropy implies almost every stabilizer action finite paper deduce structural consequences positive measure entropy actions arbitrary sofic groups applications spectral properties actions best knowledge aside results paper results meyerovitch ones deduce properties action general sofic group assuming action positive entropy recall probability action countable discrete group induced unitary representation given use restriction representation called koopman representation properties preserving action depend upon koopman representation called spectral properties koopman representation played significant role ergodic theory since early days subject means deduce von neumann mean ergodic theorem ergodic theorem relies upon first step additionally many fundamental properties compactness weak mixing mixing ergodicity spectral properties results show one canonical representation group plays special role entropy theory recall left regular representation group defined sometimes use specify group also use group clear orthogonal representation restriction let two unitary representations countable discrete group say singular write nonzero subrepresentation embeds subrepresentation mean restriction closed linear subspace customary often call closed linear subspace subrepresentation well terminology singularity comes case borel measure natural unitary representation polish models sofic entropy given easy check similar analysis done abelian group replacing pontryagin dual thus singularity representations natural generalization noncommutative groups singularity measures probability action countable discrete sofic group sofic approximation see definition precise definition sofic approximation use entropy respect defined bowen theorem let countably infinite discrete sofic group sofic approximation let action standard probability space suppose subrepresentation generated sets measure zero borel singular respect left regular representation standard probability space bernoulli action mentioned factor theorem known sofic groups note factors embeds manner theorem may onto bernoulli shift regarded weak version factor theorem sofic groups shows representationtheoretic level action sofic group positive entropy must contain subrepresentation koopman representation bernoulli shift thus think theorem representationtheoretic version factor theorem also first result indicates positive entropy actions sofic groups must behave manner similar bernoulli shifts say even theorem assume stronger version positive entropy recall action completely positive entropy respect sofic approximation whenever factor space following easy theorem corollary let countable discrete sofic group sofic approximation suppose completely positive entropy respect koopman representation embeddable infinite direct sum left regular representation corollary proved amenable dooley golodets mention actually prove completely positive entropy amenable koopman representation isomorphic infinite direct sum left regular representation easy consequence corollary factor theorem since factor theorem known sofic groups know instead embeds sofic corollary automatically deduce important structural properties completely positive entropy actions sofic group specifically mixing spectral gap probability action said mixing measurable 
strongly ergodic every sequence measurable subsets gan iii spectral gap every sequence true easy see spectral gap implies strong ergodicity ben hayes corollary let countable discrete sofic group sofic approximation suppose completely positive entropy respect infinite mixing nonamenable subgroup strongly ergodic fact spectral gap another approach nonamenable entropy called rokhlin entropy due seward advantage easy define defined groups disadvantage extremely difficult compute rokhlin entropy upper bound sofic entropy known cases one prove action positive rokhlin entropy without using positive sofic entropy mention completely positive rokhlin entropy defined similar manner appearance preprint alpeev proved actions completely positive rohklin entropy weakly mixing approach completely elementary easy see completely positive sofic entropy implies completely positive rokhlin entropy whether actions nonamenable group completely positive rohklin entropy mixing strongly ergodic spectral gap appear open seems difficult deduce properties koopman representation action assumption action positive rokhlin entropy part corollary rather special nonamenable groups recall two probability actions countable discrete groups said orbit equivalent measure space isomorphism takes almost every orbit equivalence theory area much current interest relating operator algebras ergodic theory group theory strong ergodicity invariant orbit equivalence class action corollary shows probability action nonamenable group action strongly ergodic action orbit equivalent completely positive entropy result indicates entropy actions nonamenable groups may nontrivial consequences orbit equivalence theory celebrated deep fact ergodic actions amenable group orbit equivalent due see thus entropy actions amenable groups consequences orbit equivalence theory spectral gap strong ergodicity important properties many applications spectral gap connections expander graphs see orbit equivalence rigidity see number theory see spectral gap also related problem asks lebesgue measure unique probability measure sphere defined lebesgue measurable sets solved independently margulis sullivan well known action amenable group strongly ergodic consequences entropy strong ergodicity spectral gap orbit equivalence indicate entropy nonamenable groups may used deduce phenomena present realm ergodic theory amenable groups reveals importance generalizing entropy actions nonamenable groups briefly outline key differences approach approach prove corollary first prove theorem fourier analysis reduces theorem fact mutually singular borel probability measures circle continuous function circle close constant function close zero fact simple exercise measure theory deduce corollary amenable case using amenable groups orbit equivalent integers approach prove theorem general sofic group simply replacing harmonic analysis noncommutative harmonic analysis general group assumption singularity measures replaced singularity representations groups representation theory group captured universal natural replace algebra continuous functions group one characterize singularity representations arbitrary group manner similar preceding paragraph short abstract harmonic analysis case noncommutative harmonic analysis nonabelian group approach uses essentially structure group removes orbit equivalence techniques approach valid case amenable groups polish models sofic entropy although approach using noncommutative harmonic analysis may make methods seem abstract esoteric proof theorem 
essentially input aforementioned noncommutative harmonic analysis techniques well basic facts borel measures polish spaces rest proof theorem elementary relies basic consequences finitedimensional spectral theorem simple volume counting estimates additionally noncommutative harmonic analysis techniques used lie basics theory hand fact every action amenable groups orbit equivalent integers fairly deep amenable case discovered direct elementary proof corollary intuitively entropy measure randomness system theorem shows indeed positive entropy actions must exhibit randomness properties example view left regular representation representation exhibits perfect amount mixing also use theorem show highly structured actions like compact actions must nonpositive sofic entropy corollary let countable discrete sofic group sofic approximation suppose aut image contained compact weak topology subgroup aut action given previous version article used techniques prove distal action entropy zero appearance version alpeev proved measure distal actions arbitrary group rohklin entropy zero burton proved distal actions naive entropy zero group contains copy naive entropy zero implies rohklin entropy zero turn implies sofic entropy zero proofs elementary whiles arguably elected remove section current version article actually going prove general results give structural results probability action sofic group relative pinsker factor results imply rohklin entropy decreases compact extensions results prove result sofic entropy appears unknown whether naive entropy decreases compact extensions important new tool use prove theorems polish models probability action countable discrete group topological model action action isomorphic separable metrizable topological space borel probability measure action homeomorphisms roughly one think giving topology action homeomorphisms actions standard probability spaces compact models always exist moreover show one compute entropy presence compact model manner uses topology many computations sofic entropy used compact model formalism see prove theorems give definition sofic entropy presence polish model merely assumed polish space completely metrizable separable topological space remark bowen defined see topological entropy uniformly continuous automorphisms metric space proving invariant uniformly continuous conjugacies approach slightly different require homeomorphisms uniformly continuous bowen consider actions since compact models always exist mention decided consider case polish model let mention natural way obtaining polish models given probability measurepreserving action say family measurable functions generating smallest complete sets makes elements measurable associated family generators one canonically produce topological model following manner define let shift action given ben hayes setting topological model action thus think topological models approach ergodic theory analogous presentation theory groups topological model one produces compact model general topological model polish thus polish models canonical way dealing unbounded generators relevant assumptions functions course ways turning family generators unbounded family generators bounded one employ functions compose injective continuous maps warn reader attempts reduce family bounded generators destroy hypotheses really need deal unbounded generators consequence polish models remark take topological model associated family generators essentially recover operator algebraic approach sofic entropy thus polish model 
approach sofic entropy may regarded generalization operator algebra approach given family unbounded generators crucial aspect polish spaces allows equate definition entropy polish model bowen tightness single probability measure polish space tightness roughly asserts small error probability measure probability measure supported compact set examples separable metrizable spaces every borel probability measure tight see remarks theorem reason need spaces polish assumption topological model polish also natural since canonical topological model associated family generators always polish acknowledgments much work done still phd student ucla grateful kind hospitality stimulating environment ucla would like thank lewis bowen suggesting problem computing sofic entropy gaussian actions program von neumann algebras ergodic theory group actions insitut henri solution problem ultimately led work would like thank stephanie lewkiewicz many interesting discussion polish models preliminaries notational remarks use representation thus forego usual practice sofic entropy using metric instead use sets use functions use instead another set function use map polish space use uniform norm use kcb space clear context let pseudometric space subsets say write say use smallest cardinality subset subsets finite set use uniform probability measure typically write instead denote norm respect unless otherwise stated certain times use time instance use notation specifies using additionally use inner products potential confusion otherwise specified refers inner product respect polish models sofic entropy say every use smallest cardinality subset note preliminaries sofic groups use symmetric group letters set use sym set bijections definition let countable discrete group sofic approximation sequence sdi functions assumed homomorphisms lim udi lim udi call sofic sofic approximation known amenable groups residually finite groups sofic also known soficity closed free products amalgamation amenable subgroups see also graph products sofic groups sofic additionally residually sofic groups locally sofic groups sofic thus malcev theorem know linear groups sofic finally subgroup sofic amenable sense mean sofic seen mild generalization argument theorem using observation definition need extend sofic approximation certain algebras associated let ring finite formal linear combinations elements addition defined naturally multiplication defined also define involution given sofic approximation sdi define mdi order talk asymptotic properties extended sofic approximation need analytic object associated let left regular representation defined continue use linear extension group von neumann algebra defined wot denotes weak operator topology use denote group von neumann algebra define leave exercise reader verify following properties equality weak operator topology continuous ben hayes call third property tracial property typically view subset particular use well functional restriction order state extension sofic approximation properly shall give general definition recall complex algebra equipped involution conjugate linear antimultiplicative definition tracial pair equipped linear functional equality let let let hilbert space completion inner product condition definition representation defined densely let make tracial using usual trace particular use denote operator norm let free call elements indeterminates elements use image unique sending definition let tracial embedding sequence sequence mdi sup frequently use following fact prove first 
note suffices handle case case since since proved proof next two propositions left reader proposition let countable discrete sofic group sofic approximation sdi extend maps mdi linearly embedding sequence proposition let tracial mdi embedding sequence mdi another sequence functions sup embedding sequence polish models sofic entropy fact need extend sofic approximation group von neumann algebra use following lemma lemma let countable discrete group embedding sequence extends one use preceding lemma sofic combination proposition often need following estimate lemma let orthogonal projection ball proof let ball maximal subset ball thus ball ball side disjoint union linear algebra real dimension image thus computing volumes vol ball vol ball thus definition entropy presence polish model definition follow ideas use dynamically generating pseudometrics need state definition works actions polish spaces need assume pseudometrics bounded longer automatic noncompact case definition let countable discrete group polish space homeomorphisms bounded continuous pseudometric said dynamically generating open neighborhood finite recall compact countable discrete group acting homeomorphisms continuous pseudometric said dynamically generating sup whenever see section fact equivalent definition easy exercise using compactness polish shall see proof lemma really necessary require existence preceding definition instead sup whenever one way realize correct definition follows let preceding definition let modded equivalence relation give metric consider continuous map given ben hayes use equivalence class existence definition precise requirement one needs guarantee homeomorphism onto image explicitly proven lemma pseudometric space let pseudometric defined definition let countable discrete group polish space homeomorphisms let bounded pseudometric function finite let map functions max caution reader though shall typically require polish require pseudometrics complete typically need care topological consequences polish metric properties note map account measuretheoretic structure given polish space finite prob let prob form basis neighborhoods weak topology space bounded continuous functions recall denotes uniform probability measure definition suppose borel probability measure finite finite let set map definition let countable discrete sofic group sofic approximation sdi let polish space homeomorphisms borel probability measure define entropy lim sup log inf finite sup know unchanged replace use instead reader may concerned finiteness expression since compact note given finite finite see note prokhorov theorem may choose compact hard see sufficiently large sufficiently small polish models sofic entropy suppose finite subset let diameter fix set find udi find defined thus main goal section show measure entropy defined bowen extended throughout use sofic measure entropy defined use formulation sofic entropy terms partitions due kerr however use terminology observables bowen definition let standard probability space let subalgebra necessarily finite observable measurable map finite set simply call finite observable another finite observable said refine written almost every countable discrete group measurepreserving transformations say generating generated sets measure zero next definition need set notation given standard probability space countable discrete group transformations finite observable finite let defined definition let countable discrete group let standard probability space let subalgebra let finite given 
finite let set udi give kerr definition sofic measure entropy definition let countable discrete sofic group sofic approximation sdi let standard probability space transformations let subalgebra let finite observable let refine definition finite use set log inf lim sup finite set inf ben hayes sup last infimum supremum observables need following result kerr theorem let countable discrete sofic group sofic approximation let standard probability space transformations let generating subalgebra additionally one show independent generates sets measure zero case set proceed prove definition sofic entropy respect polish model recovers entropy respect sofic approximation let briefly outline proof first show dynamically generating pseudometric compatible metric see lemma thus may assume metrics compatible use kerr version measure entropy using subalgebra sets point view measure appear open closed sense made precise later show topological version observable version microstates produce roughly space see lemma essential fact proving last step tightness single probability measure polish space theorem follow without much difficulty preliminary lemmas following proof minor modification argument lemma well lemma decided include proof alleviate concerns may arise working noncompact case well address necessary modifications occur definition dynamically generating pseudometric polish case lemma let countable discrete sofic group sofic approximation sdi let polish space homeomorphisms probability measure given dynamically generating pseudometric bounded compatible metric proof let diameter since countable may find positive real numbers set prove lemma lemma proved several steps step show compatible metric let modded equivalence relation let equivalence class make metric space metric given observe satisfies triangle inequality compatible metric given moreover injective map polish models sofic entropy enough show homeomorphism onto image clear continuous suppose neighborhood definition dynamically generating may choose finite max let neighborhood thus homeomorphism onto image step show let choose finite sufficiently large let finite finite given assume choose sufficiently small manner depending upon determined later since minkowski inequality minkowski inequality last line use thus sufficiently small thus monotone let increase take find letting completes proof step step show let suppose given finite set let sufficiently large finite set depending upon manner determined later set choose finite ben hayes minkowski inequality force set udi find large find taking infimum find letting proves step prove theorem need single nice subalgebra measurable sets let polish space borel probability measure let set borel sets int note algebra sets sets often called continuity sets literature next lemma need notation given metric space let lemma let polish space borel probability measure let compatible metric given neighborhood weak topology prob proof consequence portmanteau theorem sequence probability measure converges weakly every continuity set thus choose neighborhood obtain second estimate portmanteau theorem choose neighborhood polish models sofic entropy since continuity set choose small enough lemma completed setting given shall define lemma let countable discrete sofic group sofic approximation sdi let polish space homeomorphisms borel probability measure let bounded compatible metric let finite observable given finite finite given finite finite finite observable proof let sufficiently small depending upon manner 
determined later preceding lemma may find finite prob set suppose note choose sufficiently small forced udi means map since let diameter let sufficiently small depending upon manner determined later since polish prokhorov theorem applied implies find compact set since compact find points numbers ball radius respect sup let define ben hayes note measurable set let sufficiently small manner depend upon determined later assume min suppose given let since udi see large udi necessarily thus choose sufficiently small forced want force even smaller later using uniform norm udi udi udi udi udi nkf may choose forces may choose sufficiently small arbitrary completes proof ready show definition entropy case polish model agrees usual measure entropy theorem let countable discrete sofic group sofic approximation let polish space homeomorphisms borel probability measure dynamically generating pseudometric polish models sofic entropy proof let sdi lemma may assume bounded compatible metric let diameter apply theorem leave exercise show countably many thus generates borel subsets first show let since polish may apply prokhorov theorem find compact compactness find set define let finite observable refining let definition suppose given finite preceding lemma may find finite lemma may assume sufficiently large udi choose elements index set let let fact implies thus taking infimum find letting implies ben hayes reverse inequality let finite observable fix let depend upon manner determined later lemma may choose finite prob let given finite sets given may assume preceding lemma may choose refinement finite choose choose map construction let sufficiently small depending upon determined later let may choose note thus bound fix suppose let choosing sufficiently small may assume udi let thus udi choose thus find udi thus large polish models sofic entropy stirling formula sum exp constant log log thus log taking infimum let letting taking supremum spectral consequences positive entropy let probability action countable discrete group associated action natural representation space inside clearly consider representation obtained restricting representation called koopman representation properties probability action called spectral depend upon koopman representation section deduce spectral properties action assumptions positive entropy representation theoretic preliminaries need apply theory representations paper need unitary representations groups later work need generality notation set use given let mention theory generalizes groups countable discrete group unitary representation define use conjugate linear antimultiplicative map given operations unitary representation two write homa space bounded linear maps definition let say mutually singular written every pair nonzero subrepresentations isomorphic say absolutely continuous respect write embeddable set ben hayes terminology motivated measure theory intuition suppose find spectral measures sense dej leave exercise reader check similarly definitions absolute continuity singularity spectral measures usual measures need following equivalent conditions singularity representations following must well known include proof completeness throughout proof shall use functional calculus see chapter vii background functional calculus proposition let unital two unitary representations suppose separable following equivalent homa iii homa sequence max strong operator topology strong operator topology proof equivalence iii proved taking adjoints prove implies suppose closed linear 
subspaces isomorphism define know implies zero see implies suppose homa let polar decomposition see fact equivariant implies equivariant hence approximating square root function polynomials since sot lim see equivariant thus gives isomorphism ker since find ker hence prove implies let homa let lim lim suppose iii hold wish prove recall hilbert space denotes commutant suppose regard matrix tij since see tij homa thus thus see sot last equality follows von neumann double commutant theorem prove using kaplansky density theorem need analogue lebesgue decomposition polish models sofic entropy proposition let unital two proof zorn lemma find maximal family pairwise orthogonal closed linear subspaces embeds let maximality singular respect setting defining restricting completes proof proofs main applications theorem let countable discrete sofic group sofic approximation let standard probability space transformations let closed linear subspace generated suppose borel proof let subset let defined let let shifts since borel generates see induces isomorphism thus let defined thus span gzn simplify notation use let dynamically generating pseudometric defined clearly polish use computation let arbitrary let arbitrary let sufficiently small sufficiently large finite set depend upon manner determined later given map define cdi ben hayes define conversely given define chosen carefully sufficiently large choose since span gzn proposition may find max let sufficiently large depending upon manner determined later assume large enough exists note may chosen independent let sufficiently large manner determined later assume use norm respect uniform probability measure sufficiently large sufficiently small notational conventions introduced definition let expression interpreted sense functional calculus kpg since lim see large polish models sofic entropy provided since orthogonal projection know lemma may choose subset ball udi sufficiently large define values choose define kpg max second line following inequality thus hence large thus log note number thus let find since arbitrary find space much smaller example consider case action bernoulli let defined take one show span ben hayes indeed space much smaller next application recall weak topology aut defined saying basic neighborhood given measurable subsets aut action compact compact subgroup aut weak topology homomorphism almost every corollary let countably infinite discrete sofic group sofic approximation suppose compact action proof recall unitary representation called weakly mixing compact clear compact representation nontrivial weakly mixing subrepresentations also left regular representation weakly mixing thus compact may apply theorem particular note compact group homomorphism action given entropy zero respect sofic approximation definition let countable discrete sofic group sofic approximation say probability action completely positive entropy respect whenever factor space corollary let countable discrete sofic group sofic approximation suppose probability action completely positive entropy respect proof proposition write suppose define let let bernoulli action factor set span borela generates borel subsets zero tautologically span polish models sofic entropy hence theorem know since completely positive entropy implies space possible constant thus corollary illustrates utility assuming theorem instead assuming generates proof priori know references alpeev pinsker factors rokhlin entropy representation theory dynamical systems combinatorial methods part 
xxiv volume zap nauchn sem pomi pages petersburg pomi bourgain gamburd spectral gap subgroups invent math bourgain gamburd uniform expansion bounds cayley graphs ann bourgain yehudayof expansion monotone expanders geom funct bowen measure conjugacy invariants actions countable sofic groups amer math soc bowen new measure conjugacy invariant actions free groups ann bowen entropy theory sofic groupoids foundations anal bowen entropy group endomorphisms trans amer math brown ozawa approximations cambridge university press burton naive entropy dynamical systems ciobanu holt rees sofic groups graph products graphs groups pacific journal mathematics november connes feldman weiss amenable equivalence relations generated single transformation ergodic theory dynam systems conway course functional analysis graduate texts mathematics springer new york second edition dooley golodets spectrum completely positive entropy actions countable amenable groups funct dykema kerr pichot orbit equivalence sofic approximation dykema kerr pichot sofic dimension discrete measurable groupoids trans amer math soc elek szabo sofic groups group theory hayes determinants sofic entropy hayes mixing spectral gap relative pinkser factors sofic groups hayes von neumann dimension banach space representations sofic groups funct ioana orbit equivalence borel reducibility rigidity profinite actions spectral gap kerr sofic measure entropy via finite partitions groups geom dyn kerr soficity amenability dynamical entropy amer math kerr bernoulli actions infinite entropy groups geom kerr topological entropy variational principle actions sofic groups invent math kerr combinatorial independence sofic entropy comm math kieffer generalized theorem action amenable group probability space harmonic models spanning forests residually finite groups funct compact group automorphisms addition formulas determinants ann sofic mean dimension adv math lubotzky phillips sarnak ramanujan graphs combinatorica margulis remarks invariant means monatsh meyerovitch positive sofic entropy implies finite stabilizer ornstein weiss ergodic theory amenable groups rokhlin lemma bull amer math ben hayes ornstein weiss entropy isomorphism theorems actions amenable groups anal math paunescu sofic actions equivalence relations funct november popa superrigidity malleable actions spectral gap amer math popa independence properties sublagebras ultraproduct factors funct selberg estimation fourier coefficients modular forms proc sympos pure math volume pages providence american mathematical society seward finite entropy actions free groups rigidity stabilizers type phenomenon appear anal math seward krieger finite generator theorem ergodic actions countable groups weak isomorphism transformations invariant measure mat sullivan one finitely additive rotationally invariant measure lebesgue measurable sets bull ams varadarajan measure topological spaces mat stevenson center nashville address
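The displayed formulas of this record were lost in extraction, so for orientation here is a schematic rendering of the comparison result the text paraphrases (that the definition of sofic entropy with respect to a Polish model recovers the entropy with respect to the sofic approximation); the symbols below are my own shorthand and not a quotation of the paper: for a countable sofic group G with sofic approximation Sigma = (sigma_i : G -> Sym(d_i)), a Polish space X on which G acts by homeomorphisms preserving a Borel probability measure mu, and any bounded, dynamically generating pseudometric rho on X,

\[ h_{\Sigma,\mu}(\rho) \;=\; h_{\Sigma,\mu}(X,G), \]

where the left-hand side is the microstate count taken with respect to rho and the right-hand side is the sofic measure entropy defined via finite observables as in the work of Kerr cited in the record.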
| 4 |
eliminating network protocol vulnerabilities abstraction systems language design jasson andrew gabriel dos alex department electrical computer engineering texas university computer science texas university nov department implementations network protocol message specifications affect stability security cost network system development implementation defects fall one three categories well defined message constraints however general process constructing network protocol stacks systems capture categorical constraints introduce systems programming language new abstractions capture constraints safe efficient implementations standard message handling operations synthesized compiler analysis used ensure constraints never violated present language examples using openflow protocol ntroduction message handling layer network protocol notoriously difficult implement correctly common errors include accepting allowing creation malformed messages using incorrect byte ordering byte alignment using undefined values etc defects lead problems stability security performance cost network systems table result survey vulnerability database demonstrates sophisticated organizations implementing mature protocols commit errors persistent introduction defects sign engineering problem failure use correct levels abstraction working network protocols message handling focus several research efforts message important serialization solutions used however network protocols interoperability requirements adherence specific necessary result series domain specific languages dsls allow programmer control designed approaches synthesize data structures hold messages typical operations necessary manipulate using correct construction techniques language researchers improved upon dsls rich type systems prove certain safety properties address problems mentioned previously work developed static analysis techniques require domain knowledge survey existing code bases find occurrences previously mentioned defects material based upon work partially supported afosr contract nsf grants ieee systematically eliminating categories message related defects requires rich type systems whole program analysis supported existing declarative dsls invariants semantic information produced dsl incorporated used program analysis target language furthermore finding occurrences message related defects require level domain knowledge static analysis must either built language programmer specified way resembles existing network protocol specifications analysis formal methods compiling network program require specialized knowledge programmer develops systems programming language address issues allows full program analysis providing stronger safety guarantees offering domain specific optimization exceedingly difficult accomplish hand impossible dsl paper clearly identify categories message related vulnerabilities structural semantic constraints show categories responsible known vulnerabilities using tools show even live internet traffic violates constraints introduce systems programming language allows programmers capture network protocol message structure constraints constraints allow compiler reason entire programs identifying eliminating categories vulnerabilities mentioned additionally choice systems programming language allows efficient code generation throughout paper use openflow reference protocol contributions identificaiton three categories network protocol vulnerabilities common network programs network traffic development abstractions prevent 
construction messages lead vulnerabilities unsafe access conditional fields iii table essage elated ulnerabilities proto ntpd icmp vtp bootp age bug date vendor broadcom quagga gnu cisco cisco apple error semantic struct struct semantic struct cert design systems programming language eliminates unsafe protocol implementations type checking iii implementation compiler library supporting language essage ulnerabilities three categories message vulnerabilities address structural constraint violation semantic constraint violation unsafe access conditional fields section describe categories detail examples using openflow protocol first briefly describe protocol show vulnerability examples version version type hello bits type xid length payload type fig hello openflow message format figure summarizes message format representation openflow message message consists fixed byte header followed variable sized payload header indicates version openflow protocol type payload length entire message transaction identifier used match response messages requests payload one types version protocol designed operate transports difficult handle datagram message oriented transports streams concept message boundaries give program anything single byte several messages single read programmer responsibility transport determine payload ends next message begins semantic constraints ensure message field value defined meaning protocol instance openflow domain version field domain type field domain length field domain xid field value domain field semantically invalid meaning protocol definition violating semantic constraint similar using undefined behavior programming language table shows constraint violations lead vulnerabilities structural constraints address messages constructed used program messages constructed two ways either program send network network send program cases important construct structurally valid messages constraints deal number bytes message occupies buffers used communicate network structural constraint testing process ensuring enough bytes complete operation constructing message buffer filling buffer message example buffer containing less bytes possibly represent valid openflow message attempt interpret buffer message would error semantic constraint violations lead structural constraint violations reading stream produce buffer containing several messages header first message must contained first bytes payload must end position header length field indicates header length field semantically invalid used constrain payload size becomes structural constraint violation well similar problem arises semantically invalid type field used choose payload safe access ensures fields dependency always validated use many fields openflow dependent meaning determined values previously encountered fields example dependency payload message header type field structurally semantically valid hello message payload treated flowmod type without invoking undesired behavior iii anguage work builds fundamental notions system programming structured generic programming mathematical programming languages axiom liz work heavily inspired liz language aims support simple safe efficient handling network protocol messages core language supports values references constants functions records minimal set expressions follows semantics expose pointers users language drastically restrict heap allocation certain language types figure shows abstract syntax language dependent types primary contribution paper language captures structural semantic 
constraints message type variable declarations enforce constraints process object symbolic construction object construction completes successfully structural constraints upheld symbolic construction completes successfully semantic constraints hold using construction establish invariants common way reason program behavior object instance type must constructed use process object construction involves allocating space object live initializing values establish invariant symbolic construction extends object construction include ensuring value object consistent symbolic constructor upon completion construction object invariant established types see figure allow user definition precise structural semantic constraints structural constraints explicitly stated specifiers bits constraint structural constraints otherwise implicit type explained semantic unit bool byte char int uint string const ref type buffer view spec bits constraint xform msbf lsbf uint spec xform array vector record decl variant return else block block define hdr type record vrsn uint bits type uint bits len uint bits msbf xid uint bits msbf define pld uint type variant hello flowmod define cref hdr bool return bytes define msg type record hdr hdr pld pld constraint bytes hdr func decl block decl toplevel fig core language syntax straints introduced declaration syntax declaration impose semantic constraint use bar followed guard expression declarations provide constraint information compiler used full program analysis allows compiler reason constraint satisfaction safe usage following short summary types uint spec xform defines unsigned integer precise bit width specified structural constraint spec optionally type also takes transform parameter xform allows representation data instance protocol may specify value representation significant byte first msbf complement array vector types allow sequences objects type array statically sized contain elements vector dynamic size record decl sequence declarations whose objects accessed field name padding alignment applied object compiler padding alignment desired programmer explicitly declare fields additionally order fields preserved variant union types type guarded predicate variant constructed evaluating predicate set invoking constructor corresponding true predicate variant uninitialized evaluation predicate set figure demonstrates declare types corresponding openflow header payload message header simple record four fields constant specifiers follow msbf ordering payload type unique choice types based type parameter values header semantic meaning version fig openflow message declaration openflow protocol constrain values achieved semantic constraint must first defined function takes constant reference header returns bool message defined record including header version semantic constraints payload parameterized header type field constraint type constructors exception uint array allow optional specifier constraint particular case construction pld exceed result constraint bytes hdr buffer view abstractions underlying machine architecture help compiler ensure structural constraints never violated reading file socket results buffer begin end boundaries surrounding bytes received view mechanism restricts visibility buffer set operations defined buffer view view returns view entire buffer available returns byte size view advance returns view advanced head constrain returns view constrained tail put writes value view get reads value view view view msg msg buffer msg constrain fig constraining 
view limits number bytes available access figure illustrates buffer returned read system call tcp socket single read resulted one protocol message initial view wraps data however first bytes view contain protocol header provides length first message length used constrain visibility precisely one message constrain operation supports use datagram stream oriented transports also providing safety boundaries object construction ompiler ynthesis programmers continue make mistakes implementing message operations necessary protocols mentioned strategy eliminate need write common operations compiler synthesize safe efficient versions type definitions contain structural semantic constraints information sufficient synthesize following operations construction constructs object expressions constructs object another assignment copies object state another bytes returns number bytes object writes object view constructs object view equal compare objects equivalence returns string representation object remainder section describe synthesis process small subset operations focusing synthesis constraint validation bytes bytes name operation determining byte size object operation bytes synthesized declared type program uint returns number bytes indicated specifier array returns result sizeof type contained array objects type vector bytes constant expression dependencies returns accumulation calling bytes elements record returns sum calling bytes constituent fields constant fields also constant calling bytes variant returns variant uninitialized proxies call contained object figure illustrates process described pseudo code several synthesized operations depend bytes structural constraint violations must prevented object construction three ways violate structural constraint overflowing view underflowing view constructing invalid variant overflowing view involves advancing view beyond number bytes contained underflowing view involves constraining view bytes contained constructing invalid variant caused none variant contained type predicates evaluate true order construct openflow header size view must least large number bytes header bytes object construction view always advanced size bytes object constructing message object view less bytes would result view overflow openflow protocol indicates length message header length field value used constrain view payload construction possible accident malicious intent length field inconsistent define bytes uint bits uint return define bytes cref array uint return sizeof define bytes cref vector uint accum uint foreach item accum bytes return accum define bytes cref record uint return bytes define bytes cref variant uint init return switch case variant return bytes case variant return bytes case variant return bytes fig synthesis rules bytes operation amount data actually sent field indicated less bytes payload could possible underflow view value used construct variant payload header either accident malicious intent possible header indicate type result valid variant construction using variant invalid way result undefined behavior object construction possible either constructor operates expressions using operation figure illustrates pseudo code synthesizing operation returns false structural constraint violated failure indicates partially constructed object simple types uint array structural constraints always checked enough bytes view complete operation operation fails otherwise object value constructed reading view xform present object value updated using specified transform 
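To make the synthesized operations described above a little more concrete, here is a minimal Python sketch of a bounds-checked view together with a construct-from-view routine for the 8-byte OpenFlow header (version, type, length, xid); the names View, parse_ofp_header and parse_ofp_message are illustrative only, they are not the paper's language or API, and failure is signalled by returning None rather than the paper's boolean flag.

import struct

class View:
    # a window [pos, end) over an immutable byte buffer; every access is bounds-checked
    def __init__(self, data, pos=0, end=None):
        self.data = data
        self.pos = pos
        self.end = len(data) if end is None else end

    def available(self):
        return self.end - self.pos

    def get(self, fmt):
        # read one big-endian value described by a struct format string, or None on underflow
        size = struct.calcsize(fmt)
        if self.available() < size:
            return None
        value = struct.unpack_from(fmt, self.data, self.pos)[0]
        self.pos += size              # advance past the bytes just consumed
        return value

    def constrain(self, length):
        # return a sub-view of exactly `length` bytes, or None if that many bytes are not present
        if self.available() < length:
            return None
        return View(self.data, self.pos, self.pos + length)

def parse_ofp_header(view):
    # mirrors the synthesized construction of the fixed-size header from a view
    version = view.get(">B")
    if version is None: return None
    msg_type = view.get(">B")
    if msg_type is None: return None
    length = view.get(">H")           # most significant byte first, as in the declaration
    if length is None: return None
    xid = view.get(">I")
    if xid is None: return None
    if length < 8:                    # semantic constraint: the length field covers at least the header
        return None
    return {"version": version, "type": msg_type, "length": length, "xid": xid}

def parse_ofp_message(view):
    # the header's length field is used to constrain the view for the payload, as in the text
    hdr = parse_ofp_header(view)
    if hdr is None:
        return None
    payload = view.constrain(hdr["length"] - 8)
    if payload is None:
        return None                   # the stated length exceeds the bytes actually received
    return hdr, payload

A packet whose header announces more bytes than were actually read from the socket fails inside constrain rather than causing an out-of-bounds access, which is the structural guarantee the synthesized code is meant to provide.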
finally view advanced size object constructed vector version operates greedy fashion consume entire view behavior desired view must constrained construction long bytes view vector attempt construct object upon success object inserted vector process repeats record version attempt construct constituent fields either return first failure succeed variant must guard third type structural constraint must initialized valid type true fail otherwise called appropriate type symbolic construction ensures type dependencies must propagated semantic constraints inserted synthesized code openflow message figure semantic constraints type dependency constrained view semantic constraint turns predicate check immediately call header check fails operation immediately returns define ref view uint bits available bytes return false get advance bytes return true define ref view ref array available bytes return false foreach return true define ref view ref vector available return false push return true define ref view ref record return false return false return define ref view ref variant init return false switch case return case return case return fig synthesis rules operation define ref view msg bool return false return false construct return false return constrain fig compilation program useful error messages goal compiler synthesize safe efficient code synthesis algorithm described previously produce safe inefficient code operations contain guards protect view underflow overflow guards source inefficiency example better single guard sequence objects constant size guard object call guard fusing analogous fusing basic blocks form larger basic blocks also useful lift guards inside called function call site lifting guards call site potential guard fusing optimization becomes possible optimization strategy follow set fusing lifting rounds reduce number guards form largest possible object construction basic blocks synthesized operations similar call sequence graph csg figure illustrates generalized csg flow modification message node represents function call sequence function types indicated node shape figure csg used synthesize operation definitions analyze safe usage messages guards start leaves graph lifted parent nodes guards fused within interior node new guard lifted process repeated process ensures guards covering largest possible constant sized objects performed additionally process unaware protocols optimize across layer boundaries sometimes node one parent node parents differing behaviors case split node two versions lift guard contained within constant structure leave place otherwise message header record variant vector uint synthesis msg failure next type parameter must checked initialized call payload check ensures failure happens value undefined upon success initializes payload kind finally constrained view operation propagated result final step shown figure match action afety ptimization three ways violate structural constraints view underflow view overflow reading writing uninitialized variant three categories mistakes identified two simple invariants bytes view always impossible underflow overflow view read write variant always preceded valid initialization third category also impossible compiler uses dataflow analysis framework prove invariants always hold fail fig flow modification control flow graph code generation takes place optimization phase currently support target language message related type definitions synthesized operations written single set files program written mplementation valuation 
protocol implementations send messages either structurally semantically invalid exist applying work able discover structural constraint violations packet traces core internet routers furthermore able define three categories message constraints show violation constraints lead high profile vulnerabilities network programs must always handle messages safe manner aim developed systems programming language library writing safe efficient network programs language implementation originally developed library library used test ideas guide language design however order enforce safety guarantees optimization compiler necessary experimented language using two types network programs protocol analyzer openflow stack set applications provided good coverage diversity protocol formats constraints core internet traces obtained caida test data packet analyzers written new language facilities traffic recorded high speed interfaces layer layer headers timing summary information present layer addresses randomized layer payloads removed anonymization purposes trace data timestamped minute intervals analyzed minute segment traces october focused looking structural value constraints violations within tcp udp table shows found structural constraint violations one protocol semantic constraint violations found violated structural constraint regards options values internet header length ihl field indicated number options constructed however packet would overflow view received block data small tcp udp structural constraint violations nature violated basic constraint minimum sized header table aida races desc count cdf struct tcp udp source structural constraint violations currently known could evidence unintentional errors sending devices could maliciously crafted packets could due collection process trace data however regardless source structural constraints violated packets admitted safe network programs vii onclusion uture ork incorrect implementations protocol message specifications affect stability network systems potentially lead vulnerabilities paper identified three categories constraints used either test whether message generate safe code developed systems programming language allowed types capture constraints well reasoning framework ensure constraints always upheld within users program presented example type definitions compiler synthesized code using openflow protocol next steps work fall two categories extending types formalizing extending type system allow support protocols vectors extended support termination predicates structural constraint parameter allow sequences character strings generalized enumerations added easier mechanism restricting values used message construction finally work focused generating proof certificates used mechanical verification safety eferences caida anonymized internet traces claffy dan andersen paul hick back specification scripting language binary data condit harren anderson gay necula dependent types programming programming languages systems pages fisher gruber pads language processing hoc data sigplan june fisher mandelbaum walker next data description languages sigplan january google protocol buffers http government united states computer emergency readiness team international organization standards international standard programming languages edition jenks sutor axiom scientific computation system mccann chandra packet types abstract specification network protocol messages proceedings conference applications technologies architectures protocols computer communication 
sigcomm pages new york usa acm mckeown anderson balakrishnan parulkar peterson rexford shenker turner openflow enabling innovation campus networks sigcomm comput commun march pang paxson sommer peterson binpac yacc writing application protocol parsers proceedings acm sigcomm conference internet measurement imc pages new york usa acm dos reis system axiomatic programming pages alexander stepanov paul mcjones elements programming professional international telecommunication union abstract syntax notation one technical report available http
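Looking back at the evaluation reported in this record, where the main structural violation found in the CAIDA traces was an IPv4 internet header length (IHL) field promising more option bytes than the captured packet contains, a small stand-alone check of that condition can be sketched in Python as follows; the function name and the decision to fold the version test into it are mine, and the field offsets are simply the standard IPv4 layout.

def ipv4_structurally_consistent(packet: bytes) -> bool:
    # reject packets whose IHL field would make the options overflow the received bytes
    if len(packet) < 20:              # an IPv4 header is at least 20 bytes
        return False
    version = packet[0] >> 4          # high nibble of the first byte
    ihl_words = packet[0] & 0x0F      # low nibble, header length in 32-bit words
    header_len = ihl_words * 4
    if version != 4 or header_len < 20:
        return False                  # semantically invalid version or IHL value
    return header_len <= len(packet)  # otherwise reading the options would overflow the view

Packets failing this test are the kind of violations counted in the trace table above, whatever their origin (malformed senders, crafted traffic, or truncation during capture).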
| 6 |
configurations circular orders free groups nov dominique malicet kathryn mann rivas michele triestino abstract discuss actions free groups circle dynamics dynamics determined finite amount combinatorial data analogous schottky domains markov partitions using show free group admits isolated circular order even stark contrast case linear orders answers question inspired work also exhibit examples exotic isolated points space circular orders analogous results obtained linear orders groups introduction let group linear order often called left order total order invariant left multiplication directly implies order determined set elements greater identity called positive cone often far obvious whether given order determined finitely many inequalities whether given group admits order latter question turns quite natural algebraic perspective traced back arora mccleary special case free groups mccleary answered question free groups shortly afterwards showing finitely determined orders question finite determination gained topological interpretation following sikora definition space linear orders space denoted set linear orders endowed topology generated open sets iff ranges finite sets finitely determined linear orders precisely isolated points going forward refer isolated orders correspondence isolated points finitely determined orders perhaps simplest instance general theme topological properties reflect algebraic properties presently several families groups known either admit fail admit isolated orders proofs use purely algebraic dynamical methods examples groups admit isolated orders include free abelian groups free groups free products arbitrary linearly orderable groups amalgamated free products fundamental groups orientable closed surfaces large families groups isolated orders lama cnrs umr descartes champs sur marne france dept mathematics brown university thayer street providence dpto universidad santiago chile alameda central santiago chile imb bourgogne cnrs umr alain savary dijon france primary secondary include braid groups groups form groups triangular presentations fact latter examples orders positive cone finitely generated strictly stronger condition consequence work give family groups interestingly behaviors occur theorem let denote free group generators group isolated linear orders even result appears give first examples group finite index subgroup case odd even infinite contains isolated points theorem also interesting consequence regarding space marked groups shown prop set groups closed subset space marked groups generators however theorem implies case either subset groups admitting isolated linear orders complement one may take sequence markings approach similarly sequence markings chosen approach see thus theorem immediately gives following corollary space finitely generated marked groups isolated linear order neither closed open property main tool theorem main focus work study circular orders dynamics corresponding actions well known countable admitting linear order equivalent acting faithfully homeomorphisms line vein circular order algebraic condition countable groups equivalent acting faithfully homeomorphisms recall definition basic properties section action lifts action central extension line giving way pass circular linear orders groups giving many dynamical tools study analogous one define space circular orders second third authors showed circular order isolated corresponding action circle called dynamics gave examples isolated circular orders free groups even rank odd 
rank case left open problem answer question negative theorem admits isolated circular order even similarly corollary one also prove set groups admitting isolated circular orders neither closed open space marked groups prove theorem developing combinatorial tool study actions dynamics similar actions admitting markov partitions inspired work expect applications beyond study linear circular orders one statement given theorem notion dynamics defined motivated next section sections give application study circular linear orders respectively proofs theorems actions configurations definition let free group rank freely generated action representation exist figure classical two generators pairwise disjoint open sets finitely many connected components assume connected components call sets domains similar definition given additional requirement domains closed general definition natural purposes although later introduce convention reconcile two reader may notice given action many choices sets satisfying property definition instance action one may choose arbitrary open set disjoint replace leaving domains unchanged new domains still satisfy later adopt convention avoid kind ambiguity motivation actions classical lemma implies actions always faithful little work shows action determined finite amount combinatorial data coming cyclic ordering images connected components sets see definition lemma thm particular one think actions family simplest possible faithful actions easy produce diverse array examples perhaps examples actions discrete free subgroups psl actions one choose domains single connected component figure shows example dynamics action despite simplicity actions quite useful instance actions used construct first known examples discrete groups circle diffeomorphisms acting minimally conjugate subgroup finite central extension psl series papers concerning longstanding open conjectures hector ghys sullivan relationship minimality ergodicity codimensionone foliation see instance general quite tractable study dynamic ergodic properties action markov system program carried many authors basic properties lemma action exists choice domains holds proof let action sets given modify domains satisfy requirements lemma generator recall free symmetric generating set shrink domain setting applying sides expression gives moreover since connected components disjoint closures holds hence also also equivalently needed show convention assume choices domains every action lemma particular means sets connected components cardinality induces bijection connected components connected components definition let action configuration data consisting cyclic order connected components assignment connected components induced action note every abstract assignment definition realized action following construction gives one way produce large families examples example easy construction actions let disjoint sets cardinality integer every two points separated exactly one point choose sets pairwise disjoint ranges let neighborhoods respectively chosen small enough sets remain pairwise disjoint one easily construct piecewise linear homeomorphism even smooth diffeomorphism set attracting periodic points set repelling periodic points assignments dictated period cyclic order sets reader keep construction mind source examples show example every configuration obtained manner however regularity smooth construction attainable general following construction gives one possibility realization useful later text leave modifications smooth case easy 
exercise lemma given action domains following convention one find another action domains action piecewise linear exists one actions configuration proof let statement lemma replace original domains smaller domains chosen small enough largest connected component half length smallest connected component require also exactly one connected component connected component define piecewise linear homeomorphism maps connected components onto connected components linearly following assignment next definition proposition give means encoding combinatorial data action used later proof theorem definition let action domains define oriented bipartite graph vertex set equal edges defined follows let denote connected component adjacent right put oriented edge interval similarly adjacent interval right put oriented edge proposition let action free group generator exists graph oriented proof first construction graph ensures bipartite vertex one outgoing edge consequence convention connected component exists vertex indeed outgoing edge moreover different connected component vertex unique incoming edge shows union disjoint cycles remains prove graph connected show connectivity let connected component consider connected component let connected components possibly adjacent either side definition graph intervals consecutive vertices cycle graph vice versa three intervals consecutive vertices connected component adjacent proves consecutive connected components belong cycle hence easily deduce connected components cycle also holds components graph connected circular orders begin quickly recalling standard definitions properties reader familiar circular orders may skip section definition let group circular order function homogeneous space circular orders denoted set functions endowed subset topology natural product topology although spaces linear orders cases topology completely understood sporadic examples complete description spaces circular orders known authors comes gives classification groups finite also proof homeomorphic cantor set abelian group given free groups well understood natural next case circular orders study main tool purpose following classical relationship circular orders actions see proposition given circular order countable group action ord ord denotes cyclic orientation moreover canonical procedure producing gives conjugacy class action conjugacy class called dynamical realization basepoint description procedure given modeled analogous linear case see note modifying dynamical realization blowing orbit point may result action still satisfies property ord however action obtained canonical procedure remark converse proposition also true countable subgroup admits circular order proof given thm special case point trivial stabilizer may define induced order ord prop authors propose alternative way inducing ordering different however method incorrect following example shows suppose three distinct homeomorphisms coinciding one half circle half point always two equal points triple one expect general find point trivial stabilizer hold actions following lemma lemma suppose action domains orbitsof free cyclic order completely determined cyclic order elements assignments proof obtained careful reading standard proof classical lemma details given lemma isolated circular orders free groups section use actions prove theorem introduction builds framework start introducing two results obtained let group recall finite orbit unique closed set contained closure every orbit called minimal set denote set action called 
minimal otherwise homeomorphic cantor set permutes connected components many examples actions permutation many disjoint cycles next lemma states case dynamical realizations lemma lemma cor let dynamical realization circular order suppose minimal invariant cantor set acts transitively set connected components since action free group rank least finite orbits invariance minimal set immediately implies additionally one invariance definition implies fact going forward convenient stronger condition given following lemma lemma let action domains exists action configuration domains satisfying whenever proof let action finitely many points contained sets form point blow orbit replacing point interval lengths chosen sum converges obtain new circle say natural continuous degree one map given collapsing point let preimage since trivial stabilizer free may extend action new circle allowing act map show may choose maps way achieve action desired properties inserted interval adjacent set form left right fix points interior extend include include done interval let denote new extended domains note disjoint closures define action interval follows restriction may homomorphism otherwise adjacent either right left define map chosen point point ensures indeed domains action finally note construction configuration changed convention action assume domains satisfy whenever follows easily invariance definition actions convention inclusion following theorem relates circular orders actions theorem thm let free group circular order isolated dynamical realization action satisfying convention tools proceed main goal section proof theorem case even covered explained representation psl coming hyperbolic structure genus surface one boundary component gives isolated circular order fact taking lifts cyclic covers one obtain infinitely many isolated circular orders distinct equivalence classes action aut show admit isolated circular order odd need work begin generalities applicable free groups rank even odd suppose dynamical realization isolated circular order fix free generating set theorem lemma action domains satisfying convention connected components form unique orbit let finitely many connected components contained domain suppose endpoints generator addition belong indeed intersection nonempty image contained adjacent convention moreover intersect convention must holds implies orbit equivalent equivalence relation generated exists argue number equivalence classes relation even done using combinatorial data graphs definition build surface boundary using disc making euler characteristic argument generator let integer given proposition let topologically disc cyclically ordered vertices choose connected component glue oriented edge agree orientation glue edge connected component containing according orientation let denote endpoint connected component glue connected component containing iterate process edges glued convention follow orientation implies resulting surface boundary orientable note remaining unglued edges correspond exactly edges graph definition precisely collapsing connected component point representing vertex recovers cycle repeat procedure generator obtain orientable surface boundary denote cartoon result procedure action example shown figure may helpful reader claim number boundary components surface exactly number equivalence classes relation see proceed follows construction connected components exactly intervals interval endpoints joined edges respectively thus implies lie boundary component intersection 
boundary component defines equivalence class proves claim compute euler characteristic conclude proof proposition implies gluing described procedure adds one face edges existing surface therefore polygons ranges elements glued surface obtained mod since orientable agrees mod number boundary components claim proved agrees number equivalence classes discussed dynamical realization isolated order number equal hence mod must even proof improved give statement general actions theorem let free group rank free generating set consider action satisfying conventions let minimal invariant cantor set action number orbits connected components complement congruent mod proof previous proof let connected components contained domain recall permutes connected components claim cycle permutation contains least one given claim may construct orientable surface proof theorem whose boundary components count number cycles computing euler characteristic shows number cycles congruent mod prove claim suppose connected component contained lemma take piecewise linear expands uniformly increasing length connected component factor independent iteratively assuming length least length process continue indefinitely image contained domain exotic examples indicate potential difficulty problem classifying isolated orders give example configuration even applying automorphism arise construction example example let consider action defined graph figure hyperbolic element chosen connected components figure surface associated exotic example left boundary component right figure domains left graph right circle oriented counterclockwise domains action cyclic order follows abusing notation slightly using appearance stand connected component see figure left illustration domains surface constructed proof theorem since two hyperbolic fixed points four example realized action psl finite extension fact configuration alone atypical sense classical configuration hyperbolic element psl slow contraction left half circle two iterations needed order bring external gaps component attracting fixed point however surface construction one boundary component shown figure right corresponds isolated circular order observe one create several examples kind choosing two hyperbolic fixed points arbitrarily slow contraction connected components arbitrary choosing lift hyperbolic element psl linear orders purpose section prove theorem stating admits isolated linear order even preliminaries linear orders proceeding proof theorem recall standard tools circular orders linear orders countable groups dynamical realization see instance prop one quick way seeing given already described thinking linear order special case circular order indeed given linear order group one defines cocycle setting distinct sign permutation indices thus construction dynamical realization sketched proof proposition may performed also linear order result action circle single one global fixed point one view action line global fixed point conversely faithful action real line viewed faithful action circle single fixed point circular orders produced remark linear orders next recall notion convex subgroups dynamical interpretation relationship isolated orders definition subgroup group convex two elements condition implies lemma see prop let countable group consider dynamical realization basepoint convex subgroup let interval bounded inf following property either moreover stabilizer precisely conversely given faithful action real line interval property stabilizer stabg convex induced order 
basepoint easy see family convex subgroups linearly ordered group forms chain two convex subgroups either moreover convex subgroup group acts ordered coset space orderpreserving transformation induced order coset space given every makes sense convex particular implies convex linear order may extended new order declaring elaborating one show following lemma see prop thm details lemma infinite chain convex subgroups let also introduce dynamical property implies order recall two representations proper map definition let discrete group let rep denote space representations homomorphisms endowed topology let rep subspace representations global fixed points representation rep said flexible every open neighborhood rep contains representation following lemma implicit work navas well explicit proof found prop lemma let discrete countable group let dynamical realization order basepoint flexible remark though needed work note precise characterization isolated circular linear orders terms strong form rigidity strong dynamical realizations given mentioned introduction order prove theorem use relationship circular orders groups linear orders central extensions purpose need notion cofinal elements definition element group called cofinal exist remark cofinal elements also characterization terms dynamical realization dynamical realization basepoint cofinal fixed point indeed cofinal point inf every fixed conversely satisfies orbit clearly unbounded sides figure crossing dynamical realization given group circular order natural procedure lift linear order central extension generator central subgroup cofinal following statement appears proposition proposition assume finitely generated isolated circular order induced linear order isolated lift central extension finally recall definition crossings definition let group acting totally ordered space action crossings exist every exist case say crossed crossed graph dynamical realization locally given picture figure application notion crossings following lemma lemma cor let convex subgroup suppose natural action crossings exists homomorphism kernel moreover maximal convex subgroup agrees kernel isolated linear orders turn main goal describing isolated linear orders proving theorem begin reducing proof statement proposition since every central extension splits proposition tell admits isolated linear orders precisely lift isolated order isolated furthermore central extension linear order cofinal gives canonical circular order follows let generator since cofinal exists unique representative given distinct elements let permutation define sign one checks well defined circular order proof proposition shown continuous locally injective finitely generated implies isolated linear order cofinal center induces isolated circular order procedure since isolated circular orders theorem finish proof theorem enough show following proposition let free group linear order central factor cofinal well tool used proof start short proof special case lemma let free group infinite rank order isolated proof let set free generators free factor generator central factor let order dynamical realization basepoint fixed define representation setting easy see orbit free actions two distinct representations one another thus determine distinct orders orders converge proof proposition already eliminated case infinite rank rank one abelian admits isolated orders see assume rank finite least looking contradiction suppose linear order isolated center cofinal let dynamical realization let generator central 
subgroup remark acts fixed points moreover since central set fixed points since global fixed point implies fixed points every neighborhood find convex subgroup cofinal let denote connected component fix contains basepoint particular bounded interval let stabg claim convex subgroup cofinal proof claim satisfy also since central implies replacing without loss generality may assume thus sequence points converges rightmost point fixed deduce also fixes point similarly considering limit shows leftmost point fixed hence shows convex finally remark fact fixed points implies cofinal since isolated convex restriction also isolated order additionally fact isolated implies lemma chain convex subgroups finite let denote smallest convex subgroup properly containing since also direct product free group subgroup assumptions imply restriction isolated cofinal thus may work instead equivalently notational convenience proceed may assume maximal convex subgroup next claim observe maximal convex subgroup also admits decomposition form subgroup claim non trivial free group even rank proof claim since restriction isolated lemma implies infinite rank trivial action would action thus making easy perturb action thus order recall free groups isolated orders thus nontrivial free group finite rank cofinal ordering rank must even remarks beginning section claim infinite index proof claim finite index interval would bounded would imply dynamical realization global fixed point absurd since every nontrivial normal infinite index subgroup infinite rank conclude claims thus normal subgroup lemma thus implies action crossings otherwise would normal particular collapse obtain action minimal crossings using observation prove following claim claim compact set exists agreeing proof claim fix compact set modify action outside produce action suppose initial case primitive element generator free generating set fixed point without loss generality assume right case completely analogous since fix bounded accumulates fixed point may also assume without loss generality chosen common fixed point define commute property let connected component suppose first contains point fix fact commute means endpoints preserved define restriction agree contains point fix connected component fix may define agree satisfy lastly set fix definition analogous let actions obtained replacing action leaving generators unchanged since commute defines representations clearly left deal case primitive element fixed point outside case perturb action obtain primitive element fixed point outside hence action use fact action crossings minimal minimality implies crossings found outside compact set thus compact fix component outside right let denote one components notice primitive element property case would satisfy property observed remark would convex subgroup properly containing conjugate since assumed maximal impossible fix primitive element property let homeomorphism defined identity outside agreeing define generator since commutes new action representation moreover changing power necessary fixed point ends proof claim finish proof proposition thus theorem note flexibility claim together statement lemma implies order giving desired contradiction acknowledgments authors thank yago suggesting corollary partially supported nsf grant partially supported fondecyt partially supported peps jeunes cnrs projet jeunes labourie financed louis foundation references alonso brum rivas orderings flexibility subgroups lond math soc alvarez barrientos filimonov kleptsyn malicet 
triestino maskit partitions locally discrete groups circle diffeomorphisms preparation arora mccleary centralizers free groups houston math baik samperton spaces invariant circular orders groups groups geom dyn appear calegari circular groups planar groups euler class proceedings casson fest cantwell conlon foliations subshifts tohoku math leaves markov local minimal sets foliations codimension one publ mat champetier guirardel limit groups limits free groups israel math clay mann rivas number circular orders group arxiv available dehornoy monoids subword reversing ordered groups group theory dehornoy dynnikov rolfsen wiest ordering braids mathematical surveys monographs vol american mathematical society providence deroin kleptsyn navas question ergodicity minimal group actions circle mosc math ergodic theory free group actions circle diffeomorphisms invent math appear deroin navas rivas groups orders dynamics arxiv available dubrovina dubrovin braid groups mat filimonov kleptsyn structure groups circle diffeomorphisms property fixing nonexpandable points funct anal appl ghys groups acting circle enseign math inaba matsumoto resilient leaves transversely projective foliations fac sci univ tokyo sect math ito left orderings isolated left orderings algebra hyperbolicity sinks measure dynamics comm math phys mann rivas group orderings dynamics rigidity ann inst fourier grenoble appear matsumoto measure exceptional minimal sets codimension one foliations topology basic partitions combinations group actions circle new approach theorem kathryn mann enseign math dynamics isolated left orders arxiv available mccleary free groups represented transitive groups trans amer math soc navas dynamics left orderable groups ann inst fourier grenoble remarkable family groups central extensions hecke groups algebra rivas free products groups algebra sikora topology spaces orderings groups bull london math soc cyclically ordered groups sibirsk mat
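The displayed definitions in this record did not survive extraction, so for the reader's convenience here is the standard convention for a left-invariant circular order that the text refers to throughout; the notation is mine and is only intended to match the usual one in the literature the record cites (Calegari, Mann and Rivas), not to quote the paper:

\[ c : G^{3} \to \{-1,0,1\}, \qquad c(g_1,g_2,g_3)\neq 0 \iff g_1,g_2,g_3 \ \text{are pairwise distinct}, \]
\[ c(hg_1,hg_2,hg_3) = c(g_1,g_2,g_3) \quad \text{for all } h\in G, \]
\[ c(g_2,g_3,g_4) - c(g_1,g_3,g_4) + c(g_1,g_2,g_4) - c(g_1,g_2,g_3) = 0 \quad \text{for all } g_1,\dots,g_4 \in G. \]

With this convention, a linear (left) order is recovered from a circular order together with a cofinal central element, which is the passage between circular and linear orders used in the record's treatment of central extensions.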
| 4 |
computing convolution analog discrete time exponential signals algebraically francisco mota jun departamento engenharia universidade federal rio grande norte brasil mota november abstract present procedure computing convolution exponential signals without need solving integrals summations procedure requires resolution system linear equations involving vandermonde matrices apply method solve ordinary equations constant coefficients notation definitions introduce definitions notation used along paper respectively set integers real complex numbers analog time signal defined complex valued function discrete time signal complex valued function paper mainly concerned exponential signals two basic signals necessary development namely unit step signal unit impulse generalized signal analog discrete time setting unit step defined analog discrete time analog time context define unit impulse derivative supposed defined generalized sense since jump discontinuity denote generalized signal analog signal continuous product given module also jump discontinuity fact additionally using generalized signal obtain derivative discrete time context time shifting fundamental operation denote shifting signal units time using notation discrete time impulse written convolution two signals represented binary operation defined analog signals discrete time signals additionally get convolution commutative associative unity operation signal signal important properties convolution related derivation time shifting introduction convolution signals fundamental operation theory linear time invariant lti importance comes mainly fact lti operator represents lti system analog discrete time context satisfies following property involving signals convolution signals analog discrete time defined since signal taking particular get signal equation implies signal denominated impulse response characterizes operator lti system sense system output due input signal given convolution system impulse response pretty much similar fact linear function characterized value maybe important class lti systems analog discrete time context ones modeled order ordinary equation constant coefficients shown bellow analog discrete time represents system output signal system input signal class systems shown impulse response written convolution exponential signals defined system model specifically eri analog discrete time roots characteristic equation associated model result equation motivate find procedure compute convolution exponential signals text books question generally dealt domain laplace transforms time domain convolution certain circumstances becomes usual product see next sections hand show convolution exponential signals evaluated directly time domain without solve integrals summations solving algebraic system linear equations involving vandermonde matrices approach adequate implemented computationally software packages like scilab note since quite old question equivalent results scattered literature may exists see results obtained context probability theory believe approach problem new additionally find system impulse response also use technique compute complete solution equation given signal another important context convolution also used compute probability density function sum independent random variables convolution analog exponential signals consider analog time signal defined well known appear impulse response causal linear time invariant systems lti modeled first order ordinary differential equation note defined two simple important 
properties module jump discontinuity amplitude one precisely derivative satisfies first order differential equation deduced lets consider convolution two signals kind let since zero get solving integral note convolution satisfies properties continuous precisely since course integral zero integration exponential function infinitesimal interval derivative fact return analyse integral considering two cases remark note case complex conjugate pair represented get sin equation see case convolution written linear combination signals fact along conditions used find scalars without need solving convolution integral shown bellow solving get shown tert consider generalization results convolution exponential signals shown start finding generalization conditions theorem consider convolution signals erj derivative represented evaluated given consider proof note since involves integration exponentials infinitesimal interval proves consider least two terms since two terms composed convolution least two signals conclude equals zero considering since sum least two signals convolution consequently following find procedure computing convolution erj without need solving integrals begin consider case implies generalization theorem convolution exponentials signals erj given scalars computed solving linear system nonsingular vandermonde matrix defined vij vectors vector last column inverse proof use induction prove valid shown suppose valid prove proved prove take derivative sides get applying theorem left side equation using fact rji get consider general convolution possibility repeated convolution initially consider facts convolution power convolution exponentials convolution defined repeated times represent equation formula lemma convolution power exponentials ert denoted given terms proof induction trivially true suppose valid ert ert lemma bellow shows generalization theorem applied convolution power lemma let ert derivative computed represented given proof equation follows lemma setting formula dti analyse would like convolution convolution convolution convolution lemma let convolution convolution convolution denoted given terms terms proof prove induction true shown induction valid let induction valid generic let since rearranged prove general result power convolution exponential signals show generalization theorem theorem convolution exponentials signals eri distinct repeated times given aqj asj scalars computed solving linear system nonsingular confluent generalized vandermonde matrix defined block matrix whose entries defined vectors vector zero vectors vector vector last column inverse alternatively using lemma rewrite polynomial defined asj proof use induction prove valid shown lemma suppose valid prove akj bkj akj proved prove take derivative sides get aqj applying theorem left side equation using fact along lemma get solution ordinary differential equations constant coefficients consider ordinary differential equation models order causal linear time invariant lit system input signal output signal impulse response system given convolution eri roots characteristic equation associated supposing characteristic equation distinct roots one repeated times obtain impulse response using theorem equation asj ers asj calculated solving vandermonde system complete solution generally written homogeneous zero input solution particular solution depends input signal solving particular solution written homogeneous solution format ers therefore solve need obtain equivalent obtain constants compute evaluating 
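The displayed equations of this part of the paper did not survive in this copy of the text. The block below restates the identities the passage appears to derive, reconstructed from the surrounding definitions; the notation is the editor's, not the paper's original typesetting.

```latex
% Two exponents, r_1 \neq r_2, and the repeated-exponent case:
\[
  e^{r_1 t}u(t) \ast e^{r_2 t}u(t) \;=\; \frac{e^{r_1 t}-e^{r_2 t}}{r_1-r_2}\,u(t),
  \qquad
  e^{r t}u(t) \ast e^{r t}u(t) \;=\; t\,e^{r t}u(t).
\]
% For a complex-conjugate pair r_{1,2} = \sigma \pm j\omega the first identity
% reduces to the real signal
\[
  \frac{1}{\omega}\, e^{\sigma t}\,\sin(\omega t)\, u(t).
\]
% General case of n pairwise-distinct exponents: the convolution
% x = e^{r_1 t}u \ast \cdots \ast e^{r_n t}u is the unique combination
\[
  x(t) \;=\; \sum_{k=1}^{n} a_k\, e^{r_k t}\, u(t)
  \quad\text{with}\quad
  x(0^+) = \cdots = x^{(n-2)}(0^+) = 0, \qquad x^{(n-1)}(0^+) = 1,
\]
% i.e. the coefficient vector a solves the Vandermonde system V a = e_n with
% V_{ij} = r_j^{\,i} (0 <= i <= n-1), so a is the last column of V^{-1}.
% With repeated exponents, powers of t appear: the m-fold convolution power of
% e^{rt}u(t) is t^{m-1} e^{rt} u(t) / (m-1)!, and the same initial-value
% conditions lead to a confluent (generalized) Vandermonde system.
```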
convolution showed find use fact particular solution convolution signals namely conclude using theorem using conditions get set conditions used find constants since initial values generally known solving implies constants computed solving vandermonde system like one showed theorem vandermonde matrix one used compute impulse response vector composed vector differently one used compute defined finally order obtain complete solution shown need compute particular solution convolution input signal impulse response avoid solving convolution integral use result theorem writing possible signal convolution finite sum exponential signals type ert situation shown examples section bellow increase order vandermonde matrix defined theorem depending many exponential modes exists input signal convolution discrete time exponential signals context discrete time signals consider exponential signal defined also consider signal defined right shift one unit well known appear impulse response causal linear time invariant systems lti modeled first order difference equation since satisfies relationship lets consider convolution two signals kind let since time shift exponentials defined write therefore obtained right time shift two units develop instead noting since null additionally also solving summation note convolution importantly since right shift two units develop summation considering two cases since remark note case complex conjugate pair represented get sin equation see case convolution written linear combination signals fact along conditions used find scalars without need solving convolution sum shown bellow solving get shown since given consider generalization results convolution exponential signals shown start finding generalization conditions applied convolution theorem consider convolution proof defining rik note time shift right units since result proved following find formula computing convolution begin consider case implies generalization equation theorem convolution exponentials signals given scalars computed solving linear system nonsingular vandermonde matrix defined vij vectors vector last column inverse proof use induction prove valid shown suppose valid prove following reasoning used prove theorem prove apply result theorem equation taking value sides using theorem fact get consider general convolution possibility repeated convolution begin consider facts convolution discrete time exponentials convolution defined repeated times represent equation formula lemma bellow shows generalization theorem applied convolution exponential signal lemma power convolution exponentials denoted given terms compact notation proof induction trivially true suppose valid obviously since last step proof used following fact sum binomial coefficients corollary consider convolution exponentials equivalently since assumed proof since terms right shifted units setting equivalent obtain note rewriten analyse would like convolution lemma let convolution convolution convolution denoted given terms terms proof prove induction true shown inductive step one used proof lemma analog time case following prove general result convolution exponential signals show generalization theorem theorem convolution exponentials signals distinct repeated times given aqj asj scalars computed solving linear system nonsingular confluent generalized vandermonde matrix defined block matrix whose entries defined vectors vector zero vectors vector vector last column inverse alternatively using equation rewrite polynomial defined asj proof use 
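As a concrete illustration of the procedure just described — solving a Vandermonde system instead of evaluating convolution integrals — the following short numerical sketch computes the coefficients for three distinct exponents and checks them against a sampled convolution. This is the editor's example, assuming NumPy; it is not the Scilab code the paper alludes to.

```python
import numpy as np

def conv_exp_coeffs(exponents):
    """Coefficients a_k such that e^{r_1 t}u(t) * ... * e^{r_n t}u(t) = sum_k a_k e^{r_k t} u(t)
    for pairwise-distinct exponents: a solves the Vandermonde system V a = e_n,
    i.e. a is the last column of V^{-1}."""
    r = np.asarray(exponents, dtype=complex)
    n = len(r)
    V = np.vander(r, N=n, increasing=True).T      # V[i, k] = r_k**i
    rhs = np.zeros(n, dtype=complex)
    rhs[-1] = 1.0
    return np.linalg.solve(V, rhs)

exponents = [-1.0, -2.0, -3.0]
a = conv_exp_coeffs(exponents)                    # [0.5, -1.0, 0.5]

# Sanity check against a sampled (Riemann-sum) approximation of the convolution.
dt = 1e-3
t = np.arange(0.0, 10.0, dt)
modes = np.exp(np.outer(exponents, t))            # row k holds e^{r_k t}, t >= 0
numeric = modes[0]
for row in modes[1:]:
    numeric = np.convolve(numeric, row)[: t.size] * dt
closed = (a[:, None] * modes).sum(axis=0).real
print(np.max(np.abs(numeric - closed)))           # small: only discretisation error remains
```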
induction prove valid shown lemma inductive step follows way proof theorem prove evaluate equation obtain aqj applying result theorem left side equation using lemma equation get solution difference equations constant coefficients consider order difference equation models order discrete time causal linear time invariant lit system input signal output signal impulse response system given convolution roots characteristic equation associated assumed supposing characteristic equation distinct zero roots discarded order difference equation reduced amount discarded roots final solution solution reduced order equation many units number zero roots characteristic equation see examples section roots one repeated times obtain using theorem equation asj asj calculated solving vandermonde system solution written homogeneous zero input solution particular solution depends input signal solving particular solution written homogeneous solution format rsk therefore solve need obtain equivalent obtain constants obtain evaluating convolution shown since particular solution fact convolution signals namely conclude using theorem used find constants since initial values generally known solving fact constants computed solving vandermonde system like one showed theorem vandermonde matrix one used compute impulse response vector composed vector differently one used compute defined finally order obtain complete solution shown need compute particular solution convolution input signal inpulse response done result theorem write signal convolution sum exponential signals type situation shown examples bellow increase order vandermonde matrix defined theorem depending many exponential modes exists input signal section apply results resolution specific difference equations examples bellow apply results discussed previous sections solution specific equations differential equations example let second order initial value problem ivp find solution consider characteristic equation whose roots impulse response computed implies homogeneous solution computed implies particular solution compute solution augmented vandermonde system implies finally solution ivp example let following third order ivp sin characteristic equation whose roots impulse response optionally simplified cos sin homogeneous solution cos sin particular solution since sin two possibilites using compute following augmented vandemonde system particular solution sin cos sin finally solution ivp given cos sin cos sin example let following ivp cos whose characteristic equation implies impulse response sin homogeneous solution cos sin particular solution cos since therefore since regrouping terms sin cos sin solution given sin cos cos sin difference equations example let third order initial value problem ivp find solution consider characteristic equation whose roots impulse response computed homogeneous solution computed particular solution since first compute order convolution format required theorem end take compute solution augmented vandermonde system since finally solution ivp example let third order ivp whose characteristic polynomial impulse response computed cos sin homogeneous solution computed cos sin particular solution need write sum convolution signals remark easily get calculated computed solving cos sin given cos sin computed solving implies cos sin cos sin therefore given cos sin cos sin finally solution ivp cos sin cos sin cos sin turn simplified cos sin example let third order ivp sin whose characteristic equation discard solve second order equation 
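The discrete-time version of the procedure can be checked exactly, since np.convolve evaluates the convolution sum without any discretisation error. The sketch below is the editor's example (assuming NumPy); the closed form it uses, Σ_k a_k r_k^{n+m−1} with a solving the same Vandermonde system, is the editor's restatement for the unshifted sequences r_k^n u[n] — the paper's development works with a right-shifted variant, so the exponents there differ by a time shift.

```python
import numpy as np

def conv_geometric(ratios, N):
    """First N samples of (r_1^n u[n]) * ... * (r_m^n u[n]) for pairwise-distinct ratios,
    via the closed form sum_k a_k r_k**(n + m - 1), a solving the Vandermonde system."""
    r = np.asarray(ratios, dtype=complex)
    m = len(r)
    V = np.vander(r, N=m, increasing=True).T
    rhs = np.zeros(m, dtype=complex)
    rhs[-1] = 1.0
    a = np.linalg.solve(V, rhs)
    n = np.arange(N)
    return (a[:, None] * r[:, None] ** (n + m - 1)).sum(axis=0).real

ratios, N = [0.9, 0.5, -0.3], 40
closed = conv_geometric(ratios, N)

direct = np.array([1.0])                          # the convolution identity
for rk in ratios:
    direct = np.convolve(direct, rk ** np.arange(N))
print(np.max(np.abs(direct[:N] - closed)))        # ~1e-16: the discrete check is exact
```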
end shift solution one unity right impulse response computed implies cos homogeneous solution computed particular solution sin since optionally simplified sin sin finally contemplate zero root characteristic equation initial condition sin sin cos cos conclusions showed paper technique computing convolution exponential signals analog discrete time context avoids resolution integrals summations method essentially algebraic requires resolution vandermonde systems extensively discussed problem literature see references therein question computing convolution exponentials discussed previously literature proposed approach apparently different previous ones additionally quite simple suitable implemented computationally finally use proposed approach solve order equation constant coefficients references grubb distributions operators graduate texts mathematics book springer media llc wikipedia free encyclopedia wikimedia foundation june web june available http mota signals systems transforms lecture notes azzo houpis linear control system analysis design second edition kogakusha scilab enterprises scilab free open source software numerical computation orsay france available http akkouchi convolution exponential distributions journal chungcheong mathematical society vol december liu novel analytical scheme compute convolution distribution functions applied mathematics computation chung elementary probability theory stochastic processes third edition wikipedia free encyclopedia wikimedia foundation june web june available http power golub van loan matrix computations second edition johns hopkins univ press hou pang inversion confluent vandermonde matrices computers mathematics applications
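To illustrate the application to initial value problems discussed above, here is a small end-to-end check. It is the editor's example, assuming NumPy and SciPy; the equation y'' + 3y' + 2y = u(t) with zero initial conditions is a hypothetical instance, not one of the paper's worked examples. Its zero-state response is the convolution of causal exponentials with exponents −1, −2 (the characteristic roots) and 0 (the step input), so the Vandermonde procedure gives it directly, and a numerical ODE solver agrees.

```python
import numpy as np
from scipy.integrate import solve_ivp

def conv_exp(exponents, t):
    """sum_k a_k e^{r_k t} on t >= 0, with a solving the Vandermonde system V a = e_n."""
    r = np.asarray(exponents, dtype=complex)
    n = len(r)
    V = np.vander(r, N=n, increasing=True).T      # V[i, k] = r_k**i
    rhs = np.zeros(n, dtype=complex)
    rhs[-1] = 1.0
    a = np.linalg.solve(V, rhs)
    return (a[:, None] * np.exp(np.outer(r, t))).sum(axis=0).real

# y'' + 3y' + 2y = u(t), y(0) = y'(0) = 0: zero-state response as a convolution of
# exponentials with exponents -1, -2 (characteristic roots) and 0 (step input).
t = np.linspace(0.0, 8.0, 200)
y_alg = conv_exp([-1.0, -2.0, 0.0], t)

sol = solve_ivp(lambda t, s: [s[1], 1.0 - 3.0 * s[1] - 2.0 * s[0]],
                (0.0, 8.0), [0.0, 0.0], t_eval=t, rtol=1e-9, atol=1e-12)
print(np.max(np.abs(sol.y[0] - y_alg)))           # agreement up to solver tolerance
```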
nov verified language extension secure computations aseem rastogi nikhil swamy michael hicks university maryland microsoft research microsoft research university maryland computation mpc enables set mutually distrusting parties cooperatively compute using cryptographic protocol function private data paper presents new language dsl implementation writing mpcs verified integrated language extension vdsile new kind embedded dsl hosted fullfeatured programming language source programs essentially programs written mpc library meaning programmers use logic verify correctness security properties programs reason distributed semantics programs formalize deep embedding also mechanize necessary metatheory prove properties verified source programs carry distributed semantics finally use extraction mechanism extract interpreter proved matches semantics yielding verified implementation first dsl enable formal verification source mpc programs also first mpc dsl provide verified implementation implemented several mpc protocols including private set intersection joint median card dealing application verified security correctness introduction secure computation mpc framework enables two parties compute function private inputs party sees others inputs rather sees output utilizing trusted third party compute would achieve goal fact achieve using one variety cryptographic protocols carried among participants one example use mpc private set intersection psi could individuals personal interests function computes intersection revealing interests group common interests among applications mpc used auctions detecting tax fraud managing supply chains performing privacy preserving statistical analysis typically cryptographic protocols expect specified boolean arithmetic circuit programming directly circuits cryptography via api painful starting fairplay project many researchers designed languages dsls program mpcs dsls compile source code circuits given underlying protocol undoubtedly makes easier program mpcs languages still several drawbacks regarding security usability first mpc participants able reason sufficiently privacy preserving output reveal much information inputs goal mpc dsl secure computations reasoning gives assurance goal yet dsls sharemind dsl wysteria scvm mathematical semantics serve basis formal reasoning second languages semantics lack support mechanized reasoning mpc programs proofs possible provide less assurance proofs middle ground might mechanization semantics metatheory system like coq agda isabelle adds greater assurance correct far mpc dsl even mechanized semantics third gap semantics one actual implementation within gap potential security holes formal verification mpc dsl toolchain significantly reduce occurrence bugs existing mpc dsl implementation even partially formally surprising since mentioned earlier dsls lacked formal semantics base verification effort finally practical problem existing dsls scale lack infrastructure language adding features language formalization would help quickly becomes unwieldy frustrating especially added features standard much mpc want access libraries frameworks guis etc way easily adds functionality without adding complexity compromising security paper presents new mpc dsl addresses problems unlike previous mpc dsls attacker model model attackers participants protocol assume participants protocol play roles faithfully motivated deduce much participants secrets observing protocol standalone language rather call verified integrated language extension vdsile 
new kind embedded dsl hosted programming language following distinguishing elements integrated language extension section programmers write mpc source programs essentially extended dialect inherits basic programming model wysteria see section comparison wysteria like shallow domainspecific language embeddings embeds wysteriaspecific combinators normal syntax prescriptions correct use expressed dependent system arrangement two benefits firstly programs use extra effort standard constructs datatypes libraries directly secondly programmers formally verify properties related correctness security mpc program using verification facilities first dsl enable formal verification source mpc programs deep embedding semantics section shallow embedding implements semantics dsl using abstraction facilities host language kind library however impossible core semantics directly encoded semantics program like simd program whose execution alternates computations party privately joint computations involving parties securely program conceptually viewed single thread control directly implemented way take approach typical deep embedding define interpreter operates abstract syntax trees asts defined data type trees produced running compiler special mode extended source program importantly ast hence interpreter bake standard constructs like numbers lists rather inherited language features appear abstractly ast semantics handled novel foreign function interface ffi easy use programming verifying verified implementation modulo trusted crypto library sections within mechanize two operational semantics asts conceptual semantics formalizes simd view distributed semantics formalizes actual runs programs prove conceptual semantics sound respect actual distributed semantics including semantics ffi calls distributed semantics correctly implemented interpreter proofs checked proof checking algorithm result verified properties formally proven source programs carry programs run multiple parties distributed manner important caveat though interpreter makes use circuit library compile asts circuits execute using goldreich micali wigderson gmw computation protocol present library formally verified formal verification gmw present open problem would add even greater assurance using implemented several programs including psi joint median card dealing application section psi joint median implement two versions straightforward one version composes several small mpcs improves performance increases number visible outputs formally prove psi median optimized unoptimized versions equivalent functionally respect privacy parties inputs particular enhances wysteria target semantics instrument trace observations prove visible events optimized versions traces provide neither participant additional information secrets performance experiments confirm optimized versions indeed perform better card dealing application relies support secret shares formally prove card dealing algorithm always deals fresh card summary paper main contribution new verified integrated language extension vdsile supporting secure multiparty computation unique use formal methods ensure underlying implementation programs written behave according important correctness security properties implementation example programs proofs publicly available online github https related work computational model based programming abstractions previous language wysteria offers several new contributions first wysteria standalone language implemented extension language programs freely use 
datatypes libraries host language via novel ffi mechanism outlined section architecture allows programs scale easily rather requiring constant extension reimplementation standalone language verification results second relying verification features implementation also verified except cryptographic libraries third since also embedded dsl mpc programs formally verified satisfy correctness security properties support security verification semantics includes notion observable traces use state prove information flow properties although point departure wysteria enjoys none benefits mpc dsls dsl extensions addition wysteria several mpc dsls proposed literature languages standalone implementations drawbacks come like implemented language extensions launchbury describe dsl writing share protocols smc machine oblivc extension twoparty mpc annotates variables conditionals obliv qualifier identify private inputs programs compiled translation former essentially shallow embedding latter compilerbased unique use vdsile strategy combines shallow embedding support source program verification deep embedding support nonstandard target semantics mechanized metatheory verification results different typical verification result might either mechanize metatheory using proof assistant idealized language might prove interpreter compiler correct formal semantics mechanize metatheory establishing soundness conceptual semantics actual distributed semantics also mechanize proof interpreter implements correct semantics source mpc verification verification underlying crypto protocols received attention verification mpc source programs remained largely unexplored previous work know backes devise applied based abstraction mpc use formal verification auction protocol computes min function abstraction comprises lines code hand enables direct verification mpc source programs addition provides verified toolchain general dsl implementation strategies dsls mpc purposes implemented various ways developing standalone embedding dsl shallowly deeply host language vdsile syntax bears relation approach taken linq embeds query language normal programs implements programs extracting query syntax tree passing provider implement particular backend researchers embedded dsls host languages bedrock coq permit formal proofs dsl programs provides advantage host language since effectful making easier write dsl combinators effectful languages still proving dsl programs good properties able easily extract programs runnable code figure architecture deployment sum first dsl enable formal verification efficient source mpc programs written host programming language also first mpc dsl provide partially verified interpreter verified programming consider dating application enables users compute common interests without revealing private interests one another instance private set intersection psi problem illustrate main concepts showing several stages program optimize verify deploy figure provides overview secure computations sec mpc written single specification executes one two computation modes primary mode called sec mode specifies secure computation carried among multiple parties private set intersection example written let psi input input sec fun let reveal input reveal input give give six arguments psi respectively principal identifiers alice bob alice bob secret inputs expressed lists public lengths sec construct indicates thunk run sec mode mode code may jointly access secrets principals case jointly intersect input input inputs return result 
outside sec mode alice would permitted see bob secret input vice versa inside made visible using reveal coercion finally code constructs map associating result principal case builds singleton map concatenates disjoint maps running code requires following steps first run compiler special mode extracts code ast data structure constructs like sec full syntax figure section extracted nodes rest program code extracted ffi nodes indicate use calls functionality provided next step party alice bob run extracted program using interpreter interpreter written provably implements deep embedding semantics also specified shown figures section interpreter extracted ocaml code standard process party reaches sec interpreter compiles particular values secrets environment boolean circuit loopfree code compiled circuit provides specialized support several common combinators current example lengths input lists required public order alice bob able create boolean circuits circuit handed library choi implements gmw computation protocol running protocol party starts confirming wish run circuit proceeds generating secret shares party secret inputs running gmw protocol involves evaluating boolean circuit secret shares involving communication parties one obvious question parties able get process ground running program six inputs five inputs known principals inputs size party input values specific principal sealed principal name appears sealed container type types input input respectively list sealed int list sealed int program run alice host former list alice values whereas latter list garbage values denote reverse true bob host circuit constructed principal links values relevant input wires circuit likewise output map component party derived output wires circuit thus party gets see output would like mpc like psi called normal programs example would like logic dating application involves reading inputs displaying results etc able call psi compute common interests achieve provides way compute projection functions version psi called single party inputs party inputs filled sealed garbage values described calling function code also kicks interpreter run psi described interpreter completes result returned program continue optimizing psi par although psi gets job done turns inefficient cases shown better implementations psi cases involve performing computation participant evaluates local computations parallel iterating elements sets interleaved small amounts jointly evaluated cryptographically secure computations second computation mode called par mode supports computation particular construct par states principal locally execute thunk simultaneously principal set simply skips expression within principals may engage secure computations via sec optimized version psi based algorithm uses par function psi opt line begins using par involving alice bob provided thunk principal calls alice turn calls check bob element alice list secure computation occurs use sec line within circuit alice bob securely compare values gather list list bool one outer list alice elements ith inner list contains comparisons alice ith value bob comparing alice elements bob code optimized described omit redundant comparisons line parties build matrix comparisons boolean lists alice inspects rows matrix line determine elements intersection bob inspects columns line joint function gives result principal line let rec alice else let check bob alice check bob else let let sec fun reveal reveal else let check bob let psi opt par fun let build matrix alice let 
par fun filteri contains true row let par fun filteri contains true col give give optimizations line detect element intersection return immediately instead comparing remaining elements furthermore remove excluding future comparisons elements alice set since representations sets repeats excluded comparisons guaranteed false one might wonder whether could programmed code normal relying sec mode circuit evaluation however recalling goal formally reason code prove correct secure par mode provides significant benefits particular simd model provided enables capture many invariants free example proving correctness psi opt requires reasoning participants iterate loops lock assures construction besides code would harder write read split across multiple functions files general guideline use code written view single principal programming principals rely ffi mediate two embedding type system using abstractions provided wysteria designing various computation protocols relatively easy however deploying protocols three important questions arise protocol realizable example computation claimed executed principals using par sec ever access data belonging protocol correctly implement desired functionality example correctly compute intersection alice bob sets protocol secure example optimizations previous section omit certain comparisons inadvertently also release information besides final answer embedding leveraging type system address three questions strategy make use extensible monadic dependent system define new indexed monad called wys use describe precise trace properties wysteria computations additionally make use abstract type sealed representing value accessible principals combining wys monad sealed type encode form control ensure protocols realizable wys monad wys monad provides several features first dsl code typed monad encapsulating rest within monad computations specifications make use two kinds ghost state modes traces mode computation indicates whether computation running par sec context trace computation records sequence nesting structure messages exchanged parties jointly execute sec result computation trace constitute observable behavior wys monad essence product reader monad modes writer monad traces formally define following types modes traces mode mode pair mode tag either par sec set principals trace forest trace element telt trees leaves trees record messages tmsg received result executing sec block tree structure represented tscope nodes record set principals able observe messages trace type mtag par sec type mode mode mtag prins mode type telt tmsg telt tscope prins list telt telt type trace list telt every computation monadic computation type wys pre post type indicates wys monad may perform computations result type pre mode may executed post relating computation mode result value trace observable events run context mode satisfying predicate pre may send receive message according trace returns result value validating predicate post style indexing monad computation standard technique defer definition monad bind return actual implementation focus instead specifications combinators specific describe two combinators sec reveal give types defining sec val sec prins unit wys pre post wys requires fun par pre mode sec ensures fun tmsg post mode sec type sec dependent first parameter second argument thunk evaluated sec mode result computation type form wys requires ensures predicates respectively free variables type pre post implicitly universally quantified front use requires ensures 
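A rough plain-Python model of the comparison schedule in psi_opt helps see why the optimisation pays off: once one of Alice's elements matches, it is not compared again, and the matched element of Bob's set is excluded from later comparisons, so the number of secure comparisons drops as the intersection grows, whereas the naive protocol always performs |A|·|B| of them. This is the editor's sketch, not wys* code, and the function names are hypothetical.

```python
def psi_opt_comparisons(a, b):
    """Intersection of a and b together with the number of pairwise comparisons performed."""
    remaining = list(b)
    intersection, comparisons = [], 0
    for x in a:
        for i, y in enumerate(remaining):
            comparisons += 1
            if x == y:                   # in wys* this equality test runs inside a sec block
                intersection.append(x)
                del remaining[i]         # skip y in all later iterations
                break
    return intersection, comparisons

a = list(range(100))
for density in (0.0, 0.5, 1.0):
    k = int(100 * density)
    b = list(range(k)) + list(range(1000, 1000 + 100 - k))   # k common elements
    _, comps = psi_opt_comparisons(a, b)
    print(density, comps, len(a) * len(b))                   # optimized count vs. naive count
```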
keywords semantically significant sec predicate mode computation whose context sec called jointly execute require transition perform sec call simultaneously current mode must mode par also require pre valid mode transitioned mode sec says sec predicate relating initial mode result trace computation line states trace secure computation sec singleton tmsg reflecting execution reveals result additionally ensures result related mode run mode sec ideal functionality ensured backend gmw empty trace since observables according post defining reveal discussed earlier value type sealed encapsulates value accessed calling reveal call succeed certain circumstances example par mode bob able reveal value type sealed alice int type reveal makes access control rules clear val unseal prins sealed ghost val reveal prins sealed wys requires fun ensures fun unseal function ghost function meaning used specifications reasoning purposes hand reveal called concrete programs precondition says executing mode par current participants must listed seal however executing mode sec subset current participants required secure computation executed jointly access individual data postcondition reveal relates result argument using unseal function correctness security verification using wys monad sealed type write precise types psi program proving various useful properties lack space discuss statements main lemmas prove proof details proofs left actual implementation programming protocols using abstractions provided proofs relatively straightforward particular rely heavily view parties execute different fragments code contrast reasoning directly message passing semantics would much unwieldy section formalizing connection semantics justify reasoning present structure security correctness proof psi opt showing specification psi opt val psi opt prin prin list sealed int list sealed int wys map list int requires fun par dups dups ensures fun let set let set set set opt trace signature establishes alice bob simultaneously execute psi opt start together par mode lists containing secrets without duplicates protocol terminates obtain results corresponding intersection sets protocol functionally correct prove properties beyond functional correctness also prove trace observable events run psi described function psi opt trace purely specificational function effect records boolean results every sec comparison performed run structure alice check bob given full characterization observable behavior psi opt trace terms inputs prove optimizations correct using relational reasoning also prove security hyperproperties relating traces multiple runs protocol goal prove noninterference delimited release property psi opt since attacker model model mentioned section aim prove security properties thirdparty network adversary psi perspective alice attacker aim prove two runs protocol alice input constant bob varies alice learns observing protocol trace allowed covering bob perspective symmetrically show two runs psi psi satisfy formula traces observed alice bob indistinguishable permutation type lset int type integer sets represented lists intersect intersect length length length length words alice bob learn intersection sets size set predicate delimits information released protocol far aware first formal proof correctness security huang optimized private proof style refinement via psi inefficient variant psi opt program running psi always involves exactly length length comparisons two nested loops prove following relational security property psi 
relating traces trace psi trace psi formal statement lemma prove shown val psi secure lemma requires ensures permutation trace psi trace psi reason traces psi permutation given alice prior knowledge choice representation bob set bob shuffle list carrying proof becomes evident alice bob learn size sets one compose psi opt protocols partially hide makes easy compose protocols simply composing functions principal principal set ffi const constant expression map value true false par sec seal reveal ffi mkmap project concat let fix else figure syntax traces alice observes equivalent formalize observation using probabilistic relational variant yet next step prove optimizing psi psi opt secure showing exists function trace psi trace psi opt trace psi opt computed length words trace produced psi opt computed using function information already available alice bob observes run secure unoptimized version psi optimizations reveal information present examples verification details section formalizing previous section presented examples verifying properties programs using logic however programs executed using semantics distributed semantics carried multiple parties properties verify using carry actual runs section present metatheory answers question first formalize semantics arguing faithfully realizes semantics including api presented section next formalize distributed semantics multiple parties use run programs theorems establish correspondence two semantics thereby ensuring properties verify using carry actual protocol runs mechanized metatheory presented section syntax figure shows complete syntax principal principal sets values denoted respectively constants language also include unit booleans ffi constants expressions include regular forms functions applications let bindings etc constructs among ones seen section expression mkmap mode context frame stack environment trace element trace configuration par component sec component protocol true false sealed fix par sec par par tmsg tscope figure runtime configuration syntax creates map principals principal set value computed project projects value principal map concat concatenates two maps host language constructs also part syntax including constants include strings integers lists tuples etc likewise host language called invocation function arguments ffi confers two benefits first simplifies core language still allowing full consideration security relevant properties second helps language scale incorporating many standard features libraries etc host language semantics semantics model semantics api semantics defines judgment represents single step abstract machine configuration consists mode stack environment trace expression syntax elements given figure value form represents host language ffi values stack environment standard trace mode discussed previous section semantics formalized style hieb felleisen redex chosen standard evaluation contexts prescribe evaluation order core rules given figure essence semantics extends standard reduction machinery lambda calculus direct correspondence pure fragment several constructs argue inspection constructs correspondence specifications wys monad despite eyeball closeness room formal discrepancy semantics static model let let let app asparret aspar par seal par par seal seal append tscope sealed assecret assec par sec sec seal seal sealed sec append tmsg par sec reveal reveal sealed mkmap par sealed sec mkmap concat proj par singleton sec project dom dom concat exec ffi ffi ffi figure semantics selected rules 
within wys monad leave future work formally proving correspondence semantics official semantics standard constructs let bindings let applications etc evaluate usual see rules let app mode traces play role rules aspar asparret reduce par expression arguments fully evaluated aspar first checks current mode par contains principals set pushes seal frame stack starts evaluating rule asparret pops frame seals result accessible principals rule also creates trace element tscope essentially making observations reduction visible principals see rules faithfully model api consider type par shown val par prins unit wys pre post wys sealed requires fun seal pre mode par ensures fun tscope post mode par unseal rule aspar implements line line rule asparret checks returned value rule also generates trace element tscope per technical reasons function closures may sealed see end section details line returns sealed value per return type api line next consider rules assec assecret see rules implement type sec shown rule assec checks precondition api rule assecret generates trace observation tmsg per postcondition api similar manner easily see rule sreveal implements corresponding postconditions given section rules mkmap proj concat implement map creation projection concatenation respectively map creation current mode par rule ensures parties access value requiring sealed value parties reveal rule also requires par sec mode parties map domain present current mode rule proj current mode par current party set must singleton equal index map projection whereas current mode sec index map projection must present current party set rule concat simply checks two maps disjoint range returns disjoint union two maps rule ffi implements ffi call calling function exec ffi expected calling hostlanguage function effect state concretely enforced monadic encapsulation effects present details exec ffi section remaining rules straightforward distributed semantics semantics implements judgments form protocol tuple maps principal local configuration maps set principals configuration ongoing secure computation kinds configurations local secure form per figure semantics principals evaluate program locally asynchronously reach secure computation point synchronize jointly perform computation semantics expressed four rules given figure state either principal take step local configuration secure computation take step principals enter new secure computation finally secure computation return result waiting participants first case covered rule par nondeterministically chooses principal configuration evaluates according local evaluation judgment given figure discussed second case covered sec evaluates using semantics last two cases covered enter exit also discussed local evaluation rules figure present local evaluation semantics express single principal behaves par mode mode always par local evaluation agrees semantics standard language constructs rules let app differs par expression principal either participates computation skips rules aspar lasparret handle case principal participates computation rules closely mirror corresponding semantics rules one difference rule asparret trace scoped semantics traces contain tmsg elements trace flat list secure computation outputs observed active principal principal skips computation result sealed value containing garbage rule aspar contents sealed value matter since principal allowed unseal value anyway rule seal intuition rule lreveal allows principal reveal value sealed rule mkmap requires value 
sealed value case current principal set rule requires access contents sealed value creates singleton map maps contents sealed value case current principal set rule simply creates empty map rule formal development actually shares code sets rules using extra flag indicate whether rule local joint proj projects current principal mapping map rule map concatenation straightforward case local rules perform secure computation parties need combine data jointly computation secure computations returning figure rule enter handles case principals enter secure computation requires principals must expression form sec local environment associated closure party local environment contains secret values addition public values conceptually secure computation combines environments thereby producing joint view evaluates combination define auxiliary combine function values follows combine combine combine combine sealed sealed sealed combine first two rules handle case one values garbage cases function picks value sealed values set function recursively combines contents combine function environments combines mappings pointwise combine functions values environments folding corresponding function consider following code let par alice fun let par bob fun let sec alice bob fun unseal unseal alice environment mapped sealed alice whereas bob environment mapped sealed alice similarly alice environment mapped sealed bob whereas bob environment mapped sealed bob secure computation environments combined producing environment mapped sealed alice mapped sealed bob secure computation function evaluated new environment although combine function written partial function metatheory guarantees runtime function always succeeds since principals computing program view data views structurally similar rule enter combines principals environments creates new entry map principals waiting secure computation finish rule exit applies secure computation terminated returns results waiting principals secure computation terminates value principal gets value slice slice function analogous combine opposite strips parts accessible cases slice function slice let let let app aspar asparret seal par seal append sealed aspar seal par seal seal seal sealed reveal sealed reveal mkmap sealed mkmap concat proj project dom dom concat figure distributed semantics selected local rules mode always par singleton par sec sec dom combine sec exit sec slice enter figure distributed semantics rules slice sealed sealed slice sealed sealed slice example consider following code let sec alice bob fun let seal alice since return value secure computation sealed alice bob get sealed alice produced using slice function result seal alice rule exit notation defined append tmsg returned value also added principal trace note observation value return point allowing closures sealed consider following example secure block alice environment would map seal alice seal bob whereas bob environment would map seal alice closure fun maps seal alice per target semantics environments combined combined environment gets value bob environment closure garbage value thus running program target semantics fails make progress found problem effort mechanizing semantics allow closures boxed plan fix problem future metatheory let par alice let par bob fun let sec fun let reveal let reveal goal show semantics faithfully represents semantics programs executed multiple parties according semantics proving simulation semantics semantics proving confluence semantics development mechanizes metatheory 
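The combine/slice pair described above can be illustrated with a small plain-Python model. This is the editor's sketch, not the F* definitions, and the value encoding is hypothetical: a garbage placeholder stands for data a principal does not hold, sealed values carry the set of principals allowed to open them, combine merges the two local views before a secure computation, and slice projects the joint result back to each principal.

```python
GARBAGE = object()   # placeholder for data a principal does not hold

def combine(v1, v2):
    """Merge two principals' views of the same value."""
    if v1 is GARBAGE:
        return v2
    if v2 is GARBAGE:
        return v1
    if isinstance(v1, tuple) and v1[0] == "sealed":   # ("sealed", prins, payload)
        _, ps, x = v1
        _, _,  y = v2
        return ("sealed", ps, combine(x, y))
    assert v1 == v2, "public parts of the two views must agree"
    return v1

def slice_for(p, v):
    """Project a joint value back to what principal p is allowed to hold."""
    if isinstance(v, tuple) and v[0] == "sealed":
        _, ps, x = v
        return ("sealed", ps, slice_for(p, x) if p in ps else GARBAGE)
    return v

alice_view = ("sealed", frozenset({"alice"}), 5)        # Alice holds her secret ...
bob_view   = ("sealed", frozenset({"alice"}), GARBAGE)  # ... Bob holds only a placeholder
joint = combine(alice_view, bob_view)
print(joint[2], slice_for("bob", joint)[2] is GARBAGE)  # 5 True
```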
presented section source semantics program returns parties target semantics call simulation define slice function returns corresponding protocol configuration component principal mapped slice protocol slicing values use slice function traces sliced follows slice tmsg tmsg slice slice tscope slice slice tscope slice expression source program components slice functions defined analogously say terminal par mode fully reduced value value empty similarly protocol terminal empty local configurations terminal simulation theorem following theorem simulation let set principals terminal exists derivation slice slice slice terminal notably principal value trace protocol slice slice value trace confluence state confluence theorem first define notion strong termination definition strong termination protocol strongly terminates terminal protocol written possible runs terminate number steps confluence result says theorem confluence terminal combining two theorems get corollary establishes soundness semantics semantics corollary soundness semantics let set principals terminal slice slice suppose source program prove result sealed alice soundness semantics conclude program run semantics may diverge terminates alice output also sealed alice principals outputs sealed alice aside correspondence results semantics also covers correspondence traces thus via vdsile embedding wysteria correctness security properties prove program using logic hold program actually runs course statement caveated produce actual implementation semantics details presented next section implementation section describes implementation begin describing interpreter proved core implements formal semantics adding confidence bugs introduced translation formalism implementation describe novel ffi programs easily take advantage features libraries host language interpreter formal semantics presented prior section mechanized inductive type style useful proving properties directly translate implementation therefore implement interpretation function step prove corresponds rules input configurations step implies according semantics core principal implementation stub function tstep repeatedly invokes step ast source program produced extractor run custom mode unless ast sec node functions step tstep extracted ocaml standard extraction process local evaluation defined sec stub implements amounts enter exit figure stub notices program reached sec expression calls circuit library written converts ast second argument sec boolean circuit circuit encoded inputs communicated server written using library due choi implements gmw mpc protocol server evaluates circuit coordinating gmw servers principals sends back result circuit library decodes result returns stub stub carries local evaluation formalization semantics including ast specification lines code formalization used metatheory well executable interpreter metatheory connects semantics section lines interpreter correctness proof another lines code interpreter step function essentially big current expression calls functions semantics specification tstep stub another lines size circuit library including gmw implementation lines stub implementation gmw circuit library extractor including custom mode part trusted computing base bugs could constitute security holes verifying components well especially circuit library gmw implementation open problems knowledge interesting future work ffi writing source program programmer access definitions ffi exports datatypes library functions programs explain extensible ffi 
mechanism enables programmers add new datatypes functions ffi module ensuring metatheory remains applicable source programming ffi datatypes functions added ffi module usual type list nil list cons list list val append list list tot list let rec append match append addition usual definitions requires programmer define corresponding slice combine functions section new datatype val slice list prin tot prin list tot list let rec slice list match slice list datatype defined program free use importing ffi module prove properties using standard reasoning monadic encapsulation effects ensures ffi functions interfere state mode trace example effect annotation tot append indicates append pure function hence modify use wys monad state ffi metatheory recall section formalize ffi calls using expression form ffi ffi values using value form semantics ffi calls using meta function exec ffi metatheory section needs relate semantics ffi calls purpose ffi programmer must prove added definitions meet certain obligations example one obligation slice value returned ffi call must match return value ffi call slice arguments formally ffi function slice exec ffi exec ffi slice fulfill obligation append function example programmer required prove following lemma val slice append lemma prin tot prin list list lemma ensures slice list append append slice list slice list programs call functions described section lemma easily provable obligations proven metatheory guarantees theorems section extend new datatypes functions ffi ffi implementation ffi module extracted ocaml via regular extraction rely metatheory conclude extracted code executes per specification custom extraction mode implemented identifies integer string constants source program extracts ffi ast form part ast value type ffi calls take arguments return datatypes extracted ffi ast form part ast expression type type exp ffi args list exp inj exp argument extracted name ffi function links extracted ocaml function explain inj argument shortly evaluating ast interpreter may reach ffi args inj node saw section interpreter calls library function exec ffi list values addition passes inj argument exec ffi function first embedded arguments function straightforward values shown value ast form unembed unit unembed ffi values host language unembed seal seal values seal passed ffi module access api hence must treat values abstractly exec ffi calls ocaml function arguments ocaml function returns result needs embedded back ast programs inspect type values fortunately compiler enough information extraction know result particular extractor compiles ffi call source program ffi node type information return value ffi call using information instruments ffi node injection function used runtime embed ffi call result back ast example result injection ocaml function fun unit return value interpreter value seal injection identity return value host value list tuple int injection creates ffi node exec ffi uses injection embed result back ast returns interpreter interface essentially provides form monomorphic interoperability dynamically typed interpreter host language foresee problems extending approach higher order using coercions pre captures preconditions inputs mentioned median spec idealized median specification versions able prove specification without extra hints code example monolithic version programmed let median let unit wys int requires fun mode sec alice bob ensures fun pre unseal unseal median spec unseal unseal fun median algorithm sec alice bob figure time run secs 
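For the joint-median example, the optimised variant follows the familiar two-comparison protocol for the median of two sorted pairs. The sketch below is the editor's plain-Python model, not the wys* program, and whether it matches the paper's exact optimised version is an assumption; it shows the comparison logic and checks it exhaustively against the definition of the lower median.

```python
import itertools

def median4(a1, a2, b1, b2):
    """Lower median (second smallest) of four distinct values; (a1, a2) and (b1, b2) sorted."""
    if a1 <= b1:                 # first comparison (run securely; its result is revealed)
        x, y = a2, b1
    else:
        x, y = a1, b2
    return x if x <= y else y    # second comparison

ok = all(
    median4(*sorted(v[:2]), *sorted(v[2:])) == sorted(v)[1]
    for v in itertools.permutations(range(8), 4)
)
print(ok)   # True
```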
normal optimized psi varying set sizes intersection densities applications private set intersection evaluate performance psi computing intersection single secure computation psi opt optimized version algorithms section programs benchmark slightly different ones presented local col row functions verified ones results shown figure measure time seconds per party set sizes intersection densities fraction elements common time taken unoptimized version independent intersection density since always compares pairs values however intersection density increases optimized version performs far better able skip many comparisons lower densities optimization improve performance algorithm essentially becomes quadratic setup cost secure computation takes note similar performance profile also noted rastogi although experiment set size like joint median program unoptimized optimized versions joint median programs take two distinct sorted inputs alice two distinct sorted inputs bob return median four unoptimized version whole computation takes place monolithic secure computation whereas optimized version breaks computation revealing intermediate results parts local hosts much like psi refer reader details algorithms versions prove functional correctness val median sealed alice int int sealed bob int int wys int requires fun mode par alice bob ensures fun pre unseal unseal median spec unseal unseal proving security properties unoptimized version prove observable trace tmsg result computation basically reflecting parties see final result optimized version first prove observable trace opt trace reveal reveal opt trace purely specificational function takes arguments alice reveal bob reveal inputs returns trace generated optimized median algorithm prove trace reveal final output proving following relational lemma val optimized median secure alice int int int int int int lemma requires pre pre median spec median spec ensures opt trace opt trace lemma says two runs optimized median arbitrarily different inputs alice bob input output median spec median spec observable trace also essentially trace reveal alice inputs beyond already revealed output also prove symmetrical lemma bob vary bob inputs keep alice input output proofs able prove automatically card dealing implemented card dealing application application play role dealer game online poker thereby eliminating need trust game portal card dealing application relies support secret shares using secret shares participating parties share value way none parties observe actual value individually party share consists bytes recover value combining shares secure block application parties maintain list secret shares already dealt cards number already dealt cards public information deal new card party first generates random number locally parties perform secure computation compute sum random numbers modulo let call output secure block secret shares declaring newly dealt card parties needs ensure card already dealt iterate list secret shares already dealt cards element list check different check performed secure block simply combines shares combines shares list element checks equality two values different previously dealt cards declared new card else parties repeat protocol generating fresh random number exports following api secret shares location sharing application moment run applications using secure server backend backend sec works literally sending code inputs separate server implements semantics directly server returns result cryptographic proof correctness party 
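The card-dealing logic described above can be modelled in plain Python. This is the editor's sketch with hypothetical names; plain integers stand in for the secret shares used in the wys* application. Each party contributes a locally generated random number, the sum modulo 52 is the candidate card, and it is accepted only if it differs from every previously dealt card.

```python
import random

def deal_fresh_card(num_parties, dealt):
    """Deal one fresh card among num_parties; `dealt` holds previously dealt cards."""
    while True:
        contributions = [random.randrange(52) for _ in range(num_parties)]  # each party, locally
        card = sum(contributions) % 52      # in wys* this sum is computed inside a sec block
        if all(card != c for c in dealt):   # freshness check against every dealt card
            dealt.append(card)
            return card

dealt = []
for _ in range(52):
    deal_fresh_card(3, dealt)
print(sorted(dealt) == list(range(52)))     # True: each of the 52 cards dealt exactly once
```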
verified use cryptography using technique similar conjecture server could useful trusted hardware based deployment scenario type type type type type type assume cansh int int conclusions val type ghost val type ghost prins val type wys requires fun sec ensures fun val comb type wys requires fun sec ensures fun type types shares values type implementation currently supports shares int values predicate enforces restriction source programs extending secret shares support types pairs straightforward functions marked ghost meaning used specifications reasoning purposes concrete code shares created combined using comb functions together specifications functions enforce shares created combined set parties comb recovers original value interpreter transparently handles details extracting shares gmw implementation choi reconstituting shares back comb addition implementing card dealing application formally verified returned card fresh signature function checks freshness newly dealt card follows abc set parties val check fresh list int mem abc int abc wys bool requires fun mode par abc ensures fun mem specification says function takes two arguments list secret shares already dealt cards secret shares newly dealt card function returns boolean true iff concrete value different concrete values elements list using verify implementation check fresh meets specification applications secure server implemented applications including paper proposed verified integrated language extensions vdsile new way implement language paper specifically applies idea design implement new mpc dsl hosted inherits basic programming model wysteria however virtue implemented vdsile provides several novel capabilities missing previous mpc dsls including wysteria first dsl enable formal verification source mpc programs also first mpc dsl provide partially verified interpreter furthermore programs freely use standard constructs datatypes libraries directly thereby making scalable usable capabilities constitute significant step towards making mpc practical trustworthy paper reported several mpc applications programmed verified correctness security references shamir rivest adleman mental poker springer yao generate exchange secrets focs goldreich micali wigderson play mental game stoc beaver micali rogaway round complexity secure protocols stoc bogetoft christensen geisler jakobsen nielsen nielsen nielsen pagter schwartzbach toft financial cryptography data security secure multiparty computation goes live bogdanov jemets siim vaht estonian tax customs board evaluated tax fraud detection system based secure computation financial cryptography data security springer berlin heidelberg kerschbaum schroepfer zilli pibernik catrina hoogh schoenmakers cimato damiani secure collaborative management computer kamm statistical analysis using secure multiparty computation dissertation university tartu malkhi nisan pinkas sella fairplay secure computation system usenix security huang evans katz malka faster secure computation using garbled circuits usenix viff virtual ideal functionality framework http malka vmcrypt modular software architecture scalable secure computation ccs nisan pinkas fairplaymp system secure computation ccs holzer franz katzenbeisser veith secure twoparty computations ansi ccs nielsen schwartzbach programming language secure multiparty computation plas nielsen languages secure multiparty computation towards strongly typed macros dissertation bogdanov laur willemson sharemind framework fast computations computer security 
esorics schropfer kerschbaum muller intermediate language secure computation compsac rastogi hammer hicks wysteria programming language generic multiparty computations proceedings ieee symposium security privacy liu huang shi katz hicks automating efficient secure computation ieee symposium security privacy oakland laud randmets language secure multiparty computation protocols proceedings acm sigsac conference computer communications security ser ccs new york usa acm online available http mardziel hicks katz hammer rastogi srivatsa knowledge inference optimizing enforcing secure computations proceedings annual meeting international technology alliance short paper consists coherent excerpts several prior papers aydemir bohannon fairbairn foster pierce sewell vytiniotis washburn weirich zdancewic mechanized metatheory masses poplmark challenge proceedings international conference theorem proving higher order logics ser tphols berlin heidelberg klein clements dimoulas eastlund felleisen flatt mccarthy rafkind findler run research effectiveness lightweight mechanization proceedings annual acm sigplansigact symposium principles programming languages ser popl new york usa acm yang chen eide regehr finding understanding bugs compilers proceedings acm sigplan conference programming language design implementation leroy formal verification realistic compiler commun acm bhargavan fournet kohlweiss pironti strub implementing tls verified cryptographic security ieee symposium security privacy oakland online available http polarssl verification kit http yang hawblitzel safe last instruction automated verification operating association computing machinery june swamy keller rastogi forest bhargavan fournet strub kohlweiss zinzindohoue beguelin dependent types multimonadic effects popl shamir share secret communications acm vol launchbury diatchki dubuisson efficient protocol secure multiparty computation icfp zahur evans language extensible computation unpublished http almeida barbosa barthe davy dupressoir grgoire strub verified implementations secure verifiable computation backes maffei mohammadi computationally sound abstraction verification secure computations iarcs annual conference foundations software technology theoretical computer science fsttcs meijer beckman bierman linq reconciling object relations xml framework proceedings acm sigmod international conference management data ser sigmod new york usa acm online available http bedrock coq library verified programming http coq development team coq proof assistant online available http choi hwang katz malkin rubenstein secure computation boolean circuits applications privacy marketplaces http huang evans katz private set intersection garbled circuits better custom protocols ndss atkey parameterised notions computation journal functional programming vol online available http nanevski morrisett birkedal hoare type theory polymorphism separation funct vol online available http benton simple relational correctness proofs static analyses program transformations proceedings acm symposium principles programming languages ser popl new york usa acm online available http clarkson schneider hyperproperties comput vol online available https sabelfeld myers model delimited information release software security theories systems second international symposium isss tokyo japan november revised papers barthe fournet strub swamy probabilistic relational verification cryptographic implementations annual acm symposium principles programming languages popl san diego 
usa january online available http felleisen hieb revised report syntactic theories sequential control state theoretical computer science vol henglein dynamic typing syntax proof theory sci comput vol online available http rastogi mardziel hammer hicks knowledge inference optimizing secure computation plas fournet kohlweiss strub modular cryptographic verification proceedings acm conference computer communications security
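To make the joint median example described earlier in this excerpt concrete, here is a small plain-Python sketch. It is my own illustration, not the paper's verified Wys* development, and it assumes the "median" of the four distinct values means the second smallest. The trace list plays the role of the observable trace: it records exactly the intermediate comparisons that the optimized version reveals, and intuitively those bits carry no more information than the final output itself, which is what the relational lemma sketched above captures.

# Hypothetical sketch, not the verified Wys* code from the paper.
# Alice holds a sorted pair (a1, a2), Bob holds a sorted pair (b1, b2),
# all four values distinct; "median" is assumed to mean the second smallest.

def median_spec(a1, a2, b1, b2):
    # monolithic specification: what a single secure computation would return
    return sorted([a1, a2, b1, b2])[1]

def median_optimized(a1, a2, b1, b2, trace):
    # optimized variant: two comparison outcomes are revealed (appended to
    # `trace`), so the remaining steps can run on locally known data
    c1 = a1 < b1
    trace.append(c1)
    x, y = (a2, b1) if c1 else (a1, b2)
    c2 = x < y
    trace.append(c2)
    return x if c2 else y

trace = []
assert median_optimized(1, 9, 4, 6, trace) == median_spec(1, 9, 4, 6)  # both give 4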
| 6 |
antifragility intelligent autonomous systems feb anusha mujumdar swarup kumar mohalik ramamurthy badrinath ericsson research bangalore abstract antifragile systems grow measurably better presence hazards contrast fragile systems break presence hazards robust systems tolerate hazards certain degree resilient systems like selfhealing systems revert earlier expected behavior period convalescence notion antifragility introduced taleb economics systems applicability illustrated biological engineering domains well paper propose architecture imparts antifragility intelligent autonomous systems specifically based argue architecture allows system uncovering new capabilities obtained either hazards opportunistic deliberation strategic case study autonomous wheeled robot presented show proposed architecture robot develops antifragile behaviour respect oil spill hazard introduction intelligent autonomous system adapt unforeseen situations answering question focus significant research effort last two decades however notion adaptivity far limited system capability cope unknown situations return best original performance work argue presence unforeseen stressors hazards offers system opportunity selfimprove presence hazards central notion antifragility term antifragility coined nicholas taleb book taleb argued opposites fragile systems robust resilient systems systems benefit hazards grow stronger result concepts relationships compelling illustrated many domains economics biology engineering however literature provide suggestion towards formalization concepts due concrete guidelines designing antifragile system paper address gap formalizing notions fragility robustness resilience antifragility context intelligent systems particular large subclass systems based upon reasoning planning show antifragile intelligent systems designed using refinement monitor analyse planning execution knowledge autonomic architecture suggested ibm ibm paper organized follows briefly review background work antifragility section section lays scope current study describes approach towards introducing antifragility concepts intelligent systems section details intelligent systems model followed formalizes concepts fragility antifragilility systems section proposes design modules antifragility augmented existing autonomic computing architecture intelligent systems section illustrate developed concepts help robot path planning example conclude section future directions related work systems degrade behavior acted upon external hazards stressors known fragile systems robust systems display tolerance hazards upto certain predesigned level show change performance magnitude hazard exceeds level system performance rapidly degrades example robust system building designed withstand earthquake particular magnitude resilient systems affected hazards return normal performance hiatus robust resilient systems studied extensively means allow intelligent systems cope hazards russell however systems performance best retains original behavior taleb notes certain systems fact benefit presence hazards systems need distinguished merely robust resilient systems abound nature immune system strengthens upon encounter antigen often retains developed coping capability permanently monperrus another example muscular strength developed stressor exercise jones increase strength developed improves capability muscle perform wide variety tasks many may unrelated exercise initially triggered environmental ecosystems taleb presence hazard form predator strengthens 
surviving prey population since learn past experience antifragile systems studied literature context communication systems lichtman infrastructure networks fang sansavini aerospace jones health care clancy cloud systems johnson gheorghe emphasize following characteristics antifragile systems systems thrive presence hazards opposed robust systems simply endure resilient systems merely survive return original performance best antifragile systems learn failure handle hazards failure may members system acted upon hazards system responds fashion improves performance time necessarily immediately improvement acquisition new capabilities caused deliberate strategies system opportunistic exploitation certain attributes hazard literature general provide formal definitions concepts fragility robustness resilience antifragility lichtman antifragility demonstrated mathematically particular hazard communication system definitions specific problem generic enough applicable systems scope approach paper wish formalize concepts antifragility context intelligent systems specifically consider class systems based planning systems fixed set actions given goals constraints attempt achieve goals deriving sequence called plan available actions executing plan intelligent system fragile hazards encountered execution may drive system states find plan achieve goals hand able find alternate plan lead goals matter hazards drive designated resilient note system able find plans avoid hazards altogether occurrence hazards affect plans system robust hazards possible many cases since hazards localized traffic jam corruption memory block crucial point note robustness resilience uses plans actions available system however antifragile systems definition must become stronger hazard occurs intelligent systems correlate strength available plans take position system stronger plans available implemented introducing new actions system consequence system fragile hazards may become robust resilient consistent observation antifragility literature new capabilities introduce redundancy brings beneficial effects resilience robustness modeling notion antifragility context intelligent systems difficulty antifragility literature new capabilities introduced external entities medicine realization already existing capabilities exercise however intelligent systems consider fixed sets actions access external source new ameliorate situation partitioning set actions visible hidden subsets point time plan synthesis algorithms access visible subset actions brought hidden set introduce new actions planner one argue approach artificial system default access actions results largest set plans possible however approach realistic strong rationale first due smaller visible set planning efficient importantly cost support planning execution actions sensors actuators predicates larger state space therefore advisable consider subset actions necessary current situation enhance subset need handle hazard accurately captured partition actions visible hidden sets antifragile intelligent system must therefore first decide actions made visible duration execute decisions show suitable refinement loop mentioned introduction refined action sets appropriate modules one obtain architecture design intelligent systems exhibit antifragility rest section give details base loop intelligent systems based planning implemented using loop see fig comprises monitor analyzer planner executor knowledge modules orchestrated loop autonomic manager monitor module responsible gathering 
relevant state information resources detect events interest information used analysis module determine need change current plan sequence actions executed change may necessitated due change goals system changes environment force course correction planning module called upon synthesize new plan changed execution module uses generated plan resume execution monitoring environment execution system derives knowledge environment possible responses store note generalization open intelligent systems certainly interesting leave direction future work edge used derive faster better responses system continues interact environment system model model intelligent system planningbased agent interacting environment interaction specified game environment supplies goal agent tries achieve goal plan sequence actions goal achieved environment supplies another goal thus game continues infinitum execution plan environment may change state system nondeterministically capturing notion hazards type response determines whether fragile robust resilient antifragile formally finite set boolean predicates system states defined valuations true false environment specified set goal states goal transitions also called missions set hazard transitions hazard referred hazard source hazard consequence special subset called waypoints intelligent system defined set actions act action specified hprecondition ecti pair precondition effect act induces deterministic labeled graph sact nodes edges precondition consistent effect denoting values predicates overridden effect sequence actions plan pair states starting initial state applying actions consecutively arrive goal state corresponding path denoted path assume system special reset action moves one waypoints every state waypoint reset sact minimal notations sufficient define following concepts succinctly definition plan fragile state path plan plan robust set hazards path plan resilient path implies plan fragile plan one goes state hazard occur hazard consequence state one achieve intended goal state robust plan completely avoids states hazards occur resilient plan switches another starting hazard consequence state hazard occurs plans illustrated figure dotted path left corresponds robust plan one extend definitions entire system definition intelligent system fragile mission plans fragile figure fragile robust resilient plans intelligent system robust set hazards every mission robust plan intelligent system resilient every mission plan resilient corollary intelligent system fragile either robust resilient robustness resilience independent concepts system may robust hazard robust plan every mission may plans enable hazard resilient similarly plans missions may resilient may single robust plan fragility robustness resilience refers given system however antifragility points capability systems evolve fragile fragile define notion antifragile system becomes stronger due occurrence hazards definition intelligent system antifragile set hazards fragile implies eventually becomes either resilient robust corollary definition whereas plan hazard consequence goal originally needs plan resiliency similarly new paths mission robustness implies need introduce new actions agent one see definition antifragility leaves number dimensions unspecified hazard alternate plan specify whether improvement new actions happen immediately future time whether improvement limited duration say till current goal permanent whether improvement specific hazard related larger set note decisions depend upon predictive 
capabilities system cost considerations must left designer implementation section describe refinements loop also suggest design decisions unspecified dimensions mentioned previous section guide system designers later denoted duration improvement current denotes mode handling robust resilient note recommendations system improve value denotes urgency improvement controls whether new actions necessary longer term figure intelligent system architecture knowledge knowledge module set actions observed section key idea introducing new actions partitioning actions act classes acte empowering actv visible acth hidden actions associated visibility predicate visiblea membership action actv acth determined boolean value visiblea current state visibility predicates set true empowering actions hazards though set false actions hazards apart actions knowledge base maintains history hazards occurred set visibility predicates called internal goals latter essentially predicates possibly achieved planning execution empowering actions manager make hidden actions visible future deliberate strategic way monitor normal role monitor access current state system detect anomalies monitor detects hazard inconsistency current state desired precondition case hazard records hazard knowledge base state reached execution previous action analyzer design decisions mentioned mostly analyzer module analyzer refers history hazards domain knowledge including cost etc could based upon analytics example taking account impact hazard achievement current goal outputs following event hazard planner given current state goal state set actions set hazards recommendation analyzer planner synthesizes plan using visible actions first determines current state space predicates used actv note since actv dynamic state space also changes dynamically expected monitor query environment find correct current state level detail accessing appropriate sensors processing modules outputting predicates plan synthesis may fail several reasons visible hidden actions may sufficient provide new plans pathological case external support mandatory may plans satisfying recommendation case best effort improvement done example robust plan found planner tries find resilient one robust plan found missions finds one current mission specific behavior depends upon planner policies executor executor module pretty simple given plan labeled sequence transitions sact executes actions associated labels sequence order taking help monitor detect hazards hazard executor halts execution signals manager hazard taken place appends hazard history knowledge base best case executor finishes executing plan without countering hazard signals environment plan completion environment issue new goal handled autonomic manager manager decides issue reset action issued directly executor executes plan executor executes reset action leads waypoint special goal state reset completion signal environment allows continue game new goal autonomic manager autonomic manager essentially orchestrator components greater role antifragile intelligent systems since life cycle longer loop reactive behavior manager follows first determines set hazards including present one needs handled predicting correlating present hazard issues new goal manager passes goal along actv corresponding current state hazards recommendations planner plan synthesis outputs hazard denotes immediate future improvement introduces hazard manager gets signal executor invokes planner first find resilient plan already current sactv plan 
exists invokes analyzer get possibly set hazards recommendations space reasons assume single hazard handled case multiple hazards routine though tedious manager invokes planner use visible hidden actions come plan minimum number hidden actions possible depending upon mode hazard possible synthesize plan records visibility predicates redv hidden actions manager issues reset executor separate parallel thread triggers planner achieve visibility predicates execute plan subsequently planning execution missions hidden actions enabled gradually manager triggers planner first achieve redv ensures new actions actv invoking current goal current visibility predicates toggled manager soon current goal achieved thus whenever hazard possibility robust resilient system hazard using extra hidden actions actions made visible ensures whenever new goal system able find plan robust resilient goal note completely different goals system may find plans sact hence similar exercise carried make actions visible antifragility literature noted certain systems may perform better hazards adrenaline rush enabling great strength stress conditions way attribute captured setup fact hazard may lead state certain visibility predicates set true hence agent access possibly number visible actions planning hand building resources enable strength immunity deliberate process captured internal planning empowering actions case study illustrate concepts discussed paper within path planning scenario consider wheeled robot autonomously navigating world warehouse mission split sequence goals next goal planned current goal successfully achieved goal nothing position robot reach sparse grid equally sized cells simplistic scenario robot following visible actions move turn using navigates sparse grid traditional path planning used compute optimal sequence actions take robot initial position given goal robot behavior also two actions smallmove smallturn two initially visible hidden actions two require additional support environment execution example need sensors locate robot finer grid therefore assume additional empowering actions typically needed one use hidden actions preconditions actions account requirement appropriate predicates execution follows computed plan oil spill unknown time planning encountered throws robot planned path hence original plan applicable shown point figure thus robot discovers encountered hazard autonomic manager uses external information camera images image analytics etc identify hazard oil spill uses domain knowledge derive fact move turn actions used oil spill therefore makes actions hidden sets visibility predicates smallmove smallturn internal goal planner uses prerequisite empowering actions prepare current state ultimately makes smallmove smallturn actions visible robot domain view actions enabled shown figure support world view note environment needs able provide right kind sensory information fine grid positions purposes illustration assume smallmove smallturn actions deterministic intended various possibilities exist replanning additional actions depends upon planner shown figure robot navigate within oil spill smallmove smallturn actions known state earlier plan red path come oil spill early possible navigate known state yellow path could directly find path current goal abandoning earlier plan mentioned getting oil spill may treated hazard response move turn actions made visible robot four actions disposal planning new paths figure shows second goal sequence first goal successfully reached note path 
avoids known oil spills due awareness new entity environment following path new oil spill exist may encountered case robot uses newly visible actions easily cope figure red curve shows path using new capabilities also note new actions may also used planned actions navigate known oil spills even original plan time possibly resulting lower cost path thus reaching second goal red oil spill known could still chosen path oil spill blue line using newly visible actions stretch appropriately mentioned implementation analyzer help knowledge module generalize specific oil spill hazard hazards ice slippery sand etc reasonable assumption referring sufficiently small steps slippery surfaces originally planned path hazard mode triggered hazard figure robot path planning example first goal sequence figure robot path planning example second goal sequence identifying hazard consequence state similar also knowledge surfaces similar properties similar uncovering hidden actions done hazards shows dimension antifragile architecture occurrence one hazard builds antifragility class hazards illustrative example shown possible intelligent system develop antifragility suggested refinements suitable design decisions base architecture summary work paper defined antifragility context intelligent systems based planning shown antifragile intelligent systems built using architecture though several additional features primarily available action set divided empowering visible hidden subsets depending strategy autonomic manager module system made robust resilient hazard different manners correctly capturing essence antifragility proposed architecture illustrated robotics path planning example robotic agent performs better encountering hazard one immediate problem arises must studied optimization selection process hidden actions made visible efficiency algorithms select minimal number new actions allow critical view systems observe improvement systems triggered hazards antifragility literature indirect way defining hazards violation cost metrics one bring improvements new actions result efficient plans even though system encounter unforeseen hazards simian army abid used inject faults create hazards improve systems similar spirit note work initial effort study concept antifragility context intelligent systems therefore rather preliminary undoubtedly rigorous implementation verification developed ideas needed addition notion performance improvement antifragility albeit intuitively true must quantified current context believe real value building antifragile systems become evident develop complex systems therefore distributed antifragility built population systems interesting future direction potential applications system improves presence hazards vast hope promise antifragile intelligent systems stimulate research within intelligent systems community references abid amal abid mouna torjmen khemakhem soumaya marzouk maher ben jemaa thierry monteil khalil drira toward antifragile cloud computing infrastructures procedia computer science antifragile clancy thomas clancy complexity flow antifragile healthcare systems jona journal nursing administration fang sansavini yiping fang giovanni sansavini emergence antifragility optimum postdisruption restoration planning infrastructure networks journal infrastructure systems barbara architecture adaptive intelligent systems artificial intelligence ibm architectural blueprint autonomic computing technical report ibm johnson gheorghe john johnson adrian gheorghe antifragility analysis 
measurement framework systems systems international journal disaster risk science jones kennie jones engineering antifragile systems change design philosophy procedia computer science antifragile lichtman marc lichtman matthew vondal charles clancy jeffrey reed antifragile communications ieee systems journal pages monperrus martin monperrus principles antifragile software companion first international conference art science engineering programming page acm russell stuart russell daniel dewey max tegmark research priorities robust beneficial artificial intelligence magazine taleb nassim nicholas taleb antifragile things gain disorder volume random house incorporated
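To illustrate the plan-level distinctions defined above, the following plain-Python sketch (my own illustration, not part of the paper) classifies a given plan path with respect to a set of hazards. Here hazards maps a hazard source state to its consequence state, and find_plan stands in for the planner over the currently visible action set.

# Illustrative sketch only; the state representation and helper names are assumptions.

def classify_plan(path, goal, hazards, find_plan):
    # path: the sequence of states visited by the plan
    # hazards: dict mapping each hazard source state to its consequence state
    # find_plan(state, goal): stand-in for the planner over the visible actions;
    #   returns a plan from `state` to `goal`, or None if none exists
    sources_on_path = [s for s in path if s in hazards]
    if not sources_on_path:
        return "robust"       # the plan avoids every hazard source state
    for s in sources_on_path:
        if find_plan(hazards[s], goal) is None:
            return "fragile"  # some hazard strands the agent short of the goal
    return "resilient"        # every hazard consequence admits a new plan

In the terms defined above, an antifragile system is then one whose autonomic manager reacts to a "fragile" verdict by making hidden empowering actions visible and replanning, so that the same check eventually returns "robust" or "resilient".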
| 2 |
polynomial algorithm balanced clustering via graph luis evaristo nadine jan january abstract objective clustering discover natural groups datasets identify geometrical structures might reside without assuming prior knowledge characteristics data problem seen detecting inherent separations groups given point set metric space governed similarity function pairwise similarities data objects form weighted graph adjacency matrix contains necessary information clustering process consequently formulated graph partitioning problem context propose new cluster quality measure uses maximum spanning tree allows compute optimal clustering principle polynomial time algorithm applied clustering required introduction objective clustering divide given dataset groups similar objects unsupervised manner clustering techniques find frequent application various areas including computational biology computer vision data mining gene expression analysis text mining social network analysis vlsi design web indexing name commonly metric used compute similarities items clustering task formulated graph partitioning problem complete graph generated similarity matrix fact many methods developed context detecting describing inherent cluster structures arbitrary point sets using distance function propose novel clustering algorithm based quality measure uses maximum spanning tree underlying weighted graph addresses balanced grouping minmax principle specifically aim detect clusters balanced respect research received funding projects junta galgo spanish ministry economy competitiveness work also received funding european union horizon research innovation programme marie grant agreement department applied mathematics university seville spain funded spanish government fpu grant agreement email lcaraballo department applied mathematics university seville spain email dbanez department applied mathematics university seville spain email nkroher figure illustration desired cluster properties ratios inner variance distance clusters balanced among groups clusters exhibit higher variance also apart clusters ratio variance distance data instances words allow clusters weaker inner edges formed located large distance clusters figure prove optimal clustering measure computed polynomial time using dynamic programming cluster properties typically desired grouping sensors wireless sensor networks group communicates sensors cluster streams information single command node located inside cluster power consumption sensors heavily depends distance command node ideally balanced consumption among sensors desirable consequently grouping balanced respect ratio interconnection sharing information clusters sharing information within cluster similar scenario occurs context task allocation cooperative robotics goal allocate tasks robots minimizing costs example monitoring missions using cooperative team unmanned aerial vehicles uavs goal minimize elapsed time two consecutive observations point area techniques based area partitioning achieve assigning uav according capabilities scenario load balanced clustering extends life agents allows perform task distributed manner another possible application area arises field music information retrieval several applications rely unsupervised discovery similar identical melodies melodic fragments context clustering methods used explore large music collections respect melodic similarity detect repeated melodic patterns within composition related work graph clustering refers task partitioning given graph set clusters 
way vertices within cluster strongly connected whereas clusters well separated number exact approximate algorithms proposed task targeting different types graphs directed undirected complete incomplete etc optimizing different cluster fitness values maximizing densities minimizing cuts complete overview existing strategies taxonomy refer algorithm proposed study operates maximum spanning tree graph idea using minimum maximum spanning trees working distances similarities respectively cluster analysis goes back far zahn demonstrated various properties indicate minimum spanning tree serves suitable starting point graph clustering algorithms proposes segmentation algorithm based local edge weight inconsistency criterion criterion revisited improved asano show optimal partitioning minimizing maximum distances partitioning maximizing minimum distance computed maximum minimum spanning trees context image processing propose dynamic programming algorithm segmenting images minimizes gray level variance resulting subtrees felzenszwalb introduced comparison predicate serves evidence cluster boundary provide clustering algorithm log time context gene expression data clustering proposed three algorithms partitioning minimum spanning tree optimizing different quality criteria problem statement let set points nodes metric space suppose exists function estimate similarity two nodes let matrix similarity computed every pair elements value similarity nodes node similar goal create groups similar nodes cluster dissimilar nodes separate clusters let weighted undirected graph induced similarity matrix set nodes sequel graphs simply referred graph set edges contains edge every unordered pair nodes weight function similarity nodes connected let cluster outgoing edges set denoted set edges connecting let maximum spanning tree let max min weights heaviest lightest edges respectively use following function quality measure cluster max max case min note higher values correspond worse clusters also note max consider min max min addition max consider max min let clustering formed clusters evaluate quality use quality worst cluster max denoting set possible clustering scenarios formed clusters state following optimization problem minmax clustering problem min subject figure representation graph bipartition crossing edges bipartition indicated dashed lines value unknown problem stated follows min subject find clustering minimum among possible clusterings one cluster irrespective number clusters contained properties optimal clustering note problems stated generalized connected necessarily complete graphs simply setting set possible partitions connected components lemma let graph let optimal clustering problem proof take denote let clustering induced connected components obtained removing cutting lightest edges since edge graph min max according properties therefore result follows showing next lemma recall definition crossing edge let graph let bipartition edge crossing edge see figure lemma let graph let optimal clustering problem let cluster every bipartition crossing edge maximum spanning tree proof prove contradiction let cluster cardinality greater let partition edge maximum spanning tree let heaviest edge crosses let spanning tree due assumption therefore adding results cycle edges cycle weight equal greater given properties maximum spanning tree let edge cycle connecting node others replacing obtain another maximum spanning tree containing thus contradiction max min contradiction lemma theorem let graph let optimal 
clustering problem every cluster maximum spanning tree subtree maximum spanning tree heaviest outgoing edge maximum spanning tree proof let cluster obviously lemma second part theorem claiming heaviest edge deduced properties maximum spanning tree graph following result directly deduced theorem corollary let graph let optimal clustering problem every cluster maximum spanning tree subtree maximum spanning tree heaviest outgoing edge maximum spanning tree figure graph spanning tree edges bold clustering represented dotted strokes obtaining optimal clustering cutting two edges maximum spanning tree let graph let spanning tree note every possible clustering valid clustering therefore see figure however valid clustering may feasible example clustering valid graph figure feasible cluster constitute connected component let valid cluster consider outt set outgoing edges described earlier restricted set edges forming example considering figure set outt contains edge however set contains analogously apply argument set inner edges cluster use analogous notations stt denote maximum spanning tree cluster using edges also note stt subtree determined nodes consequently stg outg previous explanation needed introduce following notions let spanning tree graph let clustering evaluation function operates usual restricted set edges forming therefore optimal solution problem every clustering theorem let graph let maximum spanning tree optimal clusterings problem respectively proof let first prove suppose every cluster contains single connected component every cluster min stg min stt follows properties maximum spanning tree moreover max outg max outt subgraph therefore implies suppose cluster contain single connected component illustration see figure cluster encompasses parts two connected components take edge connecting nodes two different connected components determined adding form cycle edges exists edge cycle maximum spanning tree consequently edges cycle note cycle includes edges edges cycle inner edges would connect nodes within connected component posing contradiction max min thus hand using lemma finally let prove every cluster min stg min stt max outg max outt demonstrated using properties maximum spanning tree therefore implies optimal clustering completes proof following result directly deduced theorem corollary let graph let maximum spanning tree optimal clusterings problem respectively consequence properties optimal problem obtained cutting appropriate edges maximum spanning tree see figure edges found combinatorially time thus using naive approach solution problem found time next section show algorithm solves problems polynomial time algorithm first recall theorem corollary provide nice property allows reduce problems graph maximum spanning tree consequently given similarity figure representation graph clustering dotted black strokes mark clustering boundaries maximum spanning trees clusters drawn red maximum spanning tree drawn blue two connected components partially covered cluster shaded gray graph operate maximum spanning tree use denote set edges maximum spanning tree observe every cluster determines one subtree using denote maximum spanning tree may confusing redundant therefore instead using use set edges connecting nodes following technical lemma crucial correctness algorithm lemma let clustering tree removing edge induce two clusterings one generated subtree see figure evaluations induced clusterings proof let denote two induced clusterings let removed edge see figure sake contradiction 
suppose one two induced clusterings evaluation greater assume let denote cluster containing one incident nodes one cluster another cluster note also affected removed contradiction since assuming let min recall consider max observe incident nodes may cluster see figure see figure suppose incident nodes cluster note min max therefore max min another contradiction figure obtaining two clusterings one per subtree removing edge given clustering removed edge inside cluster crossing edge suppose incident nodes different clusters case also let denote weight heaviest outgoing edge max evaluation therefore contradiction completes proof proposed algorithm based dynamic programming show stated problems optimal substructure construct optimal solution optimal solutions subtrees consider tree rooted arbitrary node let set children let parent recall empty say leaf node given tree let subtree let node minimum depth say rooted sequel consider subtrees rooted contain descendants vertices figure shows subtree rooted leaves hanging considered subtree figure shows example subtree considered addition say rooted contains descendants main idea algorithm work local clusterings subtree perform dynamic programming strategy two basic operations uptoparent knowing optimal clustering subtree compute optimal clustering subtree formed adding see figure figure subtree considered considered subtree representation clustering head cluster curve edges outs edges crossed clusters constitute headless clustering addchildtree knowing optimal clustering subtree rooted knowing optimal clustering subtree compute optimal clustering subtree formed joining see figure elaborate local clustering subtree rooted see figure clustering given cutting edges call cluster containing node head cluster denoted see figure note entirely contained however entirely contained node thus convenient introduce outs set outgoing edges connecting nodes figure outs formed edges stabbed curve given clustering subtree let weight heaviest edge outs max outs contains nodes descending outgoing edges cases set hand let weight lightest edge min formed single node empty cases set convenience introduce functions restricted quality measures cluster clustering respectively work usual restricted edges subtree thus note max every cluster head cluster usual evaluation restricted one value consequently restricted evaluation headless clustering max therefore restricted evaluation clustering max let subtree let denote set weight lightest edge head cluster min ready state encoding local solution invariant allows apply dynamic programming notation suppose empty clustering encoded ordered pair following properties fulfilled max outs min max outs min max outs empty indicates infinity value lemma optimal clustering evaluation according lemma clustering subtree used build therefore set min given subtree function whose domain image remark times convenient see table rows labels columns labels labeling edges lightest one heaviest one way refers cell value corresponding ordered pair using equations obtain encodes clustering necessarily unique max sake simplicity use following notation necessarily distinct function evaluation optimal clusterings problems min min respectively following lemma useful technical result figure let two different clusterings subtree head clusters formed nodes respectively let path three pictures outs note stabbed outs note stabbed note edge outs note stabbed lemma let subtree rooted let two different clusterings min min max outs max outs proof sake contradiction 
suppose max outs max outs observe max outs max outs since note max outs max outs implies max outs outs empty let one heaviest edges outs let denote path see figure none edges outs outs outs figure contradiction max outs max outs figure using min yields using max outs leads therefore contradiction suppose edge outs figure note max outs also note consequently therefore max outs contradiction since max outs max outs previous lemma following result deduced directly corollary let subtree given value every fulfills max outs following lemma key proposed dynamic programming lemma let subtree rooted let let encoded let subtree rooted removing edge induced let min replacing clustering encoded restoring edge new clustering obtained also encoded see figures proof using corollary max outq max outq using lemma max outq moreover using observations yields obviously need prove min min max outs max outs divide rest proof two parts according two possible situations going removed outs see figure see figure let start first case note replacing head cluster affected consequently max outs max outs min min let prove corollary enough prove every cluster every cluster also contained every cluster observation observation deduce let analyze second case let denote induced clustering remaining subtree rooted see figure note min min min min min min min min notice min min construction therefore min min corollary enough prove max outs every cluster note max outs max max outs max outs max outs max max outs max outs figure removing edge connects nodes different clusters initial situation induced clustering removed replacing another clustering restoring edge obtaining new clustering figure removing edge inside cluster initial situation induced clustering removed replacing another clustering restoring edge obtaining new clustering since max outs max outs therefore max outs max outs finally every cluster also contained every cluster observation observation deduce next subsections show perform operations uptoparent addchildtree order simplify formulas next subsections introduce following total order let two ordered pairs say say uptoparent computing let subtree let denote tree formed union section show compute assuming already know let computing figure subtree formed subtree subtree formed joining subtree subtree rooted figure construction clustering based one edge cut edge cut claim max max min proof let clustering encoded edge cut see figure using leads max note according lemma encoded max easy see max using equation max also easy see therefore max previous equation reduced max finally max max max claim proof impossible build clustering encoding cut cut head cluster claim min max proof let clustering encoded case cut see figure according lemma formed adding head cluster encoded note max outs max outs easy see max max max result follows claim proof let clustering encoded case consequently cut see figure according lemma formed adding head cluster encoded easy see max outs max outs result follows theorem let let subtree formed adding know function function computed time figure construction clustering based one edge cut edge cut proof think table see remark lets analyze time compute values every cell table claim computing values form takes time per cell cells form resulting total time claim values form take constant time per cell given cells form total time results claim computing values form takes time per cell cells form yielding total time finally claim computing values form takes constant time per cell cells form resulting 
total time result follows addchildtree computing let let subtree rooted contain let denote subtree results joining show compute let let subtree formed adding using claims previous subsection computing claim min max max proof let clustering encoded since edge cut see figure based two clusterings encodings easy see max outp max max outs max easy see max note also max max max rewritten max max claim max min max proof let clustering encoded note included consequently edge cut see figure furthermore based two clusterings encodings easy see max notice moreover max focus evaluation max max max max claim min max max min max max min proof let clustering encoded case two options build first one using two clusterings encodings respectively case cut case analogous previous claim corresponds encoding second one using two clusterings encodings respectively case cut case corresponds encoding prove using ideas similar ones used previous claims claim min max max min max max min proof let clustering encoded case cut otherwise head cluster evaluation greater based clusterings respectively two possible ways build lightest edge lightest edge cases formulas verified using ideas used previous claims theorem let let subtree rooted let denote subtree formed joining know functions function computed proof analyzing number cells claim complexity compute value cell case conclude computed complexity algorithm given tree value calculate computing otv every node leaves root procedure using mentioned operations note leaf otv otv compute function otv inner node proceed follows let set children first considering compute using uptoparent operation subsequently proceed joining subtrees tvi one one using addchildtree operation children added resulting subtree corresponds note apply single operation per edge consequently algorithm takes time note also algorithm obtain evaluation optimal clustering clusters optimal solution computed navigating backwards computed functions problem solved using idea slightly complex approach use similar algorithm based functions saving corresponds number clusters computing time references akyildiz sankarasubramaniam cayirci wireless sensor networks survey computer networks asano bhattacharya keil yao clustering algorithms based minimum maximum spanning trees proceedings fourth annual symposium computational geometry caraballo maza ollero strategy task allocation case study structure assembly aerial robots european journal operational research felzenszwalb huttenlocher efficient image segmentation international journal computer vision rauber maps music clustering neural nets wirn pages springer grygorash zhou jorgensen minimum spanning tree based clustering algorithms proceedings ieee international conference tools artificial intelligence ictai kroher pikrakis discovery repeated melodic phrases folk singing recordings ieee transactions multimedia submitted pending minor revision ollero maza multiple heterogeneous unmanned aerial vehicles springer publishing company incorporated schaeffer graph clustering computer science rieveiw olman minimum spanning trees gene expression data clustering genome inform uberbacher image segmentation using minimum spanning tree image vision computing zahn methods detecting describing gestalt clusters ieee transactions computers
| 8 |
apr fsz properties sporadic simple groups marc keilberg abstract investigate possible connection properties group sylow subgroups show simple groups well sporadic simple groups order divisible neither sylow groups previously established peter schauenburg present alternative proofs sporadic simple groups sylow subgroups shown conclude considering perfect groups available gap order show sylow introduction properties groups introduced iovanov arise considerations certain invariants representation categories semisimple hopf algebras known higher indicators see detailed discussion many important uses generalizations invariants applied drinfeld doubles finite groups invariants described entirely group theoretical terms particular invariants group property concerned whether invariants always gives group properties respect direct products example currently little reason suspect particularly strong connection proper subgroups direct factors indeed symmetric groups exist groups order therefore contains subgroups sufficiently large hand groups every proper subquotient even known connection one element comment following definition relatively weak paper establish simple improvements situation proceed establish number examples groups support potential connection sylow subgroups propose connection conjecture make extensive use gap atlasrep package calculations designed completed memory much less particular using implementation though cases larger workspace necessary cases calculations mathematics subject classification primary secondary key words phrases sporadic groups simple groups monster group baby monster group group lyons group projective symplectic group higher indicators fsz groups sylow subgroups work part outgrowth extended discussion geoff mason susan montgomery peter schauenburg miodrag iovanov author author thanks everyone involved contributions feedback encouragement marc keilberg completed workspaces memory available author ran code intel core cpu machine memory statements runtime made respect computer calculations dealing particular group completed matter minutes less though calculations involve checking large numbers groups take several days across multiple processors structure paper follows introduce relevant notation definitions background information section section present simple results offer connections property certain subgroups motivates principle investigation rest paper comparing properties certain groups sylow subgroups section introduce core functions need perform calculations gap also show groups order less except possibly order remainder paper dedicated exhibiting number examples support conjecture section show simple groups well sylow section show sporadic simple groups including tits group sylow subgroups summarized theorem case simple projective symplectic group handled section establishes second smallest simple group follows investigations schauenburg third smallest simple group susceptible methods schauenburg requires modifications methods complete reasonable time finish examples section examining perfect groups available gap show sylow subgroups indeed sylow necessity results also establish various centralizers maximal subgroups groups question also taken additional examples reader interested properties simple groups note schauenburg checked simple groups order except resolve several families simple groups established iovanov caution reader constant recurrence number sylow order paper currently computationally convenient coincidence anything else reasons mentioned 
course paper background notation let set positive integers study groups connected following sets definition let group define note cases particular letting fsz properties sporadic simple groups following serve definition szm property equivalence definitions follows easily corollary applications chinese remainder theorem definition group szm coprime order say group szm following result useful reducing investigation properties level conjugacy classes even rational classes lemma group bijection given coprime mod also bijection proof first part proposition slightly different notation second part corollary expressions form implicitly assume coprime order free replace equivalent value coprime whenever necessary moreover computing cardinalities suffices compute cardinalities instead latter fact useful attempting work groups large order groups centralizers easy compute especially group suspected union yields remark stronger conditions called szm condition also introduced iovanov szm condition equivalent centralizer every element order szm turn equivalent sets isomorphic permutation modules two element centralizer theorem satisfying constraints szm property action conjugation note property concerned certain invariants property concerned invariants integers invariants guaranteed another area research also considered example author shown quaternion groups certain semidirect products defined cyclic groups always includes dihedral groups semidihedral groups quasidihedral groups among many others example iovanov showed several groups families groups including regular sylow irregular prime power marc keilberg direct product groups indeed direct product szm groups also szm cardinalities sets definition split direct product obvious fashion mathieu groups symmetric alternating groups see also first item susan montgomery proposed use term instead similarly regular seem reasonable choices paper author stick existing terminology example hand iovanov also established groups exist using gap show exactly isomorphism classes groups order example author constructed examples szpj primes groups order minimum possible order combined show among things minimum order least minimum order least unknown exist however example schauenburg provides several equivalent formulations szm properties uses construct gap functions useful testing property using functions shown chevalley group sporadic simple group groups attacked directly using advanced computing resources often eye computing values indicators explicitly later present alternative way using gap prove groups sylow attempt compute actual values indicators however one consequence examples smallest known order nonf group groups order divisible readily available gap small number problematically large frequently convenient representations matrix groups far proven memory intensive need need permutation polycyclic presentations accessible calculations reasons examples pursue following sections hone property groups order divisible admit known reasonably computable permutation representations examples largest power dividing order monster group projective symplectic group perfect groups order exceptions obtaining property certain subgroups first elementary result offers starting point investigating szm groups minimal order lemma let group minimal order class szm groups implies proof smaller szm group contradiction fsz properties sporadic simple groups result applies szm groups class suitably closed taking centralizers example following version corollary let minimal order class szpj 
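Schematically, the verification behind the theorem above is a filtered exhaustive search over the SmallGroups library: cheap sufficient conditions dispose of most groups, and the conclusive test is only invoked when they fail. The Python-style sketch below is my own illustration of that loop, not GAP code; small_groups, quick_test and full_test are hypothetical stand-ins for the library access and for the functions immtests and fsztestz developed in this section.

# Hypothetical sketch of the search loop; all three callables are stand-ins.
def check_orders(orders, small_groups, quick_test, full_test):
    # orders: the group orders still needing attention after the reductions above
    failures = []
    for n in orders:
        for G in small_groups(n):       # all isomorphism types of order n
            if quick_test(G):           # cheap sufficient conditions for FSZ
                continue
            verdict = full_test(G)      # conclusive but expensive test
            if verdict is not True:
                failures.append((n, verdict))
    return failures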
implies example examples previous section know minimum possible order szp remains unknown examples szpj minimal order among szpj also know check group order suffices assume central next determine condition property normal subgroup implies property full group lemma let group suppose szm normal subgroup coprime szm proof let index assumption definitions gives desired result corollary let finite group suppose normal szpj sylow prime szpj corollary let finite group szpj sylow normalizer szpj sadly find actual use corollary examples consider paper however result lemma examples collect remainder paper suggest following conjectural relation property conjecture group sylow subgroups remarks conjecture may involve deep results establish affirmatively seems order consider group let suppose order power prime gpj gpj union runs distinct conjugates fixed sylow let ppxj gpj bijection special case szpj would bijection obviously guarantee bijection cons jugates attempting get bijection amounts via principle controlling intersections number conjugates many elements intersections contribute gpj gpj easy known way predict intersections collection sylow completely arbitrary positive affirmation conjecture impose certain constraint intersections marc keilberg moreover considered case sets one prime divisor cases order power fixed prime positive affirmation conjecture also expected show szm properties derived szpj properties prime powers dividing hand counterexample seems likely involve constructing large group exhibits complex pattern intersections sylow prime otherwise exhibits first example group szpj prime powers nevertheless example currently known groups either conjecture trivial nilpotent direct products sylow subgroups come perfect groups though relevant centralizers need perfect examples groups establish also come perfect groups process obtain via centralizers maximal subgroups considered example solvable group well example group neither perfect solvable examples course conform conjecture gap functions groups small order current gold standard general purpose testing properties gap fsztest function schauenburg certain specific situations function fsind also useful showing group however groups consider paper functions impractical apply directly principle obstruction fsztest function needs compute conjugacy classes character tables centralizers memory intensive wholly inaccessible task fsind primary obstruction beyond specialized usage case must completely enumerate store sort entire group centralizer quickly run issues memory consumption therefore need alternatives testing failure fsz properties sidestep memory consumption issues section also desire functions help detect eliminate obviously fsz groups need make various alterations fsztest incorporate things return useful value group first function need fsztestz identical uses several helper functions found instead calculating iterating rational classes group iterates center needs single input group checked finds group rather return false returns data established property particular importance values group shown test returns fail indicate test typically inconclusive fsztestz function div center order fsz properties sporadic simple groups exponent order gcd order length continue beta return return end function primarily useful testing groups minimal order class closed centralizers lemma corollary group center suspected failing property central value next desire function quickly eliminate certain types groups automatically following result groups small 
order helpful theorem let group proof lemma suffices run fsztestz groups smallgroups library gap library includes groups except order practice author also used function immtests introduced check size group constrained initially corollary increased whenever desired eliminate groups orders already completely tested boils quickly eliminating groups relatively small exponent using closure properties respect direct products one need consider certain subset orders question rather every single one turn avoid essentially groups note groups order take longest check entire process takes several days multiple processors otherwise straightforward marc keilberg define function immtests function implements easily checked conditions found guarantee property calls fsztestz encounters suitable function returns true test conclusively establishes group return value fsztestz conclusively determines group fail otherwise note whenever function calls fsztestz test conclusive corollary must adjust return value fail true immtests function return true return true ispgroup fsz exponent return true length exponent return true fsztestz return return true exponent fsz properties sporadic simple groups return true fsztestz return return true return true fsztestz return return true else exponent length length return true return end incorporate changes modified version fsztest give name note function also uses function beta corresponding helper functions inputs outputs fsztestz except test definitive returns true group fsztest function div immtests marc keilberg return order exponent order gcd order length continue check immtests continue return beta return return true end typical procedure follows given group take sylow find show second entry list returned fsz properties sporadic simple groups fsztest gives precisely value need provide value directly turns always take orders need necessarily hold order acquire values introduce function fsindpt variation fsind function essential limitation fsind needs completely enumerate store sort elements group could principle avoided cost increased however main use function apply sylow small enough order issue pop inputs group best one fact passes function compute centralizer regardless function looks element integer coprime order output two element list data exists otherwise returns fail indicate test normally inconclusive note lemma centrality need consider rational classes find fsindpt function elg npos enumeratorsorted elg imeresidues order npos computes marc keilberg npos npos npos intersection alist aulist check fou return fou return end lastly introduce function fszsetcards naive straightforward way computing inputs set group would could conjugacy class subset group elements integers output two element list counts number elements first entry number elements second entry left user check inputs satisfy whatever relations needed properly interpret output fszsetcards function apow aupow apow aupow apow aupow return end fsz properties sporadic simple groups long admits reasonable iterator gap function compute cardinalities minimal consumption memory polycyclic permutation group satisfies well conjugacy class therein however matrix group gap attempt convert permutation representation usually costly often speed execution permutation groups heavily impacted degree almost always worthwhile apply smallerdegreepermutationrepresentation whenever possible reader wishes use function group tested author would advise adding code would give ability gauge far along function default nothing code 
even interrupt execution check local variables tell calculation close completion due variety technical matters difficult precisely benchmark function checking large group advisable acquire least sense whether calculation may require substantial amounts time remark reader opt run code see results may occasionally find outputs fszsetcards occur opposite order list due certain isomorphisms presentations groups calculated gap always guaranteed identical every single time run code result values may sometimes coprime power often inverse executions code nevertheless issues function proving property thanks lemma sufficient predictability make order output variation naive fszsetcards suffice purposes uses completing hour less however section find example expected function measured weeks fsztest requires immense amounts says fsztest group consumed memory without completing therefore need slightly less naive approach achieve palatable runtime case leave section note reader method section uses also applied groups fszsetcards suffices reason bother introduce use fszsetcards method section relies able compute conjugacy classes hit memory consumption issues fszsetcards encounter goal functions find efficient procedure instead seek highlight ways computationally problematic groups may rendered tractable altering approach one takes show property groups demonstrated perhaps surprisingly short amount time little memory consumption sporadic simple groups goal section show chevalley group sporadic simple groups order divisible well sylow begin discussion general idea approach first point observation primes divides order groups indeed careful analysis marc keilberg groups order found shows several extensions normal group order denoted atlasrep notation consulting known maximal subgroups groups easily infer sylow form monster indeed maximal subgroup maximal subgroups containing copy sylow subgroups isomorphic furthermore monster sylow form extension elementary abelian group order extra special group order given suspect sylow cause groups exploit fact centers obtain centralizers parent group contain sylow case quickly find sylow show since necessarily show unfortunately turns normal either case cardinalities sets must checked directly rather simply applying corollary remaining groups require little work various reasons case monster unique conjugacy class yielding centralizer order divisible free pick subgroup contains centralizer order fortunately maximal subgroup known bray wilson also computed permutation representation available gap via atlasrep package makes necessary calculations monster accessible sylow fairly easily shown directly however centralizer get way large order sylow normal making impractical work personal computer however consultation character tables shows monster group unique conjugacy class element order whose centralizer divisible may pick convenient maximal subgroup centralizer turns maximal subgroup works construct appropriate element order using suitable elements sylow subgroups larger centralizer similarly get element turns sylow smaller subgroup normal must compute set cardinalities entire centralizer question however centralizer size initial one subsequently able calculate appropriate cardinalities hour baby monster handled using fact monster contains double cover centralizer involution obtain centralizer need centralizer author thanks robert wilson reminding fact lyons group idea much additional complication atlasrep package currently contain permutation representations resolve obtain 
permutation representation either computed directly gap fsz properties sporadic simple groups downloaded used construct suitable permutation representation maximal subgroup question done calculations proceed without difficulties calculations make extensive use functions given section chevalley group show sylow independently verified since relatively small order attacked quickly easily theorem simple chevalley group sylow proof claims follow running following gap code atlasgroup sylowsubgroup shows fsz fsztestz find fsindpt check fszsetcards output follows desired note normal indeed perfect group order call fszsetcards runs approximately seconds approximately amount time necessary run fsztest directly case use fszsetcards particularly efficient groups question reasonably small sizes permutation degree nevertheless demonstrates basic method employ subsequent groups group group idea proceeds similarly theorem simple group sylow proof establish claims suffices run following gap code atlasgroup sylowsubgroup work isomorphismpcgroup image marc keilberg find fsztestz fsindpt image inver segener lma pping image inver segener lma pping iso isomorphismpcgroup image iso fszsetcards image isoc image isoc code executes approximately minutes approximately spent finding final output conclude desired normal subgroup must test entire centralizer rather note indeed nonf necessity fact call isomorphismpcgroup fail means solvable particular perfect monster group consider monster group full monster group famously difficult compute detailed beginning section consulting character tables known maximal subgroups find maximal subgroup contains suitable centralizer indeed two suitable centralizers also admits known permutation representation theorem monster group sylow proof sylow order consulting character table see unique conjugacy class yielding proper centralizer order divisible unique conjugacy class element order whose centralizer order divisible moreover order latter centralizer precisely million particular divisible suffices consider maximal subgroups containing centralizers maximal subgroup shape normalizer associated class one choice first show sylow isop atlasgroup sylowsubgroup isomorphismpcgroup image fsztestz proper centralizer order divisible still impractical work use data construct element order mentioned fsz properties sporadic simple groups image inver segener lma pping sylowsubgroup center order want reducing compu ion ime iso image iso image isoc image isoc image isoc isop proceed con choice sylowsubgroup isomorphismpcgroup image fsindpt image iso image inver segener lma pping case compute relevant fszsetcards final function yields proves desired final function call takes approximately minutes complete preceding operations complete minutes conversion lower degree may take depending lower degree degree requires slightly memory acquire conversion skipped keep memory demands well execution time fszsetcards inflate approximately day half marc keilberg remark first definition containing full sylow second definition corresponding centralizer element order first centralizer thus times larger second one either one many orders magnitude smaller larger one still large work practical purposes baby monster consider baby monster theorem baby monster sylow proof baby monster well known maximal subgroup form follows isomorphic sylow theorem sylow immediately gives claim sylow character table see unique conjugacy class whose centralizer order divisible corresponds element order class centralizer order double 
cover centralizer covered centralizer element order centralizer necessarily order since contains maximal subgroup unique centralizer element order order divisible centralizers isomorphic already computed centralizer theorem obtain centralizer need quotient appropriate central involution notation proof theorem involution precisely gap automatically convert quotient group lower degree representation yielding permutation representation degree centralizer require much memory complete moreover image theorem quotient group yields representative class desire denoted using image quotient easily run fszsetcards get result shows desired final call completes minutes note final return values summed one values whereas sum neither zero reflects clear relationship properties group quotients even quotient cyclic central subgroup particular immediately follow quotient centralizer would yield property simply centralizer vice versa moreover also observe cardinalities computed theorem implies sylow extra elements obtained come conjugates underscores expected difficulties potential proof disproof conjecture lyons group exactly one sporadic group order divisible lyons group theorem maximal subgroup form faithful permutation representation points given action cosets fsz properties sporadic simple groups moreover maximal subgroup sylow proof contains copy maximal subgroup order divisible therefore isomorphic sylow theorem sylow subgroup checking character table find unique nonidentity conjugacy class whose corresponding centralizer order divisible particular order centralizer comes element order maximal subgroup containing element order whose centralizer order suffice maximal subgroup unique choice new difficulty default matrix group representations available though atlasrep package purposes however faithful permutation representations known constructed gap sufficient memory available provided one uses method detailed description acquire permutation representation points well downloads generators including meataxe versions courtesy thomas breuer found web courtesy pfeiffer using obtain permutation representation maximal subgroup points using programs available online atlas turn fairly easily converted permutation representation much smaller number points provided one memory available via smallerdegreepermutationrepresentation author obtained permutation representation points corresponding action cosets exact description generators fairly long reproduce author happy provide upon request one also proceed fashion similar cases handled find permutation representation smaller degree representation obtained easy apply methods show desired claims properties directly compute sylow find fsztestz fsindpt irrespectively set centralizer run fszsetcards returns gives desired nonf claims indeed fsztest applied centralizer maximal subgroup permutation representation obtained complete quickly thanks relatively low orders degrees involved also note centralizer obtained normal sylow perfect group maximal subgroup question neither perfect solvable normal sylow sporadic simple groups show sporadic simple groups sylow subgroups marc keilberg example group necessarily indeed corollary necessarily sylow subgroups satisfies conjecture implies following sporadic groups well sylow indeed mathieu groups janko groups group mclaughlin group held group rudvalis group suzuki group suz nan group conway group thompson group tits group example continuing last example follows following sporadic simple groups immediately compliance conjecture 
thanks corollary conway groups fischer groups monster baby monster lyons group group previous section showed last four groups sylow conform conjecture exponent considerations sylow subgroups conway fischer groups function fsztest used quickly show conform conjecture leaves largest fischer group theorem sporadic simple group sylow subgroups proof exponent calculated character table shown previously remarked automatically implies sylow subgroups indeed corollary suffices show every centralizer element order contains element order unique conjugacy class element order divisible centralizer element order isomorphic suffices consider elements order centralizer show centralizers every element centralizer order theorem result follows following gap code verifying claims atlasgroup fsz properties sporadic simple groups mod exp lcm exp setexponent exp sylowsubgroup many ways crude one random order order cents size following summarizes results sporadic simple groups theorem following equivalent sporadic simple group order divisible sylow subgroup sylow proof combine results section previous one symplectic group mentioned symplectic group likely second smallest simple group computer calculations ran issues checking particular centralizer character table needed excessive amounts memory compute methods far also place group extreme end reasonable principle procedure functions introduced far decide group estimated two weeks uninterrupted computations nominal memory usage however achieve substantial improvement completes task hours two processes hours single process maintaining nominal memory usage simple yet critical observation comes definition particular implies classcg fszsetcards acts naively possible iterates elements marc keilberg fact need iterate elements conjugacy classes whose power gap often compute conjugacy classes finite permutation polycyclic group quickly efficiently plausible finding conjugacy classes memory intensive certain centralizers nevertheless centralizers methods impractical either time memory reasons reduction conjugacy classes makes time memory consumption nonissue otherwise problematic centralizer precisely case see theorem projective symplectic group sylow proof usual first task show sylow use data obtained attack isop atlasgroup sylowsubgroup isomorphismpcgroup image show need est fsztestz get need fsindpt fsindpt one course store results fsztestz fsindpt directly see complete data returned extract specific data need show computing following code isog uinv image isog image isog image isog compute need power fsz properties sporadic simple groups one cases length length computes number uinv computes number uinv code shows therefore desired calculation takes approximately hours calculation takes approximately hours remaining calculations done significantly less combined time note calculations two cardinalities done independently allowing one calculated simultaneously separate gap processes also note centralizer consideration perfect group permutation group degree order billion sylow moreover shown found yields rational class fails one consequence combined character table unlike case monster group unable switch centralizer smaller sylow demonstrate property similarly baby monster group interesting note proof cardinalities quickly computed exactly simply restricted using slower fszsetcards primary difference multiple conjugacy classes check sum continuing next section consider small order perfect groups available gap wish note curious sorts lemma given let proof noted iovanov 
introducing concept group elementary consequence fact centralizes definition suppose calculated conjugacy classes whose power code iterate elements conjugacy classes order compute however preceding lemma shows could instead partition conjugacy class orbits action marc keilberg practical upshot need consider single element orbit order compute specific case preceding theorem show single conjugacy classes precisely million elements group order fact full centralizer moreover center generated order thus scenario partitioning conjugacy classes orbits result orbits elements cardinalities computed also observed multiples would constitute reduction four orders magnitude total number elements would need check scenario since also index seems plausible partition would produce substantial reduction number elements checked provided calculating orbits done reasonably quickly would expect significant reduction practical problem however problem far author tell efficient way gap actually compute partition evidently requires gap fully enumerate store conjugacy class question particular case conjugacy class million elements permutation group degree simply requires far much excess terabytes lemma sounds promising seems lacking significant practical use computer calculations seems likely author mind situation useful could handled reasonable time memory methods nevertheless author rule idea useful tool perfect groups order less look examples additional perfect groups library perfect groups stored gap perfect groups order less exceptions noted documentation iterate available groups time paper written use function immtests section show get get fectgr ispermgroup remove fsz immtests gives list perfect groups immediately dismissed fsz properties sporadic simple groups theorem perfect groups order less available gap perfect groups library exactly extensions seven four order three order perfect group ids library proof continuing preceding discussion apply fsztest groups flist obtain desired result calculation takes approximately two days total calculation time author computer easily split across multiple gap instances time spent groups orders hand also consider sylow subgroups available perfect groups test property theorem one perfect groups order less available gap perfect groups library following equivalent sylow subgroup sylow proof gap calculations need perform quick problem easily broken pieces prove difficult compute everything memory intensive case requires test significantly memory available cases simply tested fsztest masse establish result relatively matter hours sketch details leave interested reader construct relevant code recall generally worthwhile convert polycyclic groups gap via isomorphismpcgroup let glist constructed gap running perfect group easily construct sylow subgroups use immtests section eliminate cases sylow subgroups distinct perfect group immtests inconclusive exactly cases immtests definitively shows property precisely sylow perfect groups order sylow subgroups also apply fsztestz sylow perfect groups order conclude sylow subgroups remaining come perfect group order less shown applying fsztest without difficulty three remaining sylow subgroups one direct factor factor easily tested shown whence sylow subgroup leaves two cases sylow perfect groups ids second easily shown fsz fsztest first also marc keilberg tested fsztest case requires memory approximately minutes indicated case well sylow subgroups completes proof references john bray robert wilson explicit representations maximal 
subgroups of the Monster, Journal of Algebra.
Pavel Etingof, Properties of quantum doubles of finite groups, Journal of Algebra.
The GAP Group, GAP: Groups, Algorithms, and Programming.
M. Iovanov, G. Mason, S. Montgomery, FSZ-groups and Frobenius-Schur indicators of quantum doubles, Math. Res. Lett.
Yevgenia Kashina, Yorck Sommerhaeuser, Yongchang Zhu, On higher Frobenius-Schur indicators, Mem. Amer. Math. Soc.
Marc Keilberg, Examples of non-FSZ p-groups for primes greater than three, arXiv preprint.
Marc Keilberg, Higher indicators for some groups and their doubles, J. Algebra Appl.
Marc Keilberg, Higher indicators for the doubles of some totally orthogonal groups, Comm. Algebra.
C. Negron, Gauge invariants from the powers of antipodes, arXiv preprint.
Peter Schauenburg, Higher Frobenius-Schur indicators for pivotal categories, in: Hopf Algebras and Generalizations, Contemp. Math., Amer. Math. Soc., Providence.
Peter Schauenburg, Central invariants and higher indicators for semisimple quasi-Hopf algebras, Trans. Amer. Math. Soc.
Markus Pfeiffer, Computing a faithful permutation representation of the Lyons sporadic simple group, online note.
Peter Schauenburg, Higher Frobenius-Schur indicators for Drinfeld doubles of finite groups from characters of centralizers, arXiv preprint, in preparation.
Peter Schauenburg, Quasitensor autoequivalences of Drinfeld doubles of finite groups, Journal of Noncommutative Geometry.
R. Wilson, R. Parker, S. Nickerson, J. Bray, T. Breuer, AtlasRep: a GAP interface to the Atlas of Group Representations, refereed GAP package.
Robert Wilson, Peter Walsh, Jonathan Tripp, Ibrahim Suleiman, Richard Parker, Simon Norton, Simon Nickerson, Steve Linton, John Bray, Rachel Abbott, Atlas of Finite Group Representations.
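The GAP-based computations described in the body above revolve around a handful of primitives: extracting a Sylow p-subgroup, measuring its exponent and centre, and working through conjugacy classes and centralizers instead of raw elements. Below is a minimal Python/SymPy sketch of that scaffolding, standing in for GAP purely for illustration; it is not the FSZ/indicator test itself, and the choice of the group S_6 and the prime p = 2 is an arbitrary placeholder.

```python
from math import lcm
from sympy.combinatorics.named_groups import SymmetricGroup

def sylow_summary(G, p):
    """Basic invariants of a Sylow p-subgroup of G: order, exponent, centre order."""
    P = G.sylow_subgroup(p)
    exponent = 1
    for x in P.generate():               # full enumeration, only sensible for small subgroups
        exponent = lcm(exponent, x.order())
    return {"order": P.order(),
            "exponent": exponent,
            "centre_order": P.center().order()}

def centralizer_orders(G):
    """Order of the centralizer of one representative per conjugacy class of G."""
    return sorted(G.centralizer(next(iter(cls))).order()
                  for cls in G.conjugacy_classes())

if __name__ == "__main__":
    G = SymmetricGroup(6)                # small placeholder; the paper works with far larger groups
    print(sylow_summary(G, 2))           # the Sylow 2-subgroup of S_6 has order 16
    print(centralizer_orders(G))
```

The same pattern (Sylow subgroup, then per-class centralizer data rather than element-by-element enumeration) is what keeps the memory footprint manageable in the calculations discussed above.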
| 4 |
sceneflowfields dense interpolation sparse scene flow correspondences oliver georg christian didier dfki german research center artificial intelligence bmw group oct dfki bmw abstract scene flow methods use either variational optimization strong rigid motion assumption show first time scene flow also estimated dense interpolation sparse matches end find sparse matches across two stereo image pairs detected without prior regularization perform dense interpolation preserving geometric motion boundaries using edge information iterations variational energy minimization performed refine results thoroughly evaluated kitti benchmark additionally compared mpi sintel application automotive context show optional model helps boost performance blends smoothly approach produce segmentation scene static dynamic parts introduction scene flow describes perceived motion field respect observer thereby considered extension optical flow comparison apparent motion field image space many applications robot navigation high level vision tasks moving object detection driver assistance systems rely accurate motion estimation surroundings especially latter ones great potential make traffic comfortable much safer scene flow detailed representation real world motion compared optical flow scene flow algorithms also reconstruct geometry environment due increased complexity compared depth optical flow estimation scene flow recently become bigger interest time simply estimating depth optical flow separately obtain scene flow see exploiting full potential underlying data splitting produces incoherent results limits exploitation inherent redundancies general yields scene flow field approaches combine stereo depth estimation optical figure present sceneflowfields uses stereo image pairs extracts scene flow boundaries computes dense scene flow field compare ground truth color point clouds encodes optical flow flow clearly outperformed huge margin kitti benchmark approaches either designed indoor scenarios describe outdoor scenes mostly stationary rigid motion assumptions contrary method versatile fact employ regularization matching process inherently encodes first order local smoothness assumption solely data term based interpolation scheme allows sharp discontinuities scene flow field optional extension incorporates additional assumptions improves accuracy challenging figure overview sceneflowfields blue color indicates optional extension fic data sets like kitti mandatory part method basic version method optional extension illustrated figure single stages approach visualized supplementary detail present new scene flow approach called sceneflowfields densely interpolates sparse scene flow matches interpolation preserves boundaries geometry moving objects edgepreserving interpolation based improved edge detector approximate scene flow boundaries matches obtained propagation random search compute dense scene flow field turn filtered remove outliers leave sparse robust correspondences across images combination matching interpolation sums novel method strong contrast existing method estimates scene flow matching optionally used estimate obtain sparse motion indicators interpolated dense motion segmentation optionally using reconstruct scene flow static parts scene directly however method rely estimation particular contribution consists novel method find scene flow matches new interpolation method scene flow preserves boundaries geometry motion improved edge detector approximate scene flow boundaries optional approach 
straightforward integration thorough evaluation method kitti mpi sintel comparison methods related work starting vedula among first compute scene flow many variational approaches followed first using pure color images input later using images https tional formulation typically complex achieved realtime performance framework yet approaches sensitive initialization cope large displacements use scheme turn tends miss finer details furthermore approaches rely depth sensors either perform poorly outdoor scenarios accordingly expensive since hard capture ground truth scene flow information exist data sets evaluate scene flow algorithms use virtually rendered scenes obtain ground truth data best knowledge realistic data set provides benchmark scene flow kitti vision benchmark combines various tasks automotive vision introduction played important role development stereo optical flow algorithms extension also driven progress scene flow estimation due advent rigid plane model scene flow recently achieved boost performance majority top performing methods kitti vision benchmark employ model enforce strong regularization authors encode model alternating assignment pixel plane segment segment rigid motion based discrete set planes motions complexity model lowered assumption scene consists independently moving rigid objects thus plane segment needs assigned one object segments assigned object share motion propagation objects multiple frames achieves temporal consistency authors solve assignment assignment continuous domain another promising strategy builds decomposition scene static moving parts motion dynamic objects estimated solving discrete labeling problem using sgm algorithm perceived motion static parts directly obtained geometry scene camera approach especially convenient scenes small proportion consists moving objects like typically case traffic scenarios however assumption limits versatility method rigid plane model performs poorly applied deformable objects estimation highly dynamic scenes hard scene flow method differs mentioned approaches find sparse scene flow matches interpolated dense scene flow field recovering geometry scene motion method distinguished purely variational approaches although use variational optimization considered step refinement interpolation assume geometry scene modeled small planar segments initially presume segmentation fact size plane segments depends density matches leads smoothly curved shapes matches dense planar patches matches sparse holds affine motion model used interpolate motion differences model additionally difference optimization method draw clear boundary method apply optional model conceptual overlap also uses estimate motion static parts scene however apply model methods noteworthy similarities case way estimate motion segmentation differs essentially finally one differentiate approaches especially images kitti several characteristics make matching two frame pairs much challenging setting first considerably large stereo flow displacements second difficult lighting conditions many reflective surfaces third fast combined low frame rate causes large regions move image bounds kept mind comparing results across two categories sceneflowfields scene flow computation assume typical stereo image information provided two rectified stereo image pairs times along camera intrinsics assume baseline known rectified images baseline describes relative pose left right cameras translation parallel image plane represent scene flow vector consisting two optical flow 
components disparity values time steps matching jointly optimize four components obtain coherent scene flow given mentioned information estimate dense scene flow field follows subscales initialize coarsest finding best correspondences build feature vectors using wht scales subscales plus full resolution iteratively propagate scene flow vectors adjust random search afterwards dense scene flow map full resolution filtered using inverse scene flow field region filter filtered scene flow map thinned taking best match overlapping block scene flow boundaries detected using structured random forest geometry motion separately interpolated based neighborhood finally refine motion variational optimization overview outlined figure sparse correspondences matching cost matching cost algorithm solely depends data term additional smoothness assumptions made like given scene flow vector define matching cost sum euclidean distances siftflow features small patches three image correspondences correspondences stereo image pair time temporal image pair left view point standard optical flow correspondence cross correspondence reference frame right frame next time step leads following cost scene flow vector pixel patch window centered pixel returning first three principal components sift feature vector image pixel principal axes computed combined sift features four images image boundaries replicate boundary pixel pad images initialization initialization based similar three trees using wht features frame reference frame compute feature vector per pixel store tree initialize pixel reference image query feature vector pixel scene flow matches obtained comparing combinations leafs queried node according matching data term introduced equation since stereo image pairs rectified images observed right camera view create regard epipolar constraint queries tree return elements lie image row query pixel way efficiently lower number leaves per node epipolar trees speeds initialization process without loss accuracy acceleration use initialization coarsest resolution let propagation fill gaps evolving next higher scale propagation initial matches get spread propagation steadily refined random search done multiple scales helps distribute rare correct initial matches whole image scale run several iterations propagation one four image quadrants direction used equally propagation scene flow vector replaced propagated vector smaller matching cost case propagation along path continues existing scene flow vector iteration perform random search means pixels add uniformly distributed random offset interval pixel units current scale four scene flow components check whether matching cost decreases propagation random search help obtain smoothly varying vector field find correct matches even initialization slightly flawed different scale spaces simulate scaling smoothing images taking every pixel subsampling factor patches consist number pixels scales way prevent sampling errors operations performed exact pixel locations full image resolution smoothing done downsampling followed upsampling using lanczos interpolation note matching method already used use optical flow apply twice many dimensions search space consistency check matching procedure yields dense map scene flow correspondences across images however many correspondences wrong occlusions motion simply mismatching due challenging image conditions remove outliers perform consistency check first compute inverse scene flow field reference image right image time temporal order well 
points view swapped everything else remains explained consistency check optical flow disparity maps pixel compared corresponding values inverse scene flow field either difference exceeds consistency threshold image space scene flow vector gets removed secondly form small regions remaining pixels pixel added region approximately scene flow vector afterwards check could add one already removed outliers neighborhood following rule possible region smaller pixels remove whole region way obtain filtered final scene flow correspondences high accuracy outliers table times joint filtering matches removes disparity values necessary fill gaps additional values values result separate consistency check disparity matches separate check figure sparse correspondences left dense interpolation right optical flow disparities compute second disparity map sgm use threshold additional disparity values retrieved way accurate one standard consistency check much denser shown figure table dense interpolation sparsification interpolating filtered scene flow field recover full density additional sparsification step performed helps extend spatial support neighborhood interpolation speeds whole process blocks select match lowest consistency error filtering remaining matches called seeds respect interpolation interpolation boundaries crucial part interpolation estimation scene flow boundaries approximate motion boundaries optical flow edge detector edge detector trained semantic boundaries find models geometric boundaries well motion boundaries much better image edges much robust lighting shadows coarse textures gathered images kitti data set labeled semantic class information within images merged semantic classes general neither align geometric motion discontinuities lane markings road pole panel boundaries remaining semantic labels used binary edge maps train edge detector end utilize framework structured edge detection sed train random forest parameters paper except number training patches sample twice many positive negative patches training use bigger data set images higher resolution impact novel boundary detector evaluated section interpolation models interpolation geometry motion use two different models parts interpolated separately leads accurate closest seed share local neighborhood secondly distance closest seed constant offset neighboring seeds neglected therefore sufficient find labeling assigns pixel closest seed find local neighborhood seed use method distances seeds geodesic distances directly based edge maps boundary detector variational optimization figure whereas sed detects image boundaries new boundary detector suppresses lane markings shadows construction scene due fact separate consistency check disparity leaves geometric matches motion would leave image boundaries suppose local neighborhood seeds given unknown scene flow vector pixel geometric motion seeds respectively ngeo nmotion depth pixel reconstructed fitting plane seeds neighborhood ngeo done solving linear system equations neighboring seed points disparity values known using weighted least squares weights seed obtained gaussian kernel exp distance target pixel seed missing disparity value obtained plugging coordinates estimated plane equation similar fashion using neighborhood motion seeds nmotion missing motion obtained fitting affine transformation using weighted least squares motion seeds world coordinates motion seed time affine transformation twelve unknowns weights computed gaussian kernel geometric interpolation using distances 
target pixel motion seeds summarize full reconstruction scene flow pixel compute using plane model reproject point world space transform according associated affine transformation project back image space obtain neighborhood find local neighborhoods follow idea using approximations first closest seeds pixel closest seeds closest seed thus pixels refine motion interpolation use variational energy minimization optimize objective low cross edata edata esmooth motion represented image space optical flow change disparity energy consists three parts two data terms one temporal correspondence one cross correspondence adaptively weighted smoothness term regularization data terms use gradient constancy assumption experiments shown term color constancy assumption neglected edata low edata edata cross edata edata data terms contribute energy function otherwise indicates scene flow leaving image domain smoothness term esmooth penalizes changes motion field weighted edge value boundary detector pixel parts use charbonnier penalty achieve robustness since smoothness term rather enforces constancy data terms zero optimize scene flow pixels interpolated scene flow field leaves energy formulation inspired use linear approximations equations objective apply framework brox without steps find solution successive sor figure example motion segmentation sparse motion indicators obtained computation dense segmentation interpolation moving ground truth objects provided kitti model section show approach described far achieves results comparable special challenges kitti data set make additional optional assumption improve performance sceneflowfields following argue parts scene static thus motion areas fully determined observer given motion segmentation static dynamic areas apply inverse egomotion static points scene using matching interpolation scheme easily estimated almost additional effort estimation filtered scene flow field interpolation provides accurate matches across images compute correspondences reference frame temporally subsequent frame triangulation stereo matches limit depth correspondences meters disparity resolution farther distances gets inaccurate way obtain problem solve iteratively using ransac find relative pose left cameras time minimizing error correspondences ransac consider correspondence outlier error pixel first estimation recompute set inliers relaxed threshold pixels pose two stage process helps avoid local optima find diverse robust correspondences motion segmentation initial sparse motion segmentation directly obtained side product egomotion estimation outliers correspondences considered motion points conformity estimated marked static use interpolation compute dense segmentation figure pixels labeled moving spread object boundaries within detected segmentation binary labeling complex interpolation model needed unknown pixel gets assigned weighted mean local neighborhood weights based geodesic distances matches interpolation method similar estimator interpolated motion field thresholded obtain dense binary motion segmentation quality segmentation evaluated section finally inverse estimated egomotion applied points labeled static experiments results use explicit values previous sections following parameters experiments even across different data sets subscales full resolution run iterations propagation random search use consistency threshold minimal region size region filter interpolation use geometry motion neighborhoods seeds respectively gaussian kernel weight geodesic distances 
variational energy minimization set run two outer one inner iteration optimization framework iterations sor solver using relaxation factor threshold interpolated motion field obtain binary segmentation applying model boundary detection test impact motion boundary detector evaluate different variants method twice using standard edge detection second time using structured random forest trained semantic edges results compared table major improvements visualized figure high image gradients lane markings shadows especially shadows vehicles effectively suppressed using boundary detector time accurately detects kinds objects helps greatly smoothly recover street surface interpolation sharpen discontinuities depth motion general allows accurate boundaries interpolating motion segmentation kitti scene flow benchmark main experiments taken kitti scene flow benchmark results public submission presented table compare methods time writing method rank method prsm osf csf sceneflowfields prsf dual runtime variant density full var full var sed table results kitti scene flow benchmark column dual indicates whether two frame pairs used method run times parentheses using gpu achieve third best result among methods sceneflowfields yields especially good results foreground regions edges matches disparity table evaluation different parts method kitti training data new edge detector outperforms sed egomotion model helps greatly improve overall results bottom two rows show amount outliers sparse correspondences interpolation density computed respect available kitti ground truth ranked achieved best result methods time considerably faster top three performing methods table method generalizes better data sets shown section often outperform best method figure give visual example results compare two top performing methods categories seen interpolation produces sharp edges combination matching method helps obtain accurate scene flow especially moving objects methods comparable overall performance kitti perform worse moving foreground objects sceneflowfields apart official evaluation test different components method table evaluate effect part use training images kitti data set evaluate basic method without variational optimization var full basic approach full method optional extension additionally compute accuracy densities respect kitti ground truth sparse scene flow matches matches separately filtered sparse stereo correspondences disparity variational optimization primarily useful optical flow foreground variants using improved edge detector outperform according variant using basic image edges finally use provided object maps kitti test performance motion segmentation figure end compute precision recall binary segmentation precision defined percentage estimated pixels correctly labeled motion recall relative amount ground truth pixels labeled moving covered estimation frames achieve precision recall missed ground truth foreground pixels belong objects far away moving parallel direction viewing way error correspondences estimation threshold two remarks considered regarding precision first kitti annotates cars mostly visible pedestrians cyclists vehicles partly occluded cars included ground truth marked moving motion secondly since areas wrongly classified dynamic filled basic scene flow estimation still high quality tune favor high recall mpi sintel claim proposed method versatile restricted setup therefore additionally evaluated sceneflowfields mpi sintel without changing parameters difference prsm sceneflowfields 
osf figure exemplary visual comparison kitti scene flow benchmark show disparity optical flow results along corresponding error maps prsm osf sceneflowfields accurately detect moving objects reconstruct sharp boundaries examples visualized supplementary video public homepage kitti evaluation kitti use semantic edge detector trained kitti imagery instead sed obtain edge maps test basic approach extension method training frames two sequences subsequent frame exists processed final rendering passes images used measure percentage outliers according kitti metric disparity optical flow sequences cave sleeping left evaluated want compare due varying camera parameters relative amounts outliers evaluated sequences given table compared using results published results keep scene flow methods although tuned method mpi sintel sequences motion ambush bandage depth estimation even beats multiframe scene flow method ranked first kitti conclusion novel approach interpolate sparse matches dense scene flow achieves performance different data sets time submission sceneflowfields ranked third kitti achieves performance mpi sintel shown stochastic matching approach works higher dimensional search sequence prsm disparity osf fsf prsm optical flow osf fsf average alley alley ambush ambush ambush ambush ambush bamboo bamboo bandage bandage cave market market market mountain shaman shaman sleeping temple temple table results mpi sintel average outliers show sceneflowfields keep spaces applied consistency filters produce robust correspondences interpolation turned powerful tool fill gaps scene flow field due filtering cope missing correspondences across images applied optional egomotion model helps overcome issue future work want improve robustness extend sceneflowfields use multiple frame pairs references bailer taetz stricker flow fields dense correspondence fields highly accurate large displacement optical flow estimation international conference computer vision iccv basha moses kiryati scene flow estimation view centered variational approach international journal computer vision ijcv brox bruhn papenberg weickert high accuracy optical flow estimation based theory warping european conference computer vision eccv brox malik large displacement optical flow descriptor matching variational motion estimation transactions pattern analysis machine intelligence pami butler wulff stanley black naturalistic open source movie optical flow evaluation european conference computer vision eccv chen koltun full flow optical flow estimation global optimization regular grids conference computer vision pattern recognition cvpr derome plyer sanfourche besnerais approach optical flow computation using stereo german conference pattern recognition gcpr zitnick structured forests fast edge detection international conference computer vision iccv gaidon wang cabon vig virtual worlds proxy tracking analysis conference computer vision pattern recognition cvpr geiger lenz urtasun ready autonomous driving kitti vision benchmark suite conference computer vision pattern recognition cvpr sun computing fields via conference computer vision pattern recognition cvpr pattern matching using projection kernels transactions pattern analysis machine intelligence pami herbst ren fox flow dense motion estimation using color depth international conference robotics automation icra hermans floros https hirschmuller stereo processing semiglobal matching mutual information transactions pattern analysis machine intelligence pami hornacek fitzgibbon rother 
sphereflow dof scene flow pairs conference computer vision pattern recognition cvpr huguet devernay variational method scene flow estimation stereo sequences international conference computer vision iccv jaimez souiai cremers framework dense scene flow international conference robotics automation icra lenz ziegler geiger roser sparse scene flow segmentation moving object detection urban environments intelligent vehicles symposium liu yuen torralba sift flow dense correspondence across scenes applications transactions pattern analysis machine intelligence pami beall alcantarilla kira dellaert continuous optimization approach efficient accurate scene flow european conference computer vision eccv mayer ilg hausser fischer cremers dosovitskiy brox large dataset train convolutional networks disparity optical flow scene flow estimation conference computer vision pattern recognition cvpr menze geiger object scene flow autonomous vehicles conference computer vision pattern recognition cvpr neoral object scene flow temporal consistency computer vision winter workshop cvww rabe franke gehrig fast detection moving objects complex scenarios intelligent vehicles symposium revaud weinzaepfel harchaoui schmid epicflow interpolation correspondences optical flow conference computer vision pattern recognition cvpr ros ramos granados bakhtiary vazquez lopez perception paradigm autonomous driving winter conference applications computer vision wacv schuster bailer stricker combining stereo disparity optical flow basic scene flow commercial vehicle technology symposium cvts schuster kuschk bailer stricker towards flow estimation automotive scenarios acm computer science cars symposium cscs sun roth black quantitative analysis current practices optical flow estimation principles behind international journal computer vision ijcv taniai sinha sato fast stereo scene flow motion segmentation conference computer vision pattern recognition cvpr vedula baker rander collins kanade scene flow international conference computer vision iccv vogel schindler roth piecewise rigid scene flow international conference computer vision iccv vogel schindler roth scene flow estimation piecewise rigid scene model international journal computer vision ijcv wannenwetsch keuper roth probflow joint optical flow uncertainty estimation international conference computer vision iccv schenkenberger stricker towards reconstruction adapt rigid reconstruction movements international conference computer vision theory applications visapp wedel rabe vaudrey brox franke cremers efficient dense scene flow sparse dense stereo data european conference computer vision eccv davoine bordes zhao multimodal information fusion urban scene understanding machine vision applications mva yoshida stricker sensor depth enhancement automotive exhaust gas international conference image processing icip
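The interpolation stage described above fills each missing disparity by fitting a plane d(x, y) = a*x + b*y + c to the nearest seed matches with a weighted least-squares solve, where the weights decay exponentially with the (edge-aware geodesic) distance from the target pixel to each seed. The following NumPy sketch shows only that geometric part under simplifying assumptions: Euclidean distance stands in for the geodesic distance, and the kernel scale sigma, the seed layout, and all names are illustrative rather than taken from the authors' implementation.

```python
import numpy as np

def interpolate_disparity(px, seeds, sigma=10.0):
    """Fill the disparity at pixel px = (x, y) from seed rows (x, y, d)."""
    xy, d = seeds[:, :2], seeds[:, 2]
    dist = np.linalg.norm(xy - np.asarray(px, dtype=float), axis=1)   # geodesic distance in the paper
    w = np.sqrt(np.exp(-dist / sigma))                                # exponentially decaying weights
    A = np.column_stack([xy, np.ones(len(seeds))])                    # plane model a*x + b*y + c
    coeffs, *_ = np.linalg.lstsq(A * w[:, None], d * w, rcond=None)   # weighted least squares
    a, b, c = coeffs
    return a * px[0] + b * px[1] + c                                  # evaluate the plane at the target pixel

if __name__ == "__main__":
    seeds = np.array([[10.0, 10.0, 32.0],
                      [40.0, 12.0, 30.5],
                      [25.0, 40.0, 28.0],
                      [60.0, 35.0, 26.5]])
    print(interpolate_disparity((30.0, 25.0), seeds))
```

The motion part of the interpolation works analogously but fits an affine 3D transformation to the reprojected motion seeds instead of a plane, as described in the text.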
| 1 |
mar closeness testing discrete histogram distributions ilias university southern california diakonik daniel university california san diego dakane vladimir university edinburgh march abstract investigate problem testing equivalence two discrete histograms probability distribution piecewise constant set intervals histograms extensively studied computer science statistics given set samples two distributions want distinguish high probability cases main contribution paper new algorithm testing problem nearly matching informationtheoretic lower bound specifically sample complexity algorithm matches lower bound logarithmic factor improving previous work polynomial factors relevant parameters algorithmic approach applies general setting yields improved sample upper bounds testing closeness structured distributions well introduction work study problem testing equivalence closeness two discrete structured distributions let family univariate distributions problem closeness testing following given sample access two unknown distribution want distinguish case versus denotes distance distributions sample complexity problem depends underlying family example class distributions known optimal sample complexity max sample bound best possible family includes possible distributions may able obtain significantly better upper bounds natural settings example promised approximately algorithm test equivalence using samples sample bound independent support size dramatically better tight bound large supported nsf award career sloan research fellowship supported nsf award career sloan research fellowship supported university edinburgh pcd scholarship generally described framework obtain equivalence testers various families structured distributions continuous discrete domains results families distributions particular continuous domains known whether improved natural families discrete distributions paper work framework obtain new algorithms lower bounds state results full generality describe detail concrete application techniques case histograms family structured discrete distributions plethora applications testing closeness histograms probability distribution piecewise constant set intervals algorithmic difficulty testing properties distributions lies fact location size intervals priori unknown histograms extensively studied statistics computer science database community histograms constitute common tool succinct approximation data statistics many methods proposed estimate histogram distributions variety settings recent years histogram distributions attracted renewed interested theoretical computer science community context learning testing study following testing problem given sample access two distributions promised approximately distinguish cases versus main application techniques give new testing algorithm lower bound problem provide summary previous work problem followed description new upper lower bounds want closeness two goal understand optimal sample complexity problem function previous work summarized follows authors gave closeness tester sample complexity max best known sample lower bound max straightforwardly follows since simulate support distribution notably none two bounds depends domain size observe upper bound max tight entire range parameters example algorithm testing closeness arbitrary support distributions sample size max matching sample complexity lower bound constant factor simple example might suggest max lower bound tight general prove case main conceptual message new upper bound lower bound 
following sample complexity closeness two depends subtle way relation relevant parameters find fact rather surprising phenomenon occur sample complexities closely related problems specifically testing identity fixed distribution sample complexity learning sample complexity note sample bounds independent known tight entire range parameters main positive result new closeness testing algorithm sample complexity log log combined known upper bound obtain sample upper bound max min log log log main negative result prove lower bound min first term expression shows log factor appears sample complexity upper bound fact necessary constant power summary bounds provide characterization sample complexity histogram testing problem entire range parameters observations order interpret bounds goes infinity upper bound tight poly small term kick right answer sample complexity problem polylog terms log appearing sample complexity become equal exponential therefore new algorithm better sample complexity following subsection state results general setting explain aforementioned applications obtained results comparison prior work given family discrete distributions interested designing closeness tester distributions work general framework introduced instead designing different tester given family approach proceeds designing generic equivalence tester different metric metric termed positive integer interpolates kolmogorov distance turns range structured distribution families used proxy value example family distance tantamount distance thus obtain closeness tester plugging right value general closeness tester formally state results need terminology notation use denote probability mass functions distributions discrete support denote probability element distribution two discrete distributions distances ppn fix partition domain disjoint intervals partition reduced distribution pir corresponding discrete distribution assigns point mass assigns interval pir let collection partitions domain intervals def define qkak kpir qri context gave closeness testing algorithm using max samples also shown sample bound theoretically optimal constant factors adversarially constructed continuous distributions discrete distributions support size sufficiently large function results raised two natural questions optimal sample complexity testing problem function obtain tight sample lower bounds natural families structured distributions resolve open questions main algorithmic result following theorem given sample access distributions exists algorithm takes max min log log log samples distinguishes probability cases qkak explained using theorem one obtain testing algorithms closeness testing various distribution families using distance proxy distance fact univariate distribution family let smallest integer holds kak exists closeness testing algorithm sample complexity theorem applications upper bound distributions follows noting also note upper bound robust applies even finally remark general closeness tester yields improved upper bounds various families structured distributions consider example case consists kmixtures simple family discrete gaussians parameter large algorithm leads tester whose sample complexity scales theorem implies bound lower bound side show theorem let distributions let less sufficiently small constant tester distinguishes qkak must use samples min furthermore min tester distinguishes qkak must use samples even guaranteed piecewise constant distributions pieces note lower bound straightforwardly applies even 
khistograms dominates bounds also note general lower bound respect distance somewhat stronger matching term upper bound related work past two decades distribution property testing whose roots lie statistical hypothesis testing received considerable attention computer science community see two recent surveys majority early work field focused characterizing sample size needed test properties arbitrary distributions given support size two decades study regime many properties interest exist testers matched lower bounds many settings interest know priori underlying distributions nice structure exactly approximately problem learning probability distribution structural assumptions classical topic statistics see classical book recent book topic recently attracted interest computer scientists hand theory distribution testing structural assumptions less fully developed decade ago batu kumar rubinfeld considered specific instantiation question testing equivalence two unknown discrete monotone distributions obtained tester whose sample complexity domain size recent sequence works developed framework leverage structural assumptions obtained efficient testers number natural settings however several natural properties interest still substantial gap known sample upper lower bounds overview techniques prove upper bound use technique iteratively reducing number bins domain elements particular show merge bins together consecutive pairs significantly affect distance distributions unless large fraction discrepancy distributions supported bins near boundaries optimal partition order take advantage provide novel identity tester requires samples distinguish cases case large distance supported bins able take advantage small support essentially discrepancy supported bins implies distance distributions must reasonably large new lower bounds somewhat involved prove exhibiting explicit families pairs distributions one case large distance impossible distinguish two families small number samples cases explicit piecewise constant distributions small number pieces cases domain partitioned small number bins restrictions distributions different bins independent making analysis easier bins mass number samples bins serve purpose adding noise making harder read signal bins remaining bins either supported interval supported consecutive intervals three samples obtained one intervals order samples distributions come provide information family came unfortunately since triple collisions relatively uncommon useful unless max bins one zero samples tell nothing bins exactly two samples may provide information bins seen learn nothing ordering samples may learn something spacing particular case supported disjoint intervals would suspect two samples close far likely taken distribution rather opposite distributions hand order properly interpret information need know something scale distributions involved order know two points considered close overcome difficulty stretch distributions random exponential amount effectively conceal information scales involved long total support size distributions exponentially large closeness tester discrete domains warmup simpler algorithm start giving simpler algorithm establishing basic version theorem slightly worse parameters proposition given sample access distributions exists algorithm takes log log log log samples distinguishes probability cases qkak basic idea algorithm following distributions construct new distributions merging pairs consecutive buckets note much smaller domains size furthermore 
note distance partition intervals using essentially partition show kak almost large qkak fact hold unless much error supported points near endpoints intervals case turns easy algorithm detect discrepancy require following definitions definition discrete distribution merged distribution obtained def distribution partition define divided partition domain points obtained gluing together odd points even points note one simulate sample given sample letting definition let distributions integers let sum largest values begin showing either kak close qkak large lemma two distributions let merged distributions qkak kak proof let partition intervals qkak let obtained rounding upper endpoint interval except last nearest even integer rounding lower endpoint interval nearest odd integer note kak partition obtained taking points moving one interval another therefore difference twice sum points therefore combing gives result next need show two distributions large detected easily lemma let distributions positive integer exists algorithm takes samples probability least distinguishes cases note needed distinguish would require samples however optimal testers problem morally testers roughly actually distinguish viewpoint clear would easier test discrepancies since making easier tester detect difference general approach way techniques developed begin giving definition split distribution coming paper definition given distribution multiset elements define split distribution aspfollows let denote plus number elements equal thus therefore associate elements elements set define distribution support letting random sample given drawn randomly drawn randomly recall two basic facts split distributions fact let probability distributions given multiset simulate sample taking single sample respectively holds kps lemma let distribution multisets kps kps obtained taking samples kps also recall optimal closeness tester promise one distributions smal norm lemma let two unknown distributions exists algorithm input min draws samples probability least distinguishes cases proof lemma begin presenting algorithm algorithm input sample access pdf output yes let min let multiset obtained taking independent samples use tester lemma distinguish cases kps return result analysis simple lemma probability kps therefore number samples needed using tester lemma algorithm return yes appropriate probability kps since elements contribute total error least kps therefore case algorithm returns appropriate probability proof proposition basic idea algorithm following lemma qkak large either kak algorithm tests whether large recursively tests whether kak large since half support size need log rounds losing factor sample complexity present algorithm algorithm input sample access pdf output yes pkak def let distributions defined take log log samples sufficiently large use samples distinguish cases log probability error log using samples test test yields return otherwise return yes show correctness terms sample complexity note taking majority log log independent runs tester lemma run algorithm stated sample complexity taking union bound also assume tests performed step returned correct answer thus algorithm returns yes otherwise qkak repeated application lemma qkak kak last step support size kak therefore least must case log thus algorithm returns completes proof full algorithm improvement proposition somewhat technical key idea involves looking analysis lemma generally speaking choosing larger value total sample complexity decrease norm thus final 
complexity unfortunately taking might lead problems subdivide original bins error supported bins turn could worsen lower bounds however case total mass bins carrying difference large thus obtain improvement lemma mass bins error supported small motivates following definition definition probability distributions integer real number maximum sets size words biggest difference coming bins mass following lemma lemma let distributions let positive integer exists algorithm takes samples probability least distinguishes cases proof algorithm analysis nearly identical lemma include completeness algorithm input sample access pdf output yes let let multiset obtained taking independent samples use tester lemma distinguish cases kps return result analysis quite simple firstly assume kps happens probability choice next let set size probability choice elements land assuming case sufficient distinguish kps done samples completes proof prepared prove theorem basic idea behind improvement want avoid merging heavy bins first taking large set elements defining way involve merging elements sets proof first note given algorithm suffices provide algorithm algorithm following algorithm input sample access pdf output yes pkak let let sufficiently large constant let set log independent samples def let define distributions inductively follows flattening merging bins certain dyadic intervals intervals form obtained merging pair adjacent bins correspond intervals neither subintervals contains point obtained merging bins similar way take log log samples use samples distinguish cases log probability error log using samples test test yields return otherwise test kak using algorithm proposition return answer proceed analysis firstly note bins corresponds dyadic interval either containing element adjacent element therefore domain poly also note sample complexity log log log log log log log sufficient proceed prove correctness completeness easy see thus union bound pass every test algorithm returns yes probability remains consider soundness case case qkak case let partition intervals claim high probability choice every dyadic interval mass least contains endpoint also contains element prove note contain endpoints endpoint contained unique minimal dyadic interval mass least suffices show intervals mass least contains point follows easily union bound henceforth assume chose property let partition bins defined inductively obtained flattening assigning new bins partially overlap two intervals arbitrarily one two corresponding intervals note twice sum bins containing element turn inducting qkak kak therefore qkak either kak either case probability least algorithm detect reject completes proof nearly matching lower bound section prove nearly matching sample lower bound first show slightly easier lower bound holds even distributions piecewise constant pieces modify obtain stronger general bound testing closeness distance lower bound begin lower bound distributions moving discrete setting first establish lower bound continuous histogram distributions bound discrete distributions follow taking adversarial distribution example rounding values nearest integer order work need ensure adversarial distribution decrease much apply operation satisfy requirement guarantee distributions piecewise constant pieces length least proposition let sufficiently small fix min exist distributions pairs distributions pieces length least drawn deterministically drawn qkak probability samples insufficient distinguish whether pair drawn better probability 
lower bound construction proceeds follows divide domain bins information distributions samples drawn given bin ordering samples help distinguish cases otherwise unless least three samples taken bin question approximately bins mass might convey information least three samples taken bin however bins mass approximately used add noise take samples expect see approximately lighter bins least three samples however see approximately heavy bins three samples order signal overwhelm noise need ensure intuitive sketch assumes obtain information bins two samples drawn naively case distance two samples drawn bin independent whether drawn distribution however supported disjoint intervals one would expect points close far likely drawn distribution different distributions order disguise scale length intervals random exponential amount essentially making impossible determine meant two points close effect imply two points drawn bin reveal log bits information whether thus order information sufficient need log proceed formal proof proof proposition use ideas obtain lower bound using information theoretic argument may assume otherwise may employ standard lower bound samples required distinguish two distributions support size first note sufficient take distributions pairs piecewise constant distributions total mass probability running poisson process parameter insufficient distinguish pair pair construct distributions follows divide domain bins length bin independently generate random log uniformly distributed log produce interval within bin total length random offset cases supported union probability restrictions uniform time latter case drawn constant interval drawn constant interval mass coming random half coming half note cases piecewise constant pieces length least easy show high probability total mass drawn qkak least probability show one given samples taken randomly either shared information samples source family small implies one unable consistently guess whether pair taken let random variable uniformly random either let obtained applying poisson process parameter pair distributions drawn note suffices show shared information particular fano inequality lemma uniform random bit correlated random variable function least probability let samples taken ith bin note conditionally independent therefore proceed bound note integral pairs multisets representing set samples set samples thus split sum based value note distributions therefore probability selecting samples therefore contributes sum note distributions cases conditioning cases therefore case contribution note since independent note therefore probability exactly elements selected bin selected uniformly distributed although sets taken however probability elements taken least case case elements uniformly distributed uniformly therefore contribution shared information note therefore sum summing bins remains analyze case ignoring elements came identically distributed conditioned conditioned since distributions indistinguishable former case contribution terms shared information dtv dtv suffice show conditioned upon dtv log let order preserving linear function notice conditional may sample follows pick two points uniformly random assign points follows uniformly randomly assign points either distribution randomly either assign points points assign points points randomly pick apply get outputs notice four cases points coming points coming iii point preceding point point preceding point equally likely conditioned either however note ordering longer independent 
choice therefore sample subject subject way ordering deterministically consider running sampling algorithm select sampling sampling one four cases note dtv dtv variation distance random choices show small note distributed like means log uniform log log log similarly log uniform log log log differ total variation distance log log log taking expectation get log therefore may correlate choices made selecting two samples except probability log note conditioning uniformly distributed subintervals length least therefore distributions differ hence total variation distance conditioned conditioned log log completes proof turn lower bound testing distance discrete domains proof second half theorem assume sake contradiction case exists tester taking samples use tester come continuous tester violates proposition begin proving technical bounds parameters involved firstly note already lower bound may assume much less claim min nothing prove otherwise log thus nothing prove unless log case log log thus log log done let let specified proposition claim tester distinguish ones taken samples follows rounding nearest third integer obtain supported set size since piecewise constant pieces size least hard see kak qkak therefore tester distinguish kak used distinguish qkak contradiction proves lower bound stronger lower bound order improve bound last section need modify previous construction two ways contribution shared information coming case two samples taken bin first need different way distinguishing variation distance distributions obtained taking pair samples bin log rather log also need better method disguising errors particular current construction information coming pairs samples bin occurs two samples close happens samples usually come one poorly disguised noise coming heavier bins since particularly likely produce samples close improve way disguising different heavy bins better mask signal order solve first problems need following construction lemma let sufficiently large integer exists family pairs distributions following holds firstly deterministically supported disjoint intervals thus distance furthermore let family pairs distributions obtained taking letting words sample thought taking sample label consider distribution obtained sampling taking two independent samples let induced distribution along labels taken define similarly note equivalent taking sample labels dtv log proof note enough construct family continuous distributions deterministically supported intervals separated distance second condition holds rounding values nearest integer obtain appropriate discrete distribution construction straightforward first choose uniformly uniformly log uniformly sample take uniformly log return sample take uniformly log return clear supported disjoint intervals distance least remains prove complicated claim let distribution obtained picking pair distributions returning two independent samples let distribution obtained picking pair distributions returning independent samples claim dtv dtv sample points coming points come whereas one point comes points come hand cases pair samples comes let sample sample claim dtv dtv averaging definition particular consider following mechanism taking sample first randomly select values select two sample points finally sample defining value notice difference two final points depend choice fact making choices final distribution within uniform distribution pairs points distance thus close distributionally distribution pairs separation similar statement holds points 
separation thus dtv dtv desired next claim dtv dtv easily seen case averaging left bound latter distance chosen using similarly chosen using notice fix variation distance two distributions given distributions values log log therefore variation distance log times earth mover distance log log correlating variables expectation log tanh easily seen log shows dtv log completing proof ready prove first part theorem proof overall outline similar methods used last section sufficiently large integers going define families pairs probability random sample either consists two total mass distributions picked sample always two distributions picked sample distance probability letting outcome poisson process parameter run random sample either family used reliably determined unless define need define one family firstly let families distributions lemma let described lemma define another family pairs distributions follows first select point renormalized version return pair distributions equals uniform distribution define split blocks size sample assigns block independently random sample scaled factor probability probability probability sample assigns block independently probability probability probability easy see satisfy first three properties listed demonstrate fourth let uniform bernoulli random variable let obtained applying poisson process parameter sample sample show letting samples taken ith block note conditionally independent therefore information gained contribution leads total contribution remains consider contribution events note contribution cases block cancel therefore log hand least probability restriction block therefore contribution coming events hence total contribution terms completes proof references acharya diakonikolas hegde schmidt fast algorithms approximating distributions histograms acm symposium principles database systems pods pages acharya diakonikolas schmidt fast algorithms segmented regression proceedings international conference machine learning icml pages acharya diakonikolas schmidt density estimation time proceedings annual symposium discrete algorithms soda pages full version available https barlow bartholomew bremner brunk statistical inference order restrictions wiley new york batu fortnow rubinfeld smith white testing distributions close ieee symposium foundations computer science pages batu kumar rubinfeld sublinear algorithms testing monotone unimodal distributions acm symposium theory computing pages canonne survey distribution testing data big blue electronic colloquium computational complexity eccc canonne bins enough testing histogram distributions proceedings acm symposium principles database systems pods pages canonne diakonikolas gouleakis rubinfeld testing shape restrictions discrete distributions symposium theoretical aspects computer science stacs pages chan diakonikolas servedio sun learning mixtures structured distributions discrete domains soda pages chan diakonikolas servedio sun efficient density estimation via piecewise polynomial approximation stoc pages chan diakonikolas servedio sun density estimation time using histograms nips pages chan diakonikolas valiant valiant optimal algorithms testing closeness discrete distributions soda pages chaudhuri motwani narasayya random sampling histogram construction much enough sigmod conference pages daskalakis kamath tzamos clt poisson multinomials applications proceedings annual acm symposium theory computing stoc new york usa acm daskalakis diakonikolas donnell servedio tan learning sums independent integer 
random variables focs pages daskalakis diakonikolas servedio learning distributions via testing soda pages daskalakis diakonikolas servedio learning poisson binomial distributions stoc pages daskalakis diakonikolas servedio valiant valiant testing distributions optimal algorithms via reductions soda pages diakonikolas gouleakis peebles price testers optimal uniformity closeness electronic colloquium computational complexity eccc diakonikolas hardt schmidt differentially private learning structured discrete distributions nips pages diakonikolas kane new approach testing properties discrete distributions focs pages full version available diakonikolas kane nikishkin optimal algorithms lower bounds testing closeness structured distributions ieee annual symposium foundations computer science focs pages diakonikolas kane nikishkin testing identity structured distributions proceedings annual symposium discrete algorithms soda pages diakonikolas kane stewart efficient robust proper learning logconcave distributions corr diakonikolas kane stewart fourier transform poisson multinomial distributions algorithmic applications proceedings stoc diakonikolas kane stewart learning multivariate distributions corr diakonikolas kane stewart optimal learning via fourier transform sums independent integer random variables colt volume pages full version available diakonikolas kane stewart properly learning poisson binomial distributions almost polynomial time proceedings conference learning theory colt pages full version available devroye lugosi combinatorial methods density estimation springer series statistics springer devroye lugosi bin width selection multivariate histograms combinatorial method test freedman diaconis histogram density estimator theory zeitschrift wahrscheinlichkeitstheorie und verwandte gebiete gilbert guha indyk kotidis muthukrishnan strauss fast algorithms approximate histogram maintenance stoc pages groeneboom jongbloed nonparametric estimation shape constraints estimators algorithms asymptotics cambridge university press guha koudas shim approximation streaming algorithms histogram construction problems acm trans database indyk levi rubinfeld approximating testing distributions time pods pages jagadish koudas muthukrishnan poosala sevcik suel optimal histograms quality guarantees vldb pages klemela multivariate histograms partitions statistica sinica lugosi nobel consistency histogram methods density estimation classification ann lehmann romano testing statistical hypotheses springer texts statistics springer neyman pearson problem efficient tests statistical hypotheses philosophical transactions royal society london series containing papers mathematical physical character paninski test uniformity given discrete data ieee transactions information theory rubinfeld taming big probability distributions xrds scott optimal histograms biometrika scott multivariate density estimation theory practice visualization wiley new york thaper guha indyk koudas dynamic multidimensional histograms sigmod conference pages valiant valiant automatic inequality prover instance optimal identity testing focs willett nowak multiscale poisson intensity density estimation ieee transactions information theory
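Aside: the flattening above has stripped the paper's notation. For reference, the standard definitions that the preceding text relies on -- the ell_1 / total-variation distance, the reduced distribution with respect to an interval partition, and the A_k distance used as a proxy for ell_1 -- are restated below in LaTeX. This is a reconstruction of standard definitions from this line of work, not a verbatim restoration of the paper's own equations.

\[
  \|p - q\|_1 \;=\; \sum_{i=1}^{n} |p_i - q_i|,
  \qquad
  d_{\mathrm{TV}}(p,q) \;=\; \tfrac{1}{2}\,\|p - q\|_1 .
\]
Given a partition $\mathcal{I} = (I_1, \ldots, I_k)$ of the domain into $k$ consecutive intervals, the reduced distribution $p_r^{\mathcal{I}}$ assigns to point $j$ the mass $p(I_j)$, and
\[
  \|p - q\|_{A_k} \;=\; \max_{\mathcal{I} \in \mathcal{J}_k}
      \big\| p_r^{\mathcal{I}} - q_r^{\mathcal{I}} \big\|_1 ,
\]
where $\mathcal{J}_k$ denotes the collection of all such partitions into $k$ intervals. Note that $\|p - q\|_{A_k} \le \|p - q\|_1$ for every $k$, and if $p$ and $q$ are both $k$-histograms (piecewise constant on at most $k$ unknown intervals each), then $\|p - q\|_{A_\ell} = \|p - q\|_1$ already for $\ell = 2k$; this is why the $A_k$ metric can serve as a proxy for $\ell_1$ in the testers discussed above.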
| 10 |
arxiv sep haskell overlooked object system february oleg kiselyov fleet numerical meteorology oceanography center monterey ralf microsoft redmond abstract haskell provides parametric polymorphism opposed subtype polymorphism languages java ocaml contentious question whether haskell without extensions common extensions new extensions fully support conventional programming encapsulation mutable state inheritance overriding statically checked implicit explicit subtyping first phase demonstrate far get functional programming restrict plain haskell second major phase systematically substantiate haskell common extensions supports conventional features plus advanced ones including lexically scoped classes implicitly polymorphic classes flexible multiple inheritance safe downcasts safe arguments haskell indeed support width depth structural nominal subtyping address particular challenge preserve haskell type inference even objects functions advanced type inference strength haskell worth preserving many features get free type system haskell turns great help guide rather hindrance features introduced haskell oohaskell library based hlist library extensible polymorphic records labels subtyping library sample code patterned examples found textbooks programming language tutorials including ocaml object tutorial demonstrates code translates oohaskell way essentially without requiring global transformations oohaskell lends sandbox typed language design keywords functional programming object type inference typed objectoriented language design heterogeneous collections mutable objects programming haskell haskell structural subtyping duck typing nominal subtyping width subtyping deep subtyping kiselyov contents introduction folklore shapes example reference encoding oohaskell encoding discussion example classes interfaces extensibility encapsulation subtyping technicalities alternative haskell encodings map subtype hierarchy algebraic datatype map object data record types functional objects tail polymorphism mutable objects tail polymorphism subtypes composed record types overloading variation existential quantification variation heterogeneous collections oohaskell idioms objects records labels mutable variables hlist records test cases object generators constructor arguments computations implicitly polymorphic classes nested object generators open recursion instantiation checking reuse techniques single inheritance extension functionality single inheritance override orphan methods flexible reuse schemes safe value recursion oohaskell idioms upcasting narrow fixed type methods casts based dynamics casts based unions explicit type constraints nominal subtyping types width depth subtyping method arguments subtyping discussion usability issues usability inferred types usability type errors efficiency object encoding related work object encoding haskell language extensions object encodings haskell future work concluding remarks references haskell overlooked object system february introduction topic programming functional language haskell raised time programming language mailing lists programming tutorial websites verbal communication programming language conferences remarkable intensity dedicated haskell language extensions proposed specific idioms encoded haskell hughes sparud gaster jones finne shields peyton jones nordlander bayley interest topic restricted haskell researchers practitioners since fundamental unsettled question question addressed present relation subtype polymorphism research context 
specifically emphatically restrict existing haskell language haskell common extensions necessary new haskell extensions proposed substantiate restriction adequate allows deliver meaningful momentous answer aforementioned question detailed level offer following motivation research programming haskell intellectual sense one may wonder whether haskell advanced type system expressive enough model object types inheritance subtyping virtual methods etc general conclusive result available far practical sense one may wonder whether faithfully transport imperative designs say eiffel java haskell without totally rewriting design without interfacing language design perspective haskell strong record prototyping semantics encoding abstraction mechanisms one may wonder whether haskell perhaps even serve sandbox design typed languages one play new ideas without immediate need write modify compiler educational sense one may wonder whether less advanced functional programmers improve understanding haskell type system concepts looking pros cons different encoding options haskell anecdotal account collected informative pointers mailing list discussions document unsettled understanding programming haskell relation classes haskell type classes http http http http http kiselyov paper delivers substantiated positive answers questions describe oohaskell library today imperative programming haskell oohaskell delivers haskell overlooked object system key result good deal exploitation haskell advanced type system combined careful identification suitable object encoding instantiate enhance existing encoding techniques pierce turner abadi cardelli aiming practical object system blends well host language haskell take advantage previous work heterogeneous collections kiselyov hlist library generally put programming work hallgren mcbride neubauer neubauer simplified story following classes represented functions fact object generators state maintained mutable variables allocated object generators objects represented records closures component method methods monadic functions access state self use hlist record calculus extensible records use functionality program object typing rules deliver faithful convenient comprehensive object system several techniques discovered combined proper effort needed preserve haskell type inference programming idioms opposed explicit type declarations type constraints classes methods obtained result oohaskell delivers amount polymorphism type inference unprecedented proper effort also needed order deploy value recursion closing object generators achieving safety approach known challenge order fully appreciate object system oohaskell also review less sophisticated less favourable encoding alternatives oohaskell provides conventional idioms also several features either unattainable mainstream languages example classes class closures statically collection classes bounded polymorphism implicit collection arguments multiple inheritance sharing safe argument subtyping remarkable familiar features introduced fiat get free example type collection bounded polymorphism elements inferred automatically compiler also abstract classes uninstantiatable say program typecheck otherwise subtyping rules safety conditions method argument types checked automatically without programming part facts suggest haskell lends prime environment typed language design haskell overlooked object system february paper sec encode tutorial example oohaskell sec review alternative object encodings haskell beyond sec sec describe 
oohaskell idioms first part focuses idioms subtyping object types surface program code second part covers technical details subtyping including casts variance properties sec discuss usability issues related work future work sec conclude paper main sections sec sec written tutorial style ease digestion techniques encourage programming language design experiments extended source distribution folklore shapes example one main goals paper able represent conventional code straightforward way possible implementation system may feeble heart however user system must able write conventional code without understanding complexity implementation throughout paper illustrate oohaskell series practical examples commonly found textbooks programming language tutorials section begin shapes example face type shapes two subtypes rectangles circles see fig shapes maintain coordinates state shapes moved around drawn exercise shall place objects different kinds shapes collection iterate draw shapes turns example crisp reference encoding type shapes defined class follows class shape public constructor method shape int newx int newy newx newy source code downloaded http subject liberal license style writing actual code commits specific extensions ghc implementation haskell reasons convenience principle haskell classes functional dependencies sufficient shapes problem designed jim weirich deeply explored chris rathman see collection example code jim weirich http see also even heavier collection shape examples chris rathman http kiselyov fig shapes state draw method accessors int getx return int gety return void setx int newx void sety int newy newx newy move shape position void moveto int newx int newy newx newy move shape relatively void rmoveto int deltax int deltay moveto getx deltax gety deltay abstract draw method virtual void draw private data private int int coordinates private accessed getters setters methods accessing moving shapes inherited subclasses shape draw method virtual even abstract hence concrete subclasses must implement draw subclass rectangle derived follows haskell overlooked object system february class rectangle public shape public constructor method rectangle int newx int newy int newwidth int newheight shape newx newy width newwidth height newheight accessors int getwidth return width int getheight return height void setwidth int newwidth width newwidth void setheight int newheight height newheight implementation abstract draw method void draw cout drawing rectangle getx gety width getwidth height getheight endl additional private data private int width int height brevity elide similar derivation subclass circle class circle public shape circle int newx int newy int newradius shape newx newy following code block constructs different shape objects invokes methods precisely place two shapes different kinds array scribble loop draw move shape objects shape scribble scribble new rectangle scribble new circle int scribble draw scribble rmoveto scribble draw loop scribble exercises subtyping polymorphism actually executed implementation draw method differs per element array program run produces following output due implementations draw method kiselyov drawing drawing drawing drawing rectangle width height rectangle width height circle radius circle radius oohaskell encoding show oohaskell encoding happens pleasantly mimic encoding remaining deviations appreciated notably going leverage type inference define type code shall fully statically typed nevertheless oohaskell rendering shape class 
object generator shapes shape newx newy self create references private state newioref newx newioref newy return object record methods returnio getx readioref gety readioref setx writeioref sety writeioref moveto newy self setx newx self sety newy rmoveto deltay self getx self gety self moveto deltax deltay emptyrecord classes become functions take constructor arguments plus self reference return computation whose result new object record methods including getters setters invoke methods object self method invocation self getx others infix operator denotes method invocation objects mutable implemented via ioref stref also suffices since systems practical use mutable state oohaskell yet offer functional objects known challenging defer functional objects future work use extensible records hlist library kiselyov hence emptyrecord denotes name promises stands record extension construction label value labels defined according trivial scheme explained later haskell overlooked object system february abstract draw method mentioned oohaskell code used method neither dare declaring type side effect object generator shape instantiatable whereas explicit declaration abstract draw method made class shape uninstantiatable later show add similar declarations oohaskell continue oohaskell code shapes example object generator rectangles rectangle newx newy width height self invoke object generator superclass super shape newx newy self create references extended state newioref width newioref height return object returnio getwidth getheight setwidth setheight record methods readioref readioref writeioref neww writeioref newh draw implementation abstract draw method putstr drawing rectangle self getx self gety width self getwidth height self getheight rectangle records start shape records super snippet illustrates essence inheritance oohaskell object generation supertype made part monadic sequence defines object generation subtype self passed subtype supertype subtype records derived supertype records record extension potentially also record updates overrides modelled case elide derivation object generators circles circle newx newy newradius self super shape newx newy self returnio super ultimately oohaskell rendering scribble loop object construction invocation monadic sequence myoop construct objects mfix rectangle mfix circle kiselyov create homogeneous list different shapes let scribble conslub conslub nillub loop list normal monadic map shape draw shape rmoveto shape draw scribble use mfix analogue new reflects object generators take self construct part open recursion enables inheritance let scribble binding noteworthy directly place rectangles circles normal haskell list following possibly type check let scribble different types homogenise types forming haskell list end use special list constructors nillub conslub opposed normal list constructors new constructors coerce list elements bound type element types incidentally intersection types objects include methods invoked later draw rmoveto get static type error literally says result original carried native haskell way normal monadic list map normal haskell list shapes hence exercised faithful model subtype polymorphism also allows almost implicit subtyping oohaskell provides several subtyping models study later discussion example classes interfaces code misunderstood suggest class inheritance design option shapes hierarchy language one may want model shape interface say ishape rectangle circle classes implementing interface design would allow reuse 
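Aside: the flattened listings above are hard to read, so here is a minimal, self-contained Haskell sketch of the same pattern. It is an illustration only, not the actual OOHaskell API: it uses an ordinary Haskell record in place of the paper's HList-based extensible records, and the subclass-specific state (width, height, radius) is simply captured in a closure rather than stored in extra IORefs. It does, however, show the ingredients the text describes: an object generator that takes constructor arguments plus self, IORefs for private state, a record of monadic methods, mfix to close the open recursion, and an ordinary homogeneous list for the scribble loop.

import Data.IORef
import Control.Monad.Fix (mfix)

-- The object type: a record of monadic methods over hidden mutable state.
data Shape = Shape
  { getX, getY :: IO Int
  , moveTo     :: Int -> Int -> IO ()
  , rMoveTo    :: Int -> Int -> IO ()
  , draw       :: IO ()
  }

-- Object generator: constructor arguments, a concrete draw method, and self.
-- self enables open recursion: rMoveTo and draw go through the final object.
shape :: Int -> Int -> (Shape -> IO ()) -> Shape -> IO Shape
shape newX newY concreteDraw self = do
  xRef <- newIORef newX
  yRef <- newIORef newY
  return (Shape
    { getX    = readIORef xRef
    , getY    = readIORef yRef
    , moveTo  = \x y -> writeIORef xRef x >> writeIORef yRef y
    , rMoveTo = \dx dy -> do { x <- getX self; y <- getY self; moveTo self (x + dx) (y + dy) }
    , draw    = concreteDraw self
    })

-- "Subclasses" reuse the generator and plug in their own draw method.
rectangle :: Int -> Int -> Int -> Int -> Shape -> IO Shape
rectangle x0 y0 w h = shape x0 y0 drawIt
  where
    drawIt self = do
      x <- getX self
      y <- getY self
      putStrLn ("Drawing rectangle at " ++ show (x, y)
                ++ ", width " ++ show w ++ ", height " ++ show h)

circle :: Int -> Int -> Int -> Shape -> IO Shape
circle x0 y0 r = shape x0 y0 drawIt
  where
    drawIt self = do
      x <- getX self
      y <- getY self
      putStrLn ("Drawing circle at " ++ show (x, y) ++ ", radius " ++ show r)

main :: IO ()
main = do
  s1 <- mfix (rectangle 10 20 5 6)   -- mfix ties the self-referential knot
  s2 <- mfix (circle 15 25 8)
  let scribble = [s1, s2]            -- an ordinary homogeneous [Shape] list
  mapM_ (\s -> draw s >> rMoveTo s 100 100 >> draw s) scribble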
implementations accessors move methods one may want combine interface polymorphism class inheritance classes rectangle circle rooted additional implementation class shapes say shape hosts implementations shared among different shape classes incidentally part ishape interface remainder ishape interface namely draw method example would implemented rectangle circle generally designs employ interface polymorphism alone rare need provide encodings interface polymorphism class inheritance oohaskell one may say former mechanism essentially covered haskell type classes modulo fact would still need object encoding latter mechanism specifically covered original hlist oohaskell contributions structural subtyping polymorphism object types based polymorphic extensible records programmable subtyping constraints sec discusses nominal object types oohaskell haskell overlooked object system february extensibility encapsulation encoding oohaskell encoding shapes example faithful encapsulation premise well extensibility premise paradigm object encapsulates data state methods behaviour one may add new kinds shapes without rewriting perhaps even existing code premises subject unsettled debate programming language community especially regards functional programming basic paradigm criticised zenger odersky extensibility subtyping dimension neglect dimensions addition new functions subtyping hierarchy agree overall criticism avoid debate paper simply want oohaskell provide object encoding compatible established paradigm incidentally encodings sec show haskell supports extensibility data functionality dimension subtyping technicalities scribble loop means contrived scenario faithful instance ubiquitous composite design pattern gamma terms expressiveness typing challenges sort loop array shapes different kinds forces explore tension implicit explicit subtyping discuss relatively straightforward use polymorphism represent subtype constraints however less straightforward accumulate entities different subtypes collection explicit subtyping wrapping properly constrained existential envelope burden would side programmer key challenge oohaskell make subtyping almost implicit cases programmer would expect particular area oohaskell goes beyond ocaml leroy leading strongly typed functional language oohaskell provides range subtyping notions including one even allows safe downcasts object types something achieved ocaml date alternative haskell encodings oohaskell goes particularly far providing object system compared conservative haskell programming knowledge end put programming work section review conservative object encodings characteristics limitations require boilerplate code programmer conservative encodings come nevertheless involved enlightening fact full spectrum encodings documented certainly haskell context reckon detailed analysis kiselyov makes useful contribution furthermore several discussed techniques actually used oohaskell simply generalised advanced use haskell type classes hence present section incremental preparation main sections sec sec section limit haskell contrast oohaskell requires several common haskell extensions towards end section investigate value dismissing restriction map subtype hierarchy algebraic datatype begin trivial concise encoding distinguishing characteristic extreme uses basic haskell idioms encoding also seriously limited lacking extensibility regard new forms shapes sec define algebraic datatype shapes kind shape amounts constructor declaration readability use labelled fields 
instead unlabelled constructor components data shape rectangle getx gety getwidth getheight int int int int circle getx int gety int getradius int constructor declarations involve labelled fields position shape reusability dimension emphasised datatype level easily define reusable setters position issues regarding type safety address later instance setx int shape shape setx getx also define setters fields instance setwidth int shape shape setwidth getwidth also straightforward define functions moving around shapes moveto int int shape shape moveto sety setx rmoveto int rmoveto deltax getx gety int shape shape deltay moveto deltax deltay thanks lennart augustsson pointing line encoding http haskell overlooked object system february function drawing shapes properly discriminates kind shapes one equation per kind shape subtype polymorphism reduces pattern matching say draw shape draw rectangle putstrln drawing rectangle show getx show gety width show getwidth height show getheight draw circle putstrln drawing circle show getx show gety radius show getradius encoding trivial build collection shapes different kinds iterate shape drawn moved drawn main let scribble rectangle circle draw draw rmoveto scribble assessment encoding encoding ignores encapsulation premise paradigm foremost weakness encoding lack extensibility addition new kind shape would require code would also require amendments existing definitions declarations datatype declaration shape function definition draw related weakness overall scheme suggest way dealing virtual methods introduce type method base type potentially implementation define override method subtype would need scheme offers explicit implicit open recursion datatypes functions defined setters setx sety happen total constructors end defining labelled fields getx gety type system prevent forgetting labels constructor relatively easy kiselyov resolve issue slight disadvantage conciseness instance may avoid labelling entirely use pattern matching instead may also compose together rectangles circles common shape data deltas use single algebraic datatype shape implies functions defined total functions biased functions setwidth defined certain constructors beyond encoding model section possible increase type safety making type distinctions different kinds shapes also encounter challenge subtype polymorphism map object data record types folklore technique encoding extensible records burton use model shapes hierarchy haskell simple type classes let implement virtual methods meet remaining challenge placing different shapes one list making different subtypes homogeneous embedding shape subtypes union type haskell either begin datatype extensible shapes shapetail data shape shape getx int gety int shapetail convenience also provide constructor shapes shape shape getx gety shapetail define setters movers possible extensions shape simply leaving extension type parametric actual equations literally previous section show different parametrically polymorphic types setx int shape shape sety int shape shape moveto int int shape shape rmoveto int int shape shape presence type variable expresses earlier definitions shape clearly instantiated subtypes shape draw function must placed dedicated type class draw anticipate need provide implementations draw one may compare style one explicitly declares method pure virtual class draw draw shape haskell overlooked object system february shape extensions rectangles circles built according common scheme show details rectangles begin definition 
data delta contributed rectangles delta polymorphic tail data rectangledelta rectangledelta getwidth int getheight int rectangletail define type rectangles instance shape type rectangle shape rectangledelta convenience provide constructor rectangles fix tail rectangle delta could still instantiate rectangle define new constructors later necessary rectangle shape rectangledelta getwidth getheight rectangletail definition setters involves nested record manipulation setheight int rectangle rectangle setheight shapetail shapetail getheight setwidth int rectangle rectangle setwidth shapetail shapetail getwidth draw function defined draw instance instance draw rectangledelta draw putstrln drawing rectangle show getx show gety width show getwidth shapetail height show getheight shapetail difficult part scribble loop easily form collection shapes different kinds instance following attempt wrong homogeneous element type let scribble rectangle circle kiselyov relatively simple technique make rectangles circles homogeneous within scope scribble list clients establish union type different kinds using appropriate helper tagshape embedding shapes union type haskell either may construct homogeneous collection follows let scribble tagshape left rectangle tagshape right circle boilerplate operation embedding trivially defined follows tagshape shape shape tagshape shapetail shapetail embedding tagging clearly disturb reusable definitions functions shape however loop scribble refers draw operation defined rectangledelta circledelta union two types provide trivial boilerplate generalising draw instance draw draw draw either draw eithershape draw draw instance actually suffices arbitrarily nested unions shape subtypes eithershape variation normal fold operation unions either discriminate left right cases tail shape datum boilerplate operation independent draw specific shape eithershape shape shape shape either eithershape case shapetail left shapetail right shapetail draw instance either makes clear use union type intersection type may invoke method union may invoke method either branch union instance constraints make fact obvious assessment encoding encoding ignores encapsulation premise paradigm methods encapsulated along data encoding basic extensibility problem previous section introduce new kinds shapes without rewriting recompiling code haskell supports unions prelude type name either two constructors left right branches union haskell overlooked object system february patterns code may require revision though instance program points insert collection downcast must agree formation union type specific subtypes new subtype must covered scattered applications embedding operations must revised fail put haskell type inference work far object types concerned end defining explicit datatypes encoded classes acceptable mainstream point view since nominal types explicit types dominate paradigm however haskell would like better allowing inference structural class interface types subsequent encodings section share problem contrast oohaskell provides full structural type inference annoying enough formation collection requires explicit tagging elements left right worse tagging done delta position shape makes scheme noncompositional new base class requires functions like tagshape eithershape encoding final virtual methods differs essentially former encoded parametric polymorphic functions parameterised extension type virtual methods encoded polymorphic functions overloaded extension type changing final method virtual 
vice versa triggers code rewriting may overcome making methods virtual using default type class methods reuse implementations however bias increase amount boilerplate code instances either subtyping hierarchy leaks encoding accessors setwidth derivation chain base type shows nesting depth record access pattern one may factor code patterns access helpers overload accessors coded uniform way complicate encoding though approach restricted single inheritance functional objects tail polymorphism far defined methods separate functions process data records hence ignored encapsulation premise data methods divorced thereby able circumvent problems self references tend occur object encodings also avoided classic dichotomy mutable functional objects complement picture exploring functional object encoding section mutable object encoding next section continue use records functional object encoding object types necessarily recursive mutating methods modelled record components return self fact kiselyov technique use types pierce turner must use types instead since haskell lacks types extensible shapes modelled following recursive datatype data shape shape getx gety setx sety moveto rmoveto draw shapetail int int int int int int shape shape int shape int shape type reflects complete interface shapes including getters setters complex methods object constructor likewise recursive recall recursion models functional mutation construction changed object shape shape getx gety setx sety moveto rmoveto draw shapetail shape shape shape deltay shape subtypes modelled instantiations record rectangle record type instance shape record type instantiation fixes type shapetail somewhat type rectangle shape rectangledelta data rectangledelta rectangledelta getwidth getheight setwidth setheight rectangletail int int int rectangle int rectangle used primed labels wanted save unprimed names actual programmer api following implementations unprimed functions hide fact rectangle records nested getwidth getheight setwidth setheight getwidth getheight setwidth setheight shapetail shapetail shapetail shapetail constructor rectangles elaborates constructor shapes follows haskell overlooked object system february rectangle shape drawrectangle shapetail drawrectangle putstrln drawing rectangle show show width show height show shapetail rectangledelta getwidth getheight setwidth rectangle setheight rectangle rectangletail encoding subclass circle derived likewise omitted time scribble loop set follows main let scribble narrowtoshape rectangle narrowtoshape circle draw draw rmoveto scribble interesting aspect encoding concerns construction scribble list cast narrow shapes different kinds common type general option could explored previous section used embedding union type instead narrowing takes shape arbitrary tail returns shape tail narrowtoshape shape shape narrowtoshape setx sety moveto rmoveto shapetail narrowtoshape setx narrowtoshape sety narrowtoshape moveto narrowtoshape rmoveto assessment encoding encoding faithful encapsulation premise paradigm specific extensibility problem union type approach resolved assessment sec code accesses collection need revised new subtypes added elsewhere program narrowing approach frees programmer commitment specific union type kiselyov narrowing approach unlike one permit downcasting implementation narrowing operation earlier embedding helpers union types boilerplate code kind course required programmers mainstream languages mutable objects tail polymorphism also review object encoding mutable 
objects employ iorefs monad enable object state case oohaskell functions methods record manipulate state ioref operations continue use records extensible shapes modelled following type data shape shape getx gety setx sety moveto rmoveto draw shapetail int int int int int int int int result type methods wrapped monad methods may side effects necessary one may wonder whether really necessary getters even getter may want add memoisation logging override method subclass case result type would restrictive object generator constructor shapes parameterised initial shape position concrete implementation abstract method draw tail record contributed subtype self enable open recursion latter lets subtypes override method defined shape illustrate overriding shortly shape concretedraw tail self xref newioref yref newioref tail tail returnio shape getx readioref xref gety readioref yref setx writeioref xref sety writeioref yref moveto setx self sety self rmoveto deltay getx self gety self haskell overlooked object system february moveto self draw concretedraw self shapetail tail self type declarations rectangles following type rectangle shape rectangledelta data rectangledelta rectangledelta getwidth getheight setwidth setheight rectangletail int int int int define unprimed names hide nested status rectangle api getwidth getheight setwidth setheight getwidth getheight setwidth setheight shapetail shapetail shapetail shapetail reveal object generator rectangles step step rectangle shape drawrectangle shapetail cont invoke generator shape passing normal constructor arguments draw method tail rectangle api yet fix self reference thereby allowing subtyping rectangle define draw method follows resorting syntax daisy chaining output drawrectangle self putstr drawing rectangle getx self gety self width getwidth self height getheight self finally following rectangle part shape object shapetail wref newioref href newioref returnio rectangledelta getwidth getheight setwidth setheight rectangletail readioref wref readioref href writeioref wref writeioref href kiselyov overall subtype derivation scheme ease overriding methods subtypes illustrate capability temporarily assuming draw method abstract may revise constructor shapes follows shape tail self xref newioref yref newioref tail tail returnio shape deviate draw draw putstrln nothing draw override draw constructing rectangles rectangle self super shape shapetail self returnio super draw drawrectangle self previous section use narrowtoshape building list different shapes actual object construction ties recursive knot self references mfix hence mfix operator new main mfix rectangle mfix circle let scribble narrowtoshape narrowtoshape draw rmoveto draw scribble narrow operation trivial time narrowtoshape shape shape narrowtoshape shapetail chop tail shape objects may longer use rectangleor methods one may say chopping tail makes fields tail corresponding methods private openly recursive methods particular draw access self characterised whole object chop narrow operation becomes potentially much involved infeasible consider methods binary methods contravariance advanced idioms haskell overlooked object system february assessment encoding encoding actually close oohaskell except former uses explicitly declared record types result encoding requires substantial boilerplate code account type extension subtyping explicit furthermore oohaskell leverages typelevel programming lift restrictions like limited narrowing capabilities subtypes composed record types overloading 
many problems record types prompt consider alternative compose record types subtypes use type classes represent actual subtype relationships use type classes first presented shields peyton jones encoding interface polymorphism haskell generalise technique class inheritance compositional approach described follows data part class amounts record type record type includes components superclass data interface class amounts haskell type class superclasses mapped haskell superclass constraints reusable method implementations mapped default methods subtype implemented instance begin record type data part shape class data shapedata shapedata valx int valy int convenience also provide constructor shape shapedata valx valy define type class shape models interface shapes class shape getx int setx int gety int sety int moveto int int rmoveto int int draw cont would like provide reusable definitions methods except draw course fact would like define accessors shape data end need additional helper methods clear define accessors shapedata must provide generic definitions able handle records include shapedata one immediate components leads following two helpers kiselyov class shape cont earlier readshape shapedata writeshape shapedata shapedata let define generic shape accessors class shape cont earlier getx readshape valx setx writeshape valx gety readshape valy sety writeshape valy moveto sety setx rmoveto deltax deltay moveto getx deltax gety deltay define instance shape class shapedata original shape class abstract due purely virtual draw method move rectangles define data part follows data rectangledata rectangledata valshape shapedata valwidth int valheight int rectangle constructor also invokes shape constructor rectangle rectangledata valshape shape valwidth valheight rectangle provide access shape part follows instance shape rectangledata readshape valshape writeshape valshape readshape cont also implement draw method instance shape rectangledata cont draw putstrln drawing rectangle show getx show gety width show getwidth height show getheight haskell overlooked object system february also need define haskell type class class rectangles subclassing coincides haskell subclassing class shape rectangle cont type class derived corresponding class explained base class shapes class defines normal interface rectangles access helpers class rectangle cont getwidth int getwidth readrectangle valwidth setwidth int setwidth writerectangle getheight int getheight readrectangle valheight setheight int setheight writerectangle valwidth valheight readrectangle rectangledata writerectangle rectangledata rectangledata rectangle nothing instance rectangle rectangledata readrectangle writerectangle subclass circles encoded way scribble loop performed tagged rectangles circles main let scribble left rectangle right circle draw draw rmoveto scribble attach left right tags time simple tagging possible encodings still need instance shape covers tagged shapes instance shape shape shape either readshape either readshape readshape writeshape bimap writeshape writeshape draw either draw draw kiselyov map bimap pushes writeshape tagged values fold operation either pushes readshape draw tagged values completeness recall relevant facts folds either class bifunctor bimap instance bifunctor either bimap left left bimap right right either either either left either right mention minor useful variation avoids explicit attachment tags inserting collection use special cons operation conseither replaces normal list constructor far let 
scribble left rectangle right circle liberalised notation let scribble conseither rectangle circle cons operation conseither either conseither left map right conseither error cons empty tail assessment encoding approach highly systematic general instance multiple inheritance immediately possible one may argue approach directly encode class inheritance rather mimics object composition one might indeed convert native programs prior encoding use class inheritance use interface polymorphism combined manually coded object composition instead fair amount boilerplate code required readshape writeshape also transitive subtype relationship requires surprising boilerplate example let assume class foobar subclass rectangle transcription haskell would involve three instances one type class dedicated foobar one rectangle still except scattering implementation one shape annoying haskell overlooked object system february technique improved compared sec tagging scheme eliminates need tagging helpers specific object types also conseither operation relieves chore explicitly writing sequences tags however must assume insert list also must accept union type increases new element list matter many different element types encountered also want downcast union type still need know exact layout lift restrictions engage proper programming variation existential quantification far restricted haskell turn common extensions haskell attempt improve problems encountered upshot spot obvious ways improvement first attempt leverage existential quantification implementation collections compared earlier approach homogenise shapes making opaque cardelli wegner opposed embedding union type use existentials could combined various object encodings illustrate specific encoding previous section define existential envelope shape data data opaqueshape forall shape hideshape opaque shapes still hence shape instance instance shape readshape writeshape draw opaqueshape hideshape readshape hideshape hideshape writeshape hideshape draw building scribble list place shapes envelope let scribble hideshape rectangle hideshape circle assessment encoding compared union type approach programmers invent union types time need homogenise different subtypes instead shapes tagged hideshape narrowing approach quite similar required boilerplate face new problem existential quantification limits type inference see definition opaqueshape viz explicit constraint shape mandatory constraint quantifier subtypes whose methods may invoked reader may notice similar problem union type approach required shape constraints instance kiselyov instance shape shape shape either instance however merely convenience could disposed used fold operation either explicitly scribble loop main let scribble left rectangle right circle either scribblebody scribblebody scribble scribblebody draw draw rmoveto contrast explicit constraint existential envelope eliminated admittedly loss type inference nuance specific example general however weakness existentials quite annoying intellectually dissatisfying since type inference one added values extended type system compared mainstream languages worse kind constraints example necessary mainstream languages without type inference constraints deal subtyping normally implicit use existentials oohaskell variation heterogeneous collections continue exploration common extensions haskell fact offer another option difficult problem collections recall previously discussed techniques aimed making possible construct normal homogeneous haskell list 
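Before turning to heterogeneous collections, the following compressed, self-contained sketch restates the existential-envelope idea in miniature. It is not the code of the accompanying source distribution: immutable records stand in for the IORef-based shape objects used so far, and the names ShapeData, Rect, Circle and moveData are illustrative only. The point of the sketch is the envelope itself: the concrete shape type is hidden, while the Shape constraint under the quantifier keeps the interface invocable.

  {-# LANGUAGE ExistentialQuantification #-}

  data ShapeData = ShapeData { valX, valY :: Int }

  class Shape s where
    getX, getY :: s -> Int
    rMoveTo    :: Int -> Int -> s -> s
    draw       :: s -> IO ()

  moveData :: Int -> Int -> ShapeData -> ShapeData
  moveData dx dy s = s { valX = valX s + dx, valY = valY s + dy }

  data Rect   = Rect   { rData :: ShapeData, rW, rH :: Int }
  data Circle = Circle { cData :: ShapeData, cR :: Int }

  instance Shape Rect where
    getX = valX . rData
    getY = valY . rData
    rMoveTo dx dy r = r { rData = moveData dx dy (rData r) }
    draw r = putStrLn ("Drawing a Rectangle at " ++ show (getX r, getY r))

  instance Shape Circle where
    getX = valX . cData
    getY = valY . cData
    rMoveTo dx dy c = c { cData = moveData dx dy (cData c) }
    draw c = putStrLn ("Drawing a Circle at " ++ show (getX c, getY c))

  -- The existential envelope: the element type of the list is opaque,
  -- but the constraint in the quantifier keeps the Shape interface usable.
  data OpaqueShape = forall s. Shape s => HideShape s

  main :: IO ()
  main = mapM_ scribble
           [ HideShape (Rect   (ShapeData 10 20) 5 6)
           , HideShape (Circle (ShapeData 15 25) 8) ]
    where
      scribble (HideShape s) = do draw s
                                  draw (rMoveTo 100 100 s)

As in the discussion above, the constraint "Shape s" in the declaration of OpaqueShape is mandatory; it fixes, at envelope-definition time, which methods may later be invoked on the hidden shapes.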
end time engage construction heterogeneous collection first place end leverage techniques described hlist paper kiselyov heterogeneous collections rely classes chen jones jones peyton jones functional dependencies jones duck heterogeneous lists constructed dedicated constructors hcons hnil analogues one may think heterogeneous list type nested binary product hcons corresponds hnil use special hlist functions process heterogeneous lists example requires map operation scribble loop encoded follows main let scribble hcons rectangle hcons circle hnil undefined scribble haskell overlooked object system february operation heterogeneous variation normal monadic map function argument map given inline instead pass proxy undefined detour necessary due technical reasons related combination polymorphism type code body scribble loop defined trivial datatype data scribblebody constructors needed heterogeneous map function constrained apply class models interpretation function codes like scribblebody apply class instance dedicated scribblebody class apply apply instance shape apply scribblebody apply draw draw rmoveto assessment encoding approach eliminates effort inserting elements collection approach comes heavy surface encoding type code scribblebody encoding odds type inference case existentials apply instance must explicitly constrained interfaces going relied upon body scribble loop amount explicit typing yet disturbing example hand intrinsic weakness encoding sort required explicit typing goes beyond standard programming practise oohaskell idioms systematically develop important oohaskell programming idioms section restrict idioms clearly substantiate oohaskell programming require type declarations type annotations explicit casts object types thanks haskell type inference strong support polymorphism remaining heterogeneous map function encounter entities different types hence argument function must polymorphic different normal map function lists argument function typically uses polymorphic functions process entities different types trouble map function possibly anticipate constraints required different uses map function technique moves constraints type heterogeneous map function interpretation site type codes kiselyov oohaskell idioms including advanced topics related subtyping described subsequent section sections adopt following style illustrate idioms describe technicalities encoding highlight strengths oohaskell support traditional idioms well extra features due underlying record calculus status labels methods classes finally illustrate overall programmability typed language design haskell matter style somewhat align presentation oohaskell ocaml object tutorial among many systems based open records perl python javascript lua etc ocaml stands statically typed oohaskell also ocaml precise predecessor close oohaskell terms motivation aim introduction objects library functional language type inference implementation libraries sets features used required quite different sec related work discussion makes comparison even interesting hence draw examples ocaml object tutorial specifically contrast ocaml oohaskell code demonstrate fact ocaml examples expressible oohaskell roughly syntax based direct local translation also use ocaml object tutorial clear comprehensive concise objects records quoting leroy class point defines one instance variable varx two methods getx movex initial value instance variable variable varx declared mutable hence method movex change class point object val mutable varx method 
getx varx method movex varx varx end labels transcription oohaskell starts declaration labels occur ocaml code hlist library readily offers different models labels cases labels haskell values distinguished haskell type choose following model value label quoting portions ocaml tutorial take liberty rename identifiers massage subminor details haskell overlooked object system february type label proxy empty type empty except data varx varx proxy proxy varx data getx getx proxy proxy getx data movex movex proxy proxy movex proxies defined data proxy proxy proxy proxy proxy type empty phantom type proxy value simple syntactic sugar significantly reduce length label declaration become issue instance may think lines follows syntax extension assumed label new keyword label varx label getx label movex explicit declaration oohaskell labels blends well haskell scoping rules module concept labels private module exported imported shared models hlist labels support labels firstclass citizens particular pass functions labels type proxies idea basis defining record operations since thereby dispatch labels functionality get back record operations shortly mutable variables ocaml point class transcribed oohaskell follows point newioref returnio varx getx readioref movex modifyioref emptyrecord oohaskell code clearly mimics ocaml code use haskell iorefs model mutable variables use magic monad could well use simpler monad well formalised launchbury peyton jones source distribution paper illustrates option specific ghc extension haskell allow datatypes without constructor declarations clearly minor issue one could always declare dummy constructor used program kiselyov haskell representation point class stands revealed value binding declaration monadic record type sequence first creates ioref mutable variable returns record new point object general oohaskell records provide access public methods object iorefs public mutable variables often call record components oohaskell objects methods example varx public original ocaml code oohaskell private variable would encoded ioref made available record component private variables explored shapes example hlist records may ask haskell tell inferred type point ghci point point record hcons proxy mutablex ioref integer hcons proxy getx integer hcons proxy move integer hnil type reveals use hlist extensible records kiselyov explain details hlist make present paper inferred type shows records represented heterogeneous pairs promoted proper record type record hlist constructors data hnil hnil empty heterogeneous list data hcons hcons heterogeneous list sugar forming pairs infixr record type constructor newtype record record constructor record opaque library user instead library user library code relies upon constrained constructor record value constructor mkrecord hrlabelset record mkrecord record constraint hrlabelset statically assures labels pairwise distinct necessary precondition list pairs qualify record omit routine specification hrlabelset kiselyov implement emptyrecord used definition point emptyrecord mkrecord hnil record extension operator constrained variation heterogeneous cons operation hcons need make sure newly added pair violate uniqueness property labels readily expressed wrapping unconstrained cons term constrained record constructor haskell overlooked object system february infixr record mkrecord hcons test cases want instantiate point class invoke methods begin ocaml session shows inputs responses ocaml let new point val point obj getx int movex unit getx 
int haskell capture program monadic sequence method invocations involve effects including mutation objects denote method invocation ocaml operation plain record lookup hence myfirstoop point need new getx movex getx oohaskell ocaml agree ghci myfirstoop completeness outline definition sugar operator infixr obj feature hlookupbylabel feature obj operation class hasfield hlookupbylabel operation performs traversal pairs looking value component given label given record ocaml prompt indicated leading character method invocation modelled infix operator lines leading val responses interpreter kiselyov using label type key recall term field hasfield originates record terminology oohaskell fields methods omit routine specification hasfield kiselyov class declaration reveals hlist thereby oohaskell relies classes chen jones jones peyton jones functional dependencies jones duck object generators mainstream languages construction new class instances normally regulated constructor methods oohaskell instances created function serves object generator function seen embodiment class point computation defined trivial example object generator constructor arguments quoting leroy class point also abstracted initial value varx parameter course visible whole body definition including methods instance method getoffset class returns position object relative initial class object val mutable varx method getx varx method getoffset varx method movex varx varx end oohaskell objects created result monadic computations producing records parameterise computations turning functions object generators take construction parameters arguments instance parameter ocaml class ends plain function argument newioref returnio varx getx readioref getoffset queryioref movex modifyioref emptyrecord computations quoting leroy haskell overlooked object system february expressions evaluated bound defining object body class useful enforce invariants instance points automatically adjusted nearest point grid follows class let origin object val mutable varx origin method getx varx method getoffset varx origin method movex varx varx end oohaskell follow suggestion ocaml tutorial use local let bindings carry constructor computations prior returning constructed object let origin div newioref origin returnio varx getx readioref getoffset queryioref origin movex modifyioref emptyrecord prior meant temporal sense oohaskell remains language contrast ocaml implicitly polymorphic classes powerful feature oohaskell implicit polymorphism classes instance class polymorphic regard point coordinate without contribution fine difference ocaml model oohaskell transcription ocaml definition parameter type int operation ocaml deal integers oohaskell points polymorphic point coordinate example int double example illustrate mypolyoop movex movex getx getx oohaskell points actually bounded polymorphic point coordinate may type implements addition recently one could kiselyov express java expressing bounded polymorphism possible significant contortions haskell anything bounded polymorphism aka generics available eiffel languages however languages polymorphic type type bounds must declared explicitly ongoing efforts add specific bits type inference new versions mainstream languages haskell type system infers bounded polymorphism full generality implicit polymorphism oohaskell injure static typing confuse ints doubles code attempting movex get type error saying int double contrast poor men implementation polymorphic collections java element general object type 
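To make the inferred bound concrete, here is a minimal stand-alone sketch of such a polymorphic point generator. A plain Haskell record of monadic methods plays the role of the OOHaskell record, and the names PointObj, pGetX and pMoveX are ours; only the bounded polymorphism matters here: the Num constraint on the coordinate type is inferred from the use of (+), not declared.

  import Data.IORef

  data PointObj a = PointObj { pGetX :: IO a, pMoveX :: a -> IO () }

  point :: Num a => a -> IO (PointObj a)   -- the Num bound would be inferred
  point xInit = do
    x <- newIORef xInit
    return (PointObj { pGetX  = readIORef x
                     , pMoveX = \d -> modifyIORef x (+ d) })

  demo :: IO ()
  demo = do
    pInt <- point (1 :: Int)
    pMoveX pInt 2
    print =<< pGetX pInt               -- prints 3
    pDbl <- point (1.5 :: Double)
    pMoveX pDbl 2.5
    print =<< pGetX pDbl               -- prints 4.0
    -- pMoveX pInt 2.5 would be rejected statically: Int versus Double.

The same generator serves points over Int and over Double, while mixing the two coordinate types in a single method invocation remains a static type error.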
inserting collection requires downcasts accessing elements nested object generators quoting leroy evaluation body class takes place object creation time therefore following example instance variable varx initialised different values two different let ref class object val mutable varx incr method getx varx method movex varx varx end test new class ocaml prompt new getx int new getx int variable viewed class variable belonging class object recall classes represented object generators oohaskell hence build class object need nested object generator newioref returnio modifyioref readioref returnio varx getx newioref readioref haskell overlooked object system february movex modifyioref emptyrecord nest generators depth since use normal haskell scopes example outer level computation point template class inner level constructs points suggestive name nested generator makeincrementingpointclass trivial example demonstrates classes oohaskell really firstclass citizens pass classes arguments functions return results following code fragment create class scope bind value variable used instantiate created class scope localclass closure mutable variable mynestedoop localclass makeincrementingpointclass localclass getx localclass getx ghci mynestedoop contrast class closure possible java let alone java supports anonymous objects anonymous classes nested classes java must linked object enclosing class named nested classes free linking restriction however support anonymous classes class computations local scope although anonymous delegates let emulate computable classes nevertheless classes citizens mainstream language open recursion methods object may send messages self support inheritance override self must bound explicitly cook otherwise inheritance able revise messages self coded superclass consequently general object generators given style open recursion take self construct part self quoting leroy method initialiser send messages self current object self must explicitly bound variable could identifier even though often choose name self dynamically variable bound invocation method particular class inherited variable correctly bound object class object val mutable varx kiselyov method getx method movex method print end varx varx varx getx ocaml code transcribed oohaskell directly self argument ends ordinary argument monadic function generating printable point objects newioref returnio varx getx readioref movex modifyioref print getx emptyrecord ocaml use follows let new val obj movex unit print unit although appear line constructs point new construct recursive knot clearly tied right oohaskell use monadic fixpoint function mfix rather special keyword new makes nature openly recursive object generators manifest myselfishoop mfix movex print ghci myselfishoop instantiation checking one potential issue open recursion oohaskell type errors messages self spotted first object construction coded instance library developer could accidentally provide object generators turn uninstantiatable library user would notice defect generators put work issue readily resolved follows program object generators may use concrete operation assured printable point generator concrete haskell overlooked object system february concrete operation concrete generator self generator self mfix generator operationally concrete identity function however constrains type generator application mfix typeable approach needs slightly refined cover abstract methods aka pure virtual methods end one would need engage local inheritance adding vacuous 
potentially undefined methods needed virtual method generalised concrete operation would take virtual portion record preferably proxy purpose argument documented reuse techniques status labels methods classes enables various common advanced forms reuse single inheritance boils monadic function composition object generators multiple inheritance object composition employ advanced operations record calculus single inheritance extension quoting leroy illustrate inheritance defining class colored points inherits class points class instance variables methods class point plus new instance variable color new method class color string object inherit point val color color method getcolor color end corresponding ocaml session let new red val obj getx getcolor int string red following oohaskell version employ special inherit construct compose computations instead construct colored point instantiate superclass maintaining open recursion extend intermediate record super new method getcolor use british spelling consistently paper except words enter text code samples color colored kiselyov color self super self returnio getcolor returnio color super super variable rather extra construct mycoloredoop mfix red getx getcolor oohaskell ocaml agree ghci mycoloredoop red functionality parameterise computations respect classes myfirstclassoop mfix movex print ghci myfirstclassoop function myfirstclassoop takes class object generator argument instantiates class moves prints resulting object pass myfirstclassoop object generator creates object slots movex print constraint statically verified instance colored point class derived printable point class previous section suitable ghci myfirstclassoop flip red single inheritance override override methods still refer superclass implementations akin super construct ocaml languages illustrate overriding subclass whose print method informative color self super color self return print putstr far super print putstr color color super haskell overlooked object system february first step monadic sequence constructs colored point binds super reference second step monadic sequence returns super updated new print method hlist operation denotes record update opposed familiar record extension operation makes overriding explicit example could also use hybrid record operation extension case given label yet occur given record falling back typepreserving update hybrid operation would let model implicit overriding java variation point demonstrates programmability oohaskell object system demo shows overriding properly affect print method myoverridingoop mfix red print ghci myoverridingoop far color red orphan methods program methods outside hosting class methods reused across classes without inheritance relationship instance may define method shared objects least method getx type show regardless inheritance relationships self self getx update earlier code follows inlined definition print print getx reusable orphan method print flexible reuse schemes addition single class inheritance several established reuse schemes programming including object composition different forms mixins different forms multiple inheritance given status oohaskell entities foundation powerful record calculus possible reconstruct existing reuse schemes use admittedly contrived example demonstrate challenging combination multiple inheritance object composition best knowledge example directly represented existing mainstream language going work scenario making class three different concrete subclasses first two concrete 
points kiselyov fig complex reuse scenario shared resulting heavy point leave open recursive knot third concrete point participate open recursion shared terminology virtual base class respect first two concrete points base class time see fig overview object template heavy points starts follows color self self self mfix continued bind ancestor objects subsequent reference pass self first two points participate open recursion fix third point place first two classes thus reused sense inheritance third class reused sense object composition heavy point carries print movex methods delegating corresponding messages three points continued let myprint putstr putstr putstr let mymove movex movex movex return print myprint movex mymove emptyrecord continued print print print haskell overlooked object system february three points many fields methods contribute heavy point means union records denoted continued demo mydiamondoop mfix blue print points still agree movex print third point lacks behind ghci mydiamondoop comparison ocaml multiple inheritance follows fixed rules last definition method kept redefinition subclass method visible parent class overrides definition parent class previous definitions method reused binding related ancestor using special notation bound name said pseudo value identifier used invoke ancestor method eiffel etc fixed rules notations multiple inheritance oohaskell allows program aspect type system programmers language designers may devise inheritance object composition rules safe value recursion support open recursion system subtle fundamental difficulty three ways emulate open recursion recursive types existential abstraction pierce turner value recursion latter simplest one one chosen oohaskell recall object generator receives self argument representing constructed object lets methods send messages object object constructed obtaining fixpoint generator variation printable point example sec illustrates potential unsafety value recursion self newioref self print unsafe kiselyov returnio varx getx readioref movex modifyioref print self getx emptyrecord object generator may tempted invoke methods received self argument self print code typechecks however attempt construct object executing mfix reveals problem looping indeed self represents object constructed proper invoke method self object generation still taking place self whole yet exist haskell accessing object leads mere looping problem parsing aho akin divergence following trivial expression mfix self return language like haskell determining fixpoint value function type always safe worst happen divergence undefined behaviour strict languages problem far serious accessing field filled accessing dummy value null pointer placed field prior evaluation recursive definition access results undefined behaviour prevented check noted problem widely discussed satisfactory solution found although problem relatively benign oohaskell never leads undefined behaviour would like statically prevent precise impose rule constructor may execute actions involve objects little changes statically enforce restriction object construction may regarded sort staged computation problem preventing use values one key challenges programming taha nielsen recently solved environment classifiers solution related principle making stage completion object part type differs technique exploit tagging monadic types rather types introduce tag notfixed mark objects constructed yet newtype notfixed notfixed data constructor opaque notfixed newtype tag imposes 
overhead export data constructor notfixed user may arbitrarily introduce remove tag operations tag restricted new module notfixed part oohaskell library module exports two new operations new construct former variant mfix monad construct operation variation returnio haskell overlooked object system february new notfixed record notfixed record object generator record object computation new mfix notfixed return construct notfixed record record record notfixed record construct notfixed self returnio self constructor object computation notfixed self staged object construction proceeds follows argument self passed object generator marked notfixed fixpoint computed new removes notfixed tag function construct maintaining notfixed tag lifts tag internally methods defined object generator could use self reference write example follows self newioref self print construct self mutablex getx readioref movex modifyioref print self getx emptyrecord new movex print uncomment statement self print get type error saying notfixed object method print hasfield instances notfixed type within body construct reference self available without notfixed tag one may tempted invoke methods self execute actions however second argument construct function type record record result type function include possible read write ioref actions within function haskell contrast ocaml imperativeness function manifest type extension construction inherited classes straightforward example example sec reads color self self print would typecheck construct getcolor returnio color mycoloredoop kiselyov new red getx getcolor constructor receives argument self marked pass argument gives object superclass execute methods object indeed uncommenting statement print leads type error execution superclass method may involve invocation method self case method print self constructed yet construct operation shown fully general source distribution illustrates safe object generation methods refer self super technique readily generalises multiple inheritance object composition oohaskell idioms far avoided type declarations type annotations explicit coercions object types discuss oohaskell programming scenarios benefit additional type information even require pay special attention various subtyping related cast variance issues particular cover technical details collections require amount type perceptiveness saw introductory shapes example sec upcasting important difference oohaskell subtype polymorphism share extent ocaml polymorphism mainstream latter languages object implicitly coerced object superclasses upcast one may even think object polymorphic types superclasses simultaneously hence need functions objects methods polymorphic monomorphic ocaml oohaskell way around objects monomorphic regard record structure language semantics offer implicit upcasting however functions take objects polymorphic process objects different types precise oohaskell exploits polymorphism function takes object refers methods record components inferred explicit type hasfield constraints record components function therefore accepts see however sec emulate mainstream nominal subtyping prefer term narrow emphasise act restricting interface object opposed walking explicit perhaps even nominal subtyping hierarchy haskell overlooked object system february object least components satisfy hasfield therefore time implicit explicit upcasting needed fact sec see explicit cast usually understood casting explicitly named target type discuss casts later section show established upcast 
dichotomy misses intermediate option admitted oohaskell namely oohaskell lets programmer specify narrowing performed without giving explicit target type though continue get without specifying types leaving type inference least oohaskell ocaml must narrow object expression context polymorphism left requires object different type archetypal example placing objects homogeneous collection list original item objects may different types therefore must establish common element type narrow items common element type specified explicitly however oohaskell compute common type add objects collection context drive narrowing oohaskell implementation shapes example sec involved sort narrowing myoop mfix rectangle mfix circle let scribble conslub conslub nillub designated list constructors nillub conslub incorporate narrowing normal constructor behaviour specific element type new element constraints ultimate bound lub element type final list elements continuously cast towards lub list constructors defined follows code empty list data nillub empty list constructor nillub nillub cons function class conslub conslub coercion needed singleton list instance conslub nillub conslub oversimplify talking operations add remove fields fair simplification though talk normal functionality opposed functionality record manipulation kiselyov narrow head tail lub type instance lubnarrow conslub conslub fst head map snd tail map lubnarrow important operation lubnarrow class lubnarrow lubnarrow given two values record types operation returns pair narrowed values type supposed bound sense structural subtyping specification lubnarrow illustrates capability oohaskell program aspects exploit reflection hlist records define narrowing instance hzip hzip htintersect aout bout hrlabelset lubnarrow record record record lubnarrow record record hprojectbylabels hprojectbylabels htintersect hunzip hunzip given two records compute intersection labels subsequently project records shared label set possible improve conslub construct lists linear time may also want consider depth subtyping addition width subtyping discuss sec narrow fixed type lub narrowing neither explicit implicit coercion shapes example explicitly apply special list constructors know perform coercion target type left implicit narrowing feature oohaskell available otherwise similar systems ocaml ocaml building scribble list shapes example requires fully explicit narrowing ocaml calls upcast haskell overlooked object system february let scribble shape list new rectangle shape new circle shape express narrowing oohaskell well mfix rectangle mfix circle let scribble shape int scribble narrow narrow applications narrow prepare shape objects insertion homogeneous haskell list need identify target type per element specifying desired type result list enough operation narrow defined dedicated class class narrow narrow record record operation narrow extracts pairs requested implementation uses kind projection records saw full previous section lubnarrow fully explicit narrowing implies must declare appropriate types something managed avoid far shape type drives narrowing example shape interface type shape record getx gety setx sety moveto rmoveto draw hnil two infix type synonyms add convenience explicitly written types infixr type hcons infixr type shape interface explicitly includes virtual operation draw loop scribble needs method see applications narrow subsequent sections methods method method whose result type type self based example clone method typing methods self known 
difficult issue typed object encodings kiselyov cook abadi cardelli bruce advanced treatments oohaskell must naively define method returns self self super self returnio self wrong super wrote code get type error attempt instantiate corresponding object mfix object generator haskell permit recursive types needed type self example issue recursive types returning full self discussed detail sec point simpler solution disallowing returning self requiring programmer narrow self specific desired interface case clone method mainstream programming languages typically define return type base class classes programmer supposed use downcast intended subtype resolve problem method follows self super self returnio narrow self ppinterface super type ppinterface record getx movex print hnil narrows self explicitly interface printable points relate explicit narrowing return type explicit declaration return type methods java presented narrowing approach limitation however record components occur target interface irreversibly eliminated result record would prefer make components merely private recovered safe downcast offer two options downcastable upcasts next two sections casts based dynamics turning shapes benchmark let modify loop scribble homogeneous list shapes single circles special treatment requires downcast haskell overlooked object system february maybe putstrln circle circ setradius circ draw downcast shape astypeof scribble iteration attempt downcast given shape object type recall circle object downcast may fail succeed hence maybe type result neither oohaskell narrow operation ocaml upcast support scenarios oohaskell narrow irrevocably removes record components define however forms upcast reversible begin technique exploits dynamic typing abadi abadi peyton jones new scribble list built follows let scribble upcast shape int scribble upcast upcast data upcast upcast dynamic data constructor upcast opaque library user upcast dedicated upcast operation latter saves original object embedding dynamic presume record types readily instantiate type class typeable dually downcast projection dynamic requested subtype upcast typeable record narrow record upcast record upcast upcast narrow todyn downcast typeable narrow upcast record maybe record downcast upcast fromdynamic want treat upcast objects objects add trivial hasfield instance looking record components upcast objects instance delegates narrowed part upcast value instance hasfield hasfield upcast hlookupbylabel upcast hlookupbylabel technique suffers shortcomings although downcast safe sense bad things happen unsafe casts downcast keep attempting stupid casts casts types casting possibly succeed following section describe elaborate pair statically prevents stupid downcasts method also suffers full computational overhead narrow operation coercion iterates record components kiselyov casts based unions turn subtyping technique sec refined sec used union types represent intersection types techniques several problems could easily deal empty list could minimise union type number distinct element types could downcast fully lift restrictions putting programming work make upcasts dedicated list constructors myoop mfix rectangle mfix circle let scribble conseither conseither nileither list constructors almost identical nillub conslub sec difference comes cons list see last instance code empty list data nileither empty list constructor nileither nileither cons trivial function class conseither conseither coercion needed singleton list instance conseither 
nileither conseither construct union type head tail instance conseither either conseither left map right extend union type ultimate element type one branch new element haskell encoding sec however programming principle minimise union type distinct element type occurs exactly straightforward optimisation omitted method generic treating union type intersection record fields union branches kind constraints covered hlist paper kiselyov typeindexed heterogeneous collections essence need iterate existing union type use type equality detect type element cons already occurred union also need determine corresponding sequence left right tags haskell overlooked object system instance hasfield hasfield hlookupbylabel hlookupbylabel february hasfield either left right hlookupbylabel hlookupbylabel downcast search operation union type also want downcast fail statically target types appear among branches hence start downcast boolean hfalse express yet seen type question downcast downcastseen hfalse downcast returns maybe intrinsically fail value level class downcastseen seen downcastseen seen maybe process union like list hence two cases one nonsingleton union one final branch indeed details definition reveal assume unions instance downcasteither seen typeeq downcastseen seen either downcastseen seen downcasteither seen instance typecastseen seen typeeq downcastseen seen downcastseen seen typecastseen seen cases test type equality target type left branch type pass computed boolean functions downcasteither union typecastseen final branch singleton union class typecastseen seen typecastseen seen maybe instance typecast typecastseen seen htrue typecastseen typecast instance typecastseen htrue hfalse typecastseen const nothing first instance applies encountered requested type last case invoke normal type cast kiselyov knowing must succeed given earlier check type equality second instance applies final branch requested type however must seen target type among branches htrue thereby rule stupid casts following function handles unions class downcasteither seen downcasteither seen either maybe kiselyov instance downcastseen htrue typecast downcasteither seen htrue downcasteither left typecast downcasteither right downcastseen htrue instance downcastseen seen downcasteither seen hfalse downcasteither left nothing downcasteither seen right downcastseen seen first instances applies case left branch union type target type htrue remains check tag left done type cast continue search otherwise seen set htrue record union type indeed contain target type second instance applies case left branch union type target type hfalse case downcast continues tail union type propagating seen flag thereby rule stupid casts explicit type constraints cases useful impose structural record type constraints arguments object generator arguments result type method constraints akin concepts siek familiar narrow turns convenient tool imposition type constraints use narrow operations good example example type constraints treatment virtual methods oohaskell quoting leroy possible declare method without actually defining using keyword virtual method provided later subclasses class containing virtual methods must flagged virtual instantiated object class created still defines type abbreviations treating virtual methods methods class virtual object self val mutable varx method print self getx method virtual getx int method virtual movex int unit end methods called pure virtual corresponding classes called abstract java flag methods classes 
abstract oohaskell enough leave method undefined indeed shapes example omitted mentioning draw method defined object generator shapes ocaml abstract point class may transcribed oohaskell follows haskell overlooked object system february self xref newioref returnio varx xref print self getx emptyrecord object generator instantiated mfix getx used defined haskell type system effectively prevents instantiating classes use methods neither parents defined arises question explicit designation method pure virtual would particular value case pure virtual happen used object generator oohaskell allows explicit designation means adding type constraints self designate getx movex pure virtuals change object generator follows self narrow self record getx movex hnil use familiar narrow operation time express type constraint must stress narrow type level result narrowing used operationally however affect typechecking program every instantiatable extension must define getx movex one may think effect achieved adding regular type annotations self annotations however must spell desired object type entirely furthermore regular record type annotation rigidly unnecessarily restrains order methods record well types preventing deep subtyping sec one may also think object types simply constrained specifying hasfield constraints impractical far full object types would need specified programmer haskell directly support partial signatures approach solves problems nominal subtyping ocaml default oohaskell object types engage structural subtype polymorphism many languages prefer nominal object types explicitly declared subtyping inheritance relationships enduring debate superiority either form subtyping definite strength structural subtype polymorphism naturally enables inference object kiselyov types downside potentially accidental subtyping cardelli wegner given object may admitted actual argument function structural type fits nominal types allow restrict subtyping polymorphism basis explicitly declared subclass inheritance relationships nominal named types although oohaskell biased towards structural subtyping polymorphism oohaskell general sandbox typed language design admit nominal object types nominal subtyping including multiple inheritance revisit familiar printable points colored points switching nominal types first need invent class names nominations data data printable points colored points act discipline also register types nominations class nomination instance nomination instance nomination attach nomination regular oohaskell object phantom type end using following newtype wrapper newtype nom rec rec following two functions add remove nominations operation nominate record nominal object nominate nomination nominate operation take away type distinction anonymize nomination anonymize able invoke methods nominal objects need hasfield instance often seen delegation wrapped record instance hasfield nomination hasfield hlookupbylabel hlookupbylabel anonymize programming nominal subtyping commence object generator printable points remains exactly except nominate returned object newioref returnio nominate nominal mutablex getx readioref movex modifyioref print getx emptyrecord haskell overlooked object system february nominal structural distinction becomes meaningful start annotate functions explicitly requested nominal argument type first consider request insist specific nominal type subtyping involved print function accepts nominal printable points printpp app app print demonstrate nominal subtyping define 
colored points color self super self returnio nominate nominal print putstr far super print putstr color color getcolor returnio color anonymize super access record need make nominal subtype designation going explicit introduce type class parents extensible function nominal types list immediate supertypes type may one parent multiple inheritance following two instances designate root hierarchy immediate subtype class nomination child nominations parents parents child parents child parents instance parents hnil instance parents hcons hnil parents colored points printable points oohaskell library also defines general relation ancestor reflexive transitive closure parents class nomination nomination anc ancestor anc position define upcast operation basis nominal subtyping operation nupcast ancestor nupcast anonymize could also define forms downcast nupcast narrowing operationally identity function consistent implementation nominal upcast mainstream languages record type oohaskell object still visible nominal type nominal objects fully oohaskell objects except subtyping deliberately restricted define print function printable points relaxing printpp function define printpp style haskell monomorphism restriction kiselyov printpp app app print accept printpp printpp nupcast accept nominal subtypes couple printpp printpp clarifies readily restrict argument types functions either precise types subtypes given base granularity type constraints provided mainstream languages also use structural subtyping body printpp hints fact blend nominal structural subtyping ease oohaskell beyond mainstream programming types previous section studied nominal types sake nominal subtyping nominal types intrinsically necessary need model recursive object types oohaskell principle type system types would convenient respect however adding types haskell debated rejected make messages nearly useless hughes consequently encode recursive object types types fact use newtypes alternative technique existential quantification pierce turner discussed sec illustrate types linked dynamic lists interface list objects methods also return list objects getter tail insertion method nominal object type newtype listobj listobj listinterface structural interface type type listinterface record isempty gethead gettail sethead inshead hnil bool listobj listobj recall define hasfield instance whenever went beyond normal objects records approach case newtype complemented trivial hasfield instance instance hasfield listinterface hasfield listobj hlookupbylabel listobj hlookupbylabel clarity chose implementation listinterface two classes empty lists single list class would sufficed objects fail getters straightforward generator empty lists haskell overlooked object system february niloo self listinterface returnio isempty returnio true gethead failio head gettail failio tail sethead const failio head inshead reusableinshead self emptyrecord reusable insert operation constructs new object consoo reusableinshead list head newcons mfix consoo head list returnio listobj newcons list objects hold reference head accessed gethead sethead object generator lists consoo head tail self href newioref returnio isempty gethead gettail sethead inshead emptyrecord head returnio false readioref href returnio listobj tail writeioref href reusableinshead self programming nominal objects commences without ado used like oohaskell objects example following recursive function prints given list one check various method invocations involve nominally typed objects 
printlist alist empty alist isempty empty putstrln else head alist gethead putstr show head tail alist gettail putstr printlist tail width depth subtyping used term subtyping informal sense type substitutability call object type subtype object type program typeability method invocations preserved upon replacing objects type objects type notion subtyping kiselyov distinguished behavioural subtyping also known liskov substitution principle liskov wing oohaskell subtyping enabled type method invocation operator instance function getx following inferred type hasfield proxy getx type polymorphic function accept object record provided method labelled getx whose type matches function desired return type basic form subtyping subsumption width subtyping whereupon object type subtype record type least fields exact type hlist library readily provides subtyping relation corresponding constraints added type signatures although recall sec devised constraint technique convenient oohaskell easy see subtype holds record types substituting object type object type preserves typing every occurrence program method missing method wrong type width subtyping one form subtyping subtyping relations preserve typing occurrence program particular depth subtyping width subtyping allows subtype fields supertype depth subtyping allows fields subtype relate fields supertype subtyping typed mainstream languages like java support full depth subtyping explore depth subtyping oohaskell define new object types functions class sec extension sec define onedimensional vector class specified two points beginning end accessed methods vector self newioref newioref returnio readioref readioref print self self emptyrecord print print local type annotations enforce intent two points vector type clear objects type must able respond message print otherwise type points constrained object generator vector parameterised class points close analogue class template example shows haskell normal forms haskell overlooked object system february polymorphism combined type inference allow define parameterised classes without ado construct two vector objects testvector mfix mfix mfix red mfix red mfix vector mfix vector continued former vector two printable points latter vector two colored points types obviously different type checker remind fact tried put vectors homogeneous list vectors related width subtyping indeed vectors agree method names types methods differ method type printablepoint whereas method type coloredpoint different result types printablepoint coloredpoint related width subtyping type deep subtype oohaskell may readily use functions methods exploit depth subtyping instance define following function computing norm vector pass either vector function norm getx getx return abs test code continues thus continued putstrln length norm putstrln length colored norm method invocation operations within norm remain matter vector pass function typing indeed compatible width depth subtyping fact combination thus object type subtype record type fields whose types necessarily related subtyping turn assume subtyping method types defined accordance conservative rules cardelli wegner abadi cardelli following formulation without loss generality assume oohaskell method types monadic function types method type must method type method name type following relationships hold kiselyov must subtypes must subtype vector example exercises result type getters never specifically assert types two objects related width depth subtyping every case compiler checks 
method invocations directly separate subtyping rules needed contrast type systems like system subsumption rules explicitly asserted place oohaskell programmer make choice subtyping relationship explicit explicit narrowing operations previously described operation narrow covers width subtyping oohaskell library also includes operation deep narrow instance place homogeneous list let vectors deep narrow operation deep narrow descends records prefixes method arguments narrowing postfixes method results narrowing deep narrowing another record operation driven structure method types refer source distribution details deep narrowing way dealing explicitly depth subtyping oohaskell may also adopt technique sec method arguments variance argument types subject significant controversy castagna surazhsky gil howard contravariant rule method arguments entails type substitutability assures type safety method invocation programs however argument type contravariance known potentially conservative often argued argument type rule suitable modelling problems method argument types happens receive objects expected types safe particular program proponents argument type rule argue idiomatic advantages rule admit programs safe job compiler warn user rule used unsafely alas case eiffel established language situation following compiler currently available fully implements checks behaviour cases ranges type errors system section demonstrate restrictiveness methodargument types show oohaskell subtyping naturally supports typesafe faithful implementation archetypal example eiffel faq contained accompanying source code continuing vector example previous section extend vector haskell overlooked object system february method moveo moving origin vector method receives new origin point object self super vector self returnio moveo self getx movex super previous section construct plain printable points colored points intend substitutable circumstances virtue depth subtyping must follow rule requires argument moveo either plain printable point instance requirement responsible implementation moveo furthermore supertyping requirement precludes moveo changing color origin point vector colored points degrades expressiveness illustrate subtyping vectors define function moves origin vector argument varg zero mfix varg moveo zero may indeed apply function either function polymorphic take plain points subtypes type truly deep subtype type oohaskell require assert relevant subtype relationship way turn method argument types experiment yet another class vectors also construct two instances self newioref newioref returnio seto writeioref fields vector testvector test case mfix vectors vector printable points mfix vectors vector colored points like provides setting origin point method seto however direct simple way also permits changing color origin point vector colored points although kiselyov method seto convenient powerful method moveo method seto argument types across vectors vector colored points argument type seto must colored point type otherwise mutation writeioref typed hence type subtype type seto breaks argument type rule system enforces rule allow write functions take example may want devise following function seto always safe apply two type oohaskell let pass either two printable points two colored points vector types substitutable despite argument type substitutability properly restricted function varg zero mfix varg seto zero apply function try apply get type error message missing method getcolor distinguishes colored 
points plain printable points likewise get error attempt place homogeneous list like let vectors deep narrow case narrow vectors type vector though offending method seto projected becomes private oohaskell typechecks actual operations objects therefore oohaskell permits methods argument types situations used safely type checker flag unsafe use force programmer remove offending method permitting safe uses methods argument types required programming part get behaviour free subtyping seen several approaches construction collection needed scribble loop running shapes example section encodings sec discussed two additional options use hlist heterogeneous lists use make list element type opaque haskell overlooked object system february albeit one might expected options use turned problematic programming haskell records combination oohaskell extensible records two options even less attractive first approach construct scribble list let scribble hcons hcons hnil use sec iterate list scribble must instance type class apply funonshape instance hasfield proxy draw hasfield proxy rmoveto int int apply funonshape apply draw rmoveto draw haskell type class system requires provide proper bounds instance hence list constraints hasfield form constraints strongly resembles method types listed shape interface type sec one may wonder whether somehow use full type synonym shape order constrain instance possible haskell constraints citizens haskell compute types type proxies unless willing rely heavy encoding advanced syntactic sugar doomed manually infer explicitly list constraints piece polymorphic code existential quantification approach falls short essentially reason assuming suitable existential envelope following sec build scribble let scribble hideshape hideshape declaration existential type depends function want apply opaque data iterating list via need unwrap hideshape constructor prior method invocations wrapshape shape shape draw shape rmoveto shape draw scribble operations anticipated type bound envelope data opaqueshape forall hasfield proxy draw hasfield proxy rmoveto int int hideshape kiselyov approach evidently matches technique terms encoding efforts cases need identify type class constraints correspond potentially polymorphic method invocations impractical even mainstream languages advanced type inference require sort type information programmer existential quantification also used object encoding wrapping self lets example easily implement methods without resorting infinite types use existential quantification practical oohaskell reason requires exhaustively enumerate type classes object types instances discussion first discuss usability issues current oohaskell library constrained current haskell implementations summarise related work functional programming haskell elsewhere finally list topics future work improving usability oohaskell usability issues usability inferred types far shown type inferred haskell objects one may wonder readable comprehensible used means program understanding haskell language extension needed improve presentation inferred types upshot inferred types reasonable simple programming examples fuzzy borderline beyond volume idiosyncrasies inferred types injure usefulness concern suggests important topic future work let see inferred type colored point introduced sec mfix red mfix red record hcons proxy getcolor string hcons proxy varx ioref int hcons proxy getx int hcons proxy movex int hcons proxy print hnil think type quite readable even though reveals underlying 
representation records heterogeneous list pairs gives away model labels may hope future haskell implementation whose customisable pretty printer types would present result type inference perhaps follows ghci mfix red mfix red haskell overlooked object system record getcolor varx getx movex print hnil february string ioref int int int example dealt monomorphic objects let also see inferred type polymorphic object generator open recursion left open type object generator colored points ghci num hasfield proxy getx show string record getcolor string varx ioref getx movex print hnil inferred type lists fields object new inherited assumptions self expressed constraints type variable object generator refers getx self entails constraint form hasfield proxy getx coordinate type point polymorphic initial value value retrieved getx since arithmetics performed coordinate value implies bounded polymorphism types permitted yet infer must eventually since open recursion still open must admit assumed relatively eager instance selection previous haskell session hugs implementation haskell eager enough recent versions ghc become quite lazy session contemporary ghc inferred type would comprise following additional constraints deal uniqueness label sets encountered record extension hrlabelset hcons hcons likewise movex likewise movex likewise movex proxy proxy print print print movex print hnil getx getx varx getx varx getcolor inspection hrlabelset instances shows constraints satisfied matter type variable instantiated ingenuity required simple form strictness analysis sufficient alas ghc consistently lazy kiselyov resolving even constraints modulo hrlabelset constraints inferred type seems quite reasonable explicitly listing relevant labels types record components usability type errors due oohaskell extensive use programming risk type errors may become complex look examples results clearly provide incentive future work subject type errors let first attempt instantiate abstract class sec object generator defined print method invoked getx self latter left defined concrete subclasses take fixpoint incomplete object generator haskell type checker ghc gives following error message ghci let mfix instance hasfield proxy getx hnil arising use interactive probable fix add instance declaration hasfield proxy getx hnil first argument mfix namely definition mfix think error message concise point message succinctly lists missing field suggested probable fix really helpful next scenario use version comprises instantiation test constraining self narrow discussed sec self narrow self record getx movex hnil take fixpoint get complex error message ghci let mfix instance hextract hnil proxy getx hextract hnil proxy movex hasfield proxy getx hnil arising use interactive probable fix first argument mfix namely definition mfix compared earlier error message two additional unsatisfied hextract constraints two three constraints refer getx complain problem missing method implementation getx constraint regarding movex deals pure virtual method used object haskell overlooked object system february generator kinds numbers error messages getx movex may lead confusion internals oohaskell end surface order improve problems haskell type system errorhandling part would need opened allow error messages would like refine haskell type checker type error messages directly refer involved concepts let consider yet another scenario turn methods discussed sec following flawed oohaskell program attempt return self right away self super self 
returnio self assumes types super problem unnoticed try mfix generator point get type error occurs check construct infinite type record hcons proxy hcons proxy mutablex ioref hcons proxy getx hcons proxy movex hcons proxy print hnil expected type inferred type record hcons proxy hcons proxy mutablex ioref hcons proxy getx hcons proxy movex hcons proxy print hnil application first argument mfix namely error message rather complex compared simple object types involved although actual problem correctly described programmer receives help locating offending code self volume error message consequence use structural types one may think adding type synonyms using type signatures radically improve situation true contemporary haskell type checkers keep track type synonyms however erroneous subexpression may sufficiently annotated constrained context also mere coding type synonyms inconvenient situation suggests future haskell type checker could two steps first proposal allow inference type synonyms think foo complex expression structural object types type foo typeof foo capture type alias kiselyov typeof envisaged extension second proposal use type synonyms aggressively simplification inferred types type portions error messages challenging subject given haskell forms polymorphism verbosity oohaskell error messages may occasionally compare error messages template instantiation immensely verbose spanning several dozens packed lines yet boost similar libraries extensively use templates gaining momentum general clarity error messages undoubtedly area needs research research carried sulzmann others stuckey oohaskell programmers haskell compiler writers may take advantage ultimate conclusion discussion inferred types type errors type information needs presented programmer abbreviated fashion proposal based observation ocaml development although objects types shown ocaml quite concise always case system predecessor ocaml syntactic sugar printed inferred types unlike oohaskell types seen section objects anonymous long often recursive types describe methods object receive thus usually show inferred types programs order emphasise object inheritance encoding rather typechecking details quite spirit type information optional mainly used documentation module interfaces except trying examples debugging user often wish see inferred types programs batch efficiency object encoding representation objects types deliberately straightforward polymorphic extensible records closures approach strong similarities systems self ungar smith mutable fields method pointers contained one record efficient representation based separate method field tables java possible principle although current encoding certainly optimal conceptually clearer encoding used languages perl python lua often first one chosen adding existing language efficiency current oohaskell encoding also problematic reasons separation fields methods example although record extension constant time lookup linear search clearly efficient encoding possible one representation labels hlist paper permits total order among labels types turn permits construction efficient search trees may also impose order components per record type complete record extension right labels mapped array indexes present paper chose conceptual clarity optimisations furthermore case study needed drive optimisations mere improvements object encoding may insufficient however compilation time haskell overlooked object system february oohaskell programs runtime efficiency challenged number heavily 
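The working counterpart of these scenarios, with the open recursion actually closed, can be written compactly with mfix; the following standalone sketch (the names Counter and counterGen are ours, and it again bypasses the OOHaskell encoding) shows a generator that receives self, invokes a late-bound method through it, and is tied into a fixpoint by value recursion.

import Control.Monad.Fix (mfix)
import Data.IORef

data Counter = Counter { current :: IO Int, tick :: IO (), report :: IO () }

-- An object generator with open recursion: it receives self and may call
-- late-bound methods through it.
counterGen :: Counter -> IO Counter
counterGen self = do
  r <- newIORef (0 :: Int)                         -- mutable state of the object
  return Counter
    { current = readIORef r
    , tick    = modifyIORef r (+ 1)
    , report  = current self >>= \n -> putStrLn ("count = " ++ show n)
    }

main :: IO ()
main = do
  c <- mfix counterGen           -- value recursion ties the knot on self
  tick c >> tick c >> report c   -- prints: count = 2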
nested dictionaries implied systematic approach quite likely scalable style programming require compiler optimisations make programming efficient general related work throughout paper referenced related work whenever specific technical aspects suggested complete picture broader discussion three overall dimensions related work foundations object encoding sec haskell extensions sec encoding haskell sec literature object encoding quite extensive oohaskell takes advantage seminal work cardelli wegner abadi cardelli ohori pierce turner bruce mitchell often typed object encodings based polymorphic lambda calculi subtyping also object calculi start directly objects records due overwhelming variety narrow discussion identify mlart see also vouillon closest oohaskell motivation spirit technical approach hence sec entirely focused without discussion less similar object encodings distinguishing characteristic oohaskell use polymorphism object encoding oohaskell identify small set language features make functional programming possible projects aim able implement objects library feature therefore several styles implemented different classes users classes problems one need learn new language discover programming progressively oohaskell base object systems polymorphic extensible records oohaskell deal mutable objects oohaskell currently neglects functional objects since much less commonly used practise oohaskell aim preserving type inference adds several extensions implement objects records polymorphic access extension projective records recursive types implicit existential universal types paper reports none extensions new combination original provides enough power program objects flexible elegant way make claim oohaskell using quite different set features fundamentally sets apart different source language haskell haskell implement polymorphic extensible records kiselyov natively rather via extension use programming avoid row variables related complexities records permit introspection thus let implement various cast operations appealing different subtyping relationships instance unlike oohaskell compute common type two record types without requiring type annotations quoting paper message print sent points colored points however incompatible types never stored list languages subtyping allow would take common interface objects mixed list interface single object unlike rely existential implicitly universal types recursive types use value recursion instead representation record recursive closures abstracts internal state object value well type haskell helps overcome calls severe difficulties value recursion difficulties serious enough abandon value recursion despite attractive features supporting implicit subtyping favour complex object encodings requiring extensions type system subtle problem value recursion responsible complicated elaborate rules various mainstream languages prescribe object constructor may may paper mentions unpublished attempt pierce take advantage facts fixpoints language always safe emulated language help extra abstraction thunks however attempted implementation whole message table rebuilt every time object sends message self approach pursued simple scheme sec seems answer challenge provide clean efficient solution permits restricted form recursion uses separate method table whereas oohaskell uses single record mutable fields method pointers encoding efficient oohaskell instances object class literally share method table ocaml also efficient simply elements object encoding natively 
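The representation under discussion, a single record holding both the mutable fields and the method closures, can be sketched in plain Haskell as follows; this uses an ordinary record rather than the HList-based extensible records of OOHaskell, and the field names are chosen to mirror the colored-point example rather than taken from the library.

import Data.IORef

-- One record per object: mutable fields (here an IORef) and method closures
-- live side by side, as in the Self-style representation described above.
data PointObj = PointObj
  { getColor :: String
  , getX     :: IO Int
  , moveX    :: Int -> IO ()
  , printIt  :: IO ()
  }

mkColoredPoint :: Int -> String -> IO PointObj
mkColoredPoint x0 colour = do
  varX <- newIORef x0                           -- the mutable field
  let self = PointObj
        { getColor = colour
        , getX     = readIORef varX
        , moveX    = \d -> modifyIORef varX (+ d)
        , printIt  = getX self >>= print        -- one method calling another via self
        }
  return self

main :: IO ()
main = do
  p <- mkColoredPoint 0 "red"
  moveX p 3
  printIt p                 -- prints 3
  putStrLn (getColor p)     -- prints red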
implemented contrast oohaskell type system programmed programming result oohaskellis definitely less fit practical software development rather ocaml haskell language extensions attempts bring haskell language extension early attempt hughes sparud hughes sparud authors motivated extension perception haskell lacks form fact records realisable haskell unknown hlist paper published assumed lack extensible records haskell selected prime topic discussion haskell workshop nilsson haskell overlooked object system february incremental reuse offered inheritance languages approach uses common extensions type system provide key notions way haskell fitness programming discovered contribution paper haskell nordlander nordlander comprehensive variation haskell designed nordlander haskell extends haskell reactive objects subtyping subtyping part substantial extension reactive object part combines stateful objects concurrent execution major extension development shows extension haskell necessary stateful objects details object system programmed haskell another relevant haskell variation mondrian original paper design implementation mondrian meijer claessen meijer claessen write design type system deals subtyping higherorder functions objects formidable challenge rather designing complicated language overall principle underlying mondrian obtain simple haskell dialect flavour end algebraic datatypes type classes combined simple type system real subtyping completely mondrian runtime errors kind message understood considered problem akin partial functions case discriminations oohaskell raises bar providing proper subtyping message understood concepts haskell without extending haskell type system object encodings haskell paper may claim provide authoritative analysis possible object encodings haskell sec previous published work subject addressed general functional programming focused instead import foreign libraries components haskell finne shields peyton jones pang chakravarty latter problem domain makes important simplifying assumptions object state reside haskell data opaque object ids referring foreign site state solely accessed methods properties haskell methods often generated stubs foreign code result styles deal interfaces actual sub classes written programmer restricted context one approach use phantom types recording inheritance relationships finne interface represented empty datatype type parameter extension due consideration turns approach restricted version burton called type extension polymorphism even records made extensible provision polymorphic dummy field burton maintain haskell data objects need maintain record type kiselyov extension point left becomes phantom phantom approach sec another approach set haskell type class represent subtyping relationship among interfaces shields peyton jones pang chakravarty interface modelled dedicated empty haskell type enhanced approach state sec based detailed analysis approaches submit second approach seems slightly superior first one approaches cumbersome actual functional programming publications haskell coding practise sorts encodings occasionally found instance relatively well understood haskell type classes allow interface polymorphism abstract classes type classes concrete classes type class instances writing published haskell reference solution shapes example http encoding attempt maximise reuse among data declarations accessors encoding specialised specific problem approach may fail scale encoding also uses existentials handling collections 
inherently problematic choice shown sec future work focused mutable objects far studying functional objects appears natural continuation work even though functional objects much less practical relevance notion object construction computation sec merits exploration well clarification relationship environment classifiers taha nielsen oohaskell elaborated cover general forms reflective programming top general forms programming simple form reflection already provided terms encoding records iterate records components generic fashion effort needed cover advanced forms reflection iteration object pool modification object generators another promising elaboration oohaskell would use reusable representation solutions concluding remarks present paper addresses intellectual challenge seeing conventional idioms implemented haskell short writing compiler language haskell peyton jones wadler paper imperative programming haskell peyton jones wadler epitomises intellectual tradition imperative paradigm kind intellectual challenge paradigm assimilation addressed mcnamara smaragdakis haskell overlooked object system february implements quintessential haskell features type inference higherorder functions present paper conversely faithfully similar syntax without global program transformation realises principal trait programming according peyton jones haskell world finest imperative programming language peyton jones submit haskell also programming language readily restrict claim mere capability much work would needed enable scalable software development haskell discovered object system haskell supports stateful objects inheritance subtype polymorphism implemented haskell library oohaskell based polymorphic extensible records introspection subtyping provided hlist library kiselyov haskell programmers use idioms suits problem hand demonstrated oohaskell programs close textbook code normally presented mainstream languages oohaskell deviations appreciated oohaskell library offers comparatively rich combination idioms notably implemented parameterised classes constructor methods abstract classes pure virtual methods single inheritance multiple inheritance object composition structural types nominal types choice haskell base language allowed deliver extensive type inference firstclass classes implicit polymorphism classes generally programmable type systems starting existing oohaskell library corresponding sample suite one explore language design without need write compiler present paper settles question hitherto open conventional idioms full generality expressible current haskell without new extensions turns haskell plus type classes functional dependencies sufficient combination reasonably understood stuckey sulzmann even overlapping instances essential yet using permits convenient representation labels concise implementation functionality fact found quite unexpected unintended use existing haskell features reminiscent accidental discovery template latter longer considered exotic accident type hack rather real feature language czarnecki used standard template library described popular books alexandrescu haskell let move beyond mere curiosity implementing idioms point making contributions open controversial problems haskell let concisely specify enforce restrictions behaviour object constructors preventing constructor access constructed objects object encoding recursive records made safe also able effortlessly implement notions width depth subtyping respect particular object operations thus safely permit methods 
argument subtyping oohaskell able automatically compute least general interface heterogeneous collection objects kiselyov upcasts make collection homogeneous provides means safe downcasts moreover downcasts possibly succeed flagged type errors capabilities beyond functional objectoriented programming ocaml become laboratory generative programming czarnecki lead applications mcnamara smaragdakis boost http contend haskell would fit laboratory advanced typed language design experiments shown haskell indeed supports good measure experimentation without changing type system compiler acknowledgements thank keean schupke major contributions hlist oohaskell libraries thank helpful discussions also gratefully acknowledge feedback robin green bryn keller chris rathman several participants mailing list email discussions second author presented work earlier stage meeting functional programming november west point grateful feedback received meeting references abadi cardelli theory objects monographs computer science new york abadi cardelli pierce plotkin dynamic typing language pages acm conference principles programming languages abadi cardelli pierce plotkin dynamic typing staticallytyped language toplas aho sethi ullman compilers principles techniques tools alexandrescu modern design pearson education bayley june functional programming object oriented programming http bruce mitchell per models subtyping recursive types polymorphism pages popl proc acm symposium principles programming languages acm press bruce schuett van gent fiech polytoil polymorphic language toplas burton type extension polymorphism toplas cardelli wegner understanding types data abstraction polymorphism acm computing surveys castagna covariance contravariance conflict without cause toplas chen hudak odersky parametric type classes pages proceedings acm conference lisp functional programming acm press haskell overlooked object system february frequently asked questions faq http cook denotational semantics inheritance thesis brown university cook hill canning inheritance subtyping pages popl proceedings acm symposium principles programming languages new york usa acm press czarnecki donnell striegnitz taha dsl implementation metaocaml template haskell pages lengauer batory consel odersky eds program generation lncs vol duck peyton jones stuckey sulzmann sound decidable type inference functional dependencies pages schmidt proceedings european symposium programming esop barcelona spain march april lncs vol finne leijen meijer peyton jones calling hell heaven heaven hell pages icfp proceedings fourth acm sigplan international conference functional programming new york usa acm press gamma helm johnson vlissides design patterns elements reusable software gaster jones polymorphic type system extensible records variants technical report university nottingham department computer science nilsson future haskell discussion haskell workshop message haskell mailing list http hallgren fun functional dependencies joint winter meeting departments science computer engineering chalmers university technology goteborg university varberg sweden http howard bezault meyer colnet stapf arnout keller covariance competent compilers catch catcalls work done part eiffel language standardization committee ecma draft available http hughes suitable data structure needed message haskell mailing list http hughes sparud extension haskell proc haskell workshop jolla california yale research report jones theory qualified types pages symposium proceedings european 
symposium programming jones simplifying improving qualified types pages proceedings international conference functional programming languages computer architecture acm press jones type classes functional dependencies pages proceedings european symposium programming languages systems kiselyov schupke strongly typed heterogeneous collections acm sigplan workshop haskell acm press see kiselyov http extended technical report source distribution peyton jones scrap boilerplate practical design pattern generic programming acm sigplan notices proc acm sigplan workshop tldi launchbury peyton jones state haskell lisp symbolic computation leroy xavier july objective caml system release documentation user manual http liskov wing behavioral notion subtyping toplas mcbride faking simulating dependent types haskell journal functional programming mcnamara smaragdakis functional programming library journal functional programming meijer claessen design implementation mondrian acm sigplan haskell workshop acm press neubauer thiemann gasbichler sperber functional notation functional dependencies pages proc acm sigplan haskell workshop firenze italy september neubauer thiemann gasbichler sperber functional logic overloading pages proceedings acm symposium principles programming languages acm press nordlander pragmatic subtyping polymorphic languages pages berman berman eds proceedings third acm sigplan international conference functional programming acm sigplan notices vol new york acm press nordlander polymorphic subtyping haskell science computer programming also proceedings appsem workshop subtyping dependent types programming ponte lima portugal ohori polymorphic record calculus compilation toplas pang chakravarty interfacing haskell languages pages trinder michaelson pena implementation functional languages international workshop ifl edinburgh september revised papers lncs vol peyton jones tackling awkward squad monadic concurrency exceptions calls haskell pages hoare broy steinbrueggen eds engineering theories software construction marktoberdorf summer school nato asi series ios press peyton jones wadler imperative functional programming pages symposium principles programming languages popl acm peyton jones jones meijer type classes exploring design space launchbury haskell workshop pierce turner simple foundations objectoriented programming journal functional programming programming objects extension abstract record types pages hagiya mitchell eds international haskell overlooked object system february symposium theoretical aspects computer software lncs sendai japan vouillon objective simple extension pages popl proceedings acm symposium principles programming languages new york usa acm press shields peyton jones style overloading haskell entcs extended available http siek gregor garcia willcock lumsdaine concepts tech rept jtc information technology subcommittee programming language stuckey sulzmann theory overloading toplas appear stuckey sulzmann wazny improving type error diagnosis pages proc haskell acm press surazhsky gil covariance pages sac proceedings acm symposium applied somputing acm press taha nielsen environment classifiers pages popl proceedings acm symposium principles programming languages new york usa acm press ungar smith self power simplicity pages oopsla conference proceedings programming systems languages applications acm press zenger odersky independently extensible solutions expression problem tech rept ecole polytechnique lausanne technical report
minimax estimation divergences discrete distributions yanjun han student member ieee jiantao jiao student member ieee tsachy weissman fellow ieee nov abstract refine general methodology construction analysis essentially minimax estimators wide class functionals finite dimensional parameters elaborate case discrete distributions support size comparable number observations specifically determine smooth regimes based confidence set smoothness functional regime apply unbiased estimator suitable polynomial approximation functional smooth regime construct general version maximum likelihood estimator mle based taylor expansion apply general methodology problem estimating divergence two discrete probability measures empirical data possibly large alphabet setting construct minimax estimators likelihood ratio upper bounded constant may depend support size show performance optimal estimator samples essentially mle samples estimator adaptive sense require knowledge support size upper bound likelihood ratio show general methodology results minimax estimators divergences well hellinger distance approach refines approximation methodology recently developed construction near minimax estimators functionals parameters entropy entropy mutual information distance large alphabet settings shows effective sample size enlargement phenomenon holds significantly widely previously established index terms divergence estimation divergence multivariate approximation theory taylor expansion functional estimation maximum likelihood estimator high dimensional statistics minimax lower bound ntroduction given jointly independent samples samples unknown common alphabet size consider problem estimating functional distribution following form continuous function note allowing solely depend problem generalizes functional estimation problem considered among fundamental functionals convex function serves fundamental information contained binary statistical models enjoys numerable applications information theory statistics among many focus estimation problem divergence paper general approach naturally extends hellinger distance divergence important measure discrepancy two discrete distributions defined otherwise denotes absolute continuity respect like entropy mutual information divergence key information theoretic measure arising naturally data compression communications probability yanjun han jiantao jiao tsachy weissman department electrical engineering stanford university usa email yjhan jiantao tsachy material paper presented part ieee international symposium information theory applications isita monterey usa theory statistics optimization machine learning many disciplines throughout paper use squared error loss risk function estimator defined maximum risk estimator minimax risk estimating defined rmaximum sup rminimax inf sup respectively given collection probability measures infimum taken possible estimators aim obtain minimax risk rminimax properly chosen notations sequences use notation denote exists universal constant equivalent notation equivalent notation means lim inf equivalent write min max moreover polyn denotes set polynomials degree denotes distance function space polydn uniform norm logarithms natural base background main results several attempts estimate divergence continuous case see references therein approaches usually operate minimax framework focus consistency rates convergence unless strong smoothness conditions densities imposed achieve parametric rate mean squared error discrete setting proved 
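In this notation the divergence and the two risks can be written explicitly as follows (a restatement of the standard definitions, with \hat{D} denoting a generic estimator and \mathcal{U} the collection of pairs (P,Q) under consideration):

D(P\|Q) = \sum_{i=1}^{S} p_i \ln \frac{p_i}{q_i} \quad \text{if } P \ll Q, \qquad D(P\|Q) = +\infty \quad \text{otherwise},

R_{\mathrm{maximum}}(\hat{D};\mathcal{U}) = \sup_{(P,Q)\in\mathcal{U}} \mathbb{E}\bigl(\hat{D} - D(P\|Q)\bigr)^2, \qquad R_{\mathrm{minimax}}(\mathcal{U}) = \inf_{\hat{D}} R_{\mathrm{maximum}}(\hat{D};\mathcal{U}).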
consistency specific estimators without arguing minimax optimality note discrete case alphabet size fixed number samples infinity standard cam theory classical asymptotics shows approach asymptotically efficient thm lemma key challenge face discrete setting regime support size comparable even larger number observations classical analyses address consider estimation divergence discrete distributions setting choice may appear natural allow distribution absolutely continuous respect alphabet size denotes set probability measures support size however case turns impossible estimate divergence minimax sense rminimax configuration intuitively observation multinomial model depends continuously divergence extremal points rigorous statement proof result given lemma appendix seems natural consider alternative uncertainty set bounded likelihood ratio upper bound likelihood ratio since results trivial case throughout assume constant main result paper follows theorem rminimax furthermore estimator section iii achieves bound poisson sampling model adaptive sense require knowledge following corollary direct consequence theorem note already implied thus corollary divergence estimator maximum mean squared error vanishes provided moreover maximum risk estimator divergence bounded away zero next consider approach context minimax since possible respective empirical probability distributions direct estimate kqn may infinity positive probability hence use following modification direct approach observe since naturally integral multiple manually change value closed lattice zero precisely define use estimator estimate divergence note may probability distribution case extended obvious way performance modified approach summarized following theorem theorem poisson sampling model modified estimator satisfies rmaximum moreover rmaximum following corollary minimum sample complexity immediate corollary mean squared error modified estimator vanishes hence compared mean squared error minimum sample complexity modified approach optimal estimator enjoys logarithmic improvement note negligible condition theorem counterpart corollary specifically performance optimal estimator samples essentially approach samples another manifestation effective sample size enlargement phenomenon note divergence example modified estimator essentially exploits idea submission work arxiv independent study problem presented isit without construction optimal estimator added full version appeared later arxiv specifically main result theorem also obtained differences first estimator agnostic support size upper bound likelihood ratio estimator requires second theorem unnecessary additional term lnms upper bound approach though minor difference choices estimator third significant dedicated exclusively divergence case paper propose general methodology estimation wide class functionals estimation divergence serving main example concrete illustration concepts additional examples following general recipe next subsection later analysis result estimating distance recovered hellinger distance otherwise similarly obtain following results optimal estimation rates setting theorem hellinger distance inf sup estimator section achieves bound without knowledge poisson sampling model theorem inf sup estimator section achieves bound without knowledge either poisson sampling model following corollaries minimum sample complexities follow directly previous theorems corollary hellinger distance exists estimator vanishing maximum mean squared error lnss corollary exists 
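For concreteness, the objects appearing in these statements can be written as follows; this is our reconstruction of the notation (in particular the value 1/n used for the empty entries and the normalization of the Hellinger distance are assumptions), with \hat{P}_m and \hat{Q}_n the empirical distributions:

\mathcal{U}_{S,u(S)} = \bigl\{ (P,Q) : |\mathrm{supp}(P)| \le S,\ |\mathrm{supp}(Q)| \le S,\ p_i \le u(S)\, q_i \ \text{for all } i \bigr\},

\hat{Q}_n'(i) = \hat{Q}_n(i) \ \text{if } \hat{Q}_n(i) > 0, \quad \hat{Q}_n'(i) = 1/n \ \text{if } \hat{Q}_n(i) = 0, \qquad \hat{D}_{\mathrm{plug}} = \sum_{i} \hat{P}_m(i) \ln \frac{\hat{P}_m(i)}{\hat{Q}_n'(i)} \quad (0\ln 0 := 0),

and the further divergences referred to in the Hellinger and chi-square theorems are, up to the normalization convention,

H^2(P,Q) = \sum_i \bigl(\sqrt{p_i} - \sqrt{q_i}\bigr)^2, \qquad \chi^2(P,Q) = \sum_i \frac{(p_i - q_i)^2}{q_i}.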
estimator vanishing maximum mean squared error approximation general recipe estimation divergence belongs large family functional estimation problems consider estimating functional parameter experiment recent wave study functional estimation high dimensional parameters scaled norm gaussian model shannon entropy mutual information power sum function entropy distance multinomial poisson models moreover effective sample size enlargement phenomenon holds examples performance minimax estimators samples essentially approach samples optimal estimators previous examples follow general methodology approximation proposed suppose consistent estimator number observations suppose functional everywhere except natural estimator know classical asymptotics lemma given benign lan local asymptotic normality condition asymptotically efficient asymptotically efficient estimation functionals discrete distributions probability simplex natural candidate empirical distribution unbiased following procedure conducted estimating classify regime compute declare regime close enough otherwise declare smooth regime estimate falls smooth regime use estimator similar estimate falls regime replace functional regime approximation gappr another functional estimated without bias apply unbiased estimator functional gappr simple may sound methodology drawbacks ambiguities recent work applied general recipe estimation distance two discrete distributions recipe proves inadequate estimation distance bivariate function segment considered completely different previous studies univariate function analytic everywhere except point always taken consideration particular two topics multivariate approximation localization via confidence sets introduced used question domain different usually larger domain question determine regime size question falls regime region gappr good approximation whole domain proper neighborhood question falls smooth regime construct estimator similar questions approximation gappr used answered detail among questions question relatively new one estimation divergence second example far arisen first example estimating support size discrete distribution explicitly propose answer question question partially addressed answer question changes view question elaborations also necessary question question previous approaches handle bias correction problems bias correction arbitrary order proved necessary answering questions begin formal definition confidence set statistical experiments motivated function analytic point taylor series converges neighborhood definition confidence set consider statistical model estimator confidence set significance level collection sets sup moreover every confidence set significance level also induce reverse confidence set significance level sup intuitively confidence set significance level observing conclude error probability precisely probability least get back based observing conversely probability least also restrict region words true parameter localized observation localized name localization via confidence sets originates note confidence set level exists statistical model estimator since always feasible confidence set level zero practice seek confidence sets small possible also remark apart confidence set used traditional hypothesis testing usually chosen fixed constant allow decay constant example binomial model measure concentration example binomial model measure concentration lemma appendix collection lemma appendix collection ifif confidence set significance level assuming universal 
large enough reverse theconfidence induced reverse set assuming thernuniversal constant constant large enough induced set confidence set contained contained ifif ynln ynln nln nln similar similar structure structure figure figure gives gives pictorial pictorial illustration illustration confidence confidence set set reverse reverse confidence confidence set set binomial gaussian models respectively binomial gaussian models respectively fig fig pictorial pictorial illustration illustration confidence confidence set set reverse reverse confidence confidence set set binomial binomial left left panel panel gaussian gaussian right panel models binomial model right panel models binomial model gaussian model gaussian model provide provide answers answers questions questions help help localization localization via via confidence confidence sets sets question consider region always stick question consider region always stick domain domain instead instead true parameter existence assume true parameter existence assume domain domain fact fact distinguish distinguish smooth smooth resp resp regime regime determine corresponding regimes first localize using since observed hence determine corresponding regimes first localize using since observed hence first first step step make make approach approach work work estimation estimation must must ensured ensured high high probability probability fall fall region region defined defined instead instead result result region domain correct region consider region domain correct region consider question first determine smooth regime let region standard theterminology previous answer testing question also set statistical confidence set level convergence rate constant depends specific problem usually negligible order compared minimax risk estimation problem help localization question first determine smooth regime let region previous answer question set sup sup sup convergence rate constant depends specific problem usually negligible order compared minimax risk estimation problem help localization via confidence sets set confidence set significance level fact thus result definition confidence set desired taking complement obtain regime since observe need determine smooth regime based rather natural choice given confidence set contains observations whose confidence set true parameter falls smooth regime likewise define regime based since easily seen well one problem observation attributed neither regime smooth regime solve problem expand little bit ensure form partition fact expansion done many statistical models satisfactory measure concentration properties multinomial poisson gaussian models specifically proper order negligible minimax risk exists confidence sets significance level respectively satisfy passing subsets matter note case must exists belongs smooth regime regime interpretation approach follows true parameter falls smooth regime plug approach work conversely true parameter falls regime approximation idea work implies exists intermediate regime approach approximation approach work falls regime intermediate regime unnecessary given partial information whether becomes important need infer partial information based target follows true parameter fall smooth resp regime high probability also declare based smooth resp regime mathematically high probability implies implies note falls intermediate regime either suffices estimator perform well key fact target fulfilled definition confidence sets definition implies result sup sup sup similarly hence successfully 
localize via confidence sets based true parameter likely belong declared regime based pictorial illustration idea shown figure question given confidence set satisfactory significance level observing always set approximation region note definition fact considerably smaller makes desirable regime approximate rather proved necessary reason sufficient follows definition confidence sets hence probability least approximation region based covers allows operate conditioned inside note order obtain good approximation iii fig pictorial explanation smooth regimes based respectively figure iii iii iii particular iii intermediate regime approximation approach performing well performance need find confidence set small possible depends statistical model question long history correcting bias mle based taylor expansion example entropy estimation one earliest investigations reducing bias mle entropy estimation due miller interestingly already observed carlton miller bias correction formula applied automatically satisfied belongs smooth regime lnnn defined result smooth regime miller idea used generalization smooth regime definition remains bounded high probability order hence shows miller approach based taylor expansion also used general however miller approach fails bias correction desired equivalently large see case take look procedure considered binomial random variable denote empirical frequency follows taylor theorem varp derivative hence estimator proposed follows however approach still used term previous estimator corrected based taylor expansion order achieve bias correction continuing approach correction still suffers problem additional corrections need done forth result previous fails generalized corrections successful approach avoid employing approach terms one way avoid approach follows instead taylor expansion near employ taylor expansion near advantage definition unbiased estimator however unknown rhs still prevents using estimator explicitly fortunately difficulty overcome standard sample splitting approach split samples obtain independent follow class distribution possibly different parameters remark sample splitting employed divisible distributions including multinomial poisson gaussian models discussed detail beginning section iii poisson models estimator unbiased estimator usually exists straightforward show estimator achieves desired order although many scenarios bias correction bias correction required smooth regime bias correction arbitrary order turns crucial recent work estimation nonparametric functionals also conjecture approach crucial construction minimax estimator entropy large alphabet setting address completely answers questions shed light detailed implementation general recipe give rise important concept localization via confidence sets leads propose refined approach denote set containing possible values estimator set let satisfactory confidence set classify regime true parameter declare regime close enough terms localization via confidence sets otherwise declare smooth regime compute declare regime confidence set falls nonsmooth regime otherwise declare smooth regime estimate falls smooth regime use estimator similar estimate falls regime replace functional regime approximation gappr another functional well approximates estimated without bias apply unbiased estimator functional gappr paper follow refined recipe construction optimal estimator estimating several divergences discrete distributions including divergence hellinger distance divergence discussed detail moreover 
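A minimal sketch of this expansion-around-an-independent-estimate device under Poisson sampling, in our notation (truncation order K, and \hat{Y} the estimate formed from the other half of the sample): if X \sim \mathrm{Poi}(np) is independent of \hat{Y}, the falling factorials (X)_r = X(X-1)\cdots(X-r+1) satisfy \mathbb{E}[(X)_r] = (np)^r, hence

\widehat{(p-\hat{Y})^j} \;=\; \sum_{r=0}^{j} \binom{j}{r} \frac{(X)_r}{n^r} (-\hat{Y})^{j-r}, \qquad \mathbb{E}\bigl[\widehat{(p-\hat{Y})^j} \,\big|\, \hat{Y}\bigr] = (p-\hat{Y})^j,

and the estimator \hat{f} = \sum_{j=0}^{K} \frac{f^{(j)}(\hat{Y})}{j!}\, \widehat{(p-\hat{Y})^j} has conditional bias equal to the order-(K+1) Taylor remainder of f at \hat{Y}, rather than the fixed low-order remainder left by the classical bias-corrected plug-in.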
estimation divergence encounter new phenomenon multivariate approximation polytopes highly topic approximation theory also propose general tool analyze risk approach help localization via confidence sets rest paper organized follows first analyze performance modified estimator prove theorem section section iii first follow general recipe explicitly construct estimator divergence step step show essentially achieves bound theorem adopt adapt tricks construct another estimator adaptive easier implement minimax lower bound estimating divergence proved section hellinger distance sketch construction respective minimax estimators section conclusions drawn section complete proofs remaining theorems lemmas provided appendices matlab code estimating divergence released http hjw erformance modified plug approach section give upper bound lower bound mean squared error via modified approach prove theorem throughout analysis utilize poisson sampling model component resp histogram resp distribution poi mpi resp poi nqi coordinates resp independent words instead drawing fixed sample sizes samples distributions sizes poi poi respectively consequently observed number occurrences symbol independent theorem note poisson sampling model essentially multinomial model minimax risks related via lemma appendix proof upper bounds recall empirical distribution modified modified estimator exact approach however observed quantity kqn close following natural estimator kqn view fact apply general method analyze performance approach construction obvious continuously differentiable coincides moreover since multiple differs hence may consider performance estimator estimating summarized following lemma lemma let poi poi independent var particular var hence lemma conclude kqn var kqn var combining two inequalities yields kqn kqn var kqn prove theorem remains compute difference definition kqn enm used fact sup moreover nqi nqi nqi used hence triangle inequality kqn kqn completes proof upper bound theorem proof lower bounds decomposition mean squared error prove squared term theorem serves lower bound suffices find note prove inequality based multinomial model obtain result poisson sampling model via lemma construction follows uniform distribution first recall next give lower bound term shall use following lemma approximation error bernstein polynomial corresponds bias multinomial model define bernstein operator follows lemma let even integer suppose derivative satisfies taylor polynomial order point since modification even differentiable lemma applied directly however consider following function instead construction obvious coincides moreover differs zero since lemma applied yield following lemma lemma since assumption ensures choice hence lemma concavity note used fact sup combination two inequalities yields hence combining gives gives desired remaining terms remark sup holds estimator thus modified estimator postpone proof section proof theorem complete iii onstruction ptimal stimator stay poisson sampling model section simplicity analysis conduct classical splitting operation poisson random vector obtain three independent identically distributed random vectors component distribution poi mpi coordinates independent coordinate splitting process generates random sequence tik tik pxi multinomial assign tik random variables tik conditionally independent given observation splitting operation similarly conducted poisson random vector simplicity denote remark splitting operation necessary implementation also note independent random 
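The splitting step rests on the standard thinning property of the Poisson distribution; in brief (the equal routing probabilities and the subsequent relabelling of the reduced sample sizes are taken here as notational conventions): if X \sim \mathrm{Poi}(\lambda) and, conditionally on X, each of the X counts is routed independently to one of three groups with probability 1/3 each, then the group totals are mutually independent with common distribution \mathrm{Poi}(\lambda/3); applied coordinatewise to the histograms, this yields the independent copies used in the construction below.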
variables poi poi proof fact refer withers example estimator construction apply general recipe construct estimator note entropy function hence optimal estimator entropy used remains estimate cross entropy target bivariate function first classify regime bivariate function entire parameter set function analytic everywhere except point possible values estimator points confidence set poisson model poi poi set universal constant use constant hence choosing respectively universal constant specified later get smooth regimes brevity omit superscripts ultimate smooth regimes given regime nln smooth regime otherwise next construct estimator regime first smooth regime estimator order becomes estimating note poisson model poi estimating smooth regime estimator becomes ensure suffices define additional value zero note sample splitting used simplicity analysis indeed necessary implementation also replace reduce variance consider case regime nln general recipe approximate approximation region given confidence set result distinguish regime two depending localization via confidence sets essentially equivalent approximation region given latter rectangle since hit zero approximation regime product consider best polynomial approximation regime result regime use estimator tns arg min max best polynomial approximation universal constant ensures approximation interval contain zero thus valid specified later note call regime regime approximation region given since may zero usual best polynomial approximation region work best polynomial approximation employed hence regime estimator tns arg min max best polynomial approximation note condition summation ensures estimator zero unseen symbols call regime regime summary following estimator construction estimator construction conduct sample splitting obtain samples estimator cross entropy constructed follows tns tns tns tns given respectively pictorial illustration three regimes estimator displayed figure estimation entropy essentially follow estimator specifically let lower part function smooth regime approach regime bias correction unbiased estimate best polynomial approximation regime unbiased estimate best polynomial approximation fig pictorial explanation three regimes estimator point falls smooth regime falls regime falls regime defined defined gets rid interpolation function compared upper part function defined entropy estimator defined finally overall estimator defined suitably chosen universal constants estimator analysis subsection prove estimator constructed achieves minimax rate theorem recall mean squared error estimator estimating decomposed squared bias variance follows var bias variance defined bias var respectively hence suffices analyze bias variance three regimes smooth regime first consider smooth regime true parameter belongs regime estimator employ approach whose bias corrected taylor expansion recall bias estimator expressed poi depends smooth bounded everywhere suffices consider upper bound derivative however reason approach version strictly suboptimal estimation functionals empirical entropy functional may points derivatives may unbounded hence direct application taylor expansion work general however smooth regime general recipe know high probability fall region thus sufficiently smooth segment connecting well controlled words bias upper bounded help localization via confidence sets motivated previous insights begin following general lemma lemma assume estimator let reverse confidence set level suppose function coincides whenever define sup sup 
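The unbiased estimates of the approximating polynomials rest on the factorial-moment identity for Poisson variables; writing a generic approximating polynomial as \sum_{k,l} a_{k,l}\, p^k q^l (the coefficient notation a_{k,l} is ours):

\mathbb{E}\,(M_i)_k = (m p_i)^k \ \text{ for } M_i \sim \mathrm{Poi}(m p_i), \qquad (M_i)_k = M_i(M_i-1)\cdots(M_i-k+1),

so \sum_{k,l} a_{k,l} \frac{(M_i)_k}{m^k} \frac{(N_i)_l}{n^l} is an unbiased estimate of the polynomial evaluated at (p_i,q_i), and its bias as an estimate of the target bivariate function on that regime is exactly the polynomial approximation error. Moreover, terms with k \ge 1 vanish on symbols with M_i = 0 and terms with l \ge 1 vanish when N_i = 0, which is why restricting the summation (equivalently, dropping the constant term) makes the contribution of unseen symbols zero.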
sup sup sup sup sup sup sup universal constant depends seen previous lemma upper bounds bias variance easy compute need calculate derivatives moments usually estimators moreover help localization via confidence sets bounds depend local behavior function require coincide plus negligible term corresponding event note major difficult part analysis plugin estimator whose proof quite lengthy four pages proof lemma requires explicit construction interpolation function estimator construction note lemma implicitly use following interpolation idea essentially condition event similar interpolate function using rectangle window prevent becoming infinity note interpolation done analysis construction estimator thus remark explicit interpolation indeed unnecessary given implicit interpolation localization via confidence sets following idea although follow idea bias correction result still easily recovered without explicit interpolation lemma let poi following inequalities hold var universal constant appearing lemma apply lemma analyze estimation performance estimator natural reverse confidence set given poisson model poi lemma know reverse confidence set level sup exp exp decay faster polynomial rate provided large enough special case simplify expressions lemma poi poi times differentiable function lemma let suppose function coincides whenever define exists universal constant depending sup var sup sup sup sup sup sup note due nice property poisson model previous lemma greatly simplifies expression involving unbiased estimate monomial functions moreover becomes power function summands lemma order magnitude thus merged one term interesting observation change case order bias estimator multiplied order since hence continuing bias correction approach improve bias approach desired logarithmic multiplicative factor next apply lemma estimator smooth regime satisfied previous property power lemma let poisson random variables poi poi poi poi independent moreover let small var universal constants given lemma lemma respectively large var particular previous bounds imply var var note variance bound given lemma result whose order coincides given classical asymptotics asymptotic variance leading term obtained easily via delta method use lemma analyze property overall estimator smooth regime vector representations respectively lemma let poi poi independent moreover let var regime next consider regime construction estimator tns approximation region contains true parameter bias estimator essentially product best polynomial approximation error previous approximation region approximation error easily obtained polynomial approximation lemma gives upper bound approximation error moreover note previous event occurs holds overwhelming probability confidence sets variance suffices bound variance term form poi complicated may seem present authors showed variance explicit expression poisson models charlier polynomial hence good tools analysis bias variance presented following lemma poisson random variables poi poi lemma let poi poi independent cln cln cln var cln given lemma particular lemma var corresponds polynomial approximation error become leading term bias regime consider regime estimator construction necessary deal best polynomial approximation emphasize polynomial approximation general multivariate case extremely complicated rice wrote theory chebyshev approximation best approximation functions one real variable understood time quite elegant fifty years attempts made generalize theory functions several variables attempts 
failed lack uniqueness best approximations functions one variable also show cause serious trouble polynomial achieves best approximation error used general methodology functional estimation relax requirement computing best approximation multivariate case merely analyze best approximation rate best approximation error multiplicative constant turns also extremely difficult ditzian totik chap obtained error rate estimate simple balls spheres remained open recently totik generalized results general polytopes results balls spheres readers referred dai still know little regimes beyond polytopes balls spheres complicated general multivariate case still possible solve problem since approximation region convex polytope review general theory polynomial approximation convex polytopes call closed set convex polytope convex hull finitely many points let direction unit vector continuous function define symmetric difference direction understanding direction intersects point belong moreover letting line define normalized distance simple polytope polytope vertex edges denoting set unit vectors define modulus smoothness follows sup sup sup significance quantity presented following lemma lemma let convex polytope constant depends hence lemma shows compute modulus smoothness immediately obtain upper bound best polynomial approximation error moreover lemma also shows essentially also lower bound case choosing lemma gives upper bound modulus smoothness moreover tracing back proof simple polytope case suffices take supremum directions parallel edge simple polytope makes evaluation much simpler position bound bias variance summarized following lemma lemma let poisson random variables poi poi poi independent var universal constant given lemma constant depends given lemma particular lemma lnnn exists universal constant var remark condition lnnn removed later construction adaptive estimator fact reason need condition use arbitrary best polynomial approximation unique general point subtle shown present authors polynomials achieve best uniform approximation error used construct estimator actually show next subsection special approximating polynomial achieve rate without condition moreover careful design approximating polynomial require different degrees instead fixing total degree yet unknown approximation theorists analyze corresponding approximation error general polytopes overall performance analyze performance entire estimator simplicity define tns tns given vector representation independent components similarly based current notations independence different symbols var var hence suffices analyze bias variance separately add based lemma lemma next lemma first analyzes bias variance tns lemma let poi poi independent moreover assume lnnn given lemma var tns tns var tns tns var tns note lemma condition natural requirement consistency optimal estimator view theorem lnnn additional condition lemma based lemma lemma analyze bias variance tns lemma let poi poi independent moreover assume lnnn given lemma var var var based lemma analyze total bias variance estimator differentiation maximum attained var hence var require previous results upper bounded let var hence come following theorem theorem let lnss general recipe sup estimator constructed moreover require knowledge support size adaptive estimator far obtained essentially minimax estimator via general recipe however since estimator purely obtained general method surprising also subject disadvantages firstly estimator specify explicit form best polynomial approximation 
regime although best polynomial approximation unique efficiently obtained via remez algorithm efficiently implemented matlab best polynomial approximation unique hard compute moreover remarked forces add unnecessary condition lnnn lemma thus theorem secondly although estimator require knowledge support size remove constant term polynomial approximation requires upper bound likelihood ratio design regime practice wish obtain adaptive estimator achieves minimax rate agnostic thirdly estimator construction regime approximating polynomial depends empirical probabilities store polynomials advance incurs large computational complexity resolve issues need apply tricks explicitly construct approximating polynomial regime first suppose exists polynomial degree desired approximation property entire regime need distinguish regimes remark either approximation one approximation entire regime always doable general example estimating distance shown single approximation entire regime always fail give correct order approximation error polynomial approximation also work small nevertheless ambitious target achieved special example motivated lemma lemma correct order approximation error satisfy sup since suffices find polynomial approximation satisfies desired pointwise bound however easy show exists polynomial deg sup hence remove constant term define desired property motivated previous observations construct explicit estimator follows estimator construction conduct sample splitting obtain samples adaptive estimator divergence given given respectively tns tns coefficients given best polynomial approximation follows arg sup parameters suitably chosen universal constants pictorial illustration displayed fig recall entropy estimator require knowledge conclude always sets zero unseen symbols depend words estimator agnostic thus adaptive moreover estimator easy implement practice computational complexity coefficients obtained offline via remez algorithm observing samples analyze performance lemma let poi poi independent random variables var regime unbiased estimate special polynomial approximation smooth regime approach bias correction fig pictorial explanation adaptive estimator constant depends particular lemma var note lemma removed condition lnnn lemma moreover since upper bounds bias variance presented lemma worse lemma lemma argument lemma lemma conclude adaptive estimator thereby satisfies theorem inimax ower ound section prove minimax lower bounds presented theorem two main lemmas employ towards proof minimax lower bound first cam method helps prove minimax lower bound corresponding variance equivalently classical asymptotics suppose observe random vector distribution let two elements let arbitrary estimator function based cam method gives following general minimax lower bound lemma sec following inequality holds inf sup exp second lemma method two fuzzy hypotheses presented tsybakov suppose observe random vector distribution let two prior distributions supported write marginal distribution prior let arbitrary estimator function based following general minimax lower bound lemma thm given setting suppose exist inf sup marginal distributions priors respectively total variation distance two probability measures measurable space concretely sup dominating measure proof achievability results previous sections observe corresponds squared bias term corresponds variance term sequel also prove minimax lower bound squared bias term variance term separately minimax lower bound variance first prove inf sup applying 
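Two elementary facts lie behind these adaptivity claims; they are stated here in our own notation, and the hypothesis that the target function vanishes at the origin is an assumption of the sketch rather than something verified at this point: if a polynomial P satisfies \sup_{x\in[0,\Delta]} |P(x) - f(x)| \le \epsilon and f(0)=0, then \tilde{P}(x) = P(x) - P(0) has no constant term and \sup_{x\in[0,\Delta]} |\tilde{P}(x) - f(x)| \le 2\epsilon, since |P(0)| = |P(0)-f(0)| \le \epsilon. For a polynomial without constant term, the Poisson-unbiased estimate \sum_{k\ge 1} a_k (M_i)_k / m^k is identically zero on symbols with M_i = 0, so the resulting estimator involves only observed symbols and requires no knowledge of the support size S.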
lemma poisson sampling model poi nqi know fix feasible poi nqi kpoi poi nqi npi markov inequality yields poi mpi poi nqi kpoi mpi poi inf sup exp operating poisson sampling model fix specified later letting inf sup without loss generality assume odd integer direct computation yields hence choosing know inf sup poisson sampling model result multinomial case obtained via lemma next apply lemma show sup inf fix consider specified later argument sup exp inf poisson sampling model straightforward compute combining inequalities setting completes proof minimax lower bound squared bias employ lemma prove minimax lower bounds corresponding squared bias terms first show lnss inf sup fact choosing uniform distribution estimation divergence reduces estimation entropy since proof discrete distribution subject additional constraint minimax lower bound satisfy lnmm assumption implies additional condition automatically satisfied hence operate additional condition gives inf sup used condition lnss give prove lnss inf sup begin lemma construct two measures matching moments large difference functional value corresponds duality function space measure space lemma lemma lemma bounded interval positive integer continuous function exist two probability measures supported trl recall distance uniform norm function space spanned based lemma choose universal constants specified later following lemma presents lower bound approximation error lemma constant depends define construction nln two fuzzy hypotheses lemma constructed follows fixes assigns vector fixes note assumption thus takes positive value thus valid proper parameter configurations moreover straightforward verify probability one since may form probability measure consider set approximate probability vectors parameter specified later define minimax poisson sampling model estimating inf sup equivalence defined established following lemma lemma exp exp condition event define conditional probability distribution setting var lemma var hence union bound eic denote marginal probability prior respectively triangle inequality lemma moreover definition first two conditions lemma hold hence lemma conclude desired bound follows lemma hence combination yields inf sup proof theorem complete ptimal stimators ellinger istance ivergence analyzed minimax estimator divergence thoroughly section apply general recipe divergence functions hellinger distance specifically explicitly construct minimax estimators hellinger distance sketch proof achievability part theorem brevity omit complete proof remark obtained similar fashion analysis divergence optimal estimator hellinger distance hellinger distance bivariate function interest first classify regime case regime based confidence sets poisson models smooth regimes obtained via universal constant obtain smooth regimes based observations next estimate quantity regime smooth regime nln simply employ approach bias correction regime symmetry suffices consider case need find proper polynomial approximate recall degree approximating polynomial determined tradeoff following result careful analysis resulting polynomial degree degree resulting polynomial degree degree explicitly give expression cases terms following best approximating polynomial arg min max recall use sample splitting technique determine approximation region approximate functional respectively approximation region based polynomial evaluated using hence nln approximation region degree requirements natural choice qkm approximation region degree requirements natural choice qkm 
qkn summary estimator constructed follows estimator construction conduct sample splitting obtain samples estimator hellinger distance given coefficients given suitably chosen universal constants previous estimator require knowledge since assigns zero unseen remove constant symbols term expression note since hellinger distance enjoys natural separation variables pair also separated resulting estimator moreover estimator construction also merge result sample splitting analyze bias previous estimator small lemma know qkm large lemma know sup hence total bias upper bounded shown theorem variance also obtained similar fashion omit details optimal estimator bivariate function interest first classify regime since function shares similar analytic properties function used divergence case parameter set remains smooth regimes also given universal constant next estimate regime smooth regime nln seek correct bias estimator estimating based general bias correction technique simply use following bias correction since admits unbiased estimate poisson model poi overall estimator smooth regime given regime divergence case distinguish regime regime employ best polynomial approximation regimes respectively however motivated adaptive estimator divergence single polynomial approximation enough wonder whether also case estimation specifically seek polynomial degree following quantity sup small possible words seek approximate linear function using fortunately task done help chebyshev polynomial summarize result following lemma lemma let cos arccos chebyshev polynomial polynomial sup hand using modulus smoothness lemma chebyshev alternating theorem since satisfies haar condition hard prove polynomial achieve approximation error order hence polynomial defined lemma achieves uniform approximation error motivated lemma define another polynomial poi regime choose use following estimator summary arrived following estimator construction estimator construction conduct sample splitting obtain samples estimator given given respectively suitably chosen universal constants construction previous estimator require knowledge thus adaptive analysis performance nln small help lemma know moreover large applying lemma yields sup hence total bias previous estimator upper bounded coincides term theorem variance dealt analogously omit lengthy proofs onclusions uture ork proposed general detailed methodology construction minimax estimators functionals parameters especially functional interest part domain elaborate insights shows bias dominating term estimation functionals approximation key efficient bias reduction find interesting interplay functional statistical model specifically show smooth regimes determined nonanalytic region underlying functional related smoothness functional confidence sets given concentration measures solely depend statistical model moreover regime approximation region determined confidence sets approximation error determined smoothness functional region general recipe based interplay two factors successfully yields minimax estimators various divergences including divergence hellinger distance also explored ideas behind polynomial approximation approach bias reduction polynomial approximation uniform approximation error corresponds bias resulting estimator thus best approximating polynomials usually used remark highly task remains open general obtain analyze best polynomial approximation error multivariate functionals special cases general polytopes balls spheres powerful tools approximation theory approach 
corrects bias help taylor expansions works region functional analytic bias correction approach paper propose general unbiased estimator taylor series arbitrary order following paper presents another second step towards general theory functional estimation despite progress interplay smoothness functional statistical model yet completely revealed choice approximating polynomial regime thus far required tricks ambitious worthy goal establish general explanation effective sample size enlargement phenomenon parametric case find counterpart estimation nonparametric functionals beyond insights provided ppendix auxiliary emmas first prove mean squared error estimator infinity allow choose absolutely continuous respect lemma let configuration rminimax next lemma relates minimax risk poisson sampling model multinomial model define minimax risk multinomial model observations estimating divergence inf sup emultinomial counterpart poisson sampling model inf sup epoisson lemma minimax risks poisson sampling model multinomial model related via following inequalities exp exp next lemma gives approximation properties lemma exists universal constant cln cln cln ean lemma region given exists universal constant depending following lemma gives upper bound second moment unbiased estimate poisson model lemma suppose poi estimator unique unbiased estimator second moment given assuming stands laguerre polynomial order defined order bound coefficients best polynomial approximations need following result qazi rahman thm maximal coefficients polynomials finite interval lemma let polynomial degree bounded modulus corresponding coefficient bounded modulus corresponding coefficient chebyshev polynomials first kind moreover shown cai low lemma coefficients chebyshev polynomial upper bounded hence obtain following result approximation interval centered zero lemma let polynomial degree following lemma gives tail bounds poisson binomial random variables lemma exercise poi following lemmas deal upper bound variance different scenarios lemma independent random variables finite second moment var var var var var lemma lemma suppose indicator random variable independent var var var lemma lemma two random variables var var var particular random variable constant var var ppendix roof emmas proof lemma first give upper bound var poi note continuously differentiable var sup sup previous steps used lemma fact ecn ready bound bias independence triangle inequality pgn qgn bound two terms separately first term obtained similar via modulus smoothness defined second term first note qgn egn egn hence inequality previous bound var var combination two inequalities yields bias bound pgn together yields desired bias bound next bound variance follows var bound separately bound decompose sup sup upper bounding requires delicate analysis first note differentiation respect sup sup hence expanding expectation yields infinite sum converges hence upper bounded since proved independence clear used fact combination upper bounds yields var proof complete proof lemma braess sauer prop showed following equalities bernstein polynomials hence choosing desired inequality direct result lemma proof lemma first statement define remainder term taylor expansion denote event definition reverse confidence sets sup sup sup sup sup sup variance first note triangle inequality sup hence suffices upper bound note linear combination terms form employ triangle inequality reduce problem bounding total variance bounding variance individual terms independence lemma suffices 
upper bound respectively fact defining taylor expansion sup sup sup sup establishes desired variance bound finally remains bound quantity triangle inequality conclude sup sup sup sup note conduct taylor expansion yield sup sup sup sup sup desired proof lemma replacing adopt notations denote event sup sup sup used since satisfies previous inequality desired variance since constant bias correcting term affect variance applying lemma yields var proof lemma part deducing third inequality lemma prove fact since exp exp exp exp hence comparing coefficient sides yields even expressed coefficients even desired inequality follows assumption odd inequality yields hence third inequality follows first inequality also follows fact lemma remains deduce second inequality lemma know linear combination constant coefficients triangle inequality variance suffices upper bound variance individual term using approach based moment generating function conclude varq result varq thus varq finally suffices note varq completes proof second inequality proof lemma invoke lemma remark noting thus independence moreover var last step used fact also bound bias variance small large respectively defined shown lemma first note note constant corresponds constant paper applying lemma esk var var esk thus hence total bias upper bounded total variance lemma used obtain var var var desired variance bound follows triangle inequality var var var lemma triangle inequality desired bias bound variance triangle inequality gives var bound two terms separately recall lemma gives var difference two quantities upper bounded hence triangle inequality lemma employed obtain var var desired variance bound follows upper bounds rest results observation since proof lemma simplicity define lemma bias upper bounded variance lemma var var var desired var var proof lemma denote event lemma exp exp note conditioning event approximation region contains first analyze variance var tns note conditioning tns construction best polynomial approximation used since lemma increasing function conditioning approximation error upper bounded cln sup cln note conclude cln apply lemma bound coefficient lemma yields conditioning hence triangle inequality tns evaluate expectation lemma applied yield thus tns differentiation easy show note cln tns cln cln cln together variance bound start analyze bias triangle inequality tns tns tns bound separately since conditioning approximation region contains get cln since random variable finite second moment get tns cln combining completes proof proof lemma first bound variance tns recall tns first bound coefficients straightforward see sup sup distinguish two cases case lnnn hence lemma conclude using lemma case define lemma conclude nln lemma hence combining two cases yields moreover lemma triangle inequality previous inequalities tns hence lemma get var var tns tns desired variance bound bias lemma lemma give obtained setting sup hence triangle inequality get tns desired proof lemma distinguish three cases based different values simplicity define case first consider case triangle inequality bias decomposed tns used lemma lemma similarly lemma variance upper bounded var tns var var var case next consider case bounded lemma lemma bias upper tns variance obtained lemma follows var tns var var last step used case iii finally consider case lemma lemma tns variance given lemma var combination three cases completes proof var tns var var proof lemma proof lemma also distinguish three cases case first consider case lemma lemma used fact 
similarly lemma variance upper bounded var var tns var etns var tns case next consider case lemma lemma variance lemma used yield var var tns var var tns used fact case iii finally come case lemma lemma variance bound obtained similar fashion via lemma var var var tns etns combining three cases yields desired result proof lemma first analyze variance lemma know exists constant triangle inequality result lemma hence triangle inequality lemma var tns tns lemma var var tns bias construction hence triangle inequality get desired tns proof lemma fix let estimator multinomial model upper bound likelihood ratio note estimator obtains sample sizes observations definition sup emultinomial given let poi mpi poi nqi write use estimator estimate note conditioned multinomial psp similarly moreover implies psq construction triangle inequality exp exp used lemma result follows arbitrariness proof lemma properties chebyshev polynomials well studied particular chebyshev polynomial even function takes form even polynomial degree since triangle inequality desired result follows variable substitution ppendix roof auxiliary emmas proof lemma fix arbitrary possibly randomized estimator denote possibly randomized decision made conditioning event first symbols symbols second symbols symbols observed note probability measure choose specified sequel hence probability least event holds thus sup result denote inf median choosing exp yields sup letting yields desired result proof lemma similar proof lemma show poi poi represent bayes error given prior multinomial model poisson sampling model respectively one hand poi poi poi poi used markov inequality get poi poi hand note whenever poisson tail bound lemma also poi poi poi poi poi poi exp exp exp exp minimax theorem taking supremum priors yields desired result proof lemma apply general approximation theory convex polytopes case interval note polynomial scaling suffices consider function defined case modulus smoothness reduced sup evaluation write definition taylor expansion differentiation easy show maximum hence attained corresponding maximum conclude concavity yields pair must satisfy one following start case case still holds maximum attained corresponding inequality becomes note inequality requires hence also holds case left case result summary obtain lemma follows previous upper bounds lemma proof lemma suffices prove claim following triangle containing denote three edges triangle excluding endpoints since continuous function respect compact set assume achieves supremum let intersection line passing direction triangle either belongs say sufficiently small line connecting resp intersects direction resp hence similarity relation geometry equal similarly others hence linearity always perturb intersect assume linear zero function becomes hence modulus smoothness proof lemma used distinguish two cases taylor expansion sup otherwise since shown scaling combination previous three inequalities yields used desired result follows directly choosing proof lemma first assume write lemma related discussions know comparing coefficients yields general case desired result obtained scaling proof lemma straightforward show var var var var var var var desired eferences jiao venkat han weissman minimax estimation functionals discrete distributions information theory ieee transactions vol csisz measures difference probability distributions indirect observations studia sci math vol liese miescke statistical decision theory statistical decision theory springer cover thomas elements 
information theory new york wiley tsybakov introduction nonparametric estimation kullback leibler information sufficiency annals mathematical statistics vol shannon mathematical theory communication bell system technical journal vol catoni picard statistical learning theory stochastic optimization ecole springer science business media vol csiszar information theory coding theorems discrete memoryless systems cambridge university press sanov probability large deviations random variables united states air force office scientific research kullback information theory statistics courier corporation dempster laird rubin maximum likelihood incomplete data via algorithm journal royal statistical society series methodological bishop pattern recognition machine learning vol kingma welling variational bayes arxiv preprint wang kulkarni divergence estimation continuous distributions based partitions information theory ieee transactions vol lee park estimation divergence local likelihood annals institute statistical mathematics vol gretton borgwardt rasch smola kernel method advances neural information processing systems divergence estimation continuous distributions information theory isit ieee international symposium ieee wang kulkarni divergence estimation multidimensional densities distances information theory ieee transactions vol nguyen wainwright jordan estimating divergence functionals likelihood ratio convex risk minimization information theory ieee transactions vol cai kulkarni universal divergence estimation sources information theory ieee transactions vol zhang grabchak nonparametric estimation divergence neural computation vol van der vaart asymptotic statistics cambridge university press vol jiao han weissman minimax estimation distance ieee international symposium information theory isit ieee zou liang veeravalli estimation divergence distributions ieee international symposium information theory isit ieee estimation divergence optimal minimax rate arxiv preprint cai low testing composite hypotheses hermite polynomials optimal estimation nonsmooth functional annals statistics vol valiant valiant power linear estimators foundations computer science focs ieee annual symposium ieee valiant valiant estimating unseen improved estimators entropy properties advances neural information processing systems yang minimax rates entropy estimation large alphabets via best polynomial approximation arxiv preprint acharya orlitsky suresh tyagi complexity estimating soda yang chebyshev polynomials moment matching optimal estimation unseen arxiv preprint han jiao mukherjee weissman optimal estimation norm functions gaussian white noise preparation miller note bias information estimates information theory psychology problems methods vol carlton bias information psychological bulletin vol nemirovski topics ecole vol mitzenmacher upfal probability computing randomized algorithms probabilistic analysis cambridge university press jiao venkat han weissman theory rule functional estimation arxiv preprint braess sauer bernstein polynomials learning theory journal approximation theory vol tsybakov aggregation statistics lecture notes course given url http crest notes sflour pdf vol withers bias reduction taylor series communications methods vol peccati taqqu facts charlier polynomials wiener chaos moments cumulants diagrams springer rice tchebycheff approximation several variables transactions american mathematical society ditzian totik moduli smoothness springer totik polynomial approximation polytopes memoirs 
american mathematical society vol dai approximation theory harmonic analysis spheres balls springer remez sur des dapproximation comm soc math kharkov vol chebfun version chebfun development team http haar die minkowskische geometrie und die stetige funktionen mathematische annalen vol lepski nemirovski spokoiny estimation norm regression function probability theory related fields vol jiao han weissman minimax estimation divergence functions preparation qazi rahman coefficient estimates polynomials unit interval serdica math vol mitzenmacher upfal probability computing randomized algorithms probabilistic analysis cambridge university press mason handscomb chebyshev polynomials crc press wald statistical decision functions wiley
| 10 |
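The row ending above walks through the two-regime construction of a minimax estimator for the Hellinger distance under the Poisson sampling model, but its equations are flattened in this dump. The block below restates, in standard notation, the two facts the construction rests on: the definition of the squared Hellinger distance and the unbiased estimability of polynomials under Poisson sampling. The constants c and c' in the closing remark are illustrative, not values taken from the row.

```latex
% Squared Hellinger distance between P=(p_1,\dots,p_S) and Q=(q_1,\dots,q_S):
H^2(P,Q) \;=\; 1-\sum_{i=1}^{S}\sqrt{p_i q_i}
        \;=\; \frac{1}{2}\sum_{i=1}^{S}\bigl(\sqrt{p_i}-\sqrt{q_i}\bigr)^{2}.

% Unbiased polynomial estimation under the Poisson sampling model:
% if X \sim \mathrm{Poi}(np), then for every integer j \ge 1
\mathbb{E}\bigl[X(X-1)\cdots(X-j+1)\bigr] \;=\; (np)^{j},

% so every polynomial in p admits an unbiased estimator. This is what makes the
% non-smooth regime implementable: for \hat{p} \lesssim c\log n/n one estimates the
% best polynomial approximation (degree \asymp c'\log n) of the non-smooth factor
% without bias, and elsewhere uses the plug-in estimator with a Taylor-type bias
% correction.
```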
may radio transformer networks attention models learning synchronize wireless systems timothy shea latha pemula dhruv batra charles clancy virginia tech arlington oshea virginia tech blacksburg lpemula virginia tech blacksburg dbatra virginia tech arlington tcc introduce learned attention models radio machine learning domain task modulation recognition leveraging spatial transformer networks introducing new radio domain appropriate transformations attention model allows network learn localization network capable synchronizing normalizing radio signal blindly zero knowledge signal structure based optimization network classification accuracy sparse representation regularization using architecture able outperform prior results accuracy signal noise ratio identical system without attention however believe attention model implication far beyond task modulation recognition attention models recently gaining widespread adoption computer vision community number important reasons introduce learned model attention capable removing numerous variances parametric search spaces input data focuses task extracting canonical form attention patch variations removed make downstream tasks easier lower complexity first introduced recurrent networks quite expensive made significant progress since transformer networks radio communications software radio cognitive radio deep learning convolutional autoencoders neural networks machine learning attention models spatial transformer networks synchronization radioml signal processing ntroduction cognitive radio signal processing general long relied relatively well defined set expert systems expert knowledge operate unfortunately realization cognitive radio greatly limited ability systems generalize perform real learning adaptation new unknown signals tasks approaching signal recognition synchronization reasoning feature learning angle seek able build cognitive radio systems truly generalize adapt without running barriers expert knowledge many current day solutions address narrowly scoped problems prior work looked application deep convolutional neural networks task modulation recognition blind feature learning time domain radio signals able achieve excellent classification performance low high snr learning time domain features directly dataset harsh channel impairments oscillator drift clock drift fading noise however notion attention work instead forced discriminative network learn features invariant channel effects communications receivers many iterative expert modulation classification algorithms typically perform synchronization signal performing additional signal processing steps synchronization thought form attention estimates time frequency phase sample timing offset order create normalized version signal figure generalized transformer network architecture spatial transformer networks stns recently introduced provide end end model attention trained directly loss training example compactly evaluated new samples consist trained localization network performs parameter regression fixed parametric transform operation trained discriminative classifier select class estimate image domain far applied affine transform used extract attention patch shifted scaled rotated original image according parameter vector generalized figure work propose radio transformer network rtn leverages generalization stn architecture introduces specific parametric transforms attention model used learn directly synchronize wireless systems enables modulation recognition system outperform 
attentionless version assisting normalization received signal prior classification constructing normalized received signal attention model greatly simplify task discriminative network relaxing requirements various variations received signal must recognize reducing complexity increasing performance necessary discriminative network important result modulation recognition also widely radio communications signal processing demonstrates learn synchronize rather relying expert systems estimators derived costly analytic process believe attention models play important role forthcoming machine learning based signal processing systems earning lassify ignals figure original convnet performance expert statistics believe correction fading equalization potential also addressed attention model either jointly subsequently transformations addressed timing recovery figure original convnet architecture without attention prior work compare supervised learning using deep convolutional network expert features handful widely used machine learning techniques expert signal amplitude phase envelope moments used architecture shown figure using convolutional frontend dense backend softmax categorical training using synthetic dataset figure summarize results experiment without attention model demonstrating significant improvement moment based features conventional classifiers exciting result demonstrates feature learning raw data work case working better conventional widely used expert features iii sing attention ynchronize effectively synchronize wireless signal must develop transform right parameters able correct channel induced variation within scope paper consider channel variation due time offset time dilation frequency offset phase offset effects exist real system containing transmitters receivers whose oscillators clocks locked together address problem fading timing recovery relatively straightforward processes involving input signal correct starting offset sampling increment much akin extraction visual pixels correct offset affine transformation treat directly leveraging affine transformation used image domain represent data image two rows containing columns containing samples time full affine transformation allows translation rotation scaling given element parameter vector restrict translation scaling time dimension simply introduce following mask readily use affine transform implementations image domain phase frequency offset recovery phase frequency offset recovery task immediate analogue vision domain however transform signal processing relatively straightforward simply mix signal complex sinusoid proper initial phase frequency defined two new unknown parameters exp directly implement transform new layer keras top theano tensorflow cascade affine transform transformer module network parameter estimation task synchronization becomes task parameter estimation values passed transformer module experimentally try number different neural network architectures performing parameter regression task ultimately introduce two new domain appropriate layers keras help assist estimation complex convolution layer complex neural networks widely used still faced theoretical issues especially automated differentiation represent signal two row matrix real component first row imaginary component second row theory real valued convolutions neural network learn relationship components extent introducing complex convolution operation simplify learning task ensure learn filter properties used working complex valued input vector 
size define weight vector complex filters length may compute output output values conv conv conv conv allows leverage existing highly optimized real convolution operations obtain differentiable operation trained training details train network using keras top theano tensorflow using nvidia geforce titan inside digits devbox use dropout layer regularization adam method stochastic gradient descent fit network parameters training set train batch size initial learning rate train roughly epochs reducing learning rate half validation loss stops decreasing training takes hours titan gpu feedforward evaluation signal classification takes less data set ethodology classifier performance evaluation work prior work leverage open source dataset perform split train test sets consists modulations digital analog varying snr levels random walk simulations center frequency sample clock rate sample clock offset initial phase well limited fading believe critically important test real channel effects early ensure realistic assumptions early models dataset labeled snr performing supervised training use labels training set evaluate classification accuracy performance snr label step test set training validation loss along learning rate shown throughout trainin figure complex power phase creating differentiable cartesian polar operation makes easier network operate directly input phase magnitude slightly involved compute magnitude squared simply pow pow phase computation use simplified differentiable approximation without conditionals implemented keras top theano tensorflow network architecture evaluate dozens localization network architectures slight variation dense connections convolutional layers layers using various activation functions achieve best performance using composite network shown figure uses complex convolutional layer complex polar layers within localization network identical discriminative network one used without attention front comparison figure training loss learning rate lassification erformance evaluating performance rtn test set obtain slightly increased performance model without attention similar accuracies obtained slightly lower snr values high snr performance slightly improved stable shown figure suspect complexity discriminative network could reduced due lower complexity normalized signal investigate work fair comparison discriminative network figure radio transformer network architecture figure radio transformer network performance performance convolutional neural network without attention also improved prior work increasing dropout better learning rate policy match used rtn training reflected figure figure performance rtn high snr figure density plots input constellations attention earning erformance shown classification performance improved using radio localization network extract normalized patches dataset however look classification performance also interesting look radio sample data transformation observe normalization occurred difficult visualize exactly occured looking time domain data yield clean obvious performance improvement data upon attempting plot qpskclass signals clear synchronizaiton point still horribly noisy partial however plot constellation density test examples range time samples shown figure start see bit density forming around constellation points started good sign start clearly much work needs done improve quantify synchronization performance real reason expect perfect synchronization classification task enough normalization make things easier discriminative 
network continue investigate area additional tasks modulation recognition may improve synchronization properties demodulation beyond achieved vii onclusions developing feed forward model radio attention demonstrated effectively learn synchronize using deep convolutional neural networks domain specific transforms layer configurations normalizing time frequency phase offsets using learned estimators effectively improve modulation classification performance requires expert knowledge signals interest train training complexity network high execution actually quite compact viable use deployment platforms highly parallel low clock rate gpgpu architectures enable deployment algorithms exceptionally well suited acknowledgments authors would like thank bradley department electrical computer engineering virginia polytechnic institute state university machine learning perception group hume center darpa generous support work research developed funding defense advanced research projects agency darpa mto office grant views opinions findings expressed author interpreted representing official views policies department defense government eferences clancy hecker stuntebeck shea applications machine learning cognitive radio networks wireless communications ieee vol bergstra breuleux bastien lamblin pascanu desjardins turian bengio theano cpu gpu math expression compiler proceedings python scientific computing conference scipy oral presentation austin jun kingma adam method stochastic optimization arxiv preprint mnih heess graves recurrent models visual attention advances neural information processing systems srivastava hinton krizhevsky sutskever salakhutdinov dropout simple way prevent neural networks overfitting journal machine learning research vol abadi agarwal tensorflow machine learning heterogeneous systems software available online available http chollet keras https jaderberg simonyan zisserman spatial transformer networks advances neural information processing systems shea corgan convolutional radio modulation recognition networks corr vol online available http arxiv org abs
| 3 |
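The row ending above builds its radio transformer network from two domain-specific, differentiable operations: a complex convolution realized with four real convolutions, and a de-rotation that mixes the signal with a complex sinusoid parameterized by the estimated frequency and phase offsets. Below is a minimal NumPy sketch of both; it is not the authors' Keras/Theano implementation, and the function names and the sampling-rate argument fs are mine.

```python
import numpy as np

def complex_conv1d(x, h):
    """Complex convolution of a 2xN signal with a 2xK filter, row 0 holding the
    real part and row 1 the imaginary part, via four real convolutions:
    (a + jb) * (c + jd) = (ac - bd) + j(ad + bc)."""
    rr = np.convolve(x[0], h[0], mode="same")
    ii = np.convolve(x[1], h[1], mode="same")
    ri = np.convolve(x[0], h[1], mode="same")
    ir = np.convolve(x[1], h[0], mode="same")
    return np.stack([rr - ii, ri + ir])

def derotate(x, freq, phase, fs=1.0):
    """Undo a frequency/phase offset by mixing the 2xN signal with
    exp(-j(2*pi*freq*n/fs + phase))."""
    n = np.arange(x.shape[1])
    z = (x[0] + 1j * x[1]) * np.exp(-1j * (2 * np.pi * freq * n / fs + phase))
    return np.stack([z.real, z.imag])
```

In the full network these would be layers whose parameters (filter taps, freq, phase) are regressed by the localization network and trained end to end through the classification loss.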
analysis planar ornament patterns via motif asymmetry assumption local connections adanova oct department computer engineering middle east technical university ankara abstract planar ornaments wallpapers regular repetitive patterns exhibit translational symmetry two independent directions exactly distinct planar symmetry groups present fully automatic method complete analysis planar ornaments groups specifically groups called cmm pgg given image ornament fragment present method simultaneously classify input one groups extract called fundamental domain minimum region sufficient reconstruct entire ornament nice feature method even given ornament image small portion contain multiple translational units symmetry group well fundamental domain still defined contrast common approach attempt first identify global translational repetition lattice though presented constructions work quite wide range ornament patterns key assumption make perceivable motifs shapes repeat alone provide clues underlying symmetries ornament sense main target planar arrangements asymmetric interlocking shapes symmetry art escher keywords ornaments wallpaper groups mosaics regular patterns escher style planar patterns introduction planar ornaments wallpapers repetitive patterns exhibit translational symmetry two independent directions form tiling plane created repeating base unit predictable manner using four primitive planar geometric operations translation rotation reflection glide reflection fig using combinations primitive operations applied base unit different patterns generated interesting fact four primitive operations combined exactly seventeen different ways tile plane forming called lane symmetry groups present illustrative example fig firstly observe pattern fig generated replicating equilateral triangular corresponding author email addresses venera adanova stari tari preprint submitted elsevier figure primitive operations translation reflection glide reflection types rotational symmetry fragment depicted fig rotations systematic manner equilateral triangle fragment smallest fragment pattern sufficient construct entire pattern fig using four primitive operations reoctober letter stands centered cell primitive cell unit cell centers highest order rotation vertices centered cell encountered two cases cmm symmetry groups chosen reflection axis normal one sides cell digit follows letters indicates highest order rotation whereas letter characters respectively stand mirror glide reflections two positions containing either understood first reflection normal axis second angle digit denoting highest order rotation symmetry group take values restriction introduced crystallographic restriction theorem states patterns repeating two dimension exhibit rotations present paper given image fragment ornament belonging either mentioned groups present robust method extract fundamental domain along underlying symmetry operations provide complete analysis ornamental pattern groups based image important feature computational scheme even given input image small portion full unit cell fit image fig symmetry group well fundamental domain still defined course symmetry group fundamental domain defined becomes trivial deduce translational unit cell remark existing methods rely first discovering underlying lattice via translational repetition structure requires global calculations autocorrelations contrast search translational repetition lattice directly look local connections among motifs protiles deduce symmetry clues later integrated via 
decision tree using decision tree indeed classical method grouping tiles symmetry groups based individual clues classical decision tree depicted fig observe check mirror reflections dominate question set rational behind given question sequence could easy humans spot mirror reflection rotational symmetries much harder spot glides true robustness computational case might figure example ornament image fundamental domain unit cell symmetries relation symmetries superimposed ornament image ferred fundamental domain secondly rotating twice around top corner rotating around middle point base yields rhombus depicted fig smallest translational unit pattern generated simply translating along two independent directions hence referred unit cell abstracted unit cell tile relation fundamental domain unit cell along symmetries shown fig blue hexagons four corners rhombus indicate rotation centers red triangles pink diamonds located centers side midpoints two triangles forming rhombus respectively indicate rotation centers final illustration fig sample cell shown superimposed original ornament image common naming provided crystallographic notation presented example happens belong group called indicating six fold rotations crystallographic notation remaining groups named pmm pmg pgg cmm fig depict cell structures symmetry groups interested group name character position defines group property first position either letter stands primitive cell figure unit cell structures wallpaper groups darker regions indicate fundamental domains groups pmm pmg gives groups listed groups goal obtain fundamental domain robustly mirror reflection accumulating symmetry clues resort local connections among motifs protiles many classical ornaments found islamic art constructed symmetric protiles stars provide clue symmetry group tile works symmetries protiles used clue symmetry however believe inferring symmetries motifs robust due possible noise motif extraction moreover ornament artist may using nearly symmetric motifs reflect symmetries ornament several examples exist escher art hence neither attempt recover individual motifs correctly check symmetries indeed even expect patterns analyze contain sufficient number asymmetric motifs least significant symmetries completely swallowed motifs symmetry motif asymmetry assumption restrictive may strike first consider two valid examples shown fig first one group contains exactly three symmetric protiles brown beige respectively rotational symmetries ornament contain asymmetric motif three types rotation centers coincide motif centers nevertheless local relations among rotational symmetric gray motifs reveal sixfold rotational centers providing figure example case pattern repetitions original ornament fundamental domain unit cell black quadrilateral observe ornament fragment contain full unit cell propose alternative decision tree details given initially accumulate indirect clues mirror reflection opposed searching mirror reflection axis postpone mirror reflection control till last stage last stage use mirror reflection check predicted axes eliminate possible false alarms resulting indirect clues use mirror reflection check eliminate false alarms catch missed ones means willing sacrifice mirror reflections order wrongly assume tile mirror reflection rational follows miss mirror reflection say classify tile still get correct unit cell redundant twice size fundamental domain hence pattern correctly generated however falsely classify tile extracted fundamental domain sufficient falsely 
assume existence mirror reflection consequence searching mirror reflection recognize groups contain sufficient symmetries mirror leaves figure classical decision tree goal revealing social structures interaction via dominant symmetries used ornament designs individual cultures geographical regions mathematics ornament patterns studied terms groups formed symmetry operations examples include dutch artist escher took particular interest patterns formed repeating asymmetric shapes discovered local structure leading wallpaper patterns work symmetry examined regular repetitive patterns wallpaper frieze groups even utilized quite practical problems example analyze human gait achieve automatic fabric defect detection patterned textures ficient clues second ornament first glance might give impression contains single form symmetries ornament rotational symmetry mirror reflection nevertheless due texture also detectable motifs form circles circle fragments local relations provide clues various rotation centers general constructions handle ornaments single motif motif contains symmetries symmetry group example checkerboard pattern uniform pack triangles remark however ornaments typically obvious ones recognize furthermore even class ornaments possible identify translation grid though full analysis revealing performed rest paper organized follows related work method details given results tile set finally summary conclusion general pool works computational symmetry main focus finding symmetry axes single objects since single object exhibit mirror reflections rotational symmetries efforts heavily focused reflections rotations finding local symmetries shape skeletons knowledge works address finding glide reflection axis image though goal study one dimensional arrangements symmetry leaves works targeting shape symmetry whether directly image segmented region fall focus focus symmetries planar related work ornament patterns always source curiosity interest arts crafts also fields including mathematics computation cultural studies etc early researchers mostly examined ornaments cultural contexts termined since possible lattice types associated restricted symmetries ornament pattern exhibits detecting lattice types reduces number symmetry groups though lattice detection commonly performed using peak heights autocorrelation alternative peak detection algorithm based called regions dominance used detect patterns translational lattice region dominance defined largest circle centered candidate peak higher peaks contained circle authors argue region dominance important height peak hough transform used detect two shortest translation vectors best explains majority point data order test whether pattern certain symmetry conjectured symmetry applied entire pattern similarity original image transformed one computed representative motif chosen symmetrical figure patterns formed regular repetition shapes via four primitive geometric transformations figure possible valid inputs analyzing planar periodic patterns translational symmetry primitive repetition operation encountered several works recurring structure discovery general flow works detect visual words cluster based appearance spatial layout among works perform image retrieval based discovered recurring structures instead directly using recurring structures image matching authors first detect translational repetition lattice image multiple lattices image thus given query image various detected lattices search database images equivalent lattices search matching 
score two lattices product two measurements similarity grayscale mean representative unit cell similarity color histograms detection deformed lattice given pattern proposed first propose seed lattice detected interest points using interest points commonly occurring lattice vectors extracted subsequently seed lattice refined grown outward covers whole pattern translational symmetry model based lattice estimation performed model comparison hypotheses generated via peaks autocorrelation implemented using approximate marginal likelihood works move beyond computing translational lattice address classifying repeated patterns according plane symmetry groups wallpapers first step lattice detection performed lattice detection step sequence questions answered final symmetry group recently combined lattice extraction point symmetry groups individual motifs analyse islamic patterns mosaics method specifically targets islamic ornaments motifs typically provide clues underlying plane symmetry group readily applicable motifs robustly extracted motifs reflect symmetries rotation groups detected analyze islamic rosette patterns also possible perform continuous characterization ornament comparing ornament images example encountered ornament images classified according symmetry feature vector calculated based prior lattice extraction questions lattice detection used method ornament images directly compared transformed domain applying global transformation note among works addressing planar patterns also several interesting works pattern synthesis including generate ornament certain symmetry group use given motif tile plane certain style map given wallpaper pattern curved surface number clusters specificied unlike however requires bandwidth parameter indirectly influence number detected clusters automate clustering process iteratively using mean shift clustering increasing bandwidth iteration follows method symmetry detection system three modules image processing module local connectivity analysis final symmetry detection separately explained initial step bandwidth parameter mean shift algorithm set binit number clusters observed iteration bandwidth increased bstep images binit bstep binit iterations stopped whenever number clusters drop resulting clusters used define binary images resulting bandwidth taken image dependent bandwidth estimate image processing input image processing module arbitrary ornament may noisy scanned image screen shot part ornament drawn using computer tool input ornament images acquired arbitrary imaging conditions processing proceeds three stages gamma correction initial clustering refinement end refinement step binary images ornament obtained binary images called masks ornament image number masks result adaptive clustering quite often number masks coincides number colors ornament image however always image processing module given number colors refinement computation initial clusters outlined performed color space hence spatial proximity pixels taken account next stage sequential combination median filtering pixel space mean shift clustering color space applied iteratively fixed bandwidth median filtering realized follows pixel class surrounded pixel class assigned class channels pixel cluster center cluster sequential application median filtering followed mean shift fixed bandwith performed times images five iterations seemed sufficient iterations may cause components different colors join gamma correction first step image processing module brighten black lines shadows performed 
component xyy color space using following formula yout ymin ymax ymin ymin ymax ymin resulting image depends parameter one chooses lightness image higher original image darker colors contrast effect original image observed makes colors darker original image final clusters pixel space may small holes holes may result either insufficient application iterative sequential filtering step outlined simply small feature eye bird remove holes background foreground connected components radius smaller given threshold converted foreground background pixels performing elimination based component radius rather component area reliable might case components join giving large areas causing necessary separate components eliminated initial clustering next step iterative application clustering algorithm till number initial clusters drop value assume number distinct colors less hence number colors color groups merged form bigger motif groups clustering use fast robust mean shift algorithm produces clusters based given feature space case features channels gamma corrected image appealing feature algorithm require sample result image processing module depicted fed mean shift small fixed bandwidth used number connection groups discovered mean shift clustering could different initial sample connectivity graph individual connection groups discovered clustering using connection length depicted figure sample result image processing module input ornament image left three masks local connectivity analysis masks connections process local connectivity analysis starts consistent keypoint detection connected components obvious means detect centroid foreground component however sensitive output image processing module separate repeating motifs may touch one another example furthermore joining motifs may inconsistent throughout ornament plane important purpose analysis detected keypoints consistent throughout pattern however critical whether really coincide true motif centers hence call keypoints nodes towards robustly locating nodes continuous labelling binary image namely mask performed yielding continuous image whose values labelling stage detailed end subsection separate paragraph nodes calculated centroids positive valued connected components image sample label images depicted node merely robustly computed keypoint nodes detected graph called connectivity graph constructed via iterative extraction local node relations first iteration minimal pairwise distance found connections similar distances extracted using fixed tolerance tol next iteration excluding extracted connections next minimal distance computed extract new connections iterations connections various sizes obtained note connections large distances much meaningful best provide redundant information hence better keep small neither choice parameter tolerance tol critical later connections using mean shift algorithm given set connections stored connectivity graph connection length feature figure sample label images observe motif centers obtain highest values continuous labelling binary masks mask ornament pixel foreground connected component assigned initial label label reflecting whether probability pixel belonging centroid higher pixel belonging component boundary purpose distance foreground pixel nearest background pixel computed bigger distance half maximum distances assigned positive constant others label initialization relaxation performed relaxation step value increased decreased depending whether current label pixel less average neighboring labels 
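The passage above describes the adaptive step of the image-processing module: mean-shift clustering of the colour values of the gamma-corrected image is repeated with a growing bandwidth until the number of clusters falls to at most the assumed number of distinct colours, and the resulting clusters define the binary masks. The loop below illustrates that idea with scikit-learn's MeanShift; k_max, b_init and b_step stand in for the paper's binit/bstep parameters, and the exact colour feature space is an assumption of this sketch.

```python
import numpy as np
from sklearn.cluster import MeanShift

def adaptive_colour_clustering(pixels, k_max=6, b_init=2.0, b_step=1.0):
    """pixels: (num_pixels, 3) colour features of the gamma-corrected image.
    Grow the bandwidth until at most k_max clusters remain; each cluster's
    pixels form one binary mask (reshape to the image size afterwards)."""
    bandwidth = b_init
    while True:
        ms = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit(pixels)
        if len(ms.cluster_centers_) <= k_max:
            masks = [ms.labels_ == c for c in range(len(ms.cluster_centers_))]
            return masks, bandwidth
        bandwidth += b_step
```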
relaxation expressed follows label label relaxation relaxation constant take positive relaxation relaxation avg label avg label computed average convergence sufficient number iterations whichever comes first continuous labelling binary masks achieved label last performed iteration figure connectivity graph extracted connections mask ornament connection groups local connectivity analysis connections symmetry elements connection groups extracted described analyzed order detect node relations way nodes related gives hint various symmetry elements recall connections small sizes favored since larger connections repeat node relations larger scale thus analyses begin connection group smallest size continue ascending order analyses done following individual connection group divided connected graphs afterwards connected graph analyzed independent connected graphs connection group given connected graph following decisions made tation since connections might occur accident centers accepted number nodes involved connection connection group entire node number mask graph contains three nodes polynomial order one fit given nodes graph zigzag structure line pass centers edges two adjacent nodes case distances centers edges two adjacent nodes line computed probability graph zigzag structure computed taking product distance ratios probability glide reflection axis passing edge centers contrary nodes edge centers lie fit polynomial line representing translational symmetry graph cycle graph nodes connected closed chain probability graph either equilateral triangle square regular hexagon computed top row probability product ratios polygon edges thus graph constructed connection three four six nodes probability computed product edge ratios higher chance center triangle square hexagon taken center rotation detecting implicit node relations defining symmetry centers elimination repeating symmetry centers performed example early connection group point mask marked center rotation point marked rotation center subsequent connection group later one discarded means symmetry center one type obtained one connection group however symmetry center one type may coincide symmetry centers types example symmetry center graph acyclic nodes might related either rotation graph containing two nodes glide reflection graph containing nodes connected zigzag form bottom row graph contains two nodes center taken center figure examplar connections top row illustrates cyclic graphs acyclic ones shown bottom row equilateral triangles indicate rotations squares indicate rotations regular hexagons indicate rotations connections represent rotations zigzag structures indicate glide reflections figure implicit node relations double rotation centers triple rotation centers double rotation centers double glide reflections marked rotation center connection group may also symmetry center rotation derived connection group point mask may represent multiple symmetry centers type derived connection group thus two rotation centers double triangle point might occur detected connection group repetitions used detect implicit node relations double triangle point indicates rotation around point similarly double twofold triple rotation center point indicator rotation around point angle lines double glide reflection axes represent reflection symmetry perpendicular glide reflection axis samples connections shown another case paired connections see indicate reflection symmetry actual centers lie centers paired lines moreover indicators reflection axis passing 
rotation centers algorithm detects computing minimal distance rotation centers connecting nodes minimal distance graph maximal degree four obtained actual rotation centers however connections occur paired rotation centers handled accordingly figure samples paired connections actual rotational centers lie centers paired lines refinement notice symmetries mask detected steps algorithm yields mask structure contains fields various symmetry centers classes example mask contains rotation centers structure contains centers classes class symmetry element number connection group extracted thus rotational centers extracted connection group class next step collect symmetries detected individual masks necessary step since mask may contain one class symmetry type whole pattern contains classes symmetry type initially symmetries type collected without considering classes say threefold rotation centers extracted masks marked ornament center algorithm defines classes fall particular center thus rotation centers first mask class mask two mask three centers contain classes numbers define classes symmetry centers classified accordingly three groups example illustrated might case symmetry center detected one mask center detected another masks center forms group leading fourth symmetry class eliminate groups algorithm counts number centers class sorts ascending order computes minimal distance nodes one class beginning largest class propagates group nodes similar distance done iteratively changes occur symmetry types combined roneously two classes detected class intersecting unit cell edges left type eliminated maximal order rotation six one classes threefold rotations one intersecting unit cell centers left others eliminated final symmetry group detection individual cues form symmetry elements described previous subsection integrated yield final symmetry group decision via decision tree propose reduced set groups tree shown fig comparing tree classical decision tree fig observe mirror reflection checks postponed performed last stage number significantly reduced furthermore exception rotations case mirror reflection check performed cues indirectly imply example maximal order rotation six glide reflections paired rotations detected mirror reflection check order probability mirror reflection calculated ornament classified belonging group else group reason explained introduction section erroneously detected mirror reflection less desirable missed one case missed mirror reflection fundamental domain twice big really half mirror half hence whole pattern correctly generated one exception case two twofold classes detected case two possibilities first possibility ornament group pgg second possibility ornament cmm pattern formed mirror symmetric protiles latter case third center cmm missed happens glide reflection axis captured connection graphs hence cmm tile classified pgg due missed third center tolerable unlike versus versus mirror reflection distinction two groups cmm pgg hence fundamental domain correct mirror reason case two centers mirror reflection check performed even implied via indirect cues figure example symmetries detected three masks ornament mask one class rotation centers detected symmetries three masks collected three classes threefold rotation centers obtained classes determined maximal order rotation defined rotations observed maximal order rotation taken checked whether pattern contains glide reflections glide reflection observed tile group contains translational symmetry defining maximal order 
rotation elimination done using symmetry group information thus maximal order rotation four six one class rotation centers figure proposed decision tree mirror reflection mirror reflection checks performed single unit cell note case rotational groups three four six fold orders unit cells regular reflectional groups identical difference fundamental domain case rotations unit cell pgg group employed rate given unit cell objects lying extracted using masks ornament pattern unit cell divided two along expected reflection axis based group information object lying sides area perimeter distance object center reflection axis dref distance farthest point object reflection axis fdref point reflection axis closest object center pref computed pair objects one first part unit cell second picked probability first object reflection symmetry second object estimated via product corresponding feature ratios prob exp mirror object side mean highest scores first part taken indicate probability unit cell symmetrical along given reflection axis note might one reflection axes given symmetry group final probability mean value computed reflection axes unit cell recall rotational groups corners units cells points maximal order rotation exception pgg unit cell readily constructed connecting nearest maximal rotation centers class two groups pgg however maximal rotational symmetry center appearing unit cell corners also appears unit cell center therefore latter two groups point certain class chosen four closest points similar class selected among four points selected length equals length opposing directions rest two points also equal distance point opposing directions points exhausted algorithm reports failure experiments object first part probability computed pairs highest score picked indicates probability object rotations glide reflections lattice nodes centers connections zigzag structure similar directions detect unit cell point center closest center point selected length equal length angle two lines centers found centers searched center closest center used extract rest fundamental domain case point equal opposing direction new point actually center found using first case two centers known symmetries detected except one center algorithm fails detect fundamental domain note might case two centers found two different locations centers missed algorithm cases first case gives large fundamental domain one type symmetries detected one cases described hold fundamental domain computed cases final fundamental domain one smallest region approach extended symmetry groups using properties particular class chosen first node unit cell two closest points chosen angle lines less equal bigger equal last point chosen angle lines equal angle lines length lines equal lines respectively recall lattice nodes class points detected another point chosen operations performed according point points exhausted algorithm reports failure ornament pattern neither rotations glide reflections unit cell nodes merely motif centers fundamental domain unit cell successfully constructed fundamental domain extraction straightforward recall fig example fundamental domain ornament unit cell whereas ornament however algorithm reports failure unit cell construction algorithm returns stage attempt construct unit cell make use previously collected information individual symmetries note failure unit cell construction may arise due lack sufficient translational repetition let explain fundamental domain extraction step case failure via example suppose two 
centers rotation found hence unit cell constructed assuming two centers nodes unit cell distance two points computed since known unit cell ornaments rotations composed two equilateral triangles center distance points order detect two circles radius centering drawn two points two circles intersect one selected manner three points indicating nodes fundamental domain computed cases two rotations exist two closest nodes selected hand one rotation center found centers searched fundamental domain extracted using relation center experiments data formed labelled ornament data set produced different imaging conditions variety ornament artists including authors using iornament tool set enriched representative ornament fragments equivalent pgg color permutations ignored total number ornaments set add forming data set paid attention cover variety styles terms brush color tone motif choice paid attention half ornaments mimicking escher style asymmetric interlocking forms set contains enough representative elements groups parameters ornaments identical parameter values used detailed gamma correction set clustering set refinement stage set maxr maxr maximum radius connected components last two ornaments respectively pgg groups collected cues sufficient case one glide axes detected hence two cases inconclusive reason failure harder detect glides absence sufficient repetition demonstrate later examples detecting pgg possible slightly samples rest results remaining tiles organized follows note result one ornament shown previously introduction save space samples placed appendix split three figures showing results patterns illustrated section illustrative samples groups higher order rotations selected remaining selected remaining groups patterns higher order rotations organized two groups first one contains groups triangular lattice structures groups unit cell consists two equilateral triangles contains groups square lattice results respectively shown figs original ornaments shown full sizes whereas ones depicting results cropped order make symmetries unit cells fundamental domains visible samples higher order rotation groups symmetry groups correctly identified except mirror reflection samples ornament two rotation centers automatically classified correctly without need mirror reflection check examples mirror reflections implied result detected glides flow decision tree proceeded mirror reflection check hence fundamental domains correctly identified fundamental domain ornament half size one ornament half size one halves obtained mirror reflection examples fundamental domains double size samples classified belonging consequence missed mirror reflections nevertheless using generation rules group instead original tiles still recreated two samples fig first one successfully passed reflection double check implied via glides checked second sample mirror reflection test connection extraction connectivity graph constructed using iterations tol bandwidth fed mean shift connection group clustering set results first present result representative ornaments taken escher collection fig first ornaments depicted fig method works successfully group first rows show original input second rows show detected symmetries unit cells fundamental domains superimposed input third rows show fundamental domains cut automatically original patterns unit cells forms previously described fig shown red quadrilaterals illustration purposes region belonging fundamental domain made lighter rest pattern made slightly darker case ornament image 
small fit whole unit cell fundamental domain shown half cases like notice cases letter group names red color indicate method three cases missed mirror reflection none three examples mirror reflection checks last stage performed indirect clues imply mirror reflection result fundamental domains twice big though unit cells correct recall general mirror symmetry cause problems since centers maximal order rotations reside centers respective protiles famous mariposas pattern fig three rotation centers different classes detected second tile fig two rotation centers different classes detected yet patterns obtain enough information identify symmetry groups detect fundamental domains results fig show rotation centers detected wrong places nevertheless since symmetry group depends maximal rotation order incorrect twofold centers influence final decision figure symmetries detected ornaments painted escher observe although images small repetitions symmetries able find fundamental domains insufficient repetition seems pose problem two glide group ornaments shown last row letter group names ornaments shown red mirror reflection missed cases fundamental domains double sizes figure results ornaments rotations formed indirect clues indicate existence hence particular ornament group fundamental domain double size similar previously discussed cases ornaments using generation rules group instead original pattern still recreated ornaments fundamental mains accurately identified group ornaments observed glide reflections easily detected human hard perceive type symmetry ornament classified group enough detect one class rotation centers hence mirror reflection check unnecessary results tiles higher order figure symmetries detected ornaments rotations figure fundamental domains ornaments mirror reflections pgg pgg pgg cmm cmm figure sample results ornaments pgg cmm groups tions given appendix figs total ornaments higher order rotations excluding escher ornaments contain mirror reflections detected fundamental domains ornaments given fig fundamental domains ornaments mirror reflections detected framed black boxes serve ornaments mirror reflections missed hence classified belonging corresponding groups fundamental domains double sizes fig shows results samples remaining five symmetry groups lower order rotations pgg cmm first five ments belong group group ornaments contain four distinct classes symmetry centers second third groups respectively contain ornaments two three distinct classes centers groups pgg cmm two distinct classes glide reflection axes perpendicular second sample cmm group yellow purple ornament shows case third class centers detected employ connections indicate binary connections rotations glide fragments forming zigzag structures particular cmm example algorithm detects zigzag structures defines glide reflection axes process however algorithm loses track rotations also indicated connections general observe rotation centers lie glide axes protiles mirror symmetric case present cmm sample algorithm loses track one rotation centers one rotation centers lost symmetries cmm pattern becomes similar symmetries pgg one except mirror reflection symmetry pgg group mirror reflections cmm group thus exactly two rotation centers detected always need perform mirror reflection check present cmm sample checking mirror reflection identifies correct group performed tests additional samples presented appendix fig help mirror reflection checks algorithm achieves pgg cmm separation finally last row fig illustrates 
results ornaments without rotational symmetries two distinct classes glides parallel detected ornament classified group group pure translation symmetry algorithm detects grids lines indicating translations tiling pattern formed repeating shapes long ornament contains sufficient number motifs protiles either rotationally asymmetric strongly concave least less symmetric higher order rotational symmetry group method works asymmetry assumption serious restriction even cases motifs symmetric long centered corners translational unit method still works motifs symmetric centered corners translational unit method determine symmetry group nevertheless still possible extract translational repetition lattice proof concept show range ornaments method works compiled ornament database images set ornaments images painted escher classics famous mariposas angles damons lizards remaining ornament images constructed via iornament software either authors several iornament artists explicitly check existence mirror symmetry unless indirectly implied clues sometimes miss mirror reflections causes groups classified belonging respective reflectionless groups lower symmetry nevertheless since fundamental domains double sizes contain mirror reflected copies recreation original patterns using generation rule respective reflectionless groups possible indeed forms motivation postponing hard mirror reflection checks till implied indirect clues glide reflections acknowledgements work funded tubitak grant references polya die analogie der kristallsymmetrie der ebene zeitschrift kristallographie schattschneider plane symmetry groups recognition notation american mathematical monthly washburn crowe symmetries culture theory practice plane pattern analysis university washington press summary conclusion presented fully automated method detect symmetry group extract fundamental domains ornaments belonging symmetry groups focused ornaments motifs hint symmetries underling shepard symmetry moorish ornaments computers mathematics applications senechal color symmetry computers mathematics applications coxeter coloured symmetry escher art science schattschneider escher visions symmetry thames hudson liu collins tsin gait sequence analysis using frieze patterns european conference computer vision ngan pang yung motifbased defect detection patterned fabric pattern recognition asha nagabhushan bhajantri automatic extraction using superposition distance matching functions forward differences pattern recognition letters ngan pang yung ellipsoidal decision regions patterned fabric defect detection pattern recognition sun sherrah symmetry detection using extended gaussian image ieee transactions pattern analysis machine intelligence sun fast reflectional symmetry detection using orientation histograms imaging keller shkolnisky signal processing approach symmetry detection transactions image processing loy eklundh detecting symmetry symmetric constellations features european conference computer vision prasad amd yegnanarayana finding axes symmetry potential fields ieee transactions image processing prasad davis detecting rotational symmetries ieee international conference computer vision lee collins liu rotation symmetry group detection via frequency analysis ieee conference computer vision pattern recognition lee liu skewed rotation symmetry group detection ieee transactions pattern analysis machine intelligence liu liu curved reflection symmetry detection asian conference computer vision lee liu curved symmetry detection ieee transactions 
pattern analylsis machine intelligence liu liu grasp recurring patterns single view computer vision pattern recognition doubek matas detection lattice patterns repetitive elements use image retrieval tech department cybernetics czech technical university doubek matas perdoch chum image matching retrieval repetitive patterns national conference pattern recognition torii sivic pajdla okutomi visual place recognition repetitive structures computer vision pattern recognition gao liu yang unsupervised learning structural semantics images international conference computer vision park collins liu deformed lattice discovery via efficient belief propagation european conference computer vision park brocklehurst collins liu deformed lattice detection images using belief propagation ieee transactions pattern analysis machine intelligence han mckenna lattice estimation images patterns exhibit translational symmetry image vision computing liu collins frieze wallpaper symmetry groups classification affine perspective distortion tech robotics institute liu collins computational model repeated pattern perception using frieze wallpaper groups computer vision pattern recognition liu collins periodic pattern analysis affine distortions using wallpaper groups international workshop algebraic frames perception action cycle belin liu collins tsin computational model periodic pattern perception based frieze wallpaper groups ieee transactions pattern analysis machine intelligence liu collins skewed symmetry groups computer society conference computer vision pattern recognition albert gomis blasco valiente aleixos new method analyse mosaics based symmetry group theory applied islamic geometric patterns computer vision image understanding nasri benslimane rotation symmetry group detection technique characterization islamic rosette patterns pattern recognition letters valientegonzalez computational framework symmetry classification repetitive patterns international conference computer vision theory applications rodasjorda lattice extraction based symmetry analysis international conference computer vision theory applications adanova tari beyond symmetry groups grouping study escher euclidean ornaments graphical models kaplan salesin escherization computer graphics interactive techniques von gagern hyperbolization euclidean ornaments electronic journal binatorics dijk verbeek lightness filtering color images respect gamut european conference color graphics imaging vision fukunaga hostetler estimation gradient density function applications pattern recognition ieee transactions information theory http appendix appendix contains results ornaments included main sections paper organized according symmetry groups placed three figures figure remaining ornaments figure remaining ornaments pgg pgg pgg pgg cmm cmm figure remaining ornaments
| 1 |
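The reflection check described in the row above scores pairs of blobs lying on opposite sides of a candidate reflection axis by comparing their area, perimeter, centre-to-axis distance (dref) and farthest-point-to-axis distance (fdref), then averages the best per-object scores to rate the axis. The exact functional form of the score cannot be recovered from the text, so the sketch below assumes a product of exp(-|log ratio|) terms, one per feature; the function names, the dict-based object representation and the 1e-9 guard are illustrative assumptions, not the paper's implementation.

import numpy as np

def reflection_match_score(obj_a, obj_b):
    # Pairwise score for two blobs on opposite sides of a candidate
    # reflection axis. Each obj is a dict with 'area', 'perimeter',
    # 'dref' (blob-centre-to-axis distance) and 'fdref' (farthest-point-
    # to-axis distance). exp(-|log ratio|) per feature is an assumed
    # stand-in for the paper's exact (unrecoverable) formula.
    score = 1.0
    for key in ("area", "perimeter", "dref", "fdref"):
        a, b = obj_a[key], obj_b[key]
        score *= np.exp(-abs(np.log((a + 1e-9) / (b + 1e-9))))
    return score

def axis_reflection_probability(left_objs, right_objs):
    # For every blob on one side keep its best match on the other side;
    # the mean of these best scores estimates the probability that the
    # unit cell is mirror-symmetric about the candidate axis.
    if not left_objs or not right_objs:
        return 0.0
    best = [max(reflection_match_score(a, b) for b in right_objs)
            for a in left_objs]
    return float(np.mean(best))

As in the text, this score is only evaluated when the decision tree reaches the final mirror-reflection stage or when two twofold centre classes force the pgg/cmm disambiguation.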
distributed second order methods variable number working nodes dec krklec dragana idling mechanism introduced context distributed first order methods minimization sum nodes local convex costs generic connected network idling mechanism node iteration active updates solution estimate exchanges messages network neighborhood probability stays idle probability activations independent across nodes across iterations paper demonstrate idling mechanism successfully incorporated distributed second order methods also specifically apply idling mechanism recently proposed distributed quasi newton method dqn first show theoretically grows one across iterations controlled manner dqn idling exhibits similar theoretical convergence convergence rates properties standard dqn method thus achieving order convergence rate standard dqn significantly cheaper updates simulation examples confirm benefits incorporating idling mechanism demonstrate method flexibility respect choice compare proposed idling method related algorithms literature index optimization variable sample schemes second order methods methods linear convergence ntroduction context motivation problem distributed minimization sum nodes local costs across connected network received significant growing interest past decade problem arises various application domains including wireless sensor networks smart grid distributed control applications etc recent paper class novel distributed first order methods proposed motivated hybrid methods centralized minimization sum convex component functions main idea underlying hybrid method designed combination incremental stochastic gradient method generally incremental stochastic method full standard gradient method part results paper presented ieee global conference signal information processing globalsip washington usa work first three authors supported serbian ministry education science technological development grant first three authors department mathematics informatics faculty sciences university novi sad trg dositeja novi sad serbia fourth author department power electronics communication engineering faculty technical sciences university novi sad trg dositeja novi sad serbia authors emails djakovet natasak dbajovic standard method behaves stochastic method initial algorithm stage standard method later stage advantage hybrid method potentially inherits favorable properties standard methods eliminating important drawbacks example hybrid exhibits fast convergence initial iterations inexpensive updates like methods hand eliminates oscillatory behavior incremental methods around solution large large behaves method hybrid methods calculate search direction iteration based subset sample sample size small initial iterations mimicking method approaches full sample large essentially matching full standard method distributed first order methods sample size iteration translates number nodes participate distributed algorithm precisely therein introduce idling mechanism node network iteration active probability stays idle probability nominally increasing one activations independent across nodes iterations reference analyzes convergence rates distributed gradient method idling mechanism demonstrates simulation idling brings significant communication computational savings contributions purpose paper demonstrate idling mechanism incorporated distributed second order methods also establishing corresponding convergence rate analytical results showing simulation examples idling continues bring significant efficiency 
improvements specifically incorporate idling mechanism distributed quasi newton method dqn dqn method proposed analyzed scenario nodes active times dqn extension representative distributed second order methods exhibit competitive performance respect current distributed second order alternatives main results follows first carry theoretical analysis assuming twice continuously differentiable bounded hessians show long converges one least fast arbitrarily small dqn method idling converges mean square sense almost surely point standard dqn method activates nodes times furthermore converges one geometric rate dqn algorithm idling converges limit rate mean square sense simulation examples demonstrate idling bring dqn significant improvements computational communication efficiencies demonstrate simulation significant flexibility proposed idling mechanism terms tuning activation sequence simulations show idlingdqn method effective scenarios increases eventually converge one stays bounded away one even kept constant across iterations latter two cases relevant practice due failures asynchrony networked nodes full control designing increasing sequence may difficult implement also compare simulation proposed method recent distributed second order method randomized nodes activations constant activation probabilities performs similarly activation probabilities tuned proposed performs favorably technical side extending analysis either distributed gradient methods idling dqn without idling scenario considered highly nontrivial respect standard dqn without idling need cope inexact variants second order search directions respect gradient methods idling showing boundedness sequence iterates consequently bounding inexactness amounts search directions considerably challenging require different approach brief literature review significant progress development distributed second order methods past years reference proposes method based interpretation problem interest applying taylor expansions hessian involved penalty function authors develop distributed version algorithm based consensus separation principles references propose distributed second order methods based alternating direction method multipliers proximal method multipliers respectively references develop distributed second order methods problem formulations related different consider paper namely study network utility maximization type problems works assume nodes active across iterations concerned designing analyzing methods randomized nodes activations related work papers study distributed first second order methods randomized nodes links activations references consider distributed first order methods deterministically randomly varying communication topologies reference proposes distributed first order method two randomly picked nodes active iteration authors carry comprehensive analysis first order diffusion methods general model asynchrony nodes local computations communications reference proposes proximal distributed first order method provably converges exact solution general model asynchronous communication asynchronous computation relevant works first order alternating direction methods include authors propose asynchronous version second order network newton method wherein randomly selected node becomes active time performs network second order update paper see also proposes analyzes asynchronous distributed method based asynchronous implementation broydenfletchergoldfarbshanno bfgs matrix update among works discussed perhaps closest randomized 
activation model models studied however works still different primarily concerned establishing convergence guarantees various asynchrony effects control networked nodes aim demonstrate carefully designed nodecontrolled sparsification workload across network inspired work centralized optimization yield significant savings communication computation importantly demonstrated simulations significant savings achieved even nodes partial control workload orchestration due asynchrony link failures etc finally companion paper presented brief preliminary version current paper wherein subset results presented without proofs specifically considers convergence dqn idling activation probabilities geometrically converge one also consider scenarios activation probability converges one also include extensions parameter stays bounded away one kept constant less one across iterations paper organization section describes model assume gives necessary preliminaries section presents dqn algorithm idling section analyzes convergence convergence rate section considers dqn idling presence persisting idling considers extension necessarily converge one section provides numerical examples finally conclude section odel preliminaries subsection gives preliminaries explains network optimization models assume subsection briefly reviews dqn algorithm proposed model consider distributed optimization nodes connected network solve following unconstrained problem convex function known node impose following assumptions assumption function twice continuously differentiable exist constants every denotes identity matrix symmetric matrices means positive semidefinite assumption implies strongly convex strong convexity parameter also lipschitz continuous gradient lipschitz constant every holds notation stands euclidean norm vector argument spectral norm matrix argument assumption problem solvable unique solution nodes constitute undirected network set nodes set edges denote total number undirected edges cardinality presence edge means nodes directly exchange messages communication link let set neighbors node excluding define also assumption network connected undirected simple multiple links associate network weight matrix following properties assumption matrix stochastic elements wij wij wij wii wij exist constants wmin wmax holds wmin wii wmax let denote kronecker product identity make use following penalty reformulation diag diag diag diagonal matrix diagonal elements equal matrix decompose hessian given close subsection following result needed subsequent analysis claim see lemma claim see lemma lemma consider deterministic sequence converging zero let holds denote eigenvalues remaining eigenvalues strictly less one modulus eigenvector corresponds unit eigenvalue xtn constant see ahead plays role step size distributed algorithms consider rationale behind introducing problem function enable one interpret distributed first order method solve ordinary gradient method applied turn facilitates development second order methods details denote rnp solution shown distance desired solution order step size pof define also rnp study distributed second order algorithms minimize function hence find near optimal solution therein hessian function splitting diagonal part play important role specifically first consider splitting throughout shall use blackboard bold letters matrices size standard letters matrices size moreover converges zero sum also converges zero algorithm dqn incorporate idling mechanism algorithm dqn proposed main idea behind dqn 
approximate newton direction respect function way distributed implementation possible error approximating newton direction large completeness briefly review dqn nodes assumed synchronized according global clock perform parallel iterations algorithm maintains iterates xkn rnp iterations xki plays role solution estimate node dqn presented algorithm therein given also notation stands euclidean norm vector argument spectral norm matrix argument algorithm dqn vector format given rnp set chose diagonal matrix algorithm dqn distributed implementation node require initialization node sets node transmits xki neighbors receives xkj node calculates xki wij xki xkj dki aki klk set set similarly gij block position briefly comment algorithm involved parameters dqn takes step direction scaled positive step size direction approximation newton direction skn unlike newton direction skn direction admits efficient distributed implementation inequality corresponds safeguarding step needed ensure descent direction respect function nonnegative parameter controls safeguarding see details set given problem diagonal matrix controls part hessian inverse approximation various choices easy implement induce large extra computational communication costs introduced possible choices usually case second order methods step size general strictly smaller one ensure global convergence however extensive numerical simulations quadratic logistic losses demonstrate dqn dqn idling converge globally full step size choices remaining algorithm parameters follows quantity controls splitting reference shows simulation usually beneficial adopt small positive value finally defines penalty function results following tradeoff performance dqn smaller value leads better asymptotic accuracy algorithm also slows algorithm convergence rate asymptotic accuracy assume distance point convergence dqn actually equal solution see solution noted corresponding distance algorithm present dqn perspective distributed implementation therein denote aki respectively block position matrices node transmits dki neighbors receives dkj node chooses diagonal matrix node calculates ski gij dkj node updates solution estimate xki ski set step note steps skipped algorithm involves single communication round per iteration single transmission vector step node two communication rounds steps per involved iii lgorithm dqn idling subsection explains idling mechanism subsection incorporates mechanism dqn method idling mechanism incorporate dqn following idling mechanism node iteration active probability inactive probability active nodes perform updates solution estimates xki participate communication round iteration inactive nodes perform computations communications solution estimates xki remain unchanged denote bernoulli random variable governs activity node iteration probability furthermore assume mutually independent throughout paper impose following assumption sequence assumption consider sequence activation probabilities assume pmin pmin sequence moreover assume continue assume nodes synchronized according global iteration counter positive constant arbitrarily small assumption means average increasing number nodes becomes involved optimization process intuitively sense precision optimization process increases increase iteration counter extensions scenarios necessarily converge one provided section also assume converges one sufficiently fast sublinear convergence sufficient future reference also define diagonal random matrix diag identity matrix also define random matrix 
wij wij wii wij let analogously let wdk zkd zku zkd wdk diag zku wuk zku zkd notice wii wij wij wii wmin recall rnp using results obtain following important wmax algorithm dqn idling distributed implementation node require initialization node sets node generates node idle goes step else node active goes step active nodes steps parallel active node transmits xki active neighbors receives xkj active node calculates dki aki xki wij xki xkj diag wuk wdk aki block position gkij block position wmin wmin wmin rnp generally holds wmax wmin rnp also notice kyk every dqn idling incorporate idling mechanism dqn method avoid notational clutter continue denote algorithm iterates xkn xki node estimate solution iteration dqn idling operates follows activation variable node performs update else node stays idle lets xki algorithm presented algorithm therein throughout subsequent analysis shall state several relations equalities inequalities involve random variables relations hold either surely every outcome expectation relation holds surely clear notation two cases force also auxiliary constants arise analysis frequently denoted capital calligraphic letter subscript indicates quantity related constant question see dki node transmits active neighbors receives dkj active node chooses diagonal matrix node calculates ski gkij dkj node updates solution estimate xki ski set step make remarks algorithm first note unlike algorithm iterates xki algorithm random variables initial iterates algorithm assumed deterministic next note implicitly assume nodes agreed beforehand scalar parameters actually achieved distributed way low communication computational overhead see subsection nodes also agree beforehand sequence activation probabilities words sequence assumed available nodes example discussed detail sections let scalar parameter known nodes node aware global iteration counter node able implement latter formula nodes beforehand agreement achieved similarly agreement parameters tuning parameter discussed remark ahead parameters diagonal matrices play role dqn important difference respect standard dqn appears step local active node gradient contribution xki xki standard dqn contribution equals xki note division dqn idling makes terms two algorithms balanced average using notation represent dqn idling rithm vector format therein diag controlled activation probability order simplify notation analysis introduce following quantities algorithm dqn idling vector format given rnp set chose diagonal matrix klk set set onvergence analysis section carry convergence convergence rate analysis dqn idling two main results theorems former result states assumptions dqn idling converges solution mean square sense almost surely show activation probability converges one geometric rate mean square convergence towards occurs rate therefore order convergence rate dqn method preserved despite idling note result explicitly establish computational communication savings respect standard dqn explicit quantification savings challenging even distributed first order methods even however theoretical results current section complemented section numerical examples demonstrate communication computational savings usually occur practice analysis organized follows subsection relate search direction dqn idling search direction dqn former viewed inexact version latter subsection establishes mean square boundedness iterates dqn idling implications inexactness search directions finally subsection makes use results subsections prove main results convergence 
convergence rate dqn idling quantifying inexactness search directions analyze inexact search direction dqn idling respect search direction standard dqn iterate dqn idling denote search direction standard dqn evaluated search direction dqn idling viewed approximation inexact version show ahead error approximation notice therefore true following result error approximating following denote either block vector example write gik block following theorem claims theorem claim straightforward generalization mimicking proof steps theorem hence proof details omitted theorem let assumptions hold let wmax wmin wmin constant wmin consider random matrices holds rtk denotes minimal eigenvalue moreover quantity satisfies following bounds constant max moreover shown use proof subsequent result furthermore using mean value theorem lipschitz continuity obtain see proof theorem instance ksk recall unique solution lipschitz gradient continuity parameter equals wmin next prove dqn idling exhibits kind nonmonotone behavior nonmonotonicity term depends difference search directions theorem let assumptions hold comes fact every finally taking account cases fact nonnegative conclude following inequality holds constant ksk proof start considering using bounds theorem ksk ksk ksk distinguish two cases first assume ksk case implies notice convex attains minimum given also implies negative moreover strongly convex holds strong convexity parameter putting together obtain notice moreover mean square boundedness iterates search directions next show iterates dqn idling uniformly bounded mean square sense denotes expectation operator lemma let sequence random variables generated algorithm let assumptions hold exist positive constants depending wmin wmax pmin holds kxk positive constant proof suffices prove uniformly bounded since strongly convex therefore holds rnp sake proving boundedness without loss generality assume every define function rnp wij kxi notice rnp also note every conclude positive therefore holds assume ksk ksk together implies ksk function similar characteristics retains minimum since holds strong convexity imply since assumed core proof upper bound quantity involving see ahead unwind resulting recursion start notice strongly convex lipschitz continuous gradient every precisely every every rnp denote recall last inequality otherwise since lower bounded fbi constant larger equal work fbi throughout proof let define two auxiliary maps follows let rnp rnp given strongly convex respect parameters respect wij kxi denote minimizer nonnegative using fact lipschitz continuous gradient strongly convex obtain wij kxi denotes set vectors entries set introduce also notation wij kxi kxk previous using fact inequality yields hand implies ybk previous inequality obviously holds wij kxi recall node activation vector iteration hence note fixed rnp random variable measurable respect generated hand fixed deterministic value variable takes function mapping rnp analogous observations hold well interested quantities note random variables measurable respect generated also work gradients respect evaluated denote ybk ybk quantities also valid random variables measurable respect generated recall fixed independent identically distributed bernoulli random variables true minimum maximum min max also notice pnk first perform similar analysis considering notice also furthermore strongly convex parameters following holds moreover notice min thus let return function also satisfies search direction step algorithm recalling theorem obtain ybk 
using bounds conclude krk therefore ksk also theorem following holds given theorems respectively rtk ybk ybk ybk ybk ybk regarded functions rnp consider assume following inequality holds every positive constant ybk ksk proof first split error follows ksk khk since conclude estimate expectations separately start observing min wij xki xkj gik aki xki given theorems substituting obtain denoting using fact obtain aki xki wij xki xkj aki xki aki wij xki xkj applying expectation obtain pnk use independence furthermore recall notice pnk nuk moreover thus next unwinding recursion obtain notice implies pik aki xki pik xki defined also assumptions wij assumption convexity scalar quadratic function yield enbuk enb assumption summable since conclude finally since strongly convex desired result holds notice immediate consequence lemma gradients uniformly bounded mean square sense indeed kxk kxk wij xki xkj recall lemma next show inexactness search directions dqn idling controlled activation probabilities theorem let assumptions hold consider lemma wij kxki xkj wij kxki xkj therefore gik xki wij kxki xkj applying expectation using fact independent identically distributed across across iterations fact obtain following inequality moreover gik xki pmin wij kxki xkj note moreover consequence lemma thus wij kxki xkj finally returning previous inequality imply ksk wij kxki kxkj kyk pmin kxk pmin kxki wij kxkj wij main results next result first main result follows theorems stated let define constants min min theorem theorem pmin estimate second expectation term consider arbitrary block vector theorem let sequence random variables expectation obtain following generated algorithm let assumptions hold addition let zku zkd wij wij gjk wii wii gik constant positive constant moreover iterate sequence converges solux wij wij wij tion mean square sense almost surely given lemma combining bounds conclude wij gjk moreover applying norm convexity argument like get wij gjk wij kgjk proof claim follows taking expectation theorem remaining two claims follow similarly proof theorem briefly demonstrate main arguments completeness namely unwinding recursion obtain therefore wij ekgjk using steps similar ones obtain inequality apply lemma result follows directly assumed furthermore using inequality kxk mean square convergence towards follows remains show almost surely well using condition inequality implies shown see implies kxk applying markov inequality random variable kxk obtain kxk kxk inequality implies kxk first lemma get kxk infinitely often implies almost surely next state prove second main result theorem let sequence random variables generated algorithm let assumptions theorem hold let converges solution problem mean square sense rate proof denote specific choice obtain obviously converges zero furthermore repeatedly applying relation obtain moreover shown see lemma also converges zero implies convergence using strong convexity strong convexity constant get kxk turn implies kxk last inequality means kxk also converges zero theorem shows dqn method idling converges rate parameter plays important role practical performance method recommend tuning rationale tuning comes distributed first order methods idling showed analytically optimal appropriate sense set value set balance linear convergence factor method without idling convergence factor convergence one dqn better smaller convergence factor distributed first order method due incorporation second order information one adjust rule replacing larger constant condition 
restrictive observed experimentally usually one needs take smaller order achieve satisfactory limiting accuracy experimental studies suggest values order addition small values order prevent small initial iterations one hand close one hand utilize safeguarding modify max min taken xtensions dqn persisting idling section investigates dqn idling activation probability converge one asymptotically algorithm subject persisting idling scenario interest activation probability full control algorithm designer networked nodes execution example applications like wireless sensor networks messages may lost due random packet dropouts addition active node may fail perform solution estimate update certain iteration actual calculation may take longer time slot allocated one iteration simply due unavailability sufficient computational resources henceforth consider scenario may converge one words regarding assumption keep requirement sequence uniformly bounded make additional assumption iterates bounded mean square sense kxk uniformly bounded positive constant consider relation continues hold assumptions theorem therefore using previous inequality obtain pmin using strong convexity strong convexity constant letting infinity obtain pmin lim sup kxk therefore proposed algorithm converges mean square sense neighborhood solution hence additional limiting error addition error due difference solutions introduced respect case order analyze quantity unfold get pmin proof theorem recall respectively shown increasing function taking smaller step size brings closer solution however nodes working iter variable num working nodes rel error convergence factor also increasing thus tradeoff precision convergence rate furthermore considering see also increasing function however expected strictly positive words error remains positive safeguarding parameter finally size error proportional closer pmin one smaller error simulation examples section demonstrate error moderately increased respect case even presence strong persisting idling umerical results kxi evolves elapsed total number activations per node note number activations relates directly communication computational costs algorithm parameters algorithms set way difference activation schedule method idling set set clearly method without idling remaining algorithm parameters follows set lipschitz constant gradients take kai let full step size consider two choices step algorithm apply safeguarding choices total cost activations per node variable num working nodes nodes working iter rel error section demonstrates simulation significant computational communication savings incurred idling mechanism within dqn also shows persisting idling converging one induces moderate additional limiting error method continues converge solution neighborhood even persisting idling consider problem strongly convex local quadratic costs let symmetric positive definite matrix data pairs generated random independently across nodes follows entry generated mutually independently uniform distribution generated qti matrix matrix orthonormal eigenvectors independent identically distributed standard gaussian entries diagonal matrix diagonal entries drawn fashion uniform distribution network instance random geometric graph model communication radius connected weight matrix set follows wij node degree wii wij compare standard dqn method dqn method incorporated idling mechanism specifically study relative error averaged across nodes total cost activations per node fig relative error versus total cost number 
activations per node quadratic costs network figure figure solid lines correspond nodes working iterations dashed lines correspond method idling increasing number working nodes let note former choice corresponds algorithms single communication round per iteration latter corresponds algorithms two communication rounds per figure plots relative error versus total cost equal total number activations per node current iteration one sample path realization see incorporating idling mechanism significantly improves efficiency algorithm method achieve limiting accuracy approximately method without idling takes activations per node method idling takes activations hence idling mechanism reduces total cost approximately figure repeats plots still showing clear gains idling though smaller account randomness dqn method idling arises due random nodes activation schedule nodes working iterations variable num working nodes rel error nodes working iter const total cost reach rel error const variable num working nodes rel error total cost activations per node nodes working iter total cost reach rel error total cost activations per node fig total cost number activations per node reach relative error quadratic costs network figure figure histograms corresponds dqn algorithm idling arrow indicates total cost needed standard dqn fig relative error versus total cost number activations per node strongly convex quadratic costs network figures compare following scenarios standard dqn pmax pmax include histograms total cost needed achieve fixed level relative error specifically figure plot histograms total cost corresponding generated sample paths different realizations along iterations needed reach relative error equal figure corresponds figure corresponds figures also indicate arrows total cost needed standard dqn achieve accuracy results confirm gains idling also variability total cost across different sample paths small relative gain respect standard dqn figures investigate scenarios activation probability may asymptotically converge one network connected random geometric graph instance step size remaining system algorithmic parameters previous simulation example consider following choices standard dqn pmax pmax third fourth choices presence pmax models external effects control networked nodes due link failures unavailability computing resources certain iterations varied within set figure compares methods one sample path realization four choices pmax figure compares experiment dqn pmax pmax several important observations stand experiments first see limiting error increases converge one respect case converges one however increase deterioration moderate algorithm still manages converge good solution neighborhood despite persisting idling const rel error const total cost activations per node const rel error total cost activations per node fig relative error versus total cost number activations per node strongly convex quadratic costs network figures compare following scenarios standard dqn pmax pmax fig relative error versus total cost number activations per node strongly convex quadratic costs network figure figure red dotted line corresponds proposed dqn idling blue solid line green dashed line proposed dqn idling const black dashed line method particular figure see limiting relative error increases pmax case strong persisting idling corroborates dqn idling effective method even activation probability full control algorithm designer second limiting error decreases pmax increases expected see figure finally figure 
see method pmax performs significantly better method pmax particular methods limiting error approximately former approaches error much faster confirms proposed judicious design increasing opposed keeping constant significantly improves algorithm performance figure repeats experiment see analogous conclusions drawn next experiment compare proposed dqn method idling existing methods utilize randomized activations nodes specifically consider recent asynchronous second order network newton method proposed asynchronous version method refer method asynchronous network newton also consider first order gossipbased method method dqn idling matrix utilize two communications per node activation equal communication cost per node activation two methods also similar computational cost per activation method twice cheaper communication cost per activation general lower computational cost per activation due incorporating first order information updates comparison carried nected random geometric graph instance links variable dimension strongly convex quadratic generated analogously previous experiments proposed consider two choices activation probabilities previous experiment const weight matrices proposed method set way prior experiments method set step size consider two different choices parameter set method quantity method algorithm step size adjusted decreased fair comparison four different methods achieve asymptotic relative error look many activations method takes reach saturating relative error figure plots relative error versus total number activations four methods figure corresponds figure see figure proposed outperforms methods takes activations reach relative error method take method needs least activations accuracy note even half number activations account twice cheaper communication cost proposed still significantly faster compares versus versus normalized interestingly idling dqn constant method practically match performance figure repeats comparison see similar conclusions drawn experiment summary reduces communication cost respect method expected utilizes second order computations activation two second order methods randomized activations method exhibit similar performance idlingdqn uses constant policy increasing policy performs better together previous experiments demonstrates carefully designed workload orchestration leads performance improvements respect pure random activation policy respect policy vii onclusion incorporated idling mechanism recently proposed context distributed first order methods distributed second order methods specifically study dqn algorithm idling showed long converges one least fast arbitrarily small dqn algorithm idling converges mean square sense almost surely point standard dqn method activates nodes iterations furthermore grows one geometric rate dqn idling converges rate mean square sense therefore dqn idling achieves order convergence standard dqn significantly cheaper iterations simulation examples corroborate communication computational savings incurred incorporating idling mechanism show method flexibility respect choice activation probabilities proposed method also existing distributed second order methods involving local hessian computations randomized nodes activations exact sense converge solution neighborhood interesting future research direction develop analyze second order method exact convergence eferences ozdaglar distributed subgradient methods optimization ieee transactions automatic control vol cattivelli sayed diffusion lms strategies 
distributed estimation ieee transactions signal processing vol shi ling yin extra exact algorithm decentralized consensus optimization siam journal optimization vol mokhtari ling ribeiro network newton distributed optimization methods ieee trans signal processing vol xiao boyd lall scheme robust distributed sensor fusion based average consensus ipsn information processing sensor networks los angeles california hug kar consensus innovations approach distributed multiagent coordination microgrid ieee trans smart grid vol bullo cortes martnez distributed control robotic networks mathematical approach motion coordination algorithms princeton university press krklec distributed gradient methods variable number working nodes ieee trans signal processing vol friedlander schmidt hybrid methods data fitting siam journal scientific computing vol krklec method diagonal correction distributed optimization siam vol varagnolo zanella cenedese pillonetto schenato consensus distributed convex optimization ieee trans aut vol eisen mokhtari ribeiro decentralized methods ieee transactions signal processing vol may eisen mokhtari ribeiro decentralized method dual formulations consensus optimization ieee conference decision control cdc las vegas eisen mokhtari ribeiro asynchronous method consensus optimization ieee global conference signal information processing globalsip washington usa krklec distributed first second order methods variable number working nodes ieee global conference signal information processing washington usa mokhtari shi ling ribeiro dqm decentralized quadratically approximated alternating direction method multipliers ieee trans signal processing vol mansoori wei superlinearly convergent asynchronous distributed network newton method available https moura xavier distributed gradient algorithms cdc ieee conference decision control maui hawaii december mokhtari shi ling ribeiro decentralized second order method exact linear convergence rate consensus optimization ieee trans signal information processing networks vol zargham ribeiro jadbabaie accelerated dual descent constrained convex network flow optimization decision control cdc ieee annual conference firenze italy lobel ozdaglar feijer distributed optimization communication mathematical programming vol zhao sayed asynchronous adaptation learning modeling stability analysis ieee transactions signal processing vol zhao sayed asynchronous adaptation learning performance analysis ieee transactions signal processing vol yuan ling yin sayed decentralized consensus optimization asynchrony delays ieee transactions signal information processing networks appear doi liu wright asynchronous stochastic coordinate descent parallelism convergence properties siam vol chang hong wang distributed admm optimization part linear convergence analysis numerical performance ieee trans sig vol krklec nonmonotone line search methods variable sample size numerical algorithms vol ram nedic veeravalli distributed stochastic subgradient projection algorithms convex optimization optim theory vol ram veeravalli asynchronous gossip algorithms stochastic optimization cdc ieee international conference decision control shanghai china december wei ozdaglar jadbabaie distributed newton method network utility algorithm ieee transactions automatic control vol
| 7 |
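The idling mechanism in the row above draws an independent Bernoulli(p_k) activation per node per iteration, lets only active nodes exchange estimates with their active neighbours and update, and divides the local gradient term by p_k so the expected update matches the always-active method. Below is a minimal first-order sketch of one such round; the DQN second-order correction and the safeguarding step are deliberately abstracted away, and the function name, argument layout, step-size handling and any numeric values are illustrative assumptions rather than the paper's code.

import numpy as np

def idling_round(x, grads, W, neighbors, p_k, alpha, step, rng):
    # One synchronous round of the idling mechanism (first-order sketch).
    # x: (n, p) array of node estimates; grads[i]: callable returning the
    # gradient of f_i at a point; W: doubly stochastic weight matrix;
    # neighbors[i]: list of neighbours of node i; p_k: activation
    # probability at this iteration.
    n = x.shape[0]
    active = rng.random(n) < p_k          # independent Bernoulli activations
    x_new = x.copy()                      # idle nodes keep their estimates
    for i in np.flatnonzero(active):
        act_nbrs = [j for j in neighbors[i] if active[j]]
        consensus = sum(W[i, j] * (x[i] - x[j]) for j in act_nbrs)
        # dividing the local gradient by p_k keeps the update balanced
        # in expectation with the always-active algorithm
        d_i = consensus + alpha * grads[i](x[i]) / p_k
        # DQN would replace d_i here by the safeguarded quasi-Newton
        # direction s_i built from d_i and the splitting matrices
        x_new[i] = x[i] - step * d_i
    return x_new, active

An increasing schedule such as p_k = 1 - 0.5 * 0.9 ** k (values illustrative) converges to one geometrically, which is the regime in which the row above reports that the R-linear rate of standard DQN is preserved.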
leveraging diversity sparsity blind deconvolution ali ahmed laurent dec december abstract paper considers recovering vectors circular convolutions vector assumed known basis spread fourier domain input member known random subspace prove whenever problem solved effectively using minimization convex relaxation long inputs sufficiently diverse obey diverse inputs mean belong different generic subspaces knowledge first theoretical result blind deconvolution subspace belongs fixed needs determined discuss result context multipath channel estimation wireless communications fading coefficients delays channel impulse response unknown encoder codes message vectors randomly transmits coded messages fixed channel one decoder discovers messages channel response number samples taken received message roughly greater number messages roughly least introduction paper addresses problem recovering vector circular convolutions individually series unknown vectors consider linear lti system characterized unknown impulse response system driven series inputs one wants identify system observing outputs case convolutions inputs system impulse response problem referred blind system identification jointly discover inputs system impulse response outputs one core problems field system theory signal processing ali ahmed currently information technology university lahore pakistan associated recently department mathematics mit cambridge laurent demanet department mathematics mit cambridge email corresponding author alikhan authors sponsored afosr grants also funded nsf onr total thank augustin cosse interesting discussions preliminary results direction presented earlier conference publication namely convex approach blind deconvolution diverse inputs proc ieee camsap cancun december work submitted ieee transactions information theory possible publication copyright may transferred without notice version may longer accessible draft december expected sparse problem recast standard fashion recovery simultaneously sparse matrix relax formulation dropping sparsity contraint using minimization leverage results well understood area recovery underdetermined systems equations give conditions unknown impulse response inputs deconvolved exactly roughly results say input vectors lives known generic subspace vector incoherent fourier domain assumed known basis separable high probability provided log factors appropriate coherences appearing constants precisely state problem follows assume input lives known subspace basis matrix whose columns span subspace resides moreover vector assumed basis matrix convenient think identity upon first reading given basis matrices need know expansion coefficients discover inputs structural assumptions much weaker need sparse known basis whereas resides generic known subspace observe circular convolutions entry observation vector mod modulo makes convolution circular given information inputs impulse response clear quantities uniquely identified observations want put result perspective outset comparing related result single input blind deconvolution problem analyzed mathematically main result shows single input deconvolution problem vectors recovered circular convolution lives known generic subspace however unlike incoherent vector also lives known subspace paper known subspace assumption makes significant improvement results concrete implications important applications explained section draft december notations use upper lower case bold letters matrices vectors respectively scalars represented upper lower 
case letters notation denotes row vector formed taking transpose without conjugation column vector mean column vector obtained conjugating entry linear operators represented using script letters repeatedly use notation indicate index takes value range scalar use denote set notation denotes identity matrix scalar set denotes submatrix identity obtained selecting columns indexed set also use represent matrix ones along diagonal locations zeros elsewhere denote standard basis vectors conventional kronecker product write vec vector formed stacking columns matrix given two matrices denote matrix vec vec similarly takes matrix use show variable varies scalar lastly operator refers expectation operator represents probability measure lifting convex relaxation section recast blind system identification diverse inputs simultaneously sparse matrix recovery problem set semidefinite program sdp solve begin defining discrete fourier transform dft matrix let denote row fourier domain convolutions lhf wihf denotes hadamard product using fact obtain lhb hihmn last equality follows substituting equivalently expressed using fact matrices arguments notation denotes usual trace inner product denotes length vector zeros except position indexed denoting standard basis vectors clear measurements rkn linear outer product draft december since expansion coefficients shows inverse problem thought question recovering matrix columns linear measurements obtained trace inner products known measurement matrices define linear map cln number unknowns lkn linear measurements available means linear map severely underdetermined except trivial case cases infinitely many candidate solutions satisfy measurements constraint owing null space course take advantage fact unknown matrix always simultaneously sparse hence inherent dimension much smaller information theoretically speaking number unknowns log effectively solve simultaneously sparse matrices inverting system equations might possible suitable linear map log possible single unknown input certain structural assumptions would suffice identify system completely however known individually relax sparse structures namely using nuclear norms remains open question efficiently relax structures instead ignore sparsity altogether cater structure problem remains principle solvable inherent number unknowns case become smaller number observations soon number inputs exceeds therefore main idea paper use multiple inputs allow forego use sparsity penalty relaxed program formulate optimization program worth mentioning recovery matrix guarantees recovery within global scaling factor recover much concern practice inverse problem cast matrix recovery problem linear measurements follows find subject rank optimization program non convex general hard due combinatorial rank constraint owing vast literature solving optimization programs natural choice combining nuclear norms constitute convex penalty simultaneously sparse known suboptimal fact special case sparse matrices effective convex relaxation exists thus even sparse structures individually handled effective convex relaxations obvious convex penalty known simultaneously spase structure draft december form well known good convex relaxation argmin subject nuclear norm sum singular values system identification problem successfully solved guarantee minimizer convex program equals recovery determined linear map interest lately several areas science engineering growing literature concerned finding properties linear map expect obtain true solution 
solving optimization program main results section state main result claiming optimization program recover sparse matrix almost always inputs reside relatively dense generic dimensional subspaces satisfies nominal conditions known basis incoherence fourier domain stating main theorem define terms generic incoherence concretely recall incoherence basis introduced quantified using coherence parameter entrywise uniform norm using fact orthonormal matrix easy see simple example matrix achieves minimum would dft matrix incoherence fourier domain measured max ratio measure diffusion fourier domain spirit definition mainly captured first term scaled peak value fourier domain terms involve quantities defined sequel random perturbations present technical reasons notice first term small diffuse frequency domain otherwise large keep results general possible introduce extra incoherence parameter quantifies distribution energy among inputs defined max kmn bounded coherence achieves lower bound energy equally distributed among inputs upper bound attained energy localized one inputs rest zero draft december mentioned earlier want inputs reside generic subspace realize choosing iid gaussian matrices normal generic subspace refers subspaces entire continuum subspaces however one must also mindful generic subspaces may arise naturally applications may introduced design demonstrated stylized application section ultimately working rows matrix defined columns real orthonormal matrix columns also gaussian vectors conjugate symmetry hence rows distributed normal normal jnormal note vectors independently instantiated every hand vectors longer independent every rather independence retained however still uncorrelated fact crucial analysis follow later ready state main result theorem suppose bases constructed coherences basis matrix expansion coefficients defined furthermore ease notation set log log log log fixed exists constant max log unique solution probability least recover inputs within scalar multiple convolutions result crudely says dimensional space incoherent vector known basis separated successfully almost always vectors equal energy distribution lying known random subspaces dimension whenever application blind channel estimation using random codes stylized application blind system identification directly arises multipath channel estimation wireless communications problem illustrated figure sequence construction explicitly even easily adapted case odd draft december messages coded using taller coding matrices respectively coded messages transmitted one unknown multipath channel characterized sparse impulse response transmitted message arrives receiver multiple paths path introduces delay fading delayed scaled copies overlap free space communication medium received signal modeled convolution action repeated delay fading coefficients every words assuming channel impulse response less fixed duration transmission coded messages justifies use fixed impulse response convolutions task decoder discover impulse response messages observing convolutions using knowledge coding matrices main result theorem took vector sparse incoherent basis application discussed last paragraph simply take basis standard basis perfectly incoherent location entry depicts delay arrival time copy coded message receiver certain path value entry known fading coefficient incorporates attenuation phase change encountered path coherence parameter roughly peak value normalized frequency spectrum channel response particular application assume 
transmitter energy equally distributed among message signals results prove message coded using random coding matrix channel response approximately flat spectrum recover messages channel response jointly almost always solving whenever length messages sparsity channel impulse response codeword length obey number messages convolve instantiation channel roughly exceed results thought extension blind deconvolution result appeared look unknown channel observe single convolution impulse response randomly coded message consequently fading coefficients could resolved delays impulse response channel words one needs know subspace support advance general fading coefficient delays equally important pieces information decipher received message wireless communications paper take advantage several looks channel remains fixed transmission messages enables estimate fading coefficients unknown delays time general assume vector lives known subspace case related work nutshell knowledge paper first literature theoretically deal impulse response belongs subspace fixed ahead time needs discovered lifting strategy linearize bilinear blind deconvolution problem proposed rigorously shown two convolved vectors separated blindly dimensional subspaces known one subspace generic incoherent fourier domain shown using dual certificate approach draft december channel encode decode figure blind channel estimation meassage block messages coded corresponding tall coding matrix block coded messages sequentially transmitted arbitrary unknown channel results convolution coded messages unknown impulse response decoder receives convolutions discovers messages unknown channel within global scalar rank matrix recovery literature vectors deconvolved exactly paper extends single input blind deconvolution result multiple diverse inputs observe convolutions vectors known subspaces fixed vector known sparse known basis natural question arises whether multiple inputs necessary problem identify answer specific case even single input case random subspace assumption replacing nuclear norm standard norm sum absolute entries separate however sample complexity suboptimal order within log factors general single input case random subspace assumption shown identifiable related question sense dual presented previous section multichannel blind deconvolution see figure discrete time problem modeled follows unknown noise source feeds unknown multipath channels characterized impulse responses receiver channel observes several delayed copies overlapped amounts observing convolutions noise modeled gaussian vector well dispersed frequency domain vector incoherent according definition fading coefficients multipath channels unknown however assume delays known amounts knowing subspace channels unknown impulse responses expressed every columns known coding matrices trivial basis vectors contain unknown fading coefficients channel indices delays every impulse response modeled random case coding matrices composed random subset columns identity matrix coding matrix known random multichannel blind deconvolution problem spirit dual blind system identification diverse inputs presented paper roles channel source signal reversed however results theorem explicitly derived dense gaussian coding matrices draft december random sparse matrices worth mentioning many practical situations non zeros channel impulse response concentrated top indices making assumption known subspaces delays plausible series results blind deconvolution appeared different sets assumptions inputs 
example result considers image debluring problem receiver observes subsampled circular convolutions image modulated random binary waveforms bandpass blur kernel lives known subspace possible recover image incoherent blur kernel using lifting nuclear norm minimization whenever log log also number subsampling factor convolution result shows possible deconvolve two unknown vectors observing multiple time one vectors randomly modulated convolved vector living known subspace also observing multiple convolutions one vectors convolved pair changing every time subspace also unknown makes result much broader another relevant result blind deconvolution plus demixing one observes sum different convolved pairs vectors lying dimensional known subspaces one generic incoherent fourier domain generic basis chosen independently others blind deconvolution plus demixing problem cast matrix recovery problem algorithm successful important recent article group settles recovery guarantee regularized gradient descent algorithm blind deconvolution case scaling result however makes assumption fixed subspace sparse impulse response note gradient descent algorithms expected much favorable runtimes semidefinite programming basin attraction established wide enough multichannel blind deconvolution first modeled recovery problem experimental results show successful joint recovery gaussian channel responses known support fed single gaussian noise source interesting works include least squares method proposed approach deterministic sense input statistics assumed known though channel subspaces known results various assumptions input statistics found owing importance blind deconvolution problem expansive literature available discussion possibly cover related material however interested reader might start nice survey articles references therein also worth mentioning related line research phase recovery problem phaseless measurements happen quadratic unknowns bilinear problems also possible lift quadratic phase recovery problem higher dimensional space solve matrix minimal rank satisfies measurement constraints draft december figure blind multichannel estimation unknown noise source feeds unknown multipath channels characterized sparse impulse responses observe convolutions receivers task recover channel responses together noise signal problem thought dual blind channel estimation problem roles channels source signals reversed fixed incoherent vector fed channels channel impulse responses reliably modeled bernoulli gaussian distribution numerical simulations alternative computationally expensive semidefinite program rely heuristic program argmin subject solves matrices ckn semidefinite constraint always satisfied substitution program proposed results therein showed local minima global minima rank optimal solution since case optimal solution solve declare recovery rank deficient best approximation constitutes solution program considerably speeds simulations instead operating lifted space like lkn variables involved operates almost natural parameter space much fewer number variables use implementation lbfgs available solve additional advantage suitable initialization required comparatively recently proposed gradient descent scheme bilinear problems requires solve separate optimization program initialize well also gradient updates involve additional unnatural regularizer control incoherence present phase transitions validate sample complexity results theorem shade phase transitions represents probability failure determined 
counting frequency failures twenty five experiments pixel phase transitions classify recovered solution failure draft december phase transitions take gaussian vectors observe constructed dense vector sparse model recall theorem restricts sparse vector however simulation results show successful recovery general case dense observation conformation belief sparsity assumption result merely technical requirement due proof method similar phase transitions obtained restrictive sparse model present two sets phase transitions set contains three phase transition diagrams diagram fix one variables vary two small increments compute probability failure every time outlined earlier section first set mimic channel estimation problem discussed section shown figure take gaussian matrices figure shows fixed able recover inputs soon phase diagrams figure mainly show performance algorithm become roughly oblivious number inputs soon particular range considered phase transition diagrams second set shown figure simulate blind channel estimation problem discussed section shown figure set contains similar phase diagrams first set assumptions difference matrices random subsets columns identity words take support random known results almost exactly first set proof theorem observing linear measurements unknown vector define supp show solution sdp equals high probability establish existence valid dual certificate proof recovery using dual certificate method standard approach employed literature many times construction dual certificate uses golfing scheme unusually technical probabilistic dependence iterates turn precludes use matrix concentration inequalities let rkn arbitrary vectors defined earlier let linear space matrices rank two defined space matrices rows supported index set defined note matrix interest member space let define related projection operators start defining takes matrix vector rows sets draft december figure empirical success rate deconvolution recall every experiments vectors gaussian matrices also independent gaussian fix vary successful reconstruction obtained probability one fix vary successful reconstruction obtained probability one fix vary successful reconstruction occurs probability one rows indexed index set zero mathematically define projection index set denotes submatrix identity matrix columns indexed set orthogonal projector onto defined projector onto orthogonal complement simply note definition projection assume without loss generality draft december figure empirical success rate deconvolution experiments vector gaussian every sparse vector random support entries gaussian fix vary successful reconstruction obtained probability one fix vary successful reconstruction obtained probability one fix vary successful reconstruction occurs probability one optimality conditions presented lemma success nuclear norm minimization involve normalized following lemma gives sufficient conditions dual certificate range guarantee minimization program produces solution proof lemma almost exactly difference instead working space matrices dealing space matrices repeat proof show details also work space draft december lemma optimality conditions let defined positive number kak null matrix unique minimizer exists range proof let denote solution optimization program implies given enough show null establish exact recovery since two conflicting requirements would directly mean nuclear norm point see details since definition every obtain every range using fact also maximizing inner product respect gives 
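The optimality-conditions lemma that follows is hard to read in this extracted form, so here is a schematic LaTeX restatement of the standard dual-certificate sufficient condition it establishes. The notation below is ours, not the source's: X_0 = U V^* denotes the true rank-one matrix, T its tangent space, P_T and P_{T^perp} the associated projections, \mathcal{A} the measurement operator, and \gamma an upper bound on its operator norm; the exact numerical constants are the ones stated in the lemma itself.

\[
\|\mathcal{A}\| \le \gamma, \qquad
\exists\, Y \in \operatorname{range}(\mathcal{A}^{*}) \ \text{with}\ 
\|\mathcal{P}_{T}(Y) - UV^{*}\|_{F} \lesssim \frac{1}{\gamma}
\ \text{and}\ 
\|\mathcal{P}_{T^{\perp}}(Y)\| \le \frac{1}{2}
\]
\[
\Longrightarrow \quad
X_{0}\ \text{is the unique minimizer of}\ \min\bigl\{\|X\|_{*} \,:\, \mathcal{A}(X) = \mathcal{A}(X_{0})\bigr\}.
\]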
inequality kar null implies kar turn means inequality implies firstly whenever secondly using results bound earlier inequality gives conditions right hand side strictly positive means enough exhibit uniqueness draft december next lemma provides upper bound operator norm linear map lemma operator norm let defined kak log probability least proof operator norm calculated using fact implies kak max write random variables degree degree cases maximum taken unique max max last choose log gives inequality follows fact kak log probability least following section focus constructing dual certificating using golfing scheme shown satisfy uniqueness conditions lemma linear operators golfing partition prove uniqueness conditions use dual certificate constructed using variation golfing scheme end partition index set disjoint sets defined subset chosen uniformly random every every parameter adjusted proof assume without loss generality words partition set obtained dividing randomly chosen disjoint sets given repeat process independently every obtain total sets define every disjointness among sets critical importance ensures dependence arises due reuse different partitioned sets define linear map cqn returns measurements indexed zib assuming factor achieved worst case increasing number measurements convolution factor affects measurements bounds theorem multiplicative constant draft december tandem partitioning measurements also require partition rows sets submatrices behave roughly isometry sparse vectors quantitatively want rows matrix sets obey vectors supported set reader familiar compressive sensing readily recognize restricted isometry property rip submatrices result compressive sensing says submatrix rows coherence defined index set chosen uniformly random exists constant log log log implies rip holds probability exceeding given partition chosen uniformly random result log means log log sup probability least supported given simple union bound number sets shows max sup max holds probability least rest article take results given constructing dual certificate define nomenclature let matrix containing columns identity matrix indexed set direct conclusion gives max ksn max ksn addition set linear operator defined action matrix follows xdn draft december coherence development position precisely define coherence parameter first introduced section diffusion impulse response quantified using following definition max max quantities section easily read expression norm equivalences resulting following lemma presents upper lower bounds lemma range let defined respectively assume holds proof assume without loss generality since orthonormal matrix easy see far concerned upper bound max max ksn max max max sksn first inequality result second one follows definition coherence equivalence fact vector last one result lower bound obtained summing follows max ksn max max max equality due fact orthonormal matrix last inequality follows similar manner compute upper lower bounds result combining results claim lemma follows spirit incoherence captured first term maximum namely small diffuse frequency domain large otherwise two terms mainly due technical reasons proof presented later hard question characterize exactly qualitatively expected order first term matrices random construction expected make vectors aligned rows defined illustration listed introduction equivalent within inconsequential multiplicative factor draft december construction dual certificate via golfing iteratively build dual certificate range iterations 
initial value follows note range every approach build dual certificate first developed projecting sides results denoting results recursion turn implies running iteration till gives candidate dual certificate establish unique solution need show kwp light frobenius norm upper bounded kwp difference construction iterates mainly avoid technical difficulties arise proofs later owing dependencies dependencies stem fact although every random set independent however every sets dependent construction directly implies sets dependent therefore hence dependent shown detail proofs follow introduction avoid dependence problem ensures critical importance controlling random quantities proofs follow however introduce unlike matrix fixed bias draft december bound operator norm term achieved using simple triangle inequality followed application lemma section obtain probability least constant upfront comes union bound parameter controls choice note expectation random construction randomness due sets operator norm remaining terms expression bounded using lemma section conclude kwp every holds probability least means using crude union bound holds probability least kwp choosing log sufficient imply kwp value dictated lemma proves first half prove second half use construction write since every means taking operator norm triangle inequality followed application fact shows note adding subtracting first term similarly adding subtracting every term summation subsequently using triangle inequality obtain note expectation random construction randomness due sets term right hand side draft december controlled using corollary lemma section respectively hold choices conform theorem give upper bound using union bound holds probability least remark lemma control relies uniform result sparse vectors overcome statistical dependence able control term one main reasons working instead since columns always enabling employ uniform result technical requirement restricts results however think proof technique may improved work completely dense vector suggested numerical experiments section also need show holds end note corollary section linear map hence null kar rkkr corollary shows using fact inequality proves finally choice upper bounds statement theorem tightest upper bound conforms lemmas corollaries uses fact log derived nuclear norm minimization recovers true solution conclusion hold true failure probability lemmas corollaries less equal hence using union bound probability none items fail completes proof theorem concentration inequalities lemmas require application either uniform version version matrix bernstein inequality control operator norm sum independent random matrices stating give overview results used later exposition begin giving basic facts subgaussian subexponential random variables used throughout proofs proofs facts found standard source see example orlicz norms scalar random defined inf exp draft december vector matrix defined setting kzk respectively definition therefore restrict discussion scalar random variables trivially extended vectors matrices using mentioned equivalence key facts relating orlicz norms subgaussian subexponential random variables follows subgaussian random variable characterized fact norm always finite similarly subexponential random variable subgaussian iff subexponential furthermore points proof interested bounding coarse bound obtained using triangle inequality followed jensen inequality implies also find handy generalized version fact namely product two subgaussian random variables 
subexponential working gaussian random variables mostly proofs useful identities gaussian vector normal fixed vector random variable also gaussian hence must subexponential tail behavior easily verified every scalar moreover strongly concentrated mean exists tail behavior random variable completely defines specifically subexponential random variable completes required overview state matrix bernstein inequalities heavily used proofs proposition uniform version let iid random matrices dimensions satisfy suppose kzq almost surely constant define variance max draft december exists probability least max log log version bernstein listed depends orlicz norms matrix defined inf exp proposition version let iid random matrices dimensions satisfy suppose kzq constant exists define variance probability least max log log log key lemmas section provides important lemmas constitute main ingredients establish uniqueness conditions construction dual certificate previous section conditioning results section concern conditioning linear maps restricted space lemma let coherences defined respectively fix choose subsets constructed section sufficiently large linear operators defined obey max probability least lemma let coherences lemma fix choose sufficiently large linear operator defined obeys probability least draft december corollary corollary lemma let coherences lemma fix assume sufficiently large linear operator defined obeys probability least lemma let lemma fix exists constant log implies linear operator defined obeys probability least coherences iterates section define coherences iterates show coherences bounded terms respectively coherences defined among variables terms partition defined section assume implications restricted isometry property given moreover also take results lemma true following results order lemma max every every lemma define max max every fix exists constant implies every probability least use index variables already reserved index set proofs lemmas draft december lemma define max every let lemma sufficiently large every probability least range finally results section help establish dual certificate mostly lies one uniqueness conditions let respectively addition let coherences defined respectively shall take given following results order lemma fix exists constant max log log implies holds probability least corollary corollary lemma let coherences respectively fix exist constant max log log implies probability exceeding lemma assume restricted isometry property holds probability least lemma let respectively fix exists constant log log implies probability least draft december proofs key lemmas section provides proofs key lemmas laid section main lemmas involve bounding operator norm sum independent random matrices high probability matrix bernstein inequality used repeatedly compute probability tail bounds proof lemma following calculations come handy using definition projection see also follows definition another quantity interest expanded hpb kpb kpb moreover hpsn ksn ksn ready move proof lemma given proof lemma proof lemma concerns bounding quantity using definition respectively expand quantity evaluate expectation sets fixed follows construction sets section clear means every index traverses set means split summation outer sum inner sum moreover definition psn following equality order psn ikn draft december easy see using definition section psn implies given need control term definitions section write note action linear map matrix clear definition operators thus asking question 
bounding operator norm sum independent operators subtracting expectation get operator norm sum controlled using bernstein inequality variance main ingredient compute bernstein bound case max max last inequality follows fact two positive semidefinite psd matrices kak whenever psd matrix first term maximum variance expression simplified mentioned earlier linear operator visualized matrix product matrix vectorized vec easy see thus draft december remind reader expectation randomness construction partition using expansion quantity upper bounded ksn max max ksn max max last equality made use linearity kronecker operator bound max ksn max kpb ksn max turn used cauchy schwartz inequality fact rows columns supported index set last inequality result together easy verify kmn applying facts krk expression simplifies max max kmn using definition coherences fact arbitrary matrix kak max max draft december ikn last inequality follows computation second term maximum variance expression follows similar route short max psn last line result definition completes calculation variance term second ingredient bernstein bound calculation orlicz norms summands follows since operator norm simplifies also easy show let begin showing random variable subgaussian enough prove kkr using kkr kkr similar manner one shows kkr kkr kbsn using obtain ckkr kkr thus random variable hence gives log log draft december last inequality result plugging upper bound variance using log max log probability least result statement lemma follows choosing statement lemma sufficiently large constant proof lemma proof lemma considers bounding quantity proof corollary similar proof lemma basically follows replacing therefore lay steps briefly start expressing quantity interest sum independent random linear maps define since linear maps thought symmetric matrices means therefore variance suffices compute using calculation goes along lines employing compute upper bound variance max kmn max orlicz norm turns kkr shows thus using definition proposition one obtains log log log draft december ingredients compute deviation bound apply bernstein inequality proposition choose log obtain max log log probability least choice sufficiently large constant guarantees right hand side smaller proof corollary proof proof exactly lemma except instead using defined subset instead use linear operator defined entire set measurements means need replace defined addition note identity operator matrix making changes proof lemma one obtains result claimed statement corollary proof lemma proof note using definition operator norm quantity interest simplified using definition section facts rpr follows krk krk max apply uniform result bound last quantity really weaker nonuniform result compressive sensing literature suffices also results overall draft december tighter bound simple application uniform result would give nonuniform result says exists constant whenever max log log max probability least specifically taking says taking log enough guarantee probability least union bound sets partition means max probability least plugging result proves lemma proof lemma proof direct implication restricted isometry property later using one obtains kwp definition cauchy schwartz inequality kwp max using verify max finally definition conclude plugged back bound completes proof lemma draft december proof lemma proof exposition different owing difference iterative construction dual certificate choice start considering following lemma provides upper bound turn used bound lemma let max 
max max max let defined assume restricted isometry property holds fix exists constant max log every probability least lemma established section using clear upper bound obtained evaluating maximum quantities putting together directly implies max max max max choosing lemma results probability least remaining case max max draft december using definition max max max max max max following corollary provides bound first term sum corollary corollary lemma let max max max max max max let lemma let assume restricted isometry property holds fix exists constant max log probability least proof corollary provided section easy see using definitions max max max max using calculations first term bounded applying corollary choosing lemma large enough constant achieve max max second term sum lemma provides upper bound draft december lemma define assume holds max proof proof lemma provided section second term appeal lemma directly obtain evaluating maximum using definition coherences max max plugging shows combining fact completes proof lemma proof lemma proof first consider case using lemma bound terms respectively quantity corresponding bound max max log max log using definitions coherences iterates definition coherence fact one verify max max max plugging back choosing lemma large enough sufficient guarantee draft december arbitrarily small number lies zero one bounding iteratively using using fact relation gives remains bound max using definition followed application simple identity application corollary gives max max max log max using fact easy see max max max max kmn max max used fact fact upper bound definitions max similar manner one show max choice statement lemma suitably large constant one show max draft december last inequality follows remaining term using lemma taking summation followed maximum sides using definition obtain max plugging back returns combining small enough means bound completes proof proof lemma proof proof concerns bounding deviation mean resort matrix bernstein inequality control definition section expected value given quantity interest expressed sum mean zero independent random matrices application matrix bernstein inequality proposition requires compute bound end variance first inequality result fact kak psd application lemma shows draft december taking summation returns thus operator norm produces variance max last line follows definition similar manner compute first inequality follows exact reasoning summand simplifies expectation moved inside obtain using orthogonality operator norm simplifies max max follows definition coherence per maximum variance fact proven showing max first note second operator norm matrix consideration draft december subgaussian random variable arbitrary matrix also subgaussian implies proposition max max max max means log kqn log applying coherence bounds cln last inequality follows fact combining ingredients gives final result choosing log bernstein inequality shows max log holds probability least using union bound follows lemma least one coherence bounds fails probability means coherence bounds hold probability least plugging coherence bounds choosing lemma appropriately large constant ensures holds probability least using union bound choices show conclusion holds probability least proof corollary proof proof follows essentially proof lemma taking afterwards taking equivalently final bound obtained making changes log log max constant may differ one choosing corollary large enough proves corollary draft december proof lemma proof begin noting 
clear fixed one takes random construction account dependence turn means dependent means simple way write quantity sum independent random matrices apply matrix bernstein inequality control size fortunately work uniform bound using restricted isometry property works matrices thus overcome issues intricate dependencies equivalence operator frobenius norm projection operator defined orthogonal complement note psn using definition clear psn implies using fact means reduces far second term expression concerned using one obtains last equality follows fact orthonormal vectors orthogonal definition matrix dependent sets turn dependent construction however avoid dependence issue result uniform nature sense draft december holds vectors employing result every column obtain ksn max ksn holds probability least implies using calculation decay rate kwp statement lemma sufficient guarantee every using union bound statement extended probability least proof lemma proof using definition evaluating expectation easy see section know sets every independent chosen uniformly random define set bernoulli sets defined independently chosen bernoulli number every takes value one probability since probability failure event number nonincreasing function every follows orthogonality fact increasing increases range projector hence distance operator norm either decrease stay follows using lemma probability failure event less equal twice probability failure event draft december therefore rest proof suffices consider event index sets defined write using definition bernoulli sets right hand side summation note used fact simple application matrix bernstein enough show event holds desired high probability end calculation variance laid follows denote centered random matrices matrices independent every set chosen maximum operator norms two independently variance quantities firstly kmn kmn max second last inequality follows fact obtained applying definitions secondly last inequality max kmn used facts complete orthonormal basis operator norm block diagonal matrix nth block upper bounded maxn kmn maximum operator last inequality result thus variance norm two results bounded last ingredient required apply bernstein inequality proposition max max max kmn max max draft december ingredients place application uniform version bernstein bound log tells log log max right hand side driven desired small number choosing log log appropriately large constant probability inequlaity holds follows plugging choice log proposition supporting lemmas section proves lemma corollary lemmas proof lemma proof start proof lemma concerns bounding quantity let quantity expanded using definition second equality follows previously shown fact thus random vector expanded sum independent random vectors using definition map follows use matrix bernstein inequality find range random vector lies high probability let define random vectors proposition suffices compute following upper bound variance max use index variables avoid conflict reserved index set proof draft december note vectors rewritten scalar times vector follows using vector part expanded easily shown upper bounded last line follows fact standard gaussian vector easily verified using fact one max max note change index variable right hand side moreover using kpb inclusion projection operator defined vector due fact rows matrix supported furthermore using definitions finally bound result max putting identities together kmn one directly obtains max kmn max max draft december ingredient left apply bernstein 
bound proposition orlicz norm summands end norm vector evaluated follows since gaussian vectors discussion section tells ckmn means kmn kpb kmn used fact kpb gaussian vector orlicz norm inner product fixed vector using facts show random summands vectors computing showing bounded note inequality follows using identity note max max max change indices justified zero using result write kpb setting proposition one obtains upper bound max max max max draft december last inequality follows plugging bound calculated earlier proof lemma logarithmic factor bernstein bound crudely bounded follows log log constant follows fact completes ingredients apply bernstein bound log obtain max log holds probability least completes proof lemma proof corollary proof note equivalently write using fact comparison lemma concerned bounding norm term need replace repeat argument proof lemma leads bound norm statement corollary terms defined respectively compared lemma replaced result respectively quantities evaluated afterwards therein replaced proof lemma proof using definition triangle inequality using definition defined section kmn kmn max kmn kmn draft december second last equality results definition directly imply choice lemma since kmn ikn result simplifies max finally operator norm quantity returns max using obtain max squaring sides results max completes proof lemma let defined fixed matrix xdn proof note xdn xdn references aghasi bahmani romberg tightest convex envelope heuristic row sparse rank one matrices globalsip page ahmed recht romberg blind deconvolution using convex programming ieee trans inform theory draft december bahmani romberg lifting blind deconvolution random mask imaging identifiability convex relaxation siam imag burer monteiro nonlinear programming algorithm solving semidefinite programs via factorization math restricted isometry property implications compressed sensing comptes rendus mathematique eldar strohmer voroninski phase retrieval via matrix completion siam review recht exact matrix completion via convex optimization found comput romberg sparsity incoherence compressive sampling inverse problems romberg tao robust uncertainty principles exact signal reconstruction highly incomplete frequency information ieee trans inform theory february romberg tao stable signal recovery incomplete inaccurate measurements commun pure appl strohmer voroninski phaselift exact stable signal recovery magnitude measurements via convex programming commun pure appl sunav choudhary urbashi mitra sparse blind deconvolution done ieee int symp inform theory isit pages fazel matrix rank minimization applications phd thesis stanford university march fazel hindi boyd rank minimization heuristic application minimum order system approximation american control volume pages ieee gross recovering matrices coefficients basis ieee trans inform theory nikias evam algorithm multichannel blind deconvolution input colored signals ieee trans sig koltchinskii lounici tsybakov penalization optimal rates noisy matrix completion ann levin weiss durand freeman understanding blind deconvolution algorithms ieee trans patt analys mach ling strohmer wei rapid robust reliable blind deconvolution via nonconvex optimization arxiv preprint draft december shuyang ling thomas strohmer blind deconvolution meets blind demixing algorithms performance bounds ieee trans inform theory liu tong kailath recent developments blind channel equalization cyclostationarity subspaces ieee trans sig oymak jalali fazel eldar hassibi simultaneously 
structured models application sparse matrices ieee trans inform theory recht simpler approach matrix completion mach learn recht fazel parrilo guaranteed solutions linear matrix equations via nuclear norm minimization siam romberg tian sabra multichannel blind deconvolution using low rank recovery spie defense security sensing pages int soc opt rudelson vershynin sparse reconstruction fourier gaussian measurements commun pure appl schmidt minfunc unconstrained differentiable multivariate optimization matlab http tong perreau multichannel blind identification subspace maximum likelihood methods proc ieee tong kailath blind identification equalization based statistics time domain approach ieee trans inform theory tropp tail bounds sums random matrices found comput vershynin compressed sensing theory applications cambridge university press watson characterization subdifferential matrix norms linear algebra liu tong kailath approach blind channel identification ieee trans sig draft december
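To make the measurement model described in the text above concrete — observations y_n = h (circular-convolution) x_n, with h = B w sparse in a known basis and each input x_n = C_n m_n lying in a known random subspace, identifiable only up to a global scale — here is a small self-contained numerical sketch. It is not the paper's semidefinite program nor its Burer–Monteiro / L-BFGS solver; it simply alternates linear least-squares updates on the bilinear model as a plain baseline, works with real-valued signals for simplicity (the paper's construction is complex and conjugate-symmetric), and all dimensions and names (Q, K, S, N, B, C, cconv, ...) are illustrative choices of ours. Convergence from a random start is not guaranteed; the point is only to exhibit the model and the scaling ambiguity.

import numpy as np
from scipy.linalg import circulant

rng = np.random.default_rng(0)
Q, K, S, N = 64, 10, 5, 8   # signal length, subspace dimension, sparsity, number of inputs

# Ground truth: h = B w with w S-sparse (B = identity here for simplicity),
# inputs x_n = C_n m_n in known random subspaces, observations y_n = h (*) x_n.
B = np.eye(Q)
w_true = np.zeros(Q)
w_true[rng.choice(Q, S, replace=False)] = rng.standard_normal(S)
C = rng.standard_normal((N, Q, K)) / np.sqrt(Q)
M_true = rng.standard_normal((N, K))

def cconv(a, b):
    # circular convolution of two real length-Q vectors via the FFT
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

Y = np.stack([cconv(B @ w_true, C[n] @ M_true[n]) for n in range(N)])

# Alternating least squares on the bilinear model; each half-step is a linear solve.
w = rng.standard_normal(Q)          # dense start; sparsity of w is deliberately not enforced
M = rng.standard_normal((N, K))
for _ in range(50):
    h = B @ w
    for n in range(N):              # y_n = Circ(h) C_n m_n is linear in m_n
        M[n] = np.linalg.lstsq(circulant(h) @ C[n], Y[n], rcond=None)[0]
    A = np.vstack([circulant(C[n] @ M[n]) @ B for n in range(N)])
    w = np.linalg.lstsq(A, Y.ravel(), rcond=None)[0]   # y_n = Circ(x_n) B w is linear in w

# Recovery is only meaningful up to the global scaling ambiguity, so compare normalized vectors.
wn, wt = w / np.linalg.norm(w), w_true / np.linalg.norm(w_true)
print("relative error on h up to scale:", np.linalg.norm(wn - np.sign(wn @ wt) * wt))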
| 7 |
jsj decompositions groups vincent guirardel gilbert levitt jul abstract account theory jsj decompositions finitely generated groups developed last twenty years give simple general definition jsj decompositions rather bassserre trees maximal universally elliptic trees general preferred jsj decomposition right object consider whole set jsj decompositions forms contractible space jsj deformation space analogous outer space prove jsj decompositions exist finitely presented group without assumption edge groups edge groups slender describe flexible vertices jsj decompositions quadratically hanging extensions groups similar results hold presence acylindricity particular splittings csa groups abelian groups splittings relatively hyperbolic groups virtually cyclic parabolic subgroups using trees cylinders obtain canonical jsj trees invariant automorphisms introduce variant property universally elliptic replaced restrictive rigid property universally compatible yields canonical compatibility jsj tree deformation space show exists finitely presented group give many examples work throughout relative decompositions restricting trees certain subgroups elliptic introduction jsj decompositions first appeared topology theory characteristic submanifold johannson terminology jsj popularized sela start quick review restricting manifolds without boundary groups let closed orientable given finite collection disjoint embedded one may cut open along glue balls boundary pieces make boundaryless expresses connected sum closed manifolds prime decomposition theorem asserts one may choose spheres either irreducible every embedded bounds ball homeomorphic moreover permutation summands uniquely determined homeomorphism group level one obtains decomposition free product fundamental group irreducible free group rank coming summands decomposition grushko decomposition following sense freely indecomposable written free product isomorphic guaranteed conjecture proved perelman finitely generated group grushko decomposition conjugacy permutation prime decomposition implies one focus irreducible manifolds since spheres bound balls one considers embedded tori order avoid trivialities torus bounding tubular neighborhood curve tori incompressible embedding induces injection fundamental groups theory characteristic submanifold says given irreducible exists finite family disjoint incompressible tori component manifold boundary obtained cutting along either atoroidal every incompressible torus boundary parallel seifert fibered space singular fibration circles surface better viewed orbifold moreover incompressible torus may isotoped disjoint groups mind let point important feature decomposition two incompressible tori made disjoint isotopy may isotoped contained seifert piece conversely seifert fibered space preimages intersecting simple curves intersecting tori thus see presence intersecting tori forces surface appear one remarkable facts jsj theory groups similar phenomenon occurs instance finitely generated group admits two splittings intersect essential must contain fundamental group compact surface attached rest group along boundary see theorem section particular proposition also point family mentioned unique isotopy hand family spheres defining prime decomposition usually unique similarly grushko decompositions group usually form large outer space whereas one may often construct canonical splittings groups particular invariant automorphisms see theorems topological ideas carried group theory kropholler duality groups dimension 
least sela hyperbolic groups constructions jsj decompositions given general settings many authors bowditch papasogluswenson vast influence range applications isomorphism problem structure group automorphisms hyperbolic groups diophantine geometry groups many others context one finitely generated group class subgroups cyclic groups abelian groups one tries understand splittings graph groups decompositions groups family tori replaced splitting groups authors construct splitting enjoying long list properties rather specific case first goal give simple general definition jsj decompositions stated means universal maximality property together general existence uniqueness statements terms deformation spaces see jsj decompositions constructed jsj decompositions sense see subsection regular neighbourhood different nature one looks thurston geometrization conjecture whose proof completed perelman asserts geometric structure particular every atoroidal infinite fundamental group hyperbolic interior admits complete metric finite volume constant curvature precisely edge groups splitting hyperbolic tree splitting almost invariant sets rather splittings closer analogy situation one wants understand immersed tori embedded ones one obtains canonical splitting rather canonical deformation space see parts canonical splittings relation usual jsj decompositions definition jsj decompositions motivate definition let first consider free decompositions group decompositions fundamental group graph groups trivial edge groups equivalently actions simplicial tree trivial edge stabilizers let grushko decomposition defined nontrivial freely indecomposable one may view fundamental group one graphs groups pictured figure corresponding trees trivial edge stabilizers vertex stabilizers precisely conjugates call tree properties grushko tree freely indecomposable grushko trees points figure graph groups decompositions corresponding two grushko trees since freely indecomposable grushko trees following maximality property tree acts trivial edge stabilizers fixes point therefore dominates sense map words among free decompositions grushko tree far possible trivial tree point vertex stabilizers small possible conjugates maximality property determine uniquely shared grushko trees come back key fact later discussing uniqueness general decompositions allowed instance one considers splittings may exist tree maximality property fundamental example following consider orientable closed surface two simple closed curves intersection number let tree associated splitting since positive intersection number hyperbolic fix point using fact freely indecomposable easy exercise check splitting dominates case hope splitting cyclic groups similar overcome difficulty one restricts universally elliptic splittings defined follows consider trees action finitely generated group require edge stabilizers given family subgroups closed conjugating taking subgroups call tree theory corresponds splittings groups unless otherwise indicated trees assumed definition universally elliptic edge stabilizers elliptic every recall elliptic fixes point terms graphs groups contained conjugate vertex group free decompositions universally elliptic trees introduced definition jsj decomposition jsj tree universally elliptic dominates universally elliptic tree call quotient graph groups jsj splitting jsj decomposition recall dominates equivariant map equivalently group elliptic also elliptic second condition definition maximality condition expressing vertex stabilizers small 
possible elliptic every universally elliptic tree consists subgroups given property cyclic abelian slender refer say cyclic trees cyclic jsj decompositions working contains trivial group jsj trees grushko trees family cyclic subgroups jsj decomposition trivial point jsj tree existence jsj trees always exist finitely generated inaccessible group constructed dunwoody jsj tree finite groups group jsj decomposition virtually cyclic subgroups hand follows rather easily dunwoody accessibility finitely presented group jsj decompositions class subgroups emphasize assumption smallness needed theorem theorem let arbitrary family subgroups stable taking subgroups conjugation finitely presented jsj decomposition fact exists jsj tree whose edge vertex stabilizers finitely generated part shall present different way constructing jsj decompositions based sela acylindrical accessibility applies general situations existence jsj decompositions limit groups mentioned used give complete proof give details later introduction mention two typical results group csa maximal abelian subgroups malnormal small free subgroup see subsection variations theorem theorem let finitely generated csa group jsj decomposition abelian subgroups theorem theorem let hyperbolic relative finite family finitely generated small subgroups either family virtually cyclic subgroups family small subgroups jsj decomposition uniqueness jsj trees unique returning example free decompositions one obtains trees maximality property precomposing action automorphism one may also change topology quotient graph see figure canonical object single tree set grushko trees trees trivial edge stabilizers vertex stabilizers conjugate deformation space definition deformation space deformation space tree set trees dominates dominates equivalently two trees deformation space elliptic subgroups generally given family subgroups one considers deformation spaces restricting trees edge stabilizers instance outer space set free actions trees deformation space like outer space deformation space may viewed complex natural way contractible see jsj tree another tree jsj tree universally elliptic dominates dominates words belong deformation space aell aell family universally elliptic groups definition set jsj trees deformation space aell called jsj deformation space denote djsj canonical object therefore particular jsj decomposition jsj deformation space instance jsj deformation space outer space see subsection general fact two trees belong deformation space one pass one applying finite sequence moves certain types see remark may viewed connectedness statement mentioned deformation spaces actually contractible statements uniqueness jsj decomposition certain moves appear well results special cases general fact another general fact following two trees belonging deformation space vertex stabilizers provided one restricts groups thus makes sense study vertex stabilizers jsj trees part iii see invariant group automorphisms particular defined restricting isomorphism type deformation space djsj case outer space precomposing actions trees automorphisms yields action aut djsj contractible complex action factors action thus providing information stress general canonical object associated deformation space consisting jsj trees may quite large sometimes canonical jsj tree nice situations one construct canonical jsj tree djsj canonical essentially mean defined natural uniform way particular given isomorphism sending canonicity implies unique isomorphism canonical jsj trees applying 
assuming invariant automorphisms one gets action aut canonical tree fixed point action jsj deformation space existence canonical splitting gives precise information see applications particularly nice example due bowditch construction canonical jsj decomposition hyperbolic group virtually cyclic subgroups structure local cut points gromov boundary consequence sole fact one considers splittings virtually cyclic groups cyclic splittings correspondence free splittings jsj deformation space outer space generalized groups striking examples strong occurs surprising algebraic consequences like fact due group outer automorphisms group finitely generated method produce canonical tree deformation space given part using construction called tree cylinders particular yields canonical jsj decompositions csa groups relatively hyperbolic groups see theorems compatibility jsj decomposition introduced also yields canonical tree description quadratically hanging vertex groups mentioned grushko trees strong maximality property vertex stabilizers elliptic free splitting hold longer one considers jsj decompositions infinite groups particular cyclic groups vertex stabilizer jsj tree may fail elliptic splitting chosen family happens say vertex stabilizer corresponding vertex vertex group quotient graph groups flexible stabilizers elliptic every splitting called rigid particular vertices grushko trees rigid stabilizers freely indecomposable hand example unique vertex stabilizer flexible jsj decompositions lie deformation space aell flexible vertex stabilizers rigid vertex stabilizers essential feature jsj theory description flexible vertices particular fact flexible vertex stabilizers often see theorem precise statements words example trees given using intersecting curves surface often source flexible vertices formalized notion quadratically hanging groups terminology due rips sela cyclic splittings group vertex group may viewed fundamental group possibly compact surface boundary way incident edge group trivial contained conjugacy boundary subgroup fundamental group boundary component terminology quadratically hanging describes way attached rest group since boundary subgroups generated elements quadratic words suitable basis free group general setting one extends notion follows extension compact hyperbolic usually boundary arbitrary group called fiber condition attachment image incident edge group finite contained boundary subgroup see section details recall group slender subgroups finitely generated theorem see corollary let class slender subgroups finitely presented group let flexible vertex group jsj decomposition either slender slender fiber one may replace subfamily provided satisfies suitable stability condition particular may family cyclic subgroups virtually cyclic subgroups polycyclic subgroups failure stability condition explains result apply jsj decompositions abelian groups general see subsection hand flexible vertex groups theorem trivial fiber theorem flexible vertex groups finite fiber theorem says flexible subgroups jsj decomposition one say maximal following sense proposition see corollary let let class virtually cyclic groups let vertex stabilizer finite fiber arbitrary contained vertex stabilizer cyclic jsj decomposition hold without assumption free groups contain many subgroups proof theorem based approach fujiwara papasoglu using products trees several simplifications particular construct group enclosing two splittings characteristic property slender groups whenever act tree fixed point 
invariant line using lines one may construct subsurfaces product two trees explains least philosophically appearance surfaces hence vertex groups theorem content proposition approach work edge groups slender unless acylindricity theorem following problem open problem describe flexible vertices jsj decompositions finitely presented group small subgroups relative decompositions many applications important consider subgroups belonging given family elliptic every fixes point tree say tree relative call working relative setting important applications see section theorems also needed proofs proof theorem describing flexible vertex groups slender jsj decompostions instance work splittings rather splittings relative incident edge groups definitions extend naturally relative trees tree universally elliptic edge groups elliptic every jsj decomposition relative universally elliptic maximal domination theorems stated remain true must finite family finitely generated subroups theorems one must take account defining vertices see definition text consistently work relative setting reader take act faith arguments also work relative trees simplicity though limit case introduction except theorems obtaining canonical trees definitely requires working relative setting acylindricity explained dunwoody accessibility may used construct jsj deformation space contains preferred tree general part use different approach based trees cylinders sela acylindrical accessibility yields precise results applicable unlike dunwoody accessibility acylindrical accessibility requires finite generation see subsection let group tree necessarily infinite virtually cyclic edge stabilizers say two edges equivalent stabilizers commensurable infinite intersection one easily checks equivalence classes connected subsets call cylinders two distinct cylinders intersect one point dual partition subtrees another tree called tree cylinders sometimes necessary use collapsed tree cylinders see definition neglect construction works equivalence relations among infinite edge stabilizers two examples relatively hyperbolic group equivalence relation among infinite elementary subgroups elementary virtually cyclic parabolic csa group relation commutation among abelian subgroups figure jsj splitting toral relatively hyperbolic group tree cylinders example already appears let tree graph groups pictured left figure punctured tori boundary subgroup equal edge groups fundamental group group toral relatively hyperbolic csa cyclic jsj decomposition case previous equivalence relations set edge stabilizers commensurability commutation reduce equality quotient graph groups tree cylinders pictured right figure new vertex group isomorphic three main benefits passing jsj tree first two trees deformation space always tree cylinders particular tree invariant automorphisms whereas invariant deformation second acylindrical segments length trivial stabilizer whereas contains lines infinite cyclic pointwise stabilizer third trees cylinders enjoy nice compatibility properties important later small price pay order replace better tree namely changing deformation space creating new vertex group example typical method use part prove theorems using trees cylinders show one may associate tree acylindrical tree way groups elliptic also elliptic groups elliptic small smally dominated sense definition applying acylindrical accessibility trees one key ingredients construction jsj decompositions proof involves much bound complexity acylindrical splittings example tree cyclic jsj 
tree preferred tree slightly different deformation space jsj tree relative subgroup general show theorem theorem let finitely generated csa group canonical jsj tree abelian subgroups relative abelian subgroups flexible vertex stabilizers trivial fiber csa property maximal abelian subgroups malnormal holds torsionfree hyperbolic group group hyperbolic group torsion groups weaker property integer defined subsection theorem generalizes setting see theorem implies following result theorem let hyperbolic group let group canonical jsj tree virtually abelian subgroups relative virtually abelian subgroups virtually cyclic flexible vertex stabilizers virtually abelian finite fiber similar statement relatively hyperbolic groups theorem theorem let hyperbolic relative finite family finitely generated small subgroups either family virtually cyclic subgroups family small subgroups canonical jsj tree relative parabolic subgroups flexible vertex stabilizers small finite fiber trees produced theorems defined uniform natural way canonical discussed particular invariant automorphims hyperbolic group canonical jsj tree case coincides tree constructed bowditch using topology compatibility jsj refinement tree tree obtained blowing vertices beware refinement call tree dominating hand elementary unfoldings refinements sense map dominates map special maps segment onto segment particular fold call map collapse map obtained collapsing certain edges points tree universally elliptic given tree refinement dominates equivariant map see proposition finitely generated edge stabilizers one may obtain finite sequence folds collapses sense one read particular one may read tree jsj tree general map collapse map folds say compatible exists refinement collapse map words compatible common refinement implies edge stabilizers tree elliptic much restrictive instance compact surface boundary free splittings dual properly embedded arcs always elliptic respect edge groups trivial compatible arcs disjoint isotopy hand splittings hyperbolic surface group associated two simple closed geodesics compatible disjoint equal specific case compatibility equivalent trees elliptic respect part introduce another type jsj decomposition encodes compatibility splittings rather ellipticity new feature except degenerate cases lead canonical tree tco deformation space fix family trees assumed say tree corresponding graph groups universally compatible compatible every tree one may obtain tree refining collapsing view splitting coming edge splitting vertex group definition compatibility jsj deformation space dco maximal deformation space domination containing universally compatible tree maximal deformation space exists unique words dco contains universally compatible tree dominates universally compatible trees theorem theorem let finitely presented let conjugacyinvariant class subgroups stable taking subgroups compatibility jsj deformation space dco exists although existence usual jsj deformation space fairly direct consequence accessibility proving existence compatibility jsj deformation space delicate among things use limiting argument need know limit universally compatible trees universally compatible best expressed terms see appendix mentioned deformation space dco contains canonical element tco except degenerate cases recall tree irreducible acts fixed point fixed end invariant line say deformation space irreducible equivalently every irreducible theorem corollary dco exists irreducible contains canonical tree tco compatibility jsj tree particular 
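For later reference, compatibility may be displayed as follows; the surface illustration is the one discussed above, with the usual dual-splitting conventions assumed.

% Two trees are compatible when they admit a common refinement:
\[
T_1, T_2 \ \text{compatible} \quad \iff \quad
\exists\, \hat T \ \text{together with collapse maps} \ \hat T \to T_1 \ \text{and} \ \hat T \to T_2 .
\]
% Illustration (closed hyperbolic surface S): the splittings of pi_1(S) dual to two simple
% closed geodesics c_1, c_2 are always elliptic with respect to each other, but they are
% compatible only if c_1 and c_2 are disjoint or equal.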
invariant automorphisms tco deformation space containing irreducible universally compatible tree preferred element develop analogy arithmetic viewing refinement multiple splittings primes define greatest common divisor gcd two trees least common multiple lcm family pairwise compatible trees tco lcm reduced universally compatible trees contained dco see subsection definition reduced tree tco similar canonical tree tss constructed scott swarup property compatible sets scott swarup use word enclosing generalizes compatibility general tco dominates tss may tss trivial happens instance group none divides see relation tss jsj decompositions invariant automorphisms sometimes forces tco trivial point happens instance free hand give simple examples tco certain virtually free groups generalized groups duality groups trees cylinders also provide many examples particular canonical trees theorems closely related tco contents paper reader convenience describe detailed contents section includes results directly related jsj decompositions independent interest relative finite presentation small orbifolds orbifolds finite mapping class group groups compatibility length functions arithmetic trees meant description statements may imprecise incomplete preliminary section section collect basic facts groups acting trees define trees edge stabilizers relative collapse maps refinements compatibility domination deformation spaces discuss slenderness smallness subgroups recall main accessibility results also define discuss relative finite generation presentation relative finite presentation vertex groups studied subsection section starts useful fact edge stabilizers elliptic refinement dominates defining universal ellipticity define jsj trees jsj deformation space maximality property explained prove existence jsj decompositions finite presentability assumption first case general relies version dunwoody accessibility due state prove also explain jsj decompositions constructed jsj decompositions sense section devoted simple examples first consider grushko decompositions trivial group decompositions finite groups explaining interpret jsj decompositions also consider small groups locally finite trees associated cyclic splittings generalized baumslagsolitar groups examples jsj decompositions rigid vertices end section work example jsj decomposition flexible vertices section contains various useful technical results given vertex graph groups tree define incident edge groups point splitting vertex group relative incident edge groups extends splitting given universally elliptic splitting one may obtain jsj decomposition relative jsj decompositions vertex groups particular one may usually restrict groups studying jsj decompositions section devoted groups first study hyperbolic orbifolds particular relation splittings simple closed geodesics classify orbifolds essential simple closed geodesic groups split cyclic subgroup relative fundamental groups boundary components appear vertices jsj decompositions surface pair pants sphere occurs classification complicated singular possibly orbifolds also classify orbifolds finite mapping class group define subgroups study basic properties universal ellipticity fiber used boundary components existence simple geodesics universally elliptic subgroups particular show vertex group jsj decomposition slender groups acts tree slender edge stabilizers either fixes point action minimal subtree dual family simple closed geodesics also prove general version proposition showing suitable assumptions 
vertex stabilizer tree elliptic jsj trees using filling construction give examples possible peripheral structures vertex groups show flexible vertex groups jsj decompositions abelian subgroups filling construction introduced section order provide alternative construction relative splittings section study flexible vertices jsj decompositions slender groups particular prove theorem completeness also describe slender flexible subgroups allow edge stabilizers slender trees whenever acts tree fix point leave line invariant useful working relative setting groups automatically slender trees prove theorem follow approach simplifications particular priori knowledge jsj decompositions exist allows reduce totally flexible groups following define core enclosing group two splittings call regular neighborhood construct filling pair splittings show regular neighborhood required group technical reasons replace notion minimal splittings slightly stronger notion minuscule splittings theorem family family slender subgroups theorem holds family slender groups satisfying one two stability conditions scz familly cyclic resp virtually cyclic subgroups satisfies scz resp conditions ensure regular neighborhood two splittings edge groups also edge groups section devoted tree cylinders given admissible equivalence relation set infinite groups one may associate tree cylinders tree stabilizers tree depends deformation space give conditions ensuring acylindrical smally dominated means particular groups elliptic small also study compatibility properties section show jsj decompositions exist flexible vertex groups finite fiber assumption one may associate tree acylindrical tree smally dominated first construct relative tree theorems refine order get required jsj tree applied section tree cylinders used prove theorems study csa groups relatively groups well cyclic splittings commutative transitive groups introduce groups integer better suited csa groups study groups torsion prove theorem also discuss slightly different type jsj decompositions hyperbolic groups edge groups required maximal virtually cyclic subgroups infinite center section define universal compatibility show theorem existence compatibility jsj deformation space dco construct compatibility jsj tree tco give examples particular use compatibility properties tree cylinders identify dco abelian splittings csa groups elementary splittings relatively hyperbolic groups cyclic splittings commutative transitive groups appendix view trees metric rather combinatorial objects actually consider first give simple proofs two standard results tree determined length function axes topology agrees equivariant topology study compatibility defined using collapse maps prove two compatible sum length functions length function comes rtree particular compatibility closed property space trees used proof theorem using core introduced first author show finite family pairwise compatible common refinement going back simplicial trees develop analogy basic arithmetics define prime factors tree splittings corresponding edges quotient graph show trees define gcd lcm conclude remarks combining jsj theory rips theory describe small actions gives another general approach main result new jsj theory developed several people best knowledge following original material definition jsj decompositions deformation space satisfying maximality property systematic study relative jsj decompositions section classification small orbifolds orbifolds finite mapping class group section stability conditions scz 
compare description slender flexible groups everything subsection exception particular detailed proof existence jsj decomposition acylindricity assumptions compare compatibility jsj tree arithmetic trees property see also acknowledgements first author acknowledges support institut universitaire france anr project membership henri lebesgue center second author acknowledges support contents preliminaries preliminaries basic notions notations trees maps trees compatibility deformation spaces slenderness smallness accessibility relative finite generation presentation jsj deformation space definition existence standard refinements universal ellipticity jsj deformation space existence jsj deformation space case existence relative case relation constructions examples jsj decompositions free groups free splittings grushko deformation space splittings finite groups splittings small groups generalized groups locally finite trees raags deformation space parabolic splittings examples useful facts changing edge groups incidence structures vertex groups jsj decompositions vertex groups relative jsj decompositions iii fillings flexible vertices jsj decompositions slender groups statement results reduction totally flexible groups core regular neighborhood constructing filling pair splittings flexible groups trees minuscule splittings totally flexible group minuscule slenderness trees slender flexible groups acylindricity trees cylinders definition acylindricity small domination compatibility quadratically hanging vertices splittings definition properties quadratically hanging subgroups quadratically hanging subgroups elliptic jsj peripheral structure quadratically hanging vertices flexible vertices abelian jsj decompositions constructing jsj decompositions using acylindricity uniform acylindricity acylindricity small groups applications csa groups groups groups relatively hyperbolic groups virtually cyclic splittings zmax decomposition compatibility compatibility jsj tree existence compatibility jsj space compatibility jsj tree tco examples free groups algebraic rigidity free products generalized groups canonical decomposition scott swarup duality groups trees cylinders length functions compatibility metric trees length functions length functions trees compatibility length functions common refinements arithmetic trees reading actions part preliminaries preliminaries paper always finitely generated group sometimes finite presentation needed instance prove existence jsj decompositions full generality basic notions notations two subgroups group commensurable finite index commensurator set elements ghg commensurable denote free group generators group virtually cyclic cyclic subgroup finite index finite finite index subgroup isomorphic infinite virtually cyclic groups characterized two types infinite virtually cyclic groups infinite center map onto finite kernel finite center map onto infinite dihedral group finite kernel see kernel unique maximal finite normal subgroup subsection shall say cyclic constant cardinality group relatively hyperbolic respect family finitely generated subgroups acts properly isometries proper gromov hyperbolic space invariant collection disjoint horoballs action cocompact complement horoballs stabilizers horoballs exactly conjugates subgroup called parabolic conjugate subgroup elementary parabolic virtually cyclic subgroups contain free subgroups acting hyperbolic isometries group csa conjugately separated abelian maximal abelian subgroups malnormal example hyperbolic groups csa see 
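For reference, the basic notions just listed can be summarised in display form; these are standard definitions, restated without change.

% Commensurability and the commensurator:
\[
A \approx B \ \iff \ [A : A \cap B] < \infty \ \text{and} \ [B : A \cap B] < \infty,
\qquad
\operatorname{Comm}_G(A) \;=\; \{\, g \in G : gAg^{-1} \approx A \,\}.
\]
% The two types of infinite virtually cyclic groups V, with F the unique maximal finite
% normal subgroup (the kernel mentioned above):
\[
1 \to F \to V \to \mathbb{Z} \to 1
\qquad \text{or} \qquad
1 \to F \to V \to D_\infty \to 1 .
\]
% The CSA property (maximal abelian subgroups are malnormal):
\[
M \le G \ \text{maximal abelian}, \ g \in G \setminus M
\ \Longrightarrow \ M \cap gMg^{-1} = \{1\}.
\]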
subsection generalisation presence torsion trees consider actions simplicial trees identify two trees equivariant isomorphism appendix view trees metric spaces work paper trees considered combinatorial objects still useful think tree geometric object instance define arc two points midpoint edge see basic facts trees given two points unique segment joining degenerate segments also called arcs homeomorphic disjoint simplicial subtrees bridge unique segment always assume acts without inversion edge element interchanges often redundant vertex vertex valence unique fixed point element denote set vertices edges respectively stabilizer vertex edge stabilizer arbitrary point theory action viewed splitting marked graph groups isomorphism fundamental group graph groups splitting one edge amalgam also denote groups carried vertices edges action trivial fixes point hence vertex since assume inversion minimal proper subtree unless otherwise indicated trees endowed minimal action without inversion allow trivial case point finite generation implies quotient graph finite note however restriction action subgroup minimal element subgroup elliptic fixes point equivalent contained conjugate vertex group denote fix fix fixed point set elliptic subtree meets fix finite index elliptic finite groups groups kazhdan property serre property fixed point every tree act finitely generated elliptic elements follows serre lemma corollaire well products elliptic elliptic subgroup element elliptic hyperbolic unique axis acts translation also denote characteristic set fixed point set elliptic axis hyperbolic translation lengths length functions used subsections discuss appendix need consider restriction action subgroups let therefore arbitrary group possibly infinitely generated acting tree assume action trivial global fixed point contains hyperbolic element finitely generated serre lemma fixes unique end ray finitely generated subgroup fixes subray end equivalence class rays equivalent intersection ray case subtrees minimal one assume contains hyperbolic element sometimes say acts hyperbolically hyperbolic serre lemma always holds finitely generated acts unique minimal subtree namely union axes hyperbolic elements tree action irreducible exist two hyperbolic elements whose axes disjoint intersect finite segment suitable powers generate free group acting freely follows irreducible exist two hyperbolic elements whose commutator hyperbolic irreducible preserves line fixes unique end recall assumption global fixed point note invariant line two fixed ends two types actions line orientation preserved action translations ends line invariant points edges stabilizer reflections action said dihedral factors action infinite dihedral group invariant end edges stabilizer vertex stabilizers may contain edge stabilizer index fixes end associated homomorphism measuring much element pushes towards end precisely one defines difference number edges number edges ray going end map contains hyperbolic element case quotient graph groups homeomorphic circle one may orient circle inclusion onto whenever positively oriented edge single edge defines ascending hnn extension sign exponent sum stable letter sum proposition acts tree one following holds global fixed point hyperbolic elements contains unique minimal subtree infinitely generated fixes unique end proposition acts minimally tree proper subtree five possibilities point trivial action line acts translations action factors line reverses orientation dihedral action action factors unique 
invariant end global fixed point quotient graph groups homeomorphic circle irreducible implies cases except hyperbolic particular corollary acts irreducible fixed point unique fixed end unique invariant line proof existence follows propositions prove uniqueness statements two invariant ends line joining invariant show fixed point exist two invariant lines disjoint midpoint bridge fixed intersection segment finite length midpoint fixed intersection ray origin fixed point besides usually also fix nonempty family subgroups stable conjugation taking subgroups tree whose edge stabilizers belong often say corresponding splitting groups say cyclic tree abelian tree slender tree family cyclic abelian subgroups also fix arbitrary set subgroups restrict elliptic terms graphs groups contained conjugate vertex group finitely generated stronger requiring every elliptic call tree tree relative set change replace group conjugate enlarge making invariant conjugation acts say splits group relative group freely indecomposable relative split trivial group relative equivalently unless trivial one write every group contained conjugate one says relative split finite group relative empty equivalent finite theorem stallings see maps trees compatibility deformation spaces morphisms collapse maps refinements compatibility maps trees always send vertices vertices edges edge paths maybe point minimality actions always surjective edge contained image edge edge stabilizer contains edge stabilizer also note edge vertex stabilizer contained vertex stabilizer mention two particular classes maps map two trees morphism one may subdivide maps edge onto edge equivalently edge collapsed point folds examples morphisms see morphism edge stabilizer contained edge stabilizer collapse map map obtained collapsing certain edges points followed isomorphism equivariance set collapsed edges equivalently preserves alignment image arc point arc another characterization preimage every subtree subtree terms graphs groups one obtains collapsing edges irreducible irreducible one easily checks trivial point irreducible compare lemma tree collapse collapse map conversely say refines terms graphs groups one passes collapsing edges vertex vertex group fundamental group graph groups occurring preimage conversely suppose vertex splitting splitting incident edge groups elliptic one may refine using obtain splitting whose edges together see lemma note uniquely defined flexibility way edges attached vertices discussed section two trees compatible common refinement exists tree collapse maps additional property edge gets collapsed discussed subsection domination deformation spaces tree dominates tree equivariant map call domination map equivalently dominates every vertex stabilizer fixes point every subgroup elliptic also elliptic particular every refinement dominates beware domination defined considering ellipticity subgroups elements may make difference vertex stabilizers finitely generated deformation spaces defined saying two trees belong deformation space elliptic subgroups one dominates restrict say deformation space tree irreducible others say irreducible instance trees free action belong deformation space cvn outer space note however finitely many trees compatible given cvn defined deformation spaces combinatorial objects like outer space may viewed geometric objects see use point view deformation space dominates space trees dominate every deformation space dominates deformation space trivial tree called trivial deformation space deformation 
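In symbols, with all trees understood to have edge stabilizers in the fixed family, domination and deformation spaces may be written as follows; this is a restatement of the definitions just given.

% T_1 dominates T_2 when there is an equivariant map T_1 -> T_2; equivalently,
% every subgroup elliptic in T_1 is elliptic in T_2:
\[
T_1 \succeq T_2 \ \iff \ \{\, H \le G : H \ \text{elliptic in} \ T_1 \,\}
\subseteq \{\, H \le G : H \ \text{elliptic in} \ T_2 \,\}.
\]
% Same deformation space = mutual domination (the same elliptic subgroups):
\[
T_1 \sim T_2 \ \iff \ T_1 \succeq T_2 \ \text{and} \ T_2 \succeq T_1 .
\]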
space elliptic tree reduced proper collapse lies deformation space different reduced sense observing inclusion onto inclusion one sees reduced whenever edge belong orbit projects loop another characterization edge hgu elliptic exists hyperbolic element sending particular edge maps loop reduced one obtains reduced tree deformation space collapsing certain orbits edges uniquely defined general slenderness smallness slenderness group slender subgroups finitely generated examples slender groups include finitely generated virtually abelian groups finitely generated virtually nilpotent groups virtually polycyclic groups slender group contain free group slender groups characteristic property whenever act tree fix point invariant line lemma lemma let slender group acting tree fix point unique line since finitely generated minimal subtree terminology proposition cases possible proof action irreducible since contain fixed point invariant line fixed end associated finitely generated element ker fixes ray going fixed end finitely generated ker elliptic serre lemma fixed point set subtree ker normal action factors action cyclic group ker contains line uniqueness follows corollary convenient use lemma define weaker notion subgroups say subgroup possibly infinitely generated slender whenever acts point fixed line particular slender group group contained group group property slender following lemma used subsection lemma let subgroups slender slender trees slender proof let elliptic fixed point set subtree normal action subtree fixes point leaves line invariant true elliptic preserves unique line since smallness one defines abstract group small contain group act irreducibly tree use trees give weaker definition subgroups particular want groups small given tree acts say following subgroup small action irreducible mentioned fixes point end leaves line invariant see corollary say small small every acts every subgroup containing every group contained group small moreover small finitely generated subgroups accessibility constructions jsj decompositions based accessibility theorems stating given suitable priori bound number orbits edges assumption redundant vertex valence unique fixed point holds particular finitely generated groups finite bounded order finitely presented groups finite finitely presented groups small trees reduced sense finitely generated trees finitely generated trees finitely presented groups tree resp pointwise stabilizer segment length trivial resp order paper use version dunwoody accessibility given see proposition section use acylindrical accessibility relative finite generation presentation mentioned always assume finitely generated finitely presented however properties always inherited vertex groups therefore consider relative finite generation presentation behave better respect see subsection let group finite family subgroups definition one says finitely generated relative exists finite set generated subset relative generating set clearly finitely generated finitely generated relative finitely generated relative finite generation equivalent finite generation adding conjugators one sees relative finite generation change one replaces subgroups conjugate subgroups subsection proposition suppose finitely generated relative acts tree relative global fixed point contains hyperbolic elements unique minimal invariant subtree quotient finite graph recall relative every elliptic consider relative finite presentation see note relative finite generating set natural morphism epimorphism free group 
definition one says finitely presented relative exists finite relative generating set kernel epimorphism normally generated finite subset particular group hyperbolic respect finite family finitely presented relative one easily checks relative finite presentation depend choice affected one replaces subgroups conjugate subgroups finitely presented finitely presented relative finite collection finitely generated subgroups note however free group finitely presented relative infinitely generated free subgroup finitely many generators may appear conversely finitely presented relative finite collection finitely presented subgroups finitely presented following lemma used subsection lemma suppose finitely presented relative finitely presented relative proof show finitely generated lemma follows applying tietze transformations let set elements appear letter one relators expressed elements equal element define new finite set relators replacing adding relations new presentation expresses amalgam finitely generated group inclusion implies suppose finitely generated group splits finite graph groups see instance lemma vertex groups finitely generated one assumes edge groups finitely generated false general without assumption however vertex groups always finitely generated relative incident edge groups similar statement relative finite presentation see subsection part jsj deformation space start part introducing standard refinements edge stabilizers elliptic tree refines dominates define jsj deformation space show exists finite presentability assumption give examples cases flexible vertex flexible vertices subject part iii conclude part collecting useful facts particular given tree discuss finite presentation vertex stabilizers relate splittings relative incident edge groups splittings also explain one may usually restrict groups studying jsj decompositions fix finitely generated group family subgroups closed conjugating taking subgroups another family trees minimal see subsection whenever construct new tree instance propositions check minimal definition existence standard refinements let trees definition ellipticity trees elliptic respect every edge stabilizer fixes point note elliptic respect whenever refinement dominates see subsection difference refinement domination edge stabilizers elliptic hence show converse statement proposition elliptic respect tree maps collapse map restriction subtree injective particular refinement dominates stabilizer edge fixes edge iii every edge stabilizer contains edge stabilizer subgroup elliptic elliptic assertions guarantee since remark edge stabilizers finitely generated obtained finite number collapses folds proof construct follows vertex stabilizer choose subtree instance minimal subtree whole edge choose vertices fixed possible elliptic assumption fixed point subtree make choices define tree blowing vertex fand attaching edges using points formally consider disjoint union edge identify define sending sending also define map equal inclusion sending edge segment general may fail minimal define unique minimal subtree action unless points define restrictions maps clearly satisfy first two requirements let check properties follow assertion clear edge collapsed fixes edge otherwise maps injectively segment fixes edge assertion holds assertion iii true surjective map trees prove direction assertion assume elliptic preserves subtree since injective restriction enough prove fixes point holds elliptic remark one may think construction terms graphs groups follows starting 
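Schematically, and omitting the finer assertions, the proposition being proved here states the following; the notation \hat T stands for the standard refinement constructed in the proof.

% If T is elliptic with respect to T' (every edge stabilizer of T fixes a point of T'),
% then T can be refined to a tree that dominates T':
\[
T \ \text{elliptic w.r.t.} \ T'
\quad \Longrightarrow \quad
\exists\, \hat T \ \text{with a collapse map} \ \hat T \to T
\ \text{and an equivariant map} \ \hat T \to T' .
\]
% In particular every subgroup elliptic in \hat T is elliptic in T'; the full statement
% above also controls the edge stabilizers of \hat T.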
graph group one replaces vertex graph groups dual action minimal subtree one attaches edge incident onto vertex whose group contains conjugate since vertex group finitely generated relative incident edge groups see lemma edge groups elliptic minimal subtree lemma thus one may require act minimally preimage definition standard refinement tree proposition called standard refinement dominating general uniqueness standard refinements however assertion proposition standard refinements belong deformation space lowest deformation space dominating deformation spaces containing respectively dominates resp dominates deformation space resp moreover symmetry also happens elliptic respect standard refinement dominating deformation space lemma refines belong deformation space hyperbolic elliptic elliptic respect every elliptic also elliptic dominates elliptic respect elliptic respect splits group infinite index edge stabilizer recall trees assumed splitting obtained relative proof one needs prove first assertion obtained collapsing orbit edge orbit hyperbolic element becomes elliptic otherwise deformation space see subsection second assertion assume dominate let standard refinement dominating belong deformation space since refinement seen elliptic hyperbolic assumption elliptic contradicting assertion proposition remark let proposition let edge stabilizer elliptic contains edge stabilizer since elliptic index infinite universal ellipticity definition universally elliptic subgroup universally elliptic elliptic every tree universally elliptic edge stabilizers universally elliptic elliptic respect every need specific say universally elliptic relative universally elliptic otherwise say universally elliptic recalling trees assumed groups serre property particular finite groups universally elliptic universally elliptic contains finite index universally elliptic lemma consider two trees universally elliptic refinement dominates universally elliptic standard refinement dominating universally elliptic particular universally elliptic tree dominating universally elliptic elliptic elements belong deformation space proof first two assertions follow directly assertions proposition last one follows second assertion lemma following lemma used subsection lemma let family trees exists countable subset elliptic respect every dominates every dominates proof since countable find countable element hyperbolic hyperbolic dominates every elliptic elliptic every lemma tree dominates every many purposes enough consider splittings trees one orbit edges lemma let tree universally elliptic elliptic respect every splitting dominates every universally elliptic tree dominates every universally elliptic splitting proof direction one proves elliptic respect resp dominates induction number orbits edges using following lemma lemma let tree subgroup let partition two sets let trees obtained collapsing respectively subgroup elliptic elliptic tree dominates dominates proof let vertex fixed let preimage collapse map subtree embeds since elliptic fixes point elliptic one shows applying vertex stabilizers jsj deformation space fixed define aell set groups universally elliptic relative aell stable conjugating taking subgroups tree universally elliptic aell definition jsj deformation space exists deformation space djsj aell maximal domination unique second assertion lemma called jsj deformation space relative trees djsj called jsj trees relative precisely trees universally elliptic dominate every universally elliptic tree also say trees djsj 
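In display form, the definition just given reads as follows; this is only a restatement.

% A tree is universally elliptic (relative to H) when its edge stabilizers are elliptic
% in every (A, H)-tree. A JSJ tree is then characterised by two properties:
\[
\text{(1)} \ T \ \text{is universally elliptic};
\qquad
\text{(2)} \ T \ \text{dominates every universally elliptic} \ (\mathcal{A},\mathcal{H})\text{-tree}.
\]
% The JSJ deformation space D_JSJ is the common deformation space of all such trees;
% it is unique when it exists.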
associated graphs groups jsj decompositions show jsj deformation space exists finitely presented theorems presence acylindricity see section see subsection example jsj deformation space general many jsj trees belong deformation space therefore lot common see section particular corollary vertex stabilizers except possibly vertex stabilizers aell remark results saying two trees belong deformation space one pass one finite sequence moves certain type see also particular defined section instance groups finite two reduced trees may joined finite sequence slide moves results may interpreted saying jsj tree unique certain moves content uniqueness statements definition rigid flexible vertices let vertex stabilizer jsj tree vertex group graph groups say rigid universally elliptic flexible also say vertex rigid flexible flexible say flexible subgroup relative definition flexible subgroups depend choice jsj tree heart jsj theory understand flexible groups discussed part iii record following simple facts future reference lemma let jsj tree tree tree refines dominates universally elliptic may refined jsj tree proof since elliptic respect one construct standard refinement dominating proposition satisfies first assertion second assertion since elliptic respect consider standard refinement dominating universally elliptic second assertion lemma dominates jsj tree sometimes exists universally compatible jsj tree see sections case one may require also refinement existence jsj deformation space case prove existence jsj decompositions first assuming theorem finitely presented jsj deformation space djsj exists contains tree whose edge vertex stabilizers finitely generated hypothesis smallness finite generation elements recall finitely generated finite generation edge stabilizers implies finite generation vertex stabilizers existence djsj deduced following version dunwoody accessibility whose proof given next subsection proposition dunwoody accessibility let finitely presented assume sequence refinements trees exists tree large enough morphism particular dominates edge vertex stabilizer finitely generated note maps required collapse maps recall subsection morphism may subdivided maps edges edges collapse map morphism particular edge stabilizers fix edge since every universally elliptic every remark unfortunately true deformation space must stabilize increases even edge stabilizers cyclic example let group sequence nested infinite cyclic groups let let tree iterated amalgam hck trees refine deformation space since hck elliptic dominated tree dual free decomposition accordance proposition applying proposition constant sequence yields following standard result corollary finitely presented tree exists morphism tree finitely generated edge vertex stabilizers universally elliptic proposition basically proposition omit proof refer general proposition proof theorem let set universally elliptic trees finitely generated edge vertex stabilizers equivariant isomorphism since contains trivial tree element described finite graph groups finitely generated edge vertex groups since countably many finitely generated subgroups countably many homomorphisms given finitely generated group another set countable corollary every universally elliptic tree dominated one suffices produce universally elliptic tree dominating every choose enumeration define inductively universally elliptic tree refines dominates may infinitely generated edge vertex stabilizers start given dominates let standard refinement dominates exists proposition 
universally elliptic universally elliptic second assertion lemma dominates apply proposition sequence tree universally elliptic dominates every hence every follows jsj tree existence relative case section prove existence relative jsj deformation space relative finite presentation assumption theorem assume finitely presented relative jsj deformation space djsj relative exists contains tree finitely generated edge stabilizers recalling finitely presented group finitely presented relative finite collection finitely generated subgroups get corollary let finitely presented let finite family finitely generated subgroups jsj deformation space djsj relative exists contains tree finitely generated edge hence vertex stabilizers remark give different approach existence subsection comparing relative jsj decomposition jsj decomposition larger group remark corollary apply groups infinitely generated see section existence results arbitrary theorem proved case theorem set finitely generated edge stabilizers countable vertex stabilizers relatively finitely generated lemmas explained remark proposition replaced following result proposition relative dunwoody accessibility let finitely presented relative assume sequence refinements exists large enough morphism edge stabilizer finitely generated applying proposition constant sequence get corollary finitely presented relative exists morphism finitely generated edge stabilizers proving proposition recall finitely presented relative subgroups exists finite subset natural morphism onto kernel normally generated finite subset equivalent definition finitely presented relative fundamental group connected may assumed simplicial containing disjoint connected subcomplexes possibly infinite following properties contains finitely many open cells embeds image conjugate fact relatively finitely presented space exists follows van kampen theorem conversely finitely presented relative one construct follows let pointed starting disjoint union add edges joining additional vertex additional edges joining get complex whose fundamental group isomorphic free product represent element loop space glue disc along loop obtain desired space proof proposition let universal cover simplicial acting deck transformations consider connected component whose stabilizer also fix lifts vertices denote collapse map note preimage midpoint edge single point namely midpoint edge mapping onto shall construct equivariant maps maps vertex fixed sends vertex sends edge either point injectively onto segment require vertex midpoint edge construct inductively start point constant maps assume constructed construct define note vertex fixed since preserves alignment subtree since elliptic fixes vertex subtree map vertex define vertex extend equivariance consider edge contained lift map already defined endpoints explain define restriction segment joining images endpoints collapse map particular preimage midpoint edge midpoint edge mapping onto recalling constant injective allows define map either constant injective satisfies midpoint edge equivariantly defined extend standard way every triangle abc contained particular constant abc preimages midpoints edges straight arcs joining two distinct sides completes construction maps define preimage midpoints edges pattern sense dunwoody intersect maps constructed denote projection finite graph contained complement let tree dual pattern claim finitely generated edge stabilizers construction induces map sending edge edge edge stabilizers finitely generated generated 
fundamental groups components every elliptic intersect proves claim let closure complement construction finite complex theorem bound number tracks implies exists every connected component exists connected component bounds product region containing vertex follows one obtain subdividing edges take relation constructions several authors constructed jsj splittings finitely presented groups various settings explain case splittings jsj splittings sense definition results literature often stated splittings restriction lemma rips sela consider cyclic splittings group consists cyclic subgroups including trivial group theorem says jsj splitting universally elliptic statement maximal statement iii uniqueness deformation statement work authors consider splittings group slender subgroups class split finite extensions infinite index subgroups restrictions class one typically take see details notation set subgroups elements universal ellipticity splitting construct follows statement main theorem fact edge group contained white vertex group maximality follows fact white vertex groups universally elliptic statement black vertex groups either case universally elliptic assumption made groups hence necessarily elliptic jsj tree see proposition fujiwara papasoglu consider splittings group class slender subgroups statement theorem says jsj splitting obtain elliptic respect splitting minimal sense proposition splitting dominated minimal splitting universal ellipticity holds statement theorem implies maximality mentioned introduction regular neighbourhood closer decompositions constructed parts examples jsj decompositions recall fixed consider unless otherwise indicated assumed finitely generated end section shall give two examples jsj decompositions flexible vertices examples vertices rigid fact indeed jsj decompositions consequence following simple fact lemma tree universally elliptic vertex stabilizers jsj tree proof assumption dominates every tree particular universally elliptic dominates every universally elliptic tree jsj tree also note lemma assume groups universally elliptic jsj tree vertex stabilizers universally elliptic applies particular splittings finite groups proof vertex stabilizer flexible consider elliptic since jsj decomposition universally elliptic one consider standard refinement dominating assumption tree universally elliptic definition jsj deformation space dominates implies elliptic hence contradiction free groups let finitely generated free group let arbitrary jsj deformation space space free actions unprojectivized cullervogtmann outer space generally virtually free contains finite subgroups djsj space trees finite vertex stabilizers free splittings grushko deformation space let consist trivial subgroup thus trees trivial edge stabilizers also called free splittings jsj deformation space exists outer space introduced see free factor call grushko deformation space consists trees edge stabilizers trivial vertex stabilizers freely indecomposable different one often considers freely decomposable since splits hnn extension trivial group denoting decomposition given grushko theorem freely indecomposable free quotient graph groups homotopy equivalent wedge circles one vertex group vertex groups trivial see figure introduction jsj deformation space grushko deformation space relative edge stabilizers jsj trees trivial groups fix point vertex stabilizers freely indecomposable relative subgroups conjugate group splittings finite groups deformation space set finite subgroups call jsj deformation 
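As an illustration of the Grushko deformation space just discussed, the Grushko decomposition may be displayed in the usual form; notation is standard.

\[
G \;=\; G_1 * G_2 * \cdots * G_k * F_r ,
\]
% with each G_i nontrivial, freely indecomposable and not infinite cyclic, and F_r free of
% rank r. A Grushko tree is a tree with trivial edge stabilizers whose nontrivial vertex
% stabilizers are exactly the conjugates of the G_i; its quotient graph of groups is
% homotopy equivalent to a wedge of r circles.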
space deformation space set trees whose edge groups finite whose vertex groups end one deduces stallings theorem tree maximal domination vertex stabilizers one end jsj deformation space exists finitely presented dunwoody original accessibility result finitely generated exists accessible particular inaccessible group constructed jsj decomposition finite groups remark even inaccessible jsj deformation space family finite subgroups bounded order reason proposition remains true subdivision large linnell accessibility jsj deformation space deformation space relative edge stabilizers finite groups fix point vertex stabilizers relative subgroups conjugate group sense subsection split finite subgroups relative subgroups conjugate group relative jsj space exists finitely generated consists finite groups bounded order arbitrary contains finite groups arbitrary large order jsj space exists relative accessibility holds splittings small groups recall small irreducible action tree always fixed point fixed end invariant line see corollary subsection particular case small contains free group acts fixed end invariant line every vertex stabilizer subgroup index fixing edge index acts dihedrally line lemma small universally elliptic tree dominates every tree proof since action irreducible every vertex stabilizer contains edge stabilizer index follows every vertex stabilizer universally elliptic dominates every tree corollary small one deformation space containing universally elliptic tree situation jsj deformation space always exists deformation space corollary jsj space otherwise jsj space trivial consider instance cyclic splittings solvable groups infinitely many deformation spaces corresponding epimorphisms universally elliptic tree klein bottle group exactly two deformation spaces one contains tree hnn extension contains tree associated amalgam hti hvi none trees universally elliptic hyperbolic hnn extension hyperbolic amalgam thus klein bottle group cyclic jsj deformation space trivial one flexible see subsection generalizations examples jsj space shall see generalized groups let generalized group finitely generated group acts tree vertex edge stabilizers infinite cyclic let set cyclic subgroups including trivial subgroup unless isomorphic klein bottle group deformation space jsj deformation space short proof arguments contained show every vertex stabilizer universally elliptic commensurator intersection pair vertex stabilizers finite index acts hyperbolically tree commensurator preserves axis line edge stabilizers cyclic vertex stabilizers virtually cyclic hence cyclic since implies klein bottle group locally finite trees generalize previous example locally finite trees small edge stabilizers suppose acts irreducibly locally finite tree small edge stabilizers local finiteness equivalent edge stabilizers finite index neighboring vertex stabilizers particular vertex stabilizers small lemma proved trees belong deformation space happens jsj deformation space proposition suppose groups small locally finite irreducible tree belongs jsj deformation space proof show every vertex stabilizer universally elliptic since locally finite contains edge stabilizer finite index small simplicity write small proof way contradiction assume elliptic tree small fixes unique end preserves unique line see corollary finite index subgroup preserves unique end line previous subsection local finiteness implies commensurates preserves end line particular irreducible define small normal subgroup act dihedrally line fixed end let 
commutator subgroup small finitely generated subgroup pointwise fixes ray contained edge stabilizer acts dihedrally let kernel action infinite dihedral group consider action normal subgroup elliptic fixed point set minimality action factors action abelian dihedral group contradicts irreducibility otherwise preserves unique end line end line normal contradicting irreducibility raags let finite graph associated artin group raag also called graph group partially commutative group group presented follows one generator per vertex relation edge see introduction decomposition connected components induces decomposition free product freely indecomposable raags may infinite cyclic study jsj decompositions one may assume connected see corollary clay determines cyclic jsj decomposition gives characterization raags cyclic jsj decomposition shows flexible vertex see abelian splittings raags relative generators parabolic splittings assume hyperbolic relative family finitely generated subgroups recall subgroup parabolic contained conjugate let family parabolic subgroups jsj trees parabolic subgroups relative equivalently exist theorem finitely presented relative parabolic subgroups universally elliptic splittings relative jsj trees flexible vertices lemma see jsj decomposition related cut points boundary see subsection case virtually cyclic groups added examples unlike previous subsections consider examples flexible vertices may viewed introduction part iii consider cyclic splittings suppose fundamental group closed orientable hyperbolic surface simple closed geodesic defines dual cyclic splitting amalgam depending whether separates element represented immersed closed geodesic elliptic splitting dual since meets transversely simple shows universally elliptic element jsj decomposition trivial vertex flexible similar considerations apply splittings fundamental groups compact hyperbolic surfaces boundary relative fundamental groups boundary components pair pants special contains essential simple geodesic see subsection content section example somehow universal example hci figure three punctured tori attached along boundaries corresponding jsj decomposition suppose fundamental group space pictured figure consisting three punctured tori attached along boundaries presentation fundamental group graph groups one central vertex three terminal vertices edges well carry cyclic group hci claim jsj decomposition flexible vertices let first show universally elliptic universally elliptic using lemma consider cyclic splitting elliptic argue towards contradiction group generated fundamental group closed surface genus particular follows edge group hai intersection conjugate lies conjugate consider action tree fixes edge hak finite index conjugate elliptic contradiction denote characteristic set unique fixed point axis contained minimal subtree conjugate contains lift vertex permuting indices shows vertices lifts single point lift implies hak finite index conjugate contradiction prove maximality consider universally elliptic tree dominating domination strict different deformation spaces gvi universally elliptic elliptic standard fact see proposition action gvi minimal subtree dual essential simple closed curve one curve punctured torus considering splitting dual curve intersecting shows universally elliptic contradiction useful facts section first describe behavior jsj deformation space change class allowed edge groups introduce incidence structure inherited vertex group graph groups relate jsj decompositions jsj 
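The group of the last example (three punctured tori glued along their boundary circles) can be written explicitly; the presentation below is a sketch consistent with the description given above, with c denoting the common boundary element.

\[
G \;=\; \bigl\langle\, a_1, b_1, a_2, b_2, a_3, b_3 \ \bigm|\ [a_1,b_1] = [a_2,b_2] = [a_3,b_3] \,\bigr\rangle,
\qquad c := [a_1,b_1].
\]
% Graph of groups: one central vertex carrying the cyclic group <c>, three terminal vertices
% carrying the free groups <a_i, b_i> (punctured-torus groups), and three edges carrying <c>,
% each attached to its terminal vertex via c = [a_i, b_i].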
decompositions vertex groups relative incidence structure also discuss relative finite presentation vertex groups finally give alternative construction relative jsj decompositions obtained embedding larger group changing edge groups fix two families subgroups compare jsj splittings universal ellipticity jsj decompositions relative fixed family example consists finitely generated abelian subgroups consists slender subgroups useful describe abelian jsj decomposition subsection groups locally slender finitely generated subgroups slender family slender subgroups relatively hyperbolic family parabolic subgroups family elementary subgroups subgroups parabolic virtually cyclic consists trivial group finite subgroups see corollary two notions universal ellipticity shall distinguish auniversal ellipticity elliptic ellipticity elliptic course ellipticity implies ellipticity recall two trees compatible common refinement proposition assume let jsj tree jsj tree one compatible may obtained refining collapsing edges whose stabilizer every elliptic elliptic tree obtained collapsing edges whose stabilizer jsj tree note applies consists finite groups generally groups serre property proof let jsj tree since tree elliptic respect let standard refinement dominating consider edge whose stabilizer fixes unique point equivariant map constant follows tree obtained collapsing edges whose stabilizer dominates elliptic assertion lemma jsj tree first note elliptic another one elliptic hence dominated map factors jsj tree proposition let assume every finitely generated group belongs finitely presented relative jsj tree relative jsj tree relative proof corollary every always relative morphism tree finitely generated edge stabilizers edge stabilizers fix edge proposition easily follows applies particular family groups locally finitely generated subgroups instance may family locally cyclic resp locally abelian resp locally slender subgroups family cyclic resp finitely generated abelian resp slender subgroups incidence structures vertex groups given vertex stabilizer tree useful consider splittings relative incident edge stabilizers extend splittings lemma subsection give definitions show finitely presented relative incident edge groups finitely presented edge stabilizers finitely generated proposition definitions let tree minimal relative edge stabilizers let vertex stabilizer definition incident edge groups incv given vertex tree finitely many edges origin choose representatives define incv incgv family stabilizers gei call incv set incident edge groups finite family subgroups conjugacy alternatively one define incv quotient graph groups image groups carried oriented edges origin peripheral structure defined studied subsection sophisticated invariant derived incv unlike incv change replace another tree deformation space definition restriction given consider family conjugates groups fix vertex define restriction choosing representative class family definition inch define incv incv sometimes write rather incv incv also view inch families subgroups conjugacy remark emphasize contains groups unique fixed point two groups conjugate conjugate particular number classes groups bounded number classes groups also note subgroup conjugate group contained conjugacy group belonging inch elliptic splitting relative incv finiteness properties assume fundamental group finite graph groups edge groups finitely generated resp finitely presented vertex groups goal subsection extend results relative finite generation resp finite presentation 
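In display form, the incidence structure just introduced is the following; e_1, ..., e_k denote representatives of the orbits of edges with origin v.

\[
\operatorname{Inc}_v \;=\; \{\, G_{e_1}, \dots, G_{e_k} \,\},
\qquad e_1, \dots, e_k \ \text{representatives of the} \ G_v\text{-orbits of edges with origin} \ v,
\]
% a finite family of subgroups of G_v, well defined up to conjugacy; the relative version
% Inc_v^H used below combines Inc_v with the restriction of H to G_v.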
defined subsection needed groups assumed finitely presented lemma lemmas finitely generated group acts tree every vertex stabilizer finitely generated relative incident edge groups generally finitely generated relative tree relative finitely generated relative inch remark family consists groups conjugate follows set vertex stabilizers finitely generated edge stabilizers countable used proof theorem similar statement relative finite presentation proposition finitely presented tree finitely generated edge stabilizers every vertex stabilizer finitely presented relative incident edge groups generally finitely presented relative tree relative finitely generated edge stabilizers finitely presented relative inch use following fact lemma let cellular complex let compact connected subcomplex let let image finitely presented relative image fundamental groups connected components topological boundary generally compact exist finitely many connected disjoint subcomplexes disjoint compact closure finitely presented relative images fundamental groups connected components proof denote connected components special case map injective standard argument shows map also injective finitely presented finitely presented relative finitely generated subgroups one reduces special case gluing possibly infinitely many discs proof second assertion similar proof proposition assume finitely presented relative proof proposition consider disjoint subcomplexes compact let universal cover connected component whose stabilizer let equivariant map sends vertex vertex sends vertex fixed constant injective edge standard triangle consider pattern obtained preimage midpoints edges projection let closure complement regular neighborhood let tree dual first suppose applying lemma component corresponding shows finitely presented relative family consisting incv contained may quite required family inch remove fixes edge see subsection contained conjugacy group belonging inch use lemma suppose recall edge stabilizers finitely generated may use theorem saying geometric construct general may write induced map finite composition folds see proposition page therefore suffices show given factorization fold may change new complex associated map equals let adjacent edges folded dual components adjacent component let edges mapping respectively subdivide needed let edge path joining note mapped single vertex image glue square vertical edges glued edge glued edge free map extends square vertical arcs mapped edge gluing yields desired jsj decompositions vertex groups given compare splittings relative splittings vertex stabilizers recall inch family incident edge stabilizers together see subsection finitely generated relative inch particular whenever acts relative incv global fixed point unique minimal subtree proposition view tree action elliptic let fixed point definition denote family consisting subgroups belonging splittings groups lemma let vertex stabilizer splitting relative inch extends splitting relative precisely given inch exist collapse map isomorphic say obtained refining using generally one may choose splitting orbit vertices refine using refinement may obtained construction possibly trees proof construct proof proposition relative group conjugate subgroup conjugate subgroup group belonging inch comes fact may several ways attaching edges see section lemma let vertex stabilizer universally elliptic tree groups inch elliptic every subgroup elliptic subgroup inch elliptic subgroup second assertion says elliptic every acts elliptic every 
incv acts holds simply say universally elliptic proof first assertion clear relative also incv universally elliptic suppose inch elliptic subgroup let relative inch first assertion since finitely generated relative incv minimal subtree proposition action inch assumption fixes point hence proved direction second assertion converse follows lemma corollary let vertex stabilizer jsj tree split universally elliptic subgroup relative inch flexible splits relative inch rigid otherwise proof splitting may use refine universally elliptic tree see lemma tree must deformation space splitting must trivial follows lemma applied proposition let universally elliptic assume every vertex stabilizer jsj tree relative inch one refine using decompositions obtain jsj tree relative conversely jsj tree relative vertex stabilizer one obtains jsj tree relative inch considering action minimal subtree point elliptic proof prove let tree obtained refining using lemma relative universally elliptic lemma since edge stabilizers edge stabilizers show maximality consider another universally elliptic show vertex stabilizer elliptic vertex stabilizer vertex elliptic minimal subtree universally elliptic inch since jsj tree dominates elliptic proves let lemma relative inch incv elliptic edge stabilizers contained edge stabilizers prove maximality consider another tree action relative inch universally elliptic use refine tree lemma relative universally elliptic lemma jsj tree dominates vertex stabilizers elliptic hence hence dominates proves following corollary says one may usually restrict groups studying jsj decompositions corollary suppose contains finite subgroups refining grushko decomposition using jsj decompositions free factors yields jsj decomposition similarly refining decomposition using jsj decompositions vertex groups yields jsj decomposition one must use relative grushko decompositions jsj decompositions vertex groups relative every flexible subgroup flexible subgroup mentioned subsection decompositions exist accessibility assumption proof follows proposition applied grushko stallingsdunwoody deformation space finite groups universally elliptic every splitting relative incv assertion flexible subgroups follows corollary relative jsj decompositions fillings fix finitely presented group family subsection shown existence jsj deformation space relative finite set finitely generated subgroups give alternative construction using absolute jsj decompositions another group obtained filling construction construction used subsections provide examples flexible groups figure group obtained filling construction filling construction let let finitely presented group property define group amalgamating see figure words finitely presented denote tree amalgam vertex stabilizer stabilizer edge origin conjugate one fix family subgroups instance family subgroups conjugate note two subgroups conjugate also conjugate central family induces subgroup elliptic elliptic splittings relative subgroup elliptic elliptic splittings edge groups splittings lemma elliptic subgroup elliptic viewed subgroup elliptic proof consider group fixes point property unique since point also fixed since commutes proves first assertion since elliptic family incident edge groups second assertion follows lemma finitely presented jsj decomposition theorem let minimal subtree point elliptic proposition tree jsj tree relative proof tree edge stabilizers relative elliptic lemma show dominates universally elliptic tree use refine tree lemma tree two types edges coming 
define new tree collapsing edges coming note subgroup elliptic elliptic indeed subgroup elliptic elliptic hence elliptic elliptic equivalent elliptic tree universally elliptic lemma dominated vertex stabilizers elliptic hence dominates part iii flexible vertices flexible vertex groups jsj decompositions important understanding splittings conditions understanding splittings key result many cases flexible vertex groups instance first consider cyclic splittings group see flexible vertex group jsj decomposition may viewed compact possibly surface moreover incident edge groups trivial contained conjugacy boundary subgroup fundamental group boundary component boundary subgroups generated elements quadratic words suitable basis free group rips sela called subgroup quadratically hanging simple infinitely many isotopy classes essential simple closed curves curve defines cyclic splitting relative incident edge groups extends cyclic splitting lemma two curves made disjoint isotopy define two splittings elliptic respect makes flexible see corollary turns construction basically source flexible vertices allowed torsion edge groups allowed definition must adapted first may orbifold rather surface second always equal maps onto possibly kernel called fiber various authors called groups hanging surface groups hanging fuchsian groups hanging groups type vertex groups choose extend rips sela initial terminology groups emphasize way attached rest group trivial mirror see theorem hand insist group based hyperbolic orbifold euclidean one section formalize definition vertices prove general properties vertices section show indeed flexible vertices jsj deformation space nice classes slender subgroups fix family closed conjugating passing subgroups another family trees assumed quadratically hanging vertices section preliminaries groups give definition subgroups study basic properties particular relate splittings families simple geodesics underlying orbifold show natural hypotheses subgroup elliptic jsj deformation space subsection give examples possible incident edge groups vertex groups jsj decomposition relevant section show flexible subgroups slender jsj decomposition also show flexible subgroups abelian jsj decompositions splittings hyperbolic compact orbifolds including concern euclidean hyperbolic refer basic facts orbifolds euclidean orbifolds whose fundamental group virtually cyclic empty boundary arise flexible vertices trivial way instance case cyclic splittings groups klein bottle group may appear flexible vertex groups see subsections free factors incident edge groups must trivial therefore restrict hyperbolic orbifolds compact orbifold equipped hyperbolic metric totally geodesic boundary quotient convex subset proper discontinuous group isometries isom isometries may reverse orientation denote quotient map definition orbifold fundamental group may also view quotient compact orientable hyperbolic surface geodesic boundary finite group isometries point singular preimages stabilizer forget orbifold structure homeomorphic surface disc figure boundary comes boundary mirrors corresponding reflections see define boundary image thus excluding mirrors equivalently image component either component circle arc contained orbifold fundamental group infinite dihedral group accordingly boundary subgroup subgroup conjugate fundamental group component equivalently setwise stabilizer connected component closure complement union mirrors mirror image component fixed point set element equivalently mirror image component 
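To keep the classical surface picture in mind, here is the basic example in standard notation (the symbols \(\Sigma\), \(\Sigma_i\), \(c\) are supplied by the editor): an essential two-sided simple closed curve \(c\) on a compact surface \(\Sigma\) gives a cyclic splitting of \(\pi_1(\Sigma)\) relative to the boundary subgroups,
\[
\pi_1(\Sigma)\;\cong\;
\begin{cases}
\pi_1(\Sigma_1)\ast_{\langle c\rangle}\pi_1(\Sigma_2) & \text{if } c \text{ separates } \Sigma,\\[2pt]
\pi_1(\Sigma')\ast_{\langle c\rangle} & \text{(an HNN extension) otherwise.}
\end{cases}
\]
Two such curves which cannot be isotoped off each other yield two splittings, neither of which is elliptic with respect to the other; this is the mechanism behind the flexibility of the vertex groups discussed above.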
fixed point set element mirror circle arc contained figure orbifold mirrors bold boundary components corner reflectors carrying fundamental group coxeter group generated reflections sides hexagon mirrors may adjacent whereas boundary components disjoint singular points contained mirrors conical points stabilizer preimages finite cyclic group consisting maps rotations points belonging two mirrors corner reflectors associated stabilizer finite dihedral group order case surfaces hyperbolic orbifolds may characterized terms euler characteristic see definition euler characteristic euler characteristic defined euler characteristic underlying topological surface minus contributions coming singularities conical point order isotropy group contributes corner reflector isotropy group dihedral group order contributes point adjacent mirror component contributes proposition compact orbifold hyperbolic curves splittings generalize fact essential simple closed curve surface defines cyclic splitting let hyperbolic closed geodesic image geodesic whose image compact simple equal disjoint say essential simple closed geodesic possibly brevity often call geodesic essential simple closed geodesic orbit family disjoint geodesics simplicial tree dual family vertices components edges components group acts inversions case subdivide edges get action without inversions may viewed replacing boundary regular neighborhood connected simple bounding band thus associated splitting clearly relative boundary subgroups call splitting dual essential simple closed geodesic splitting determined edge group conjugacy subgroup consisting elements preserve bounded isomorphic infinite dihedral group generally splitting dual family disjoint essential simple closed geodesics simplicity sometimes say splitting dual family geodesics recall group small contain next result says construction yields small splittings relative boundary subgroups note small subgroups virtually cyclic remark also note subgroup preserves line end splitting virtually cyclic proposition let compact hyperbolic assume acts tree without inversions minimally small edge stabilizers boundary subgroups elliptic equivariantly isomorphic tree splitting dual family disjoint essential simple closed geodesics edge stabilizers assumed small still dominated tree dual family geodesics remark statement assume redundant vertex allow multiple parallel simple closed curves consider geodesics allowed redundant vertices isomorphic subdivision tree dual family geodesics proof orientable surface follows theorem consider covering surface action dual family closed geodesics family projects required family action dual family second statement follows standard arguments see proof theorem corollary splitting relative boundary subgroups contains essential simple closed geodesic orbifolds essential simple closed geodesic classified next subsection proposition implies particular relative boundary subgroups remain true set one boundary component aside lemma let boundary component compact hyperbolic exists splitting relative fundamental groups boundary components distinct proof arc properly embedded endpoints defines free splitting relative groups cases one choose splitting study exceptional cases disc annulus conical point disc boundary circle consists components mirrors since hyperbolic must mirror adjacent otherwise would consist one two mirrors two boundary components two mirrors would negative arc one endpoint defines splitting adjacent annulus two cases arc one find arc general case circle 
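For the reader's convenience, here is the standard (Thurston-type) formula matching the definition of the Euler characteristic sketched above; the numerical values are supplied by the editor and conventions vary slightly between sources, so they should be checked against the original. With \(|\Sigma|\) the underlying topological surface, conical points of orders \(n_i\), and corner reflectors with dihedral isotropy groups of order \(2m_j\),
\[
\chi(\Sigma)\;=\;\chi\bigl(|\Sigma|\bigr)\;-\;\sum_i\Bigl(1-\tfrac1{n_i}\Bigr)\;-\;\tfrac12\sum_j\Bigl(1-\tfrac1{m_j}\Bigr).
\]
Conventions differ on the contribution of points where a mirror meets a boundary component, so that term is omitted from this sketch. With this normalisation, the proposition quoted above takes the familiar form: a compact 2-orbifold carries a hyperbolic structure with geodesic boundary if and only if \(\chi(\Sigma)<0\).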
circle contains mirror otherwise would regular annulus arc yields splitting remark splitting constructed contains mirror contained infinite dihedral subgroup generated conjugate definition filling geodesics let compact hyperbolic let collection essential simple closed geodesics say fills following equivalent conditions hold every essential simple closed geodesic exists intersects every element infinite order conjugate boundary subgroup exists acts hyperbolically splitting dual full preimage universal covering connected equivalence conditions well known include proof completeness proof clear using representing convex hull prove consider connected component connected indeed contained half space bounded geodesic let connected component boundary indeed cuts geodesic note cuts geodesic elements disjoint contained bounded contradicting projects simple closed geodesic intersect transversely would contained intersection half spaces bounded contradiction assumption ensures boundary component implies connected prove consider infinite order let axis connectedness contained hold intersects geodesic one bounded follows convex hull properly contained since contradiction corollary contains least one essential simple closed geodesic set simple closed geodesics fills using first definition filling follows immediately following lemma lemma lemma essential simple closed geodesic exists another essential simple closed geodesic intersecting small orbifolds pair pants compact hyperbolic surface containing essential simple closed geodesic subsection classify hyperbolic contain essential geodesic fundamental groups split relative boundary subgroups corollary appear flexible vertex groups jsj decompositions see subsection work compact orbifolds geodesic boundary could equally well consider orbifolds cusps proposition compact hyperbolic geodesic boundary contains simple closed essential geodesic belongs following list see figure figure orbifolds splittings mirrors bold labels isotropy size sphere conical points disc conical points annulus conical point pair pants mirror disk whose boundary circle union single mirror single boundary segment exactly one conical point annulus one mirror conical point disk whose boundary circle union three mirrors together boundary segments conical point orbifolds list hyperbolic see proposition proof let orbifold closed geodesic let underlying topological surface conical points removed orientable since otherwise contains embedded band whose core yields essential simple closed geodesic similarly planar boundary components punctures mirror must case total number conical points boundary components must three negative therefore assume mirror recall component contained component circle segment whose endpoints belong possibly equal mirrors contained classification mirrors relies inequality following basic observation consider properly embedded arc joining mirror possibly equal since isotopic essential simple closed geodesic may isotoped complement conical points endpoints remaining respectively boundary segment arc contained implies connected component containing connected component containing mirror contain mirrors considering arcs joining also implies disc annulus pair pants annulus possibility boundary segment corresponds cases depending whether conical point necessarily order second boundary component remaining possibility disc boundary conical point pair mirrors adjacent joined boundary segment follows contains mirrors many boundary segments must mirrors order negative orbifold finite 
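A concrete instance of the classification above, written out in standard notation (the generators \(x\), \(y\) are supplied by the editor): for the pair of pants \(P\), the sphere with three open discs removed,
\[
\pi_1(P)\;\cong\;F_2\;=\;\langle x,y\rangle,
\qquad\text{boundary subgroups (up to conjugacy): } \langle x\rangle,\ \langle y\rangle,\ \langle xy\rangle.
\]
\(P\) contains no essential simple closed geodesic, and correspondingly \(F_2\) admits no nontrivial splitting relative to these three cyclic subgroups; this is the prototype of the orbifolds appearing in the list above.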
mapping class group suppose compact hyperbolic surface unless pair pants contains simple closed geodesic defines splitting maximal cyclic subgroup dehn twist around defines infinite order element mapping class group dual splitting index subgroup corresponding boundary curve regular neighborhood dehn twist around homotopically trivial pair pants projective plane contain geodesic finite mapping class group hyperbolic surfaces contain geodesic infinite mapping class group note klein bottle contains unique geodesic like closed surface genus mapping class subsection generalize discussion orbifolds examples simple closed geodesics yield twists mirror full circle fundamental group isomorphic defines splitting index subgroup isomorphic associated dehn twist trivial geodesic arc whose endpoints belong mirrors fundamental group dihedral group associated dehn twist trivial center mirror joining two corner reflectors carrying yields splitting twist refer especially theorem general discussion relations splittings automorphisms see also discussion subsection limit classifying orbifold groups split maximal cyclic subgroup relative boundary subgroups equivalently classify orbifolds interior contains geodesic orbifolds whose mapping class group finite let orbifold first note must planar conical points boundary components band one conical point band one open disc removed projective plane conical points say component simple contains mirror component single mirror say circular boundary component circular mirror accordingly note simple boundary component contribute see definition particular planar boundary components simple total number boundary components conical points boundary component consider simple closed curve parallel inside since isotopic geodesic must parallel simple boundary component bound band containing conical point bound disc containing one conical point one check must following list write mean surface obtained removing boundary conical points sphere minus points sphere choice conical points circular boundary components circular mirrors annulus one whose boundary component either conical point circular boundary component circular mirror disc whose boundary conical point projective plane choice conical points circular boundary components circular mirrors band whose boundary conical point definition properties quadratically hanging subgroups usual fix let subgroup definition subgroup fiber extended boundary subgroup say subgroup relative stabilizer vertex extension compact hyperbolic call fiber underlying orbifold incident edge stabilizer intersection ghg extended boundary subgroup definition means image finite contained boundary subgroup condition may rephrased saying groups inch see definition extended boundary subgroups full generality isomorphism type necessarily determine refer subgroup always consider part structure small however may characterized largest normal subgroup small particular automorphism leaves invariant group finitely generated finitely generated relative inch lemma proposition guaranteeing existence minimal subtree applies actions relative inch vertex well image called vertex point fixed extended boundary subgroups proper subgroups also note preimage finite group carried conical point corner reflector extended boundary subgroup incident edge stabilizer contained extension virtually cyclic subgroup even may meet trivially trivial image see subsection particular full generality belong universally elliptic jsj tree definition dual splitting group splitting dual family geodesics 
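Schematically, the definition of a quadratically hanging vertex stabilizer recalled above can be recorded as follows (an editor's summary in generic notation, with \(Q\) the vertex stabilizer, \(F\) its fiber and \(\Sigma\) the underlying orbifold):
\[
1 \longrightarrow F \longrightarrow Q \longrightarrow \pi_1(\Sigma) \longrightarrow 1,
\qquad \Sigma \ \text{a compact hyperbolic 2-orbifold},
\]
with the requirement that every incident edge stabilizer (and every subgroup of the restricted family) has image in \(\pi_1(\Sigma)\) which is finite or contained in a boundary subgroup. The subgroups of \(Q\) whose image is finite or lies in a boundary subgroup are precisely the extended boundary subgroups.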
subsection induces splitting relative third condition definition splitting also relative inch see definition extends splitting relative lemma say splitting dual determined edge group associated denoted edge groups extensions fiber general many natural classes groups though extensions still content stability conditions subsection conversely seen proposition small splittings orbifold group relative boundary subgroups dual families geodesics prove similar statement lemma splittings groups relative incident edge stabilizers groups inch one make additional assumptions first fiber elliptic splitting automatic full generality holds long groups slender see lemma next need ensure boundary subgroups elliptic motivates following definition definition used boundary component boundary component used group isomorphic contains finite index image incident edge stabilizer subgroup conjugate group equivalently used exists subgroup inch whose image infinite contained conjugacy using lemma get lemma let vertex group tree fiber boundary component used splits relative group containing index moreover refined using splitting remark whenever underlying orbifold mirror general proof let boundary component lemma yields splitting group containing index mirror used splitting relative inch lemma one may use refine one obtains splitting relative may collapsed splitting proposition implies lemma let vertex group fiber minimal splitting relative factors splitting splitting also relative inch every boundary component used splitting dominated splitting dual family geodesics particular contains essential simple closed geodesic moreover splitting small edge groups splitting dual family geodesics remark induced splitting relative boundary subgroups holds necessarily holds proof let tree splitting group acts identity fixed point set nonempty normal deduce action factors action assumptions action relative boundary subgroups boundary components used incident edge stabilizer conjugate group apply proposition flexible vertex groups jsj decompositions splittings relative inch see corollary corollary wish deduce contains essential simple closed geodesic rule small orbifolds proposition proposition let vertex stabilizer jsj tree relative assume subgroup containing index belongs also assume universally elliptic every boundary component used flexible contains essential simple closed geodesic let acts small edge stabilizers elliptic action minimal subtree dual subdivision family essential simple closed geodesics proof boundary component used lemma yields refinement jsj tree new edge stabilizers contain index belong universally elliptic contradicts maximality jsj tree flexible acts tree also relative universally elliptic yields splitting relative relative incident edge stabilizers universally elliptic since every boundary component used obtain geodesic lemma applying previous argument shows splitting dual family geodesics third assertion lemma first part following proposition shows conversely existence essential geodesic implies flexibility proposition let vertex group assume contains essential simple closed geodesic essential simple closed geodesics group see definition belongs universally elliptic element resp subgroup contained extended boundary subgroup particular universally elliptic small image virtually cyclic remark holds weaker assumption set essential simple closed geodesics fills sense definition proof subgroup contained union extended boundary subgroups contained single extended boundary subgroup prove suffices show element 
lie extended boundary subgroup universally elliptic image infinite order acts splitting dual geodesic see definition splitting dual relative edge group assumed universally elliptic infinite image acts splitting dual action preserves line end smallness image virtually cyclic remark proposition requires universally elliptic show automatic splittings slender groups lemma let vertex group groups slender universally elliptic proof suppose slender universally elliptic acts tree unique line line normal follows extension group image isom virtually cyclic group groups slender deduce slender contradiction maps onto hyperbolic corollary let vertex group jsj tree assume groups slender every extension fiber virtually cyclic group belongs universally elliptic largest slender normal subgroup boundary components underlying orbifold used acts tree fix point action minimal subtree dual family essential simple closed geodesics flexible contains essential simple closed geodesic flexible universally elliptic subgroup extended boundary subgroup follows results proved quadratically hanging subgroups elliptic jsj goal subsection prove suitable hypotheses vertex group elliptic jsj deformation space assume existence jsj deformation space obtain ellipticity every universally elliptic tree start following fact lemma splits group split infinite index subgroup elliptic jsj deformation space proof apply assertion lemma jsj tree splitting remark universally elliptic fixes unique point jsj tree also note elliptic universally elliptic commensurability invariant conclusions hold groups commensurable proved vertex group splitting class considered elliptic jsj deformation space true general even class cyclic groups contains many subgroups none elliptic jsj deformation space consists free actions see subsection happens splits groups infinite index something prohibited hypotheses papers mentioned allowed split subgroup infinite index group enclosing group minimal splittings see definition theorem different counterexample given example subsection family abelian groups example subgroup one abelian splitting universally elliptic elliptic jsj space examples explain hypotheses following result theorem let vertex group assume preimage virtually cyclic subgroup belongs split one following conditions holds elliptic subgroup infinite index jsj deformation space contains essential simple closed geodesic boundary nonempty fiber elliptic jsj deformation space groups slender note contains proof fix jsj tree start proving ellipticity assuming existence essential simple closed geodesic given lift denoted subgroup consisting elements preserve bounded preimage write rather want conjugacy group belongs splits proposition universally elliptic lemma remark fixes unique point lemma let lifts simple geodesics intersect proof assume let tree splitting determined contains unique edge stabilizer since intersect group acts hyperbolically minimal subtree line contains let refinement dominates lemma let minimal subtree image consists single point refinement image equivariant map contains let edge contains stabilizer contained fixes since split infinite index subgroups index finite group universally elliptic acts splitting dual unique fixed point remark fixes mapped conclude since set essential simple geodesics fills corollary union lifts connected subset definition particular given pair lifts exists finite sequence intersects lemma implies depend fixed elliptic proved theorem case contains geodesic reduce cases one using lemma find geodesic case reduces 
lemma first show every boundary component used see definition lemma yields splitting group containing index remark group contained preimage subgroup contradicts assumptions theorem since infinite index every boundary component used incident edge stabilizer whose image contained finite index let preimage folding yields splitting instance since lemma implies elliptic jsj deformation space particular elliptic elliptic lemma implies contains geodesic suppose elliptic jsj space action jsj tree factors assertion lemma every boundary subgroup elliptic follows previous argument used incident edge stabilizer holds used group splittings relative proposition yields geodesic remark assumptions theorem assume moreover groups slender split subgroup whose image finite fixes unique point gvj claim gvj hence also universally elliptic vertex used proof theorem let tree vertex group note gvj gvj elliptic show edge containing extended boundary subgroup let refinement dominates let unique point fixed let equivariant map let lift fixes edge adjacent extended boundary subgroup otherwise consider segment contains choose segment minimal length let initial edge fixes edge adjacent since split groups mapping finite groups image finite index subgroup boundary subgroup also slender containing finite index subgroup image contained corollary let class virtually polycyclic groups hirsch length assume split group let vertex group fiber splitting jsj tree contained vertex stabilizer proposition introduction case proof group fixes point theorem universally elliptic apply remark flexible prove theorem peripheral structure quadratically hanging vertices suppose vertex group jsj decomposition incident edge groups extended boundary subgroups see definition however collection incident edge groups family incv definition may change jsj tree varies jsj deformation space though collection extended boundary subgroups usually change see assertion theorem section introduced collection subgroups related incident edge groups depend tree jsj deformation space called collection peripheral structure goal subsection show peripheral structure contains information collection extended subgroups may fairly arbitrary examples class slender groups slender group conjugate proper subgroup context peripheral structure vertex stabilizer tree may determined follows first collapse edges make reduced see subsection image stabilizer still denote set conjugacy classes incident edge stabilizers properly contained another incident edge stabilizer see details seen proposition often uses every boundary component apart peripheral structure may fairly arbitrary shall give examples particular example show following proposition proposition vertex group slender jsj decomposition oneended group possible incident edge group meet trivially trivial image work family slender subgroups figure example group jsj decomposition give examples use filling construction described subsection see figure start extension slender compact orientable surface genus least boundary components let finite family infinite extended boundary subgroups defined definition note slender impose boundary subgroup maps onto finite index subgroup every boundary component used sense definition let nonslender finitely presented group serre property action tree instance subsection define finitely presented group amalgamating lemma tree amalgam defining slender jsj tree flexible subgroup conjugate subgroup peripheral structure consists conjugacy classes proof let tree fixes unique point slender point also 
fixed particular universally elliptic prove jsj tree suffices see elliptic universally elliptic tree lemma universally elliptic elliptic lemma action minimal subtree factors nontrivial action slender hence cyclic edge stabilizers since every hence every boundary subgroup elliptic action dual system disjoint geodesics proposition proposition edge stabilizer universally elliptic contradicting universal ellipticity shows jsj tree flexible chosen contain intersecting simple closed curves see corollary proposition one obtains jsj tree finite groups collapsing edges infinite stabilizer since infinite jsj tree trivial assertion follows definition given example let punctured torus fundamental group write let finite let hui peripheral structure jsj tree consists two elements though one boundary component jsj tree incident edge groups conjugate quotient tripod display peripheral structure example let write hti infinite cyclic let hui hti meets trivially maps trivially example assume orbifold conical point carrying finite cyclic group fiber infinite one attach edge one chooses infinite subgroup preimage one constructs amalgam similar constructions possible corner reflector point mirror even point trivial isotropy flexible vertices abelian jsj decompositions shall see section family cyclic subgroups virtually cyclic subgroups slender subgroups flexible vertex groups jsj decompositions slender fiber say things complicated family abelian subgroups equivalently finitely generated abelian subgroups see proposition basic reason following group extension finitely generated abelian group surface splitting dual simple closed curve induces splitting subgroup slender indeed polycyclic necessarily abelian using terminology definition regular neighbourhood two abelian splittings necessarily abelian splitting fact shall construct examples showing proposition flexible subgroups abelian jsj trees always groups one always obtain abelian jsj tree collapsing edges slender jsj tree proposition one obtain abelian jsj tree refining collapsing slender jsj tree point collapsing alone always sufficient may shown collapsing suffices finitely presented csa see proposition use construction previous subsection act fiber example example let obtained gluing torus one boundary components pair pants let circle bundle trivial punctured torus two boundary components let let fundamental groups components homeomorphic klein bottles note construct amalgamation claim abelian jsj decomposition trivial flexible argue proof lemma know hence also universally elliptic tree abelian edge stabilizers action minimal subtree factors action dual system simple closed curves simple closed curves give rise abelian splitting positive sense bundle trivial prove none splittings universally elliptic hence abelian jsj space trivial flexible suffices see positive curve intersects essential way positive curve true curve separating pair pants punctured torus one easily constructs positive curve meeting points also true curves meeting curves disjoint contained punctured torus result true every map group finite image isomorphic group choose remark one performs construction adding third group becomes flexible vertex group group whose jsj decomposition example let surface genus two boundary components let simple closed curve separating let space obtained collapsing point map aut projecting embedding free group let associated product construct abelian splittings come simple closed curves belonging kernel easy see curve follows splitting dual abelian jsj 
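Behind the circle-bundle examples above lies a short computation, sketched here by the editor (the letters \(N\), \(c\), \(G_c\) are not from the text). If \(N\) is a circle bundle over a compact surface and \(c\) is a two-sided essential simple closed curve in the base, the preimage of \(c\) is a torus or a Klein bottle, so the edge group of the induced splitting of \(\pi_1(N)\) sits in an extension
\[
1 \to \mathbb Z \to G_c \to \langle c\rangle \to 1,
\qquad
G_c\;\cong\;
\begin{cases}
\mathbb Z^2 & \text{if the bundle is trivial over } c,\\[2pt]
\langle a,b \mid aba^{-1}=b^{-1}\rangle & \text{(the Klein bottle group) otherwise.}
\end{cases}
\]
In both cases \(G_c\) is polycyclic, hence slender, so \(c\) always induces a slender splitting; but it induces an abelian splitting only in the first case, which is why only the positive curves contribute abelian splittings in the examples above.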
decomposition two rigid vertex groups obtained collapsing slender jsj splitting jsj decompositions slender groups main result section description jsj decompositions slender groups recall subsection subgroup slender subgroups finitely generated whenever acts tree fixes point leaves line invariant approach essentially follows fujiwara paposoglu simplifications particular deal third splitting see discussion statement results let finitely generated group family subgroups stable conjugation taking subgroups finite set finitely generated subgroups finitely presented relative goal section show flexible vertex groups jsj decompositions relative see subsection need two assumptions first groups slender least slender see subsection second stability condition involving subfamily show fibers flexible vertex groups belong definition stability condition say satisfies stability condition fibers family subgroups following hold every short exact sequence isomorphic isomorphic quotient group acts line way infinite dihedral group also acts line orientation preserved group may finite cyclic dihedral infinite isomorphic subgroup fiber recall simple geodesic defines splitting group extension geodesic stability condition ensures first every compare assumptions proposition corollary theorem failure stability condition explains theorem apply abelian jsj splittings see subsection hand one easily checks stability condition holds following cases slender consists slender subgroups virtually cyclic finite virtually polycyclic virtually polycyclic subgroups hirsch length csa group finitely generated abelian recall group csa maximal abelian subgroups malnormal see definition different condition applies cyclic also recall since relatively finitely presented jsj decomposition also one groups locally proposition theorem suppose groups slender satisfies stability condition fibers family let finite family finitely generated subgroups let finitely presented finitely presented relative flexible vertex group jsj decomposition relative fiber maps onto compact hyperbolic kernel image incident edge group finite contained boundary subgroup acts tree fix point action minimal subtree dual family geodesics extended boundary subgroups universally elliptic every boundary component used contains essential closed geodesic every universally elliptic subgroup extended boundary subgroup known assertions direct consequences results subsection see subsection list orbifolds containing essential closed geodesic subsection description slender flexible vertex groups corollary let class slender subgroups let finite family finitely generated subgroups let finitely presented finitely presented relative every flexible vertex jsj decomposition relative either slender slender fiber view examples given one similar description jsj decompositions virtually cyclic groups flexible groups finite fiber flexible groups fiber etc result case class slender groups consider generalisation case split flexible vertex groups fiber groups assumed slender conclusions theorem apply groups slender see subsection class finite infinite cyclic groups satisfy stability condition contains dihedral subgroups recover rips sela description jsj decompositions cyclic groups introduce modified stability condition scz preventing groups acting dihedrally line definition stability condition scz say satisfies stability condition scz fibers following hold group maps onto given short exact sequence given short exact sequence isomorphic quotient condition satisfied classes consisting cyclic 
subgroups subgroups finite cyclic virtually cyclic subgroups map onto finite infinite center finite class satisfying consisting groups map onto theorem theorem holds satisfies stability condition scz rather case underlying orbifold mirror singular points conical points particular theorem let finitely presented relative finite family finitely generated subgroups let class finite cyclic subgroups flexible vertex group jsj decomposition relative virtually trivial fiber moreover underlying orbifold mirror every boundary component used contains essential simple closed geodesic indeed follows proposition proposition slender flexible groups virtually next subsection shall reduce theorems theorem proved subsections subsections assume finitely generated finitely presented may arbitrary use finite presentability lemma subsection useful subsection finite presentability assumed particular see theorem theorem true finitely generated arbitrary family subgroups reduction totally flexible groups unlike know advance jsj decompositions exist need show flexible vertex groups allows forget concentrate remember incident edge groups therefore consider splittings relative inch see definition since incident edge groups universally elliptic exactly splittings extend splittings relative see lemma even interested jsj decompositions important work relative context subsection even allow groups infinitely generated fact flexible vertex jsj decomposition says splits relative inch universally elliptic subgroup corollary motivates following general definition definition totally flexible totally flexible relative admits splitting none universally elliptic subgroup equivalently jsj decomposition trivial flexible example mind fundamental group compact hyperbolic surface pair pants class cyclic groups consisting fundamental groups boundary components shall deduce theorems following result says totally flexible groups theorem let finitely presented relative finite family finitely generated subgroups let class slender groups satisfying stability condition scz fibers see definitions assume totally flexible relative slender fiber mirror scz case since incident edge groups means extension group fundamental group hyperbolic orbifold image group either finite contained boundary subgroup corollary every component used group closed orbifold theorem implies theorems apply family subgroups belonging inch definition note incv finite family finitely generated subgroups theorem remark finitely presented relative inch proposition satisfies scz fibers three main steps proof theorem given two trees edge stabilizer one tree elliptic follow construction core core happens surface away vertices precisely one removes cut vertices vertices whose link homeomorphic line one gets surface whose connected components simply connected decomposition dual cut points yields tree vertices coming surface components borrowing scott swarup terminology call tree regular neighborhood see example explanation name stability condition used ensure edge stabilizers given totally flexible construct two splittings fill show regular neighborhood trivial splitting deduce required case cyclic splittings group fundamental group compact surface one think dual transverse pair pants decompositions see example previous steps require splittings minuscule condition controls way may split subgroups check satisfied first step total flexibility allows avoid complicated part paper given vertex group tree obtained instance regular neighborhood two splittings one make larger enclose 
third splitting second step first splitting obtained maximality argument requires finite presentability fujiwara papasoglu work minimality condition splittings allows construction regular neighborhoods condition sufficient purpose replace stronger condition minuscule core mentioned earlier let arbitrary assume finitely generated groups slender one stability conditions scz satisfied minuscule splittings definition minuscule given say exists tree elliptic also write say minuscule whenever subgroup fixes edge tree infinite index stabilizer equivalently minuscule commensurable edge stabilizer tree minuscule edge stabilizers minuscule say trees splittings minuscule minuscule infinite index particular relation transitive minuscule commensurability invariant often used following way edge stabilizer tree elliptic another tree minuscule containing also elliptic remark split relative subgroup commensurable subgroup infinite index group every minuscule minuscule finite index subgroup contained infinite index holds instance assumptions particular splittings groups virtually cyclic subgroups reader interested case may therefore ignore subsection prove splittings totally flexible group minuscule following lemma says ellipticity symmetric relation among minuscule trees terminology minuscule splittings minimal lemma elliptic respect minuscule elliptic respect proof let standard refinement dominating definition let edge edge group elliptic since minuscule elliptic argument applies edge elliptic respect lemma assume finite family finitely generated subgroups finitely presented relative trees minuscule exists tree maximal domination dominates dominates deformation space finite presentation necessary evidenced dunwoody inaccessible group family finite subgroups example consider cyclic splittings fundamental group closed orientable surface maximal dual pair pants decomposition proof let set trees finitely generated edge vertex stabilizers equivariant isomorphism dunwoody accessibility corollary every tree dominated tree suffices find maximal element pointed proof theorem set countable suffices show given sequence dominates exists tree dominating every produce inductively tree deformation space refines start assume already defined since dominates hence elliptic respect trees assumed minuscule elliptic respect lemma may therefore define standard refinement dominating belongs deformation space assertion proposition dunwoody accessibility proposition exists tree dominating every hence every definition core let trees recall elliptic respect every edge stabilizer elliptic definition several orbits edges may happen certain edge stabilizers elliptic others slender act hyperbolically leaving line invariant motivates following definition definition fully hyperbolic fully hyperbolic respect every edge stabilizer acts hyperbolically implies edge stabilizers infinite vertex stabilizer elliptic except trivial example suppose surface group dual families disjoint geodesics subsection elliptic respect curve either contained disjoint fully hyperbolic respect curve meets every intersection transverse let fully hyperbolic respect consider product view complex made squares diagonal action edge form horizontal edge vertical following define asymmetric core follows edge vertex let minimal subtrees respectively line since slender let fully hyperbolic definition asymmetric cores respect asymmetric core also assume fully hyperbolic respect denote opposite construction confusion possible use notations instead note consists belongs 
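A schematic picture of the asymmetric core introduced above may help; the following is an editor's sketch and should be checked against the precise definition (in particular, the contribution of the vertices is not reproduced). Assuming \(T_1\) is fully hyperbolic with respect to \(T_2\), each edge stabilizer \(G_\varepsilon\) of \(T_1\) is slender and acts hyperbolically on \(T_2\), hence preserves a line \(\ell_\varepsilon\subset T_2\), and the core is assembled from the corresponding boxes in the product:
\[
\mathcal C \;\subset\; T_1\times T_2,
\qquad
\mathcal C \;=\;\bigcup_{\varepsilon\in E(T_1)} \varepsilon\times\ell_{\varepsilon}
\quad(\text{up to the vertex contributions mentioned in the text}).
\]
The group acts diagonally on \(T_1\times T_2\) and cocompactly on each \(\varepsilon\times\ell_\varepsilon\), so \(\mathcal C\) consists of finitely many orbits of squares, as noted just below.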
minimal subtree every subtree endpoint standard argument shows simply connected group acts diagonally since acts cocompactly line finitely many squares remark elliptic fixes vertex fixes elliptic fixes point subtree symmetry core goal subsection prove asymmetric core minuscule splittings actually symmetric note following basic consequence symmetry lemma let fully hyperbolic respect assume set vertices surface connected simply connected proof construction edge denotes open edge follows horizontal edges contained exactly two squares symmetric argument shows vertical edges contained since open edges contained exactly two exactly two squares squares surface proposition let two minuscule trees fully hyperbolic respect words relation belongs minimal subtree symmetric remainder subsection devoted proof proposition always assume fully hyperbolic respect assumed minuscule indicated denote union closed squares contained equivalently contains horizontal edges contains vertical edges bound square open edge bound square disconnects define analogously remark minimality condition weaker requiring trees minuscule conclude point define invariant claim connected vertex union isolated vertices connected component connected component lemma let midpoint edge connected minuscule moreover connected edge stabilizer hyperbolic splitting group moreover needed subsection proof assuming connected let connected component contain exists one since connected claim disconnected choose let global stabilizer first show elliptic particular contains element hyperbolic projection contains axis since slender contradicting choice prove minuscule therefore enough construct minimal tree edge stabilizer consider note connected component since simply connected connected component separates track let tree dual vertices connected components edges connected components construction splitting stabilizer edge corresponding particular since fixes vertex see remark remains check hence minimal let vertex definition point belongs whose axis contains line naturally defines embedded line acts translation proves hyperbolic follows minuscule stronger assumption disconnected edge choose element tree provides required splitting edge group lemma connected whenever midpoint edge connected every proof clearly homeomorphic belongs interior edge assume vertex sketch argument standard consider join piecewise linear path projection loop based projection subpath whose endpoints project whose initial terminal segments project edge using connectedness midpoint may replace path contained iterating yields path joining lemma assume fully hyperbolic respect minuscule proof lemmas imply connected every connected contains contains proposition follows immediately lemma symmetry regular neighborhood subsection assume fully hyperbolic respect use core construct tree call regular neighborhood main properties summarized proposition seen lemma surface away vertices follows link vertex manifold disjoint union lines circles since simply connected vertices whose link disconnected precisely cut points define regular neighborhood tree dual decomposition cut points definition regular neighborhood regular neighborhood bipartite tree vertex set set cut vertices set connected components edge closure may empty always unless trivial example basic example following example let cyclic splittings fundamental group closed orientable surface dual families geodesics dual family defined follows precisely subdivision tree dual consider boundary regular neighborhood disregard homotopically 
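For the reader's convenience, here is a schematic restatement of the regular neighborhood just defined (editor's notation):
\[
V(\mathcal R)\;=\;V_0\sqcup V_1,\qquad
V_0=\{\text{cut vertices of }\mathcal C\},\qquad
V_1=\{\text{connected components of }\mathcal C\smallsetminus V_0\},
\]
with an edge joining \(x\in V_0\) to \(W\in V_1\) whenever \(x\) lies in the closure of \(W\). Since \(\mathcal C\) is simply connected and is a surface away from its vertices, this bipartite graph is a tree.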
trivial curves isotope one geodesic orbits vertices correspond components meeting components correspond orbits vertices proposition suppose groups slender satisfies scz let trees fully hyperbolic respect bipartite tree minimal satisfying following properties acts hyperbolically extension fiber contained every incident edge stabilizer fixes edge two possibilities slender fiber underlying orbifold hyperbolic fundamental group contains essential simple closed geodesic mirror scz holds extended boundary subgroups elliptic slender virtually fundamental group euclidean orbifold without boundary incident edge stabilizers finite extensions stabilizer vertex elliptic particular elliptic respect conversely group elliptic elliptic precisely fixes vertex fixes vertex image finite contained boundary subgroup one passes refining vertices using families essential simple closed geodesics underlying orbifolds collapsing original edges particular compatible edge stabilizer fixes vertex contains associated fiber tree may trivial example happens precisely fill case slender fiber proof action induces action obvious minimal edge stabilizers note however relative fact subgroup elliptic elliptic fixes vertex remark stabilizer vertex fixes point elliptic particular edge stabilizers elliptic words elliptic respect heart proof show slender along way show complete proof prove minimality action group acts closure connected component associated thus may viewed one pieces one obtains cutting open cut points stabilizers edges incident fix vertex group contains edge stabilizer resp acts hyperbolically resp recall made squares since acts without inversions leaving square invariant identity hence adjacent squares therefore whole let pointwise stabilizer acts pointed subsection acts finitely many orbits squares true action action proper complement vertices free complement element may swap two adjacent squares consider action near vertex since view vertex rather one link connected stabilizer action acts trivial edge stabilizers circle surface near finite group cyclic dihedral action proper near line stabilizer acts translations dihedrally image must viewed puncture since want compact orbifold boundary remove open neighborhood near vertices whose link line get effective proper action simply connected surface quotient compact orbifold fundamental group isomorphic subgroup elliptic particular fixes vertex image finite link circle contained boundary subgroup link line conversely subgroup whose image finite contained boundary subgroup elliptic completes proof concluding slender use scz prove edge stabilizers first show given square group stabilizer action line particular fixes edge moreover kernel epimorphism stability condition implies scz holds acts translations acts translations true squares orbifold contains mirror incident edge stabilizer stabilizer vertex action pointed earlier elliptic contains acts link trivial edge stabilizers cyclic dihedral finite infinite stability condition implies scz holds cyclic mirror also get tree bipartite given would proved subgroup sense definition knew orbifold hyperbolic assume otherwise since quotient simply connected surface infinite group euclidean fundamental group virtually cyclic contains image square implies quotient torus particular empty boundary incident edge stabilizers finite extensions also deduce contains finite index slender extension sum group slender euclidean hyperbolic prove proof process show orbifolds contain essential geodesic thus completing proof action minimal let 
preimage midpoints edges first projection collection disjoint properly embedded lines containing vertex one may view tree dual vertices components edges components projection induces map dual tree isomorphism point preimages connected immediately follows edge stabilizer fixes vertex associated component containing preimage midpoint contains associated fiber particular elliptic respect since hyperbolic edge stabilizers elliptic vertex fixed shows exists subgroup edge stabilizer unique fixed point minimality easily follows observation since vertex terminal define refining tree dual decomposition given cut points vertices elements together components edges components one obtains collapsing edges coming edges refining vertices remains show refinement dual family geodesics associated orbifold recall associated component consider component contained stabilizer edge stabilizer acts cocompactly image orbifold associated simple simple closed curve mirror suborbifold homotopically trivial isotopic geodesic boundary parallel hyperbolic extended boundary subgroups elliptic components contained thus yield required family disjoint essential simple closed geodesics remark note given orbifold filled images defined defined similarly using proof also shows number orbits vertices bounded number orbits edges decompositions one orbit vertices dual single geodesics proposition assumptions proposition suppose furthermore minuscule slender slender every boundary component underlying orbifold used see definition proof recall fixes edge since fully hyperbolic respect slender exists edge incident stabilizer finite extension minuscule contradiction implies lemma boundary component unused splits group containing index hence stability condition contradiction constructing filling pair splittings subsection well next one assume trees minuscule guarantees symmetry core proposition two trees fully hyperbolic respect regular neighborhood also implies ellipticity symmetric relation among trees lemma goal subsection following result showing existence pair splittings fill proposition assume totally flexible trees minuscule given tree exists tree fully hyperbolic respect next subsection apply proposition maximal given lemma example surface case examples dual family decomposing pairs pants dual family meeting transversely every curve generally suppose fundamental group compact hyperbolic set boundary subgroups consisting virtually cyclic subgroups dual family geodesics proposition proposition claims exists family every geodesic meets every geodesic meets geodesic belongs prove consider maximal family geodesic meets transversely applying lemma orbifold obtained cutting along shows every geodesic meets proof difficulty find fully hyperbolic respect done redefine collapsing edges belong axis edge stabilizer makes hyperbolic respect splitting underlying lemma splitting underlying hyperbolic respect therefore fully hyperbolic respect argue induction number orbits edges existence fact totally flexible universally elliptic hyp ell hyp hyp ell hyp ell figure first players proof proposition denote representatives edge stabilizers see figure represent quotient graphs groups induction tree hyperbolic assume elliptic case exists hyperbolic assume single orbit edges elliptic respect take standard refinement dominating otherwise let obtained collapsing edges keeping edges whose stabilizer elliptic hyperbolic respectively may trivial note fully hyperbolic respect symmetry ellipticity relation among minuscule trees fully hyperbolic respect hand 
elliptic respect collapses let regular neighborhood recall set vertices bipartite set cut points set connected components since single orbit edges single orbit vertices fix denote stabilizer vertex proposition lemma vertex group elliptic ell hyp figure proof elliptic represents collapse map proof spirit proof somewhat analogous theorem let edge edge since elliptic respect stabilizer fixes point point unique otherwise would fix edge elliptic respect would elliptic contradicting fact fully hyperbolic respect similarly fixes unique point consider core claim square contained assuming associate square point since adjacent squares mapped point squares given connected component mapped point since stabilizer component fixes point required conclude proof prove claim let common refinement see figure definition core lies axis subtree since collapses edge projecting lies axis note since elliptic respect elliptic respect hence also respect let therefore standard refinement dominating proposition let axis since collapse claim maps axis find edge whose image contains since fixes point image collapse map particular fixes since fixes enough prove fixes edge elliptic elliptic respect hyperbolic respect contradicts fact minuscule proves claim hyp ell dual hyp figure end proof proposition using lemma construct standard refinement dominating see figure without refining vertices orbit particular still vertex tree obtained refining recall want every edge stabilizer hyperbolic since automatic hyperbolic consider tree obtained collapsing edges keeping edges whose stabilizer elliptic trivial simply take assume contrary claim elliptic collapse point otherwise edge stabilizers conjugate subgroups would elliptic would elliptic respect contradiction every hyperbolic hence elliptic dominated consider action minimal subtree note meets every edges otherwise would elliptic every elliptic contains conjugacy edge stabilizer action follows previous results action subdivision dual family geodesics underlying orbifold see view vertex stabilizer every boundary component used proposition fiber universally elliptic lemma incident edge groups elliptic elliptic respect existence follows lemma let family geodesics transverse example define refining using splitting dual observe every edge stabilizer hyperbolic required clear hyperbolic otherwise contains conjugacy edge stabilizer hyperbolic every curve meets flexible groups trees minuscule prove theorem assumption trees minuscule prove next subsection assumption always fulfilled proof theorem assuming trees minuscule using finite presentability assumed theorem fix maximal splitting lemma proposition yields fully hyperbolic respect may therefore consider regular neighborhood claim trivial point implies required assume let common refinement exists proposition since elliptic respect fully hyperbolic respect edge must collapsed possibly contains edge let edge mapped define collapsing edges collapsed except orbit since collapses maximality lemma implies belong deformation space thus viewed edge endpoint endpoint different orbit since action minimal edge origin belong orbit edge stabilizer edge hyperbolic respect contradiction since elliptic elliptic respect splittings totally flexible group minuscule subsection complete proof theorems showing splittings minuscule totally flexible proposition mentioned remark sometimes needed words proof goes follows given edge stabilizer tree first find minuscule tree edge stabilizer commensurable subgroup goal show fact commensurable proved showing 
slender subgroups vertex group containing fiber ultimately argument relies fact two nested infinite slender subgroups hyperbolic orbifold group commensurable embed group use tree hyperbolic construct core since know minuscule choose carefully ensure core symmetric hence surface complexity edge groups recall consists slender groups particular finitely generated lemma let bound depending length chain particular infinite descending chain proof since find tree elliptic let intersection subgroups index action invariant line acts line translation given homomorphism letting implies linearly independent lemma follows since finitely generated using lemma associate complexity maximal length chain commensurable edge stabilizers trees thus minuscule note commensurable lemma given edge stabilizer tree exists tree edge stabilizer minuscule finite index proof result trivial argue induction definition complexity exists commensurable edge stabilizer induction hypothesis exists minuscule edge stabilizer finite index since commensurable index finite symmetry core recall proposition core minuscule trees symmetric order prove trees minuscule need establish weaker symmetry statements use notations subsection lemma let fully hyperbolic respect proof note implies assuming get pointed subsection union open vertical edges edge disconnects assumption says contains edge set edges bounds square set contain minimality action set intersects components particular exist connected component may joined contradiction since path lemma let minuscule splitting assume universally elliptic among splittings elliptic respect consider whose edge stabilizers minimal complexity note fully hyperbolic respect lemma defined proof let midpoint edge since edge stabilizers minimal complexity second assertion lemma shows connected particular contains therefore contains squares shows conclude hand minuscule lemma implies lemma splittings minuscule proposition totally flexible slender every tree minuscule proof may assume splitting denoting edge stabilizer lemma yields minuscule tree edge stabilizer finite index since totally flexible universally elliptic choose lemma core symmetric consider regular neigbourhood decomposition know fixes unique vertex finite index subgroup also unique fixed point hyperbolic edge stabilizers elliptic first assume slender fiber consider slender group containing instance claim finite index first elliptic otherwise acts axis elliptic subgroup index fixing axis contradiction since fixes deduce since slender subgroups contained extensions virtually cyclic groups contains infinite index finite index index finite commensurable applied argument shows minuscule slender least one edge incident since assume slender stabilizer edge commensurable hand contained elliptic shows minuscule contradiction slenderness trees proved flexible vertex groups jsj decompositions satisfies stability condition consists slender groups consider edge groups slender trees recall slender whenever acts tree fixes point leaves line invariant following generalization theorems theorem let finitely presented relative finite family finitely generated subgroups suppose groups slender satisfies scz fibers family flexible vertex group jsj decomposition relative slender theorems fiber orbifold mirror scz case applies whenever groups assumed slender contained group conjugacy instance splittings relatively hyperbolic groups elementary subgroups relative parabolic subgroups theorem proved slender case changes describe arguments subsections extend directly 
replacing slender slender using lemma proof proposition uses lemma replace following statement lemma let vertex group groups slender universally elliptic proof universally elliptic proof lemma extension group virtually cyclic group slender lemma subsection lemma requires finitely generated groups may arbitrary complexity may infinite lemmas remain valid every tree dominated tree finitely generated stabilizers see corollary end proof proposition need know subgroup slender contained extension virtually cyclic group generally suppose underlying orbifold contains essential simple geodesic claim slender image virtually cyclic see assume extended boundary subgroup proposition consider splitting dual geodesic elliptic group acts line virtually cyclic edge stabilizers virtually abelian hence virtually cyclic embeds proves claim slender flexible groups subsection consider slender flexible vertex group jsj tree whenever acts tree fix point unique line action line gives rise map isom whose image isomorphic orientation preserved natural analogue theorem would following statement maps onto fundamental group compact euclidean fiber incident edge groups finite image note euclidean whose fundamental group virtually cyclic empty boundary acts tree fix point action invariant line factors unfortunately assertion statement correct consider splittings replace two assertions proposition let theorem theorem let slender flexible vertex group jsj tree extension virtually incident edge groups contained finite extensions finite groups virtually cyclic map whose image finite index incident edge groups finite image acts tree fix point action invariant line factors first assertion claim actions factor fundamental group fiber universally elliptic second one maps euclidean orbifold claim kernel first assertion requires first half stability condition second one requires stability condition proposition remains true assumed slender example group infinitely many slender splittings given morphisms klein bottle group hti hvi exactly two splittings corresponding two presentations see subsection correspond morphisms respectively beeker classified flexible groups occur abelian jsj decomposition fundamental group graph free abelian groups particular exhibits twisted klein bottle group splits amalgam also hnn extension see proposition proof sketch proof flexible admits two different splittings relative inch given epimorphisms equal image virtually cyclic virtually kernel also kernel ker ker incident edge group image order universally elliptic hence elliptic corresponding splitting consider actions line restrictions actions trees view homomorphisms may infinitely many let ker subgroup consisting elements always acting identity let shall show embeds finite index maps induce maps ker trivial let subgroup consisting elements always acting translations abelian torsionfree finite index contains intersection subgroups index contains commutator subgroup let rank induces subgroup hom generated finite index ker trivial choose finite family corresponding basis show product map injective image finite index let infinite order mapped hence hence hence ker finite order order maps nontrivial reflection find mapped translation element previous case ker follows injective image finite index virtually composing quotient map yields map whose image finite index assertion incident edge groups universally elliptic image order implies image finite last claim obvious way defined remark similar arguments show finitely generated group residually finite index 
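The two splittings of the Klein bottle group mentioned above are worth recording explicitly; the presentations are standard and the letters are the editor's:
\[
K=\langle t,v \mid t v t^{-1}=v^{-1}\rangle\;\cong\;\langle x,y \mid x^{2}=y^{2}\rangle,
\qquad x=t,\ \ y=tv .
\]
The first presentation exhibits \(K\) as an HNN extension of \(\langle v\rangle\cong\mathbb Z\) with stable letter \(t\) acting by inversion; the second exhibits it as the amalgam \(\langle x\rangle\ast_{\langle x^2\rangle=\langle y^2\rangle}\langle y\rangle\) of two copies of \(\mathbb Z\) over a common index-two subgroup. These are the two splittings corresponding to the two presentations referred to in the example.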
subgroup direct product whose factors isomorphic part acylindricity previous chapters studied jsj deformation space existence guaranteed dunwoody accessibility requires finite presentability chapter propose construction based idea acylindrical accessibility sela defined tree fixed point sets elements diameter bounded since allow torsion following definition better adapted definition acylindrical tree pointwise stabilizer every arc length finite cardinality example let subsection splitting dual family geodesics geodesics underlying orbifold vertex group splitting dual acylindrical fiber finite order acylindrical accessibility bounds number orbits edges tree assumption finitely generated since finite presentability longer required allows instance construct jsj decompositions hyperbolic groups relative infinite collection infinitely generated subgroups jsj decompositions finitely generated csa groups order approach work must able produce tree uniform constants arbitrary tree using tree cylinders introduced additional benefit construction available produces canonical tree deformation space invariant automorphisms see applications construction constructions canonical decompositions uses canonical decomposition general construction another invariant tree based compatibility splittings given part compatibility jsj tree tree cylinders introduced section section describes construction jsj decompositions based acylindricity applications given section unless indicated otherwise assume finitely generated may arbitrary trees cylinders previous chapters defined studied jsj deformation space consisting jsj trees much better able find canonical tree key example provided hyperbolic groups bowditch constructs virtually cyclic jsj tree using topology boundary construction unfortunately always possible find invariant jsj tree instance free jsj deformation space consists trees free action see subsection see subsection easy check tree trivial one point section describe construction collapsed tree cylinders certain conditions associates new nicer tree given tree first feature new tree depends deformation space second feature interest suitable assumptions acylindrical uniform constants lies least deformation space different new tree smally dominated sense definition third feature lies compatibility properties sense common refinements see subsection used chapter provide examples compatibility jsj trees results section next jsj decompositions constructed summed corollary gives conditions ensuring canonical jsj tree flexible vertices finite fiber definition recall definition basic properties tree cylinders see details usual fix restrict let family infinite groups applications assume relative trees edge stabilizers family stable taking subgroups sandwich closed definition admissible equivalence relation equivalence relation admissible relative following axioms hold gag gbg invariance conjugation nesting implies equivalence let tree relative infinite edge stabilizers fix respectively edge one equivalence class denoted let act conjugation stabilizer denoted examples studied detail later see section torsion free csa group instance limit group generally toral relatively hyperbolic group take set infinite abelian subgroups commutation relation abelian group maximal abelian subgroup containing relatively hyperbolic group small parabolic subgroups take set infinite elementary subgroups subgroup elementary virtually cyclic parabolic case equivalent small relation elementary group maximal elementary subgroup containing may 
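Recalling the acylindricity convention just introduced, in slightly more explicit form (the names of the constants are mine): a \(G\)-tree \(T\) is \((k,C)\)-acylindrical if
\[ \text{for every arc } I\subset T \text{ of length at least } k,\qquad |\operatorname{Stab}(I)|\le C, \]
where \(\operatorname{Stab}(I)\) denotes the pointwise stabilizer of \(I\). Sela's original definition, for torsion-free groups, is essentially the case \(C=1\); allowing \(C>1\) is what adapts the notion to groups with torsion, as in the example above, where the relevant constant involves the order of the fiber.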
also allow nonsmall parabolic subgroups provided include consider splittings relative parabolic groups set infinite virtually cyclic subgroups commensurability relation finite index group commensurator given admissible equivalence relation associate tree cylinders tree infinite edge stabilizers declare two edges equivalent groups assumed infinite union edges equivalence class subtree axiom subtree called cylinder two distinct cylinders meet one point equivalence class containing stabilizers edges denoted definition tree cylinders given tree edge stabilizers tree cylinders bipartite tree vertex set set vertices belong least two cylinders set cylinders edge equivalently one obtains replacing cylinder cone set vertices belonging another cylinder collapsed tree cylinders tree obtained collapsing edges whose stabilizer belong warning always clear equivalent definition tree edge stabilizers always belong also consider next lemma say relative trees minimal claim always irreducible trivial consist single point indeed irreducible compatible lemma implies one cylinder trivial contradiction stabilizer vertex stabilizer viewed vertex belong stabilizer vertex stabilizer equivalence class stabilizer edge elliptic infinite contains edge origin lies representative lemma dominates vertex stabilizer elliptic stabilizer equivalence class associated cylinder edge equivalence class hge depend deformation space containing sometimes say tree cylinders suppose stabilizer every equivalence class belongs edge stabilizers belong therefore tree equal collapsed tree cylinders assertion applies particular examples csa groups relatively hyperbolic groups proof consider vertex belongs two cylinders defines vertex vertex fixed belongs single cylinder fixes vertex corresponding shows dominates hence also collapse second assertion follows remarks made stabilizer vertex vertex stabilizer stabilizer vertex stabilizer equivalence class also note clear since stabilizer edge contained stabilizer assertion corollary contains three proofs third assertion sketch one domination map induces map mapping edge onto vertex edge image unique point fixed viewed vertex image either unique cylinder whose edge stabilizers equivalent viewed vertex unique point fixed stabilizers edges viewed vertex belong deformation space map induced map inverse assertion implies trees cylinders canonical elements deformation space particular corollary jsj tree relative invariant automorphism preserves figure tree tree cylinders following examples may viewed commutation commensurability indeed two edge stabilizers equivalent equal example tree graph groups subsection three punctured tori glued along boundaries see figure introduction equal tree cylinders cylinders tripods projecting bijectively onto example let tree graph groups pictured left hand side figure reproducing figure four vertices fundamental groups punctured tori edge mod carrying group equal boundary subgroup cylinder line vertices lifts mod setwise stabilizer shall call isomorphic acts translations edge stabilizers equal conjugacy group hyperbolic relative subgroup toral relatively hyperbolic group equal line collapsed point vertex type replaced edges joining point stabilizer isomorphic centralizer commensurator unlike tree invariant automorphisms acylindricity recall pointwise stabilizer every arc length cardinality see beginning part lemma assume exists two groups inequivalent intersection order applies particular examples csa groups relatively hyperbolic groups note relatively hyperbolic group 
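For reference, the construction just described can be summarised as follows (a restatement under the notation introduced above, with \(V_0,V_1\) my own labels): given an admissible relation \(\sim\) and a tree \(T\) whose edge stabilizers are infinite and lie in \(\mathcal E\), declare \(e\sim e'\) when \(G_e\sim G_{e'}\); each equivalence class of edges spans a subtree of \(T\), its cylinder. The tree of cylinders is the bipartite tree \(T_c\) with vertex set
\[ V(T_c)=V_0\sqcup V_1,\qquad V_0=\{v\in V(T):\ v\ \text{lies in at least two cylinders}\},\qquad V_1=\{\text{cylinders}\ Y\subset T\}, \]
and an edge joining \(v\in V_0\) to \(Y\in V_1\) whenever \(v\in Y\); equivalently, \(T_c\) is obtained from \(T\) by replacing each cylinder by the cone on the set of its vertices that also belong to some other cylinder.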
bound order finite subgroups see lemma example tree acylindrical cylinders lines fixed infinite cyclic group proof seen let segment length edges images edges let edge origin stabilizer belongs contains gei belongs equivalent gei since collapse find implies gei gej stabilizer fixes cardinality virtually cyclic groups play particular role context infinite group virtually cyclic finite index subgroup infinite cyclic group maps onto infinite dihedral group finite kernel definition cyclic given say cyclic maps onto kernel order equivalently acts simplicial line edge stabilizers order infinite virtually cyclic group cyclic order maximal finite normal subgroup hand finite subgroups cyclic group cardinality bounded recall subsection small resp fixes point end leaves line invariant resp acts group containing contained group small following lemma simple conceptually important says subgroups small virtually cyclic elliptic acylindrical trees lemma subgroup small tree elliptic cyclic proof smallness preserves line fixes end result clear acts line since edge stabilizers action order fixes end set elliptic elements kernel homomorphism every finitely generated subgroup elliptic fixes ray order acylindricity thus order small domination seen dominates lemma assumptions lemma domination may strict particular shown lemma small groups virtually cyclic tend elliptic instance example subgroup becomes elliptic unavoidable according lemma together subgroups conjugates group elliptic want deformation space close possible example motivates following definition applies arbitrary pair trees definition smally dominates let say smally dominates dominates edge stabilizers elliptic iii group elliptic small generally let family subgroups closed conjugation taking subgroups every small smally dominates every group elliptic belongs say remark useful describe flexible vertex stabilizers jsj trees proof theorem note first two conditions definition always satisfied tree collapsed tree cylinders following proposition basically says third condition holds groups small case make every tree acylindrical without changing deformation space forced lemma proposition let admissible equivalence relation let integer assume two groups inequivalent intersection order every stabilizer small one following holds every stabilizer belongs index group maps onto infinite edge stabilizers smally dominated assume furthermore subgroups small virtually cyclic elliptic belongs deformation space proof acylindricity comes lemma conditions small domination always satisfied pairs vertex stabilizers vertex stabilizers equal assumption subgroups elliptic satisfy condition iii words would smally dominate conclude smally dominated show belong deformation space assumptions see remark note holds may assume holds let edge suffices prove collapsing orbit change deformation space every edge collapse occurs star vertex group small claim elliptic assume otherwise consider subgroup elliptic fixes end fixes ray contradiction remaining possibility ruled holds acts dihedrally line case subgroup index fixes edge contradicting proved elliptic hence fixes unique vertex claim let initial edge segment contradicting thus edge incident stabilizer moreover since fixes proves lie deformation space smally dominates furthermore need show every elliptic assumption small case consider virtually cyclic edge contained stabilizer infinite finite index subgroup follows elliptic remark given definition assume every belongs groups virtually cyclic elliptic belongs deformation space proof 
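As a reminder of the structure theory being invoked here (standard facts about virtually cyclic groups, recorded in a hedged form): an infinite virtually cyclic group \(H\) has a unique maximal finite normal subgroup \(F\), and exactly one of
\[ H/F\cong\mathbb{Z}\qquad\text{or}\qquad H/F\cong D_\infty\cong(\mathbb{Z}/2)\ast(\mathbb{Z}/2) \]
holds; equivalently, \(H\) acts on a simplicial line with finite edge stabilizers, either by translations or dihedrally. The definition recalled above singles out the first case with a bound on the order of the kernel \(F\), and the bound on finite subgroups of such a group follows from this description.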
exactly compatibility tree cylinders strong compatibility properties useful construct compatibility jsj tree see theorem following fact general property tree cylinders recall two trees compatible common refinement lemma proposition compatible dominated also need following technical statement lemma let relative see subsection let admissible equivalence relation every stabilizer particular every group small let integer suppose contains cyclic groups let refining assume vertex stabilizer small possibly elliptic finite fiber cardinality refines assumption vertex stabilizers holds particular jsj decomposition whose flexible vertices small fiber cardinality dominates map sends cylinder cylinder induces cellular map maps vertex vertex edge vertex edge independent lemma point lemma collapse map proof one passes successively collapsing orbits edges arbitrary order starting perform collapses change deformation space long possible since trees deformation space tree cylinders allows assume proper collapse refining belongs deformation space ensures vertex action preimage minimal may point fix consider tree obtained collapsing edges mapped vertex orbit thus embeds view minimal subtree show refines vertex stabilizers small fiber cardinality lemma follows inductive argument considering may assume point otherwise first suppose small implies line subtree fixed end edges belong cylinder particular elliptic tree cylinders hence thus domination maps recalled lemma maps induce equivariant cellular maps collapsed trees cylinders see lemma equality maps increase translation lengths follows theorem isomorphisms one may also show directly injective segment joing two vertices stabilizer thus vertex stabilizers small vertex stabilizers contained conjugate second case finite fiber underlying orbifold consider action every component used lemma lemma action dominated action dual family geodesics fact equal action edge stabilizers small assumption hence virtually cyclic assertion proposition assumption cyclic groups ensures groups particular vertices fiber claim cylinder containing edge preimage entirely contained implies refines remark therefore refines thus completing proof let arbitrary edge given edge prove suppose cylinders connected assume endpoint collapsed since finite fiber infinite image orbifold group infinite therefore contained finite index boundary subgroup geodesic contains preimage fundamental group since hge small contradicts second assertion proposition since virtually cyclic proves claim hence lemma constructing jsj decompositions using acylindricity using acylindrical accessibility show subsection one may construct jsj deformation space relative describe flexible subgroups provided every deformation space contains acylindrical tree uniform constants subsection prove results weaker assumption every tree smally dominates acylindrical tree section combine proposition ensures existence trees already mentioned acylindrical accessibility bounds number orbits edges acylindrical trees prevent existence infinite sequences refinements example consider hyperbolic groups fix define let tree amalgam refined splitting refinement belongs deformation space tree trees lie distinct deformation spaces although true example obvious general splits intersection groups conjugates proof subsection therefore uses acylindrical accessibility also arguments sela proof involving actions obtained taking limits splittings similar considered refer reader appendix basic facts particular use compactness space projectivized length 
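The form of acylindrical accessibility used here can be summarised as follows (a paraphrase only, with the precise dependence of the constant left to the references cited above): if \(G\) is finitely generated and \(T\) is a minimal \((k,C)\)-acylindrical \(G\)-tree without redundant vertices, then the number of orbits of edges of \(T\) is bounded by a constant depending only on \(G\) (for instance on the number of its generators), on \(k\) and on \(C\). Sela's original statement is for torsion-free groups and \(C=1\). It is this uniform bound, rather than finite presentability, that rules out infinite sequences of proper refinements such as the one exhibited in the example above.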
functions paulin theorem gromov topology axes topology agree space irreducible even space semisimple trees see theorem usual family stable conjugation taking subgroups family may arbitrary assumed finitely generated would enough assume finitely generated relative finite collection subgroups subgroups uniform acylindricity recall tree segments length stabilizer cardinality belongs deformation space see general control main assumption section following fix assume tree deformation space goal deduce existence jsj deformation space section first step towards theorem allow lie deformation space example suppose csa group instance toral relatively hyperbolic group class abelian subgroups one take tree cylinders first example subsection provided contains abelian subgroups generally subgroups contained group conjugacy guarantees lies deformation space see proposition general case assumption abelian subgroups taken care subsection recall definition group cyclic maps kernel cardinality also recall assumed finitely generated restriction collection subgroups theorem given suppose exist numbers contains cyclic subgroups subgroups order deformation space jsj deformation space relative exists moreover groups small flexible vertices fiber cardinality see subsections definitions small flexible theorem proved subsections start general lemma lemma finitely generated group split subgroups order relative family exists finite family finitely generated group contained group split subgroups order relative special case lemma proved proof trees proof stabilizers family consisting subgroups order note trees universally elliptic splitting equivalent jsj deformation space trivial let enumeration finitely generated subgroups contained group pointed subsection linnell accessibility admits jsj decompositions let jsj tree relative show lemma proving trivial large lemma tree relative may refined jsj tree relative fix may therefore find trees jsj tree relative refines linnell accessibility theorem uniform bound number orbits edges assumed redundant vertices deformation space number upper bound number belong different deformation spaces large trees belong deformation space elliptic subgroups since every elliptic every otherwise would fix unique end edge stabilizers would increase infinitely many times along ray going end hypothesis made implies trivial also note lemma suffices prove theorem additional hypothesis split subgroups order relative condition proof let denote family subgroups order mentioned previous proof linnell accessibility implies existence jsj tree relative apply proposition existence jsj deformation space lemma assume split subgroups order relative subsection prove first assertion theorem flexible vertices studied next subsection construct universally elliptic tree dominates every universally elliptic tree course trees universal ellipticity defined respect countability allows choose sequence universally elliptic trees elliptic every elliptic every universally elliptic tree assertion lemma suffices dominates every inductively replacing standard refinement dominating proof theorem may assume dominates particular free replace subsequence needed let tree deformation space distance function denoted view metric tree edges length let translation length function see appendix proof two main steps first assume sequence bounded construct universally elliptic dominates every tree jsj tree second step deduce contradiction assumption sequence unbounded bounded pass subsequence limit possibly since set length functions trees 
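Two notions from the appendix are used constantly from this point on, so it may help to recall them explicitly (standard formulations, stated here as a hedged summary): the translation length of \(g\in G\) in a \(G\)-tree \(T\) is
\[ \|g\|_T=\min_{x\in T} d(x,gx), \]
so that \(g\) is elliptic exactly when \(\|g\|_T=0\), and by Culler and Morgan the function \(g\mapsto\|g\|_T\) determines a minimal irreducible \(G\)-tree up to equivariant isometry. The compactness invoked below is that, for \(G\) finitely generated, the image of the set of nontrivial minimal \(G\)-trees in the projective space \(\mathbb{P}(\mathbb{R}^G)\) of projectivized length functions has compact closure, which is what allows limits to be extracted from the sequences of trees considered in the proof.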
closed theorem length function associated action takes values tree simplicial possibly edges length see example general show relative assume trees lemma irreducible except virtually cyclic case theorem clear hyperbolic dominates hyperbolic since every irreducible exist hyperbolic hence also irreducible converges equivariant topology see theorem claim anything edge stabilizers study vertex stabilizers claim subgroup elliptic elliptic every particular every elliptic dominates every claim true finitely generated since elliptic elliptic every infinitely generated fix finitely generated subgroup cardinality elliptic elliptic otherwise fixes unique end fixes infinite ray contradicting acylindricity conversely elliptic every group fixes infinite ray convergence gromov topology fixes segment length large contradicts acylindricity thus proving claim return trees dominated since trees edge stabilizers aell family groups universally elliptic since aell stable taking subgroups equivariant map factors tree obtained collapsing edges stabilizer aell tree universally elliptic dominating every hence every universally elliptic tree jsj tree suppose unbounded work towards contradiction since set projectivized length functions finitely generated group compact theorem may assume converges length function sequence theorem converges gromov topology irreducible take line irreducible lemma subgroup order elliptic every fixes unique point particular elements elliptic tripod stabilizers cardinality arc stabilizer order cyclic let two arcs cyclic proof recall dominates subgroup acting also acts prove may assume finitely generated elliptic elements fix arc otherwise since converges gromov topology would fix long segment large contradicting acylindricity using gromov topology example one sees finitely generated subgroup fixing tripod fixes long tripod large cardinality acylindricity proves prove consider group fixing arc write length suffices show small depending whether elliptic every assertion lemma gives required conclusion contains free group choose elements generating free subgroup rank choose large exist approximations distance least contained characteristic set axis fixed point set additionally translation length every large enough commutators fix segment contradicting acylindricity prove assertion know acts large choose hyperbolic element suppose fix endpoint say argue towards contradiction let axis gai large translation lengths small compared elements fix common long arc acylindricity exist commutes follows preserves axis therefore moves contradiction since goes sela proof acylindrical accessibility comes play describe structure use generalization given theorem allows tripod stabilizers lemma shows stabilizers unstable arcs tripods cardinality assumed split subgroup cardinality relative hence also relative finite family lemma follows graph actions theorem order reach desired contradiction rule several possibilities first consider vertex action decomposition given line acts dense orbits finitely generated group assertion lemma contains finitely generated subgroup mapping onto finite virtually cyclic kernel acting group acts large contradicting lemma suppose kernel action dual arational measured foliation conical singularities order since fixes tripod consider splitting relative dual simple closed curve splitting cyclic group particular since hyperbolic also hyperbolic hence large enough hand universally elliptic elliptic respect remark see assertion lemma splits relative infinite index subgroup group order fact 
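The convergence used in this argument is convergence in the equivariant Gromov topology; as a reminder (a standard formulation, with the quantifiers written out): \(T_n\to T\) means that for every finite subset \(K\subset T\), every finite subset \(S\subset G\) and every \(\varepsilon>0\), for all sufficiently large \(n\) there are points \(x_n(a)\in T_n\), one for each \(a\in K\), with
\[ \bigl|\,d_{T_n}\bigl(s\,x_n(a),x_n(b)\bigr)-d_T(sa,b)\,\bigr|\le\varepsilon \qquad\text{for all } a,b\in K,\ s\in S. \]
This is the mechanism by which a subgroup fixing a long arc or a tripod in the limit tree is shown to nearly fix a corresponding configuration in the approximating trees, so that the acylindricity of those trees bounds its cardinality.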
case contradicting assumptions theorem remaining possibility simplicial tree edge stabilizers cyclic edge stabilizers hyperbolic large leads contradiction previous case concludes proof first assertion theorem description flexible vertices know jsj decomposition exists prove second assertion theorem groups small flexible vertices fiber cardinality arguments similar used section one key difference since assume finite presentability lemma constructing tree maximal domination using dunwoody accessibility shall use acylindrical accessibility instead subsection may assume totally flexible exist nontrivial none universally elliptic indeed prove flexible vertex jsj decomposition enough study inch instead note know advance jsj trees finitely generated edge stabilizers allow infinitely generated groups inch even groups finitely generated fact totally flexible implies edge stabilizer tree cyclic since small follows lemma applied action elliptic may therefore replace family consisting cyclic subgroups finite subgroups satisfies stability condition definition family subgroups order free use results subsections well proposition splittings clearly minuscule subsection needed acylindrical accessibility bound number orbits edges tree among trees redundant vertex consider tree whose number orbits edges maximal substitute maximal tree provided lemma proposition yields fully hyperbolic respect let regular neighborhood proposition recall proposition stabilizer vertex slender slender maps onto group virtually hand must virtually cyclic lemma acts hyperbolically contradiction shows must fiber cardinality proof theorem thus suffices show point one obtains refining vertices collapsing edges let common refinement edge collapsed lcm subsection let tree redundant vertex deformation space consider commensurability classes edge stabilizers since dual geodesics vertices redundant vertex edges different orbits stabilizers every edge stabilizer also edge stabilizer next observe edge exists edge commensurable indeed since dominates exists lemma cardinality greater since cyclic implies infinite hence commensurable given tree denote number orbits edges number equivalence classes orbits edges two orbits equivalent contain edges commensurable stabilizers proved maximality property implies inequalities equalities every edge stabilizer commensurable edge stabilizer point let edge stabilizer elliptic hand edge stabilizer commensurable edge stabilizer contradicts full hyperbolicity respect thus completing proof theorem splittings virtually cyclic groups generalizing theorem next subsection give application previous arguments section proved certain flexible groups finite family finitely generated subgroups finitely presented relative acylindricity allow remove assumptions splittings virtually cyclic groups based following lemma stability conditions total flexibility defined section lemma assume groups virtually cyclic satisfies one stability conditions scz totally flexible tree acylindrical depending proof since groups virtually cyclic may assume making smaller needed groups finite total flexibility implies relative trees minuscule course also follows proposition proposition exists tree fully hyperbolic respect let regular neighborhood tree dual families geodesics orbifolds underlying vertex groups follows maximum order corresponding fibers see example beginning part lemma allows argue previous subsection provided bound order groups particular get following strengthening theorem theorem let finitely generated group let arbitrary 
family subgroups let class finite cyclic subgroups assume exists jsj tree relative flexible vertex stabilizer virtually trivial fiber moreover underlying orbifold mirror every boundary component used contains essential simple closed geodesic proof argument subsection one first reduces case totally flexible since trees lemma acylindrical accessibility applies acylindricity small groups section generalize theorem instead requiring every deformation space contains tree require every tree smally dominates tree definition recall assumed finitely generated restriction collection subgroups theorem given suppose exist numbers contains cyclic subgroups subgroups cardinal every smally dominates jsj deformation space relative exists assume groups small flexible vertex groups small fiber cardinality generally definition always flexible vertex groups belong fiber cardinality abelian jsj decomposition canonical relative abelian jsj abelian jsj constructed figure jsj decompositions csa group relative means relative abelian subgroups example applies instance toral relatively hyperbolic group generally csa group family abelian subgroups take tree cylinders equal see section examples particular let return example figure shows quotient graphs groups three trees first one abelian jsj tree next one tree cylinders also jsj tree relative abelian subgroups constructed proof theorem see corollary last one jsj tree constructed proof obtained refining vertices stabilizer lemma another jsj tree deformation space rest section devoted proof theorem assume thanks lemma split groups order relative assumption write family subgroups small family definition write snvc groups cyclic lemma suppose tree vertex stabilizer elliptic belongs snvc particular cyclic let subgroup elliptic elliptic contained group snvc particular trees belong deformation space assume reduced every edge stabilizer subgroup index fixing edge particular universally elliptic recall see subsection tree reduced proper collapse lies deformation space equivalently edge different orbits satisfies since one may obtain reduced tree deformation space collapsing edges loss generality assuming smally dominated trees reduced proof clearly assume cyclic stabilizers edges incident infinite index since elliptic order contradicts assumption note trivial would cyclic also contradicting direction assertion follows lemma conversely assume elliptic vertex fixed snvc assertion assertion let endpoints edge first suppose elliptic small preserves line fixes end since elliptic subgroup index fixes edge fixes two distinct points fixes edge may therefore assume fix unique point fixed points different fixes edge otherwise hgu fixes point therefore elliptic since reduced acting hyperbolically maps conjugates unless collapsing edge yields tree deformation space element fixes elliptic contradiction corollary snvc dominates dominates snvc lies deformation space particular theorem applies snvc also deduce corollary subgroup small snvc also small elliptic every tree relative snvc elliptic every tree relative proof small belong elliptic assertion lemma lemma small snvc elliptic lemma elliptic thanks third assertion corollary may apply theorem get corollary exists jsj tree relative snvc assume reduced think relative relative snvc example snvc class abelian subgroups jsj tree abelian groups relative abelian subgroups lemma reduced elliptic proof let edge elliptic argue towards contradiction may assume one orbit edges first step show dominates since relative snvc group fixes vertex snvc 
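Since reduced trees play a role in all the arguments that follow, it may help to record the definition in explicit form (the equivalence below is the standard one, stated with a light hedge): a tree \(T\) is reduced if no proper collapse of \(T\) lies in the same deformation space; equivalently, for every edge \(e=uv\) whose endpoints lie in distinct \(G\)-orbits one has
\[ G_e\subsetneq G_u \quad\text{and}\quad G_e\subsetneq G_v , \]
since an edge with \(G_e=G_u\) and \(u\notin Gv\) could be collapsed without leaving the deformation space.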
universal ellipticity unique edge stabilizers elliptic also note snvc assertion lemma may assume point snvc point contains edge stabilizer hence stabilizer edge since equivariant map group trivially snvc elliptic snvc since single orbit edges snvc elliptic relative snvc hand snvc snvc elliptic assertion lemma maximality jsj dominates recall unique fixed point denote endpoints since dominates groups fix hga snvc elliptic proof lemma acting hyperbolically maps reduced fixes belongs snvc contradiction since relative snvc shall construct jsj tree relative refining reduced jsj tree relative snvc vertices small stabilizer jsj tree thought absolute relative snvc lemma exists jsj tree relative may obtained refining vertices stabilizer particular set vertex stabilizers belonging moreover lies deformation space proof let vertex shall prove existence jsj tree relative family inch consisting incident edge groups subgroups conjugate group see definition proposition applies elliptic lemma one obtains jsj tree relative refining using trees elliptic every elliptic tree jsj trivial see lemma refinement needed assume therefore elliptic consider snvc elliptic assertion lemma hence snvc elliptic dominated particular elliptic belongs let minimal subtree exists proposition lemma since incident edge stabilizers elliptic small one deformation space containing universally elliptic tree proposition applying splittings relative inch deduce jsj tree relative incv shows first two assertions lemma show moreover since snvc snvc universally elliptic third assertion lemma dominated conversely dominates therefore dominates corollary dominates since lie deformation space corollary remark corollary type vertex stabilizers rigid flexible relative relative snvc conclude proof theorem jsj deformation space relative exists lemma description flexible vertex groups follows theorem since jsj trees relative snvc vertex stabilizers lemma applications recall family infinite groups combining proposition theorem yields corollary let finitely generated group given let admissible equivalence let family groups contained assume relative exists integer contains cyclic subgroups subgroups cardinal two groups inequivalent intersection order every stabilizer small hence every element one following holds every stabilizer belongs index group maps onto jsj tree relative collapsed tree cylinders jsj tree relative snvc snvc family groups cyclic vertex stabilizers flexible vertex stabilizers belong fiber cardinality canonical jsj tree relative snvc particular invariant automorphism preserving compatible every proof assumption edge stabilizers collapsed tree cylinders defined proposition remark theorem applies taking groups small assumption exists jsj tree obtained lemma flexible vertex stabilizers fiber cardinality lemma states belongs deformation space jsj tree relative snvc obtained refining vertices stabilizer proves first two assertions corollary third one follows corollary fourth assertion let since universally elliptic exists refinement dominating proposition lemma common refinement since refinement lemma tree common refinement remark result remains true enlarge keeping invariant conjugating taking subgroups long groups small remark corollary assumption usually necessary assertions provided relative space exists first assumption corollary ensures propositions apply flexible vertex stabilizers belong particular corollary flexible vertex stabilizer belonging underlying orbifold contains essential simple geodesic every boundary component used 
every universally elliptic subgroup contained extended boundary subgroup fix point action minimal subtree dual family geodesics proof follows directly propositions noting edge stabilizers action virtually cyclic second assertion proposition following sections going describe examples corollaries apply first treat case abelian splittings csa groups allow torsion introduce groups subsection describe jsj decomposition virtually abelian groups consider elementary splittings relatively hyperbolic groups splittings virtually cyclic subgroups assumption subgroups small commensurators conclude defining zmax decomposition hyperbolic groups csa groups first application csa group consider abelian cyclic splittings recall csa commutation relation transitive maximal abelian subgroups malnormal toral relatively hyperbolic groups particular limit groups hyperbolic groups csa see example figure illustration let either family abelian subgroups family cyclic subgroups freely indecomposable relative commutation admissible equivalence relation see lemma define trees cylinders groups maximal abelian subgroups small trees abelian groups class abelian subgroups edge stabilizers belong since every abelian cyclic groups may edge stabilizers use obtained collapsing edges stabilizers theorem let finitely generated csa group family subgroups assume freely indecomposable relative abelian resp cyclic jsj tree relative collapsed tree cylinders commutation jsj tree relative abelian subgroups vertex stabilizers flexible vertex stabilizers fundamental groups compact surfaces invariant automorphisms preserving compatible every proof apply corollary consisting abelian resp cyclic subgroups family abelian subgroups since vertex groups trivial fiber underlying orbifold surface groups groups notion csa groups groups torsion shall introduce groups integer every hyperbolic group universal property particular groups say group abelian contains abelian subgroup index note infinite dihedral group cyclic sense definition abelian usual group locally abelian finitely generated subgroups abelian lemma countable group locally abelian abelian proof let numbering elements let abelian subgroup index given finitely many subgroups index subsequence ani ani independent diagonal argument one produces abelian subgroup whose intersection index index definition say finite subgroup cardinality particular element order infinite order element infinite order contained unique maximal virtually abelian group abelian normalizer group csa group klein bottle group hyperbolic group since finite subgroups bounded order finitely many isomorphism classes virtually cyclic groups whose finite subgroups bounded order see lemma proof corollary say groups also lemma let group infinite order following conditions equivalent commuting powers commute virtually abelian infinite virtually abelian subgroup contained unique maximal virtually abelian group group abelian almost malnormal infinite proof assertion clear since index prove commutes normalizes clearly proves assertion virtually abelian contains element infinite order define assertion depend choice prove consider since infinite order equals normalizer similar argument shows almost malnormality remains prove uniqueness virtually abelian group containing defined coincides one easily checks subgroup group still fact consequence following proposition saying universal property refer topological space marked groups relation universal theory proposition fixed class groups defined set universal sentences universal property 
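For the reader's convenience, the two smallness-of-commutation conditions used in this subsection can be restated as follows (the CSA condition is standard; the torsion version is paraphrased here only roughly and in my own wording): \(G\) is CSA if every maximal abelian subgroup \(A\le G\) is malnormal,
\[ gAg^{-1}\cap A=\{1\}\qquad\text{for all } g\notin A, \]
so that commutation is a transitive relation on \(G\setminus\{1\}\) and every nontrivial element lies in a unique maximal abelian subgroup. The torsion version sketched above replaces abelian subgroups by virtually abelian ones: roughly, all finite subgroups have order at most \(K\), every element \(g\) of infinite order lies in a unique maximal virtually abelian subgroup \(M(g)\) containing an abelian subgroup of index at most \(K\), and \(M(g)\) is almost malnormal, so that distinct conjugates of \(M(g)\) intersect in finite subgroups.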
particular class groups stable taking subgroups closed space marked groups proof finite group fact contain subgroup isomorphic equivalent universal sentence saying satisfying multiplication table distinct thus first property groups defined infinitely many universal sentences consider second property claim given fact abelian may expressed disjunction vam finitely many finite systems equations elements see let homomorphism sending generator free group index conversely index define vam enumerate subgroups index subgroup choose finite set generators write system equations proves claim lemma zorn lemma contained maximal abelian subgroup second property definition restated follows finitely generated virtually abelian group abelian abelian order abelian defined set universal sentences constructed using vam first two properties definition hold third one expressed saying order abelian set universal sentences well recall group defined limit subgroups space marked groups proposition implies group particular corollary let hyperbolic group exists group moreover subgroup group contains free subgroup abelian remark use moreover additional restrictions virtually abelian subgroups instance exists infinite order infinite order proof first assertion immediate proposition let infinite subgroup containing proposition exists number distinct elements element form infinite order order universal statement also holds contains element infinite order recall exists number commute generate see statement holds since word universal statement holds hence thus elements commute lemma normalizes abelian let group show define tree cylinders virtually abelian splittings hence also virtually cyclic splittings definition virtual commutation let family virtually abelian subgroups family infinite subgroups given define equivalence relation call virtual commutation equivalently virtually abelian stabilizer equivalence class action conjugation virtually abelian group lemma relative equivalence relation admissible see definition proof edge stabilizers first two properties admissibility obvious consider fixes fixes since group generated two commuting elliptic groups elliptic finite index subgroups fixes point given edge segment contained say required theorem let group family subgroups assume relative jsj tree relative virtually abelian resp virtually cyclic subgroups collapsed tree cylinders virtual commutation jsj tree relative virtually abelian subgroups virtually cyclic abelian vertex stabilizers flexible vertex stabilizers virtually abelian finite fiber invariant automorphisms preserving compatible every proof apply corollary family virtually abelian resp virtually cyclic subgroups virtual commutation family virtually abelian subgroups relatively hyperbolic groups subsection assume hyperbolic relative family finitely generated subgroups recall subgroup parabolic conjugate subgroup elementary virtually cyclic possibly finite parabolic infinite elementary subgroup contained unique maximal elementary subgroup following lemma folklore lemma let relatively hyperbolic group exists elementary subgroup cardinality contained unique maximal elementary subgroup moreover parabolic finite virtually cyclic parabolic finite cardinality cyclic elementary subgroups cardinality proof first assertion contained lemma virtually cyclic cyclic maximal finite normal subgroup cardinality thus parabolic since proves second assertion third assertion immediately follows first definition say two infinite elementary subgroups elementary equivalently 
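The equivalence relation just introduced can be written out explicitly (a restatement under the conventions of this subsection, claiming nothing beyond the definition above): for infinite virtually abelian subgroups \(A,B\le G\),
\[ A\sim B \iff \langle A,B\rangle\ \text{is virtually abelian}. \]
As noted above, the stabilizer of the equivalence class of \(A\) under conjugation is then itself a virtually abelian subgroup, namely the maximal one containing \(A\), and this is what makes the relation usable for forming trees of cylinders in this setting.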
equivalence relation set infinite elementary subgroups let either class elementary subgroups class virtually cyclic groups cases equivalence relation lemma let family elementary subgroups resp virtually cyclic subgroups let family subgroups every small equivalence relation admissible relative proof fix infinite edge stabilizers assume fix respectively must show every edge clear parabolic hence virtually cyclic since infinite finite index contained assume therefore contained assumption small distinguish several cases fixes point edge contained contains equivalent argument fixes end infinity last case acts dihedrally line let projections fixed respectively contained contains may assume subgroup index fixes pointwise lemma allows define trees cylinders stabilizer equivalence class action conjugation small every note small contain contained group class elementary subgroups collapsing necessary hand class virtually cyclic subgroups virtually cyclic one may may proper collapse theorem let hyperbolic relative family finitely generated subgroups virtually cyclic let class elementary subgroups resp virtually cyclic subgroups let family subgroups relative every small jsj tree relative elementary resp virtually cyclic subgroups collapsed tree cylinders jsj tree relative vertex stabilizers flexible vertex stabilizers elementary finite fiber invariant automorphisms preserving compatible every hyperbolic tree virtually cyclic jsj tree constructed bowditch using topology removing virtually cyclic groups destroy relative hyperbolicity assumption virtually cyclic makes statements simpler causes loss generality proof apply corollary lemma family elementary subgroups use remark work virtually cyclic groups torsion group groups snvc parabolic every snvc particular tree relative snvc relative automorphisms preserving preserve set elementary subgroups corollary applies assumption parabolic groups small automatic soon contains since consider splittings relative therefore get corollary let hyperbolic relative finite family finitely generated subgroups let family elementary subgroups let family subgroups containing relative jsj tree relative equal tree cylinders invariant automorphisms preserving compatible every flexible vertex stabilizers finite fiber particular corollary let hyperbolic relative finite family finitely generated subgroups let family elementary subgroups relative jsj tree relative equal tree cylinders invariant automorphisms preserving compatible every flexible vertex stabilizers finite fiber note finitely presented relative existence jsj tree also follows theorem virtually cyclic splittings subsection consider splittings virtually cyclic groups assuming smallness commensurators let family virtually cyclic possibly finite subgroups family infinite virtually cyclic subgroups recall two subgroups commensurable finite index commensurability relation admissible relation see one define tree cylinders stabilizer equivalence class group commensurator comm consisting elements gag commensurable corollary yields theorem let family virtually cyclic subgroups let set subgroups relative let set subgroups commensurators infinite virtually cyclic subgroups assume bound order finite subgroups groups small virtually cyclic jsj tree relative collapsed tree cylinders commensurability virtually cyclic jsj tree relative groups virtually cyclic vertex stabilizers flexible subgroups commensurate infinite virtually cyclic subgroup finite fiber invariant automorphisms preserving compatible every remark applies csa group 
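Two of the equivalence relations appearing here can be made explicit as follows (standard formulations, recorded with a light hedge): for infinite elementary subgroups \(A,B\) of a relatively hyperbolic group, co-elementarity is
\[ A\sim B \iff \widetilde A=\widetilde B \iff \langle A,B\rangle\ \text{is elementary}, \]
where \(\widetilde A\) is the unique maximal elementary subgroup containing \(A\); and for infinite virtually cyclic subgroups of an arbitrary group, commensurability is
\[ A\sim B \iff [A:A\cap B]<\infty\ \text{and}\ [B:A\cap B]<\infty, \]
the stabilizer of the class of \(A\) being the commensurator \(\operatorname{Comm}_G(A)\), the set of \(g\in G\) such that \(gAg^{-1}\cap A\) has finite index in both \(A\) and \(gAg^{-1}\).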
group relatively hyperbolic group whose finite subgroups bounded order long parabolic subgroups small trees cylinders given commutation commensurability belong deformation space follows lemma also theorem let commutative transitive let family subgroups freely indecomposable relative cyclic jsj tree relative collapsed tree cylinders commensurability jsj tree relative subgroups isomorphic solvable group vertex stabilizers flexible subgroups surface groups unless invariant automorphisms preserving compatible every recall commutative transitive commutation transitive relation remark apply corollary directly claim commensurator cyclic subgroup hai note however metabelian commensurates relation mapping defines map whose kernel centralizer abelian group proof proposition tree cyclic edge stabilizers vertex stabilizer collapsed tree cylinders elliptic solvable group commutative transitivity particular consisting groups contained subgroup may therefore apply theorem taking argue proof corollary using lemmas lemma applies since contain group flexible except flexible groups surface groups zmax decomposition section hyperbolic group consider splittings virtually cyclic subgroups necessarily infinite simplicity assume theorem yields tree tree cylinders jsj tree tree jsj tree canonical particular invariant automorphisms flexible vertex stabilizers finite fiber fact tree constructed bowditch noticed several authors sometimes useful replace slightly different tree whose edge stabilizers maximal virtually cyclic subgroups infinite center motivate recall strong connection splittings automorphisms paulin theorem combined rips theory actions splits virtually cyclic subgroup whenever infinite conversely suppose virtually cyclic similar discussion hnn extensions belongs center defines dehn twist automorphism conjugation identity always imply infinite see two reasons first even though infinite center may finite instance infinite dihedral see subsection second power centralizes image finite order therefore consider set subgroups virtually cyclic infinite center family zmax consisting maximal elements inclusion true infinite splits group zmax say subgroup zmax belongs zmax denote unique zmax containing pointwise stabilizer pair points fixed tree zmax edge stabilizers zmax definition zmax tree zmax zmax tree elliptic respect every zmax maximal domination property beware zmax stable taking subgroups fit usual setting zmax trees belong deformation space zmax deformation space subsection follows proposition need know given zmax standard refinement dominating zmax see consider edge image edge edge stabilizer zmax contained preimage vertex group elliptic contains finite index fixes segment fixed hence zmax deduce fixes therefore leaves invariant fact maps injectively zmax implies fixes since fixes every edge image remark argument based following useful fact subgroup fixes vertex zmax contains group fixes section shall construct describe canonical zmax tree relation zmax infiniteness mentioned algorithmically computable computability usual jsj decomposition see lemma given one construct zmax tzmax following properties dominates tzmax every zmax dominated dominated tzmax every edge stabilizer tzmax finite index edge stabilizer example may happen tzmax trivial even though occurs instance corresponding splitting form hci proof let quotient smallest equivalence relation edges shall give alternative description shows zmax satisfying required properties however may happen minimal may even trivial define tzmax minimal subtree 
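To fix notation for this subsection (a restatement of the definition just given, for the hyperbolic group \(G\) fixed above):
\[ \mathcal Z=\{Z\le G:\ Z\ \text{virtually cyclic with infinite center}\},\qquad \mathcal Z_{\max}=\{\text{maximal elements of}\ \mathcal Z\ \text{under inclusion}\}, \]
and every \(Z\in\mathcal Z\) is contained in a unique \(\widehat Z\in\mathcal Z_{\max}\). Passing from arbitrary virtually cyclic edge groups to \(\mathcal Z_{\max}\) edge groups is designed to rule out exactly the two phenomena discussed above: edge groups with finite center, and Dehn twists whose image in the outer automorphism group has finite order.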
lemma follows construct folding argue induction number edges let edge none zmax since group elliptic one endpoints fixed let tree obtained folding together edges every edge let first edge shortest path joining fix since path fixed one one fold edges cases obtain tree fewer orbits edges map factors lemma follows induction remark similar construction used section toral relatively hyperbolic group tree abelian edge stabilizers replaced tree whose edge stabilizers abelian stable taking roots denote family consisting subgroups finite subgroups alternatively one could include finite subgroups stable taking subgroups since edge stabilizers let canonical jsj tree provided theorem flexible vertex stabilizers finite fiber underlying orbifold mirrors see theorem let tzmax tree associated lemma zmax elliptic respect every zmax assertion lemma unfortunately always zmax tree illustrate klein bottle group course hyperbolic see hyperbolic examples pointed subsection cyclic jsj decomposition trivial flexible two cyclic splittings corresponding presentations hti hvi thus tzmax trivial trees note however amalgam zmax group generated maximal cyclic subgroup follows tree hnn extension elliptic respect every zmax zmax tree geometrically let flat klein bottle compact hyperbolic surface generally hyperbolic orbifold theorem allow cone points mirrors essential simple closed geodesic defines splitting cyclic group zmax see subsection zmax tree trivial every essential simple closed geodesic crosses transversely geodesic happens almost cases exceptions flat klein bottle klein bottle one conical point klein bottle one open disc removed flat klein bottle essential simple closed geodesics isotopic two exceptional cases hyperbolic unique essential simple closed geodesic construct canonical zmax tree orbifold underlying vertex canonical jsj tree exceptional one let tree obtained applying lemma suppose vertices klein bottle one conical point case klein bottle one open disc removed refine vertices using splitting dual unique essential geodesic apply construction lemma tree thus obtained proposition let hyperbolic group tree constructed canonical zmax tree proof first suppose exceptional vertex tzmax assertion lemma elliptic respect every zmax canonical canonical definition tzmax given first paragraph proof lemma involve choices need prove maximality vertex stabilizer tzmax elliptic every tree elliptic respect every zmax recall tree cylinders bipartite see subsection stabilizer tzmax maximal virtually cyclic subgroup elliptic contains edge stabilizer finite index two edges folded passing tzmax commensurable stabilizers belong cylinder star vertex implies stabilizer tzmax multiple amalgam tree groups representatives conjugacy classes incident edge stabilizers group clearly elliptic rigid also underlying orbifold zmax relative incident edge group universally elliptic follows elliptic remark argument exceptional vertices similar refining replaces exceptional vertices vertices whose underlying orbifold pair pants annulus conical point stabilizers split zmax relative boundary subgroups remark proof shows flexible vertex groups sockets also called sockets orbisockets groups roots added boundary subgroups precisely form vertex group representatives conjugacy classes boundary subgroups incident edge groups may missing tree defined proof lemma minimal remark applications model theory hyperbolic surfaces carry diffeomorphism play special role four pair pants projective plane klein bottle closed surface genus first two finite mapping 
class group two causes problems see proof proposition klein bottle appears vertex one may refine splitting explained using unique essential simple closed geodesic creates vertex based pair pants dehn twist around generates finite index subgroup mapping class group acts trivially every vertex group refined splitting similarly closed surface genus unique simple geodesic whose complement orientable see proposition core band whose complement torus unlike carries diffeomorphism mapping class group leaves invariant preserves cyclic splitting given decomposing union band isomorphic mapping class group refinements give canonical way modifying jsj decomposition torsionfree hyperbolic group described subsection surfaces appearing vertices mapping class group finite contains map part compatibility usual fix family subgroups stable conjugating taking subgroups another family consider work simplicial trees often view metric trees every edge length order apply results appendix instance fact compatibility passes limit freely use concepts appendix particular arithmetic trees subsection section defined jsj tree relative tree universally elliptic dominates every universally elliptic tree deformation space jsj deformation space djsj next section define compatibility jsj deformation space dco compatibility jsj tree tco deformation space dco contains universally compatible tree dominates every universally compatible tree tree tco preferred universally compatible tree dco particular invariant automorphisms preserving section give examples provided particular trees cylinders see section compatibility jsj tree recall subsection two trees compatible common refinement words exists tree collapse maps definition universally compatible tree universally compatible relative compatible every tree particular means tree obtained refining collapsing splitting either coincides splitting associated one edges one obtain refining vertex using splitting relative incident edge groups collapsing original edges definition compatibility jsj deformation space among deformation spaces containing universally compatible tree one maximal domination unique denoted dco called compatibility jsj deformation space relative prove uniqueness consider universally compatible trees corollary may assume irreducible compatible see lemma appendix universally compatible proposition belong maximal deformation spaces get lie deformation space proving uniqueness clearly universally compatible tree universally elliptic implies dco dominated djsj also note universally compatible edge stabilizer arbitrary tree elliptic tree elliptic respect existence compatibility jsj space theorem finitely presented relative family finitely generated subgroups compatibility jsj space dco relative exists heart proof theorem following proposition proposition let finitely presented relative family finitely generated subgroups let sequence refinements irreducible universally compatible trees exist collapses deformation space sequence converges universally compatible simplicial dominates every view trees metric edge length convergence space unlike proof theorem rescaling metric necessary tree may redundant vertices proof theorem proposition may assume universally compatible tree may also assume trees irreducible otherwise follows corollary see appendix one deformation space trees theorem trivially true let set isomorphism classes universally compatible trees find universally compatible tree dominates every lemma need dominate trees countable set let lcm see definition 
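Spelled out, the notions just defined read as follows (a formal restatement, claiming nothing new): a tree \(\widehat T\) is a common refinement of \(T_1\) and \(T_2\) if there are equivariant collapse maps
\[ p_i\colon \widehat T\to T_i\qquad(i=1,2), \]
each obtained by collapsing an invariant set of edges; \(T_1\) and \(T_2\) are compatible if such a \(\widehat T\) exists, and a tree is universally compatible when it is compatible with every tree of the class under consideration. The deformation space \(D_{co}\) is then the one, maximal for domination, that contains a universally compatible tree, and the uniqueness argument given above shows that this maximal space is well defined.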
universally compatible assertion proposition refines proposition yields desired tree dominates every hence every universally compatible corollary proof proposition dunwoody accessibility see proposition exists tree dominates every use finite presentability course claim universally compatible may assume minimal different gcd see definition independent define redundant vertices see remark denote quotient graphs groups let collapse map see figure denote collapse map trees belong deformation space strictly refines particular number edges grows idea following accessibility holds within given deformation space see page easy form accessibility requires smallness finite presentability hypothesis figure segments quotients trees case used directly implies growth occurs creation bounded number long segments whose interior vertices valence one incident edge groups equal vertex group edge group smaller vertex group since redundant vertices make precise fix vertex define disjoint edges contained correspond edges since deformation space tree groups contains vertex whose vertex group equals fundamental group vertex group may fail unique choose one every way compatible maps orient edges towards group carried edge equal group carried initial vertex say vertex peripheral adjacent edge mapped onto edge minimality terminal vertex peripheral carries group initial edge segment total number peripheral vertices bounded follows number points valence bounded cutting peripheral vertices points valence produces segments mentioned earlier example figure one segment corresponding edges labelled point vertex labelled vertex segment corresponds vertex labelled vertex peripheral defined segments let vary preimage segment map union segments since number segments bounded independently may assume maps every segment onto segment particular number segments independent recall oriented edges towards edge contained carries group initial vertex edges given segment coherently oriented segments therefore oriented various ways performing collapses collapsing edges contained segments yields change deformation space hand one obtains collapsing edges contained segment trivial segments may viewed segments collapsing initial edge segment may change deformation space group carried initial point segment increased collapsed define graph groups collapsing segment edges initial one corresponding tree collapse belongs deformation space moreover number edges prime factors constant one per segment one common prime factor let length function lemma sequence every sequence converges proof difference comes fact initial edges segments may collapsed fix segment let initial edge assume distinct edge mapping onto initial edge image assume simplicity adjacent general case similar group carried equal group carried initial vertex given lift therefore adjacent one lift several lifts translation axis every occurrence lift immediately preceded occurrence lift length function prime factor corresponding therefore bounded prime factor corresponding since true every segment get required let collapses belongs deformation space collapse number edges bounded observation due forester see implies inequality independent since get convergence call limit length function tree set length functions trees closed tree simplicial takes values see example irreducible universally compatible limit universally compatible trees corollary since every elliptic elliptic dominates lemma elliptic finitely generated elements elliptic remains prove every edge stabilizer belongs finitely 
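The comparison of length functions underlying this convergence argument rests on a simple general fact, recorded here for clarity (standard and easy to check directly): if \(T'\) is obtained from a \(G\)-tree \(T\) by equivariantly collapsing a set of edges, then the collapse map does not increase distances, so
\[ \|g\|_{T'}\le\|g\|_{T}\qquad\text{for all } g\in G, \]
and conversely passing to a refinement can only increase translation lengths. Together with the closedness of the set of length functions of the trees under consideration, mentioned above, this is the basic mechanism behind the convergence statement.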
generated simple argument using equivariant gromov topology general argue follows may find hyperbolic elements stabilizer bridge bridge might proof lemma endpoint valence vertex choose values coincide particular axes disjoint elliptic since moreover fixed point set must therefore intersect axis since otherwise similarly intersects axis follows fixes bridge axes concludes proof proposition compatibility jsj tree tco shall deduce dco irreducible contains canonical tree tco call compatibility jsj tree fixed automorphism leaves dco invariant note tco may refined jsj tree lemma lemma irreducible deformation space contain finitely many reduced universally compatible trees recall see subsection reduced proper collapse lies deformation space reduced one may perform collapses obtain reduced tree deformation space universally compatible proof follows results refer definitions given suppose infinitely many reduced universally compatible trees let belongs assertion proposition pointed page tree reduced sense edges surviving edges survive one space assertion proposition since bound number orbits edges tree proposition sequence eventually constant remark proof shows contains finitely many reduced trees compatible every tree corollary irreducible contains universally compatible tree preferred element lcm reduced universally compatible trees preferred element universally compatible assertion proposition definition compatibility jsj tree tco compatibility jsj deformation space dco exists irreducible preferred element called compatibility jsj tree tco relative dco trivial define tco trivial tree point may happen dco neither trivial irreducible follows remark deformation space trees unique reduced tree dco particular dco consists actions line define tco otherwise define tco see subsection example dco consists trees exactly one fixed end examples start various examples explain subsection tree cylinders section belongs compatibility deformation space groups belong simplicity assume subsections free groups aut compatibility jsj tree tco sometimes forces trivial suppose instance finite generating set elements belong aut length function trivial one follows serre lemma see subsection generators elliptic inequality max see lemma hyperbolic particular proposition free group aut tco trivial algebraic rigidity following result provides simple examples tco proposition assume one reduced jsj tree djsj split subgroup contained infinite index group tco exists equals proof let second assumption implies elliptic respect remark assertion lemma consider standard refinement dominating proposition equivariant map must constant edge whose stabilizer universally elliptic hence factors tree obtained collapsing edges particular dominates hence jsj tree universally elliptic since unique reduced jsj tree refinement compatible shows universally compatible thus tco necessary sufficient condition tree unique reduced tree deformation space given see also proposition applies instance free splittings splittings finite groups whenever jsj tree one orbit edges provides examples virtually free groups tco amalgam finite property set finite subgroups free products let consist trivial group let grushko decomposition freely indecomposable free rank jsj tree one orbit edges tco splitting explained show tco trivial course trivial also freely indecomposable free rank assuming actually show tree trivial edge stabilizers invariant finite index subgroup collapsing edges may assume one orbit edges since write vertex stabilizer given subgroup image 
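The triviality statement for free groups rests on Serre's classical lemma, which is worth recalling in the form used here (standard): if \(g\) and \(h\) are elliptic isometries of a tree whose fixed point sets are disjoint, then \(gh\) is hyperbolic, with
\[ \|gh\|=2\,d\bigl(\operatorname{Fix}(g),\operatorname{Fix}(h)\bigr)>0 ; \]
consequently, if a group is generated by finitely many elliptic elements whose pairwise products are also elliptic, it fixes a point. In the argument sketched above, the hyperbolic case is excluded by the length function inequality cited there, and Serre's lemma then forces the invariant length function to vanish identically; since the compatibility tree is invariant under automorphisms, it must be trivial for a free group.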
automorphism conjugate contradicts invariance generalized groups consider cyclic splittings generalized groups see subsection first consider solvable group case dco trivial prime power prime power dco jsj deformation space irreducible none dividing proposition applies particular dco holds generally generalized group defined labelled graph label dividing another label vertex see systematic study dco generalized groups canonical decomposition scott swarup recall group vpcn resp virtually polycyclic hirsch length resp let finitely presented group assume split subgroup let consist subgroups vpcn subgroups shown tree cylinders commensurability jsj deformation space subdivision tree tss regular neighbourhood constructed theorem since tss universally compatible definition corollary theorem dominated compatibility deformation space dco domination may strict tree tss always trivial pointed dco none divides duality groups let duality group dimension see also work kropholler subject although group necessarily finitely presented almost finitely presented proposition sufficient dunwoody accessiblity jsj deformation space compatibility jsj deformation space exist theorem splits virtually solvable subgroup therefore consider family consisting corollary subgroups number coends theorem virtually polycyclic finite index commensurator corollary implies jsj deformation space contains universally compatible tree namely tree cylinders commensurability equals dco one context universally elliptic tree universally compatible indeed since precisely coends proposition implies two splittings edge stabilizers elliptic compatible indeed strong crossing almost invariant subsets corresponding occurs edge stabilizers elliptic lem absence weak strong crossing equivalent compatibility sum corollary let duality group dimension virtually polycyclic let family tco exists lies jsj deformation space particular canonical jsj tree trees cylinders sections used trees cylinders construct universally compatible tree see last assertion corollary always denote jsj tree relative lemma show belongs compatibility jsj deformation space additional assumption every stabilizer belongs implies trees cylinders edge stabilizers see subsection collapsing needed theorem given let admissible equivalence assume relative exists integer contains cyclic subgroups subgroups cardinal two groups inequivalent intersection order every stabilizer belongs every small compatibility jsj deformation space dco exists contains tree cylinders jsj trees see corollary trivial irreducible jsj compatibility tree tco defined flexible vertex stabilizers tco belong subgroups finite fiber proof corollary applied shows universally compatible flexible vertex stabilizers point show maximal domination among universally compatible trees prove dco exists contains trivial irreducible tree cylinders trivial irreducible see subsection consider universally compatible tree show dominates replacing universally compatible assertion proposition assume refines show vertex stabilizer elliptic elliptic hence universally elliptic dominated assumptions therefore assume also small since quotient graph point equivalently least two cylinders two cases image valence least refine minimal tree deformation space edge stabilizer since edge group elliptic universally compatible remaining case image valence assume elliptic argue towards contradiction let edge containing going prove contains subgroup index assuming fact refine minimal edge stabilizer elliptic required contradiction proving dominates 
construct since assumed refines proposition lemma imply contains hyperbolic element know small two possibilities fixed end action defines homomorphism see subsection homomorphism vanishes elliptic hyperbolic element define preimage index subgroup image action dihedral get epimorphism since elliptic image trivial contained conjugate factor one constructs preimage suitable index subgroup image theorem applies directly abelian splittings csa groups subsection virtually abelian splittings groups subsection elementary splittings relatively hyperbolic groups subsection condition satisfied cases conclude belongs dco get instance corollary let finitely generated csa group class abelian subgroups family subgroups let commutation relation among infinite abelian subgroups compatibility jsj deformation space dco exists contains tree cylinders jsj tree corollary let hyperbolic relative family finitely generated subgroups let class elementary subgroups let family subgroups containing contain let relation relative compatibility jsj deformation space dco exists contains tree cylinders jsj tree consider cyclic splittings commensurability relation example let hyperbolic group property nontrivial action tree hai maximal cyclic subgroup consider hnn extension csa group tree jsj tree abelian groups tree cylinders tree amalgam also compatibility jsj tree abelian groups theorem cyclic groups jsj tree collapsed tree cylinders compatibility jsj tree follows proposition assumption proposition holds cyclic groups abelian groups example dco strictly dominates one obtains tree dco refining vertices group general fact theorem let finitely generated group let family cyclic subgroups let family subgroups relative assume commensurators infinite cyclic subgroups small cyclic compatibility jsj space dco relative exists furthermore solvable group one obtains tree dco refining collapsed tree cylinders commensurability vertices virtually remark torsion bound order finite subgroups similar theorem holds virtually cyclic splittings proof furthermore one must assume virtually one prove ascending hnn extension infinite virtually cyclic group virtually leave exercise reader compare proof theorem universally compatible dco exists dominates dominated remark smallness commensurators implies deformation space follows group elliptic contained commensurator hence small shall show universally elliptic trees dominating dominated belong finitely many deformation spaces prove existence dco determine tree deformation one needs know action vertex stabilizers deformation action universally elliptic edge stabilizers lemma small elliptic hence two deformation spaces possible action given corollary shows required finiteness hence existence dco one may obtain tree dco refining vertices elliptic explained small two possibilities fixes exactly one end dco ascending deformation space defined section proposition dco irreducible ascending hnn extension cyclic group hence isomorphic possibility acts line hence virtually edge stabilizers cyclic isomorphic klein bottle group corollary let commutative transitive let family subgroups freely indecomposable relative solvable baumslagsolitar group cyclic compatibility jsj deformation space dco relative exists may obtained possibly refining vertices stabilizer isomorphic proof theorem applies commensurators cyclic subgroups metabelian see remark hence small length functions compatibility appendix view simplicial tree metric space giving length every edge generally consider see basic facts recall two simplicial 
trees compatible tree collapses onto context simplicial collapse maps natural generalisation maps preserving alignment image arc segment possibly point compatibility thus makes sense length function isometric action map defined first main result appendix theorem saying two compatible sum length functions length function nice consequence set compatible given tree closed give proof following classical facts minimal irreducible determined length function equivariant topology axes topology determined length functions agree space irreducible following suggestion feighn extend space trees proof use based length functions extends proof theorem proving theorem show pairwise compatibility finite set rtrees implies existence common refinement define prime factors greatest common divisors gcd least common multiples lcm irreducible simplicial trees conclude explaining obtain actions blowing vertices jsj trees assume finitely generated subsections apply infinitely generated group hypotheses irreducibility ensure contains enough hyperbolic elements leave details reader metric trees length functions endowed path metric making edge isometric closed interval simplicial tree becomes usually declare edge length rtree geodesic metric space two distinct points connected unique topological arc often call segment considerations preliminary section apply well simplicial trees denote distance tree equipped isometric action considered equivalent equivariantly isometric denote tree equipped distance branch point point least three components subtree degenerate single point otherwise disjoint closed subtrees bridge unique arc say map preserves alignment collapse map image segment segment possibly point restriction continuous map collapse map restriction segment continuous preimage point subtree two trees compatible exists tree collapse maps denote translation length parabolic isometry minimum achieved subset characteristic set fixed point set elliptic axis hyperbolic map length function denote risk confusion say map length function tree action minimal proper subtree contains hyperbolic element unique minimal subtree union translation axes hyperbolic elements proposition group possibly infinitely generated acts one following must occur action irreducible two hyperbolic elements whose axes intersection finite length large generate acting freely discretely fixed point metric completion trivial action invariant line fixed end end defined equivalence class geodesic rays finite hausdorff distance finitely generated finitely generated relative finitely many elliptic subgroups fixes point metric completion fixes point fixed end length function absolute value homomorphism length functions usually called abelian use terminology may cause confusion say minimal hyperbolic element either invariant line action irreducible use following facts minimal action lemma let hyperbolic elements axes disjoint intersection bridge axes meet min max inequality equality axes meet single point lemma theorem irreducible exist hyperbolic elements hyperbolic lemma lemma irreducible arc contained axis example use lemmas show minimal irreducible action takes values simplicial tree suffices prove distance two branch points lies given two branch points lemma one find hyperbolic elements disjoint axes bridge axes precisely lemma length functions trees let finitely generated group let set minimal isometric actions modulo equivariant isometry let tirr set irreducible following classical results theorem two minimal irreducible length function equivariantly 
isometric theorem equivariant topology axes topology agree tirr theorem assignment defines embedding tirr axes topology topology induced embedding set length functions closed even finitely generated finitely generated projectively compact theorem equivariant topology gromov topology defined following neighbourhood basis given number finite subset define set trees exist axj call approximation example illustration definition let explain given set trees fixes tripod open gromov topology recall tripod convex hull points lie segment let indices modulo endpoints tripod fixed center tripod let small compared distances take lies consider approximation points since gxi follows midpoint distance follows three points lie segment midpoint always belongs characteristic set characteristic set line thus elliptic therefore fixes fact axes topology finer gromov topology harder half theorem viewed version parameters theorem length function determines tree continuous way preparation next subsection give quick proofs theorems unlike previous proofs use based length functions proof theorem let minimal irreducible length function denote axis hyperbolic element axis lemma empty empty define isometric equivariant map set branch points follows let branch point auxiliary branch point lemmas exist hyperbolic elements whose axes intersect bridge axes intersect single point call note intersection hyperbolic elements whose axis contains element axis meets three sets contains gives intrinsic definition independent choice particular isometric equal extend equivariantly isometrically first closure set branch points complementary interval resulting map onto minimal proof theorem given map tirr continuous gromov topology follows formula max shows gromov topology finer axes topology converse fix finite set points finite set elements show length function close enough suitable finite subset exist points first assume branch point choose elements previous proof endpoint bridge close axes disjoint define different choice may lead different point distance goes tends pairwise distances easy complete proof branch points one add new points contained arc bounded branch points xbi xci one defines point dividing way divides xbi xci suggested feighn one may extend previous results reducible trees let tss consist minimal trees either irreducible isometric rule trivial trees trees exactly one fixed end every length function length function tree tss theorem two minimal trees tss length function equivariantly isometric equivariant topology axes topology agree tss words assignment induces homeomorphism tss equipped gromov topology space length functions note results stated trees tss trees irreducible dihedral proof refer page proof first assertion actions irreducible since set irreducible length functions open suffices show following fact claim sequence trees tss whose length functions converge length function action converges gromov topology prove claim denote characteristic set fix hyperbolic hence large denote possibly empty degenerate segment first case acts translations show converges suffices show given elements length goes infinity standard argument using helly theorem may assume first show length goes infinity let arbitrary since elliptic distance goes image lim inf show overlap goes infinity assume relative position disjoint empty contradiction since every goes infinity result clear nested remaining case changing inverse assume translate direction along hyperbolic equals length goes infinity suppose action dihedral suppose reverses 
orientation large axes ghg long overlap previous argument overlap translates one direction ghg close follows acts central symmetry long subarc moreover reverse orientation distance fixed points close convergence easily follows observations proves claim hence theorem compatibility length functions recall two compatible common refinement exists equivariant maps preserving alignment image segment segment possibly point see subsections call maps collapse maps compatible standard common refinement constructed follows denote distance length function let common refinement given define satisfying also length measure defined associated metric space obtained identifying refines maps satisfying hence continuous length function follows formula particular length function prove converse theorem two minimal irreducible action compatible sum length functions length function remark compatible length function corollary compatibility closed relation tirr tirr particular set irreducible compatible given closed tirr proof follows fact set length functions closed subset proof theorem prove direction let irreducible minimal length functions length function minimal denote axes respectively lemma irreducible hyperbolic elements hyperbolic exist since exist want prove common refinement fact show standard refinement mentioned earlier unique theorem proof similar theorem first need lemmas lemma lem let finitely generated semigroup point line invariant subgroup hsi generated let arc contained axis hyperbolic element exists finitely generated semigroup hsi every element hyperbolic axis contains translates direction lemma let arbitrary irreducible minimal trees given arc exists hyperbolic whose axis contains proof apply lemma whose axis contains finitely generated one takes group generated hyperbolic since generates group must contain element hyperbolic otherwise would global fixed point serre lemma see subsection applying lemma action get semigroup whose elements hyperbolic similarly contains element hyperbolic element satisfies conclusions lemma remark generally one may require hyperbolic finitely many trees assume described beginning proof theorem lemma let hyperbolic therefore axes meet axes axes meet axes meet one point particular elements hyperbolic proof assume meet since get similarly inequalities incompatible lemma assume meet meet nondegenerate arc may assume since contradicting lemma complete proof theorem suffices define maps maps collapse maps three points satisfy triangular equality images satisfy strict triangular inequality standard common refinement construction proof theorem given branch points use lemma get elements hyperbolic three trees bridge lemma guarantees single point define new phenomenon may intersect single point relation comes equality using formula defined branch points extend continuity closure set branch points linearly complementary interval relation still holds common refinements following result proved sets theorem proposition let finitely generated group let irreducible minimal compatible exists common refinement remark statement may interpreted fact set projectivized trees satisfies flag condition simplicial complex whenever one sees indeed two compatible trees define length functions segments joining pair length functions proposition says length functions prove proposition need terminology direction connected component quadrant product direction direction quadrant heavy exists hyperbolic contains positive equivalently one large say makes heavy core complement union quadrants heavy 
core subsection compatible contains rectangle product arc reduced point first prove technical lemma lemma let irreducible minimal let collapse map irreducible let quadrant quadrant heavy note direction preserves alignment proof consider element making heavy hyperbolic makes heavy done assume find hyperbolic hence axis intersects compact set large enough element makes heavy since element hyperbolic makes heavy prove existence consider line disjoint bridge let arc containing interior lemma exists hyperbolic whose axis contains hence disjoint hyperbolic element hyperbolic axis intersects compact set mapped single point otherwise would hyperbolic proof proposition assume general case follows straightforward induction let standard common refinement see subsection let core enough prove contain product arcs assume otherwise denote images since least one inequality holds assume instance claim contained core giving contradiction show quadrant intersecting heavy denote collapse map preimage quadrant intersecting since rectangle contained quadrant heavy lemma arithmetic trees subsection work simplicial trees let sirr set simplicial trees minimal irreducible redundant vertices inversion also view metric tree declaring edge length makes sirr subset tirr theorem tree sirr determined length function definition prime factors prime factors splittings obtained collapsing edges orbits one clearly length function may view prime factor orbit edges edge quotient graph groups since assumed finitely generated finitely many prime factors proposition remains true finitely generated relative finite collection elliptic subgroups lemma let sirr tree obtained collapses particular prime factors belongs sirr prime factors distinct squarefree proof let edge collapsed since redundant vertex line either endpoints branch points branch points orbit using lemma find elements hyperbolic whose axes collapsed points bridge axes since hyperbolic disjoint axes tree irreducible easy check collapsing create redundant vertices sirr suppose hence gets collapsed prime factor holds lemma tree sirr determined prime factors particular refines every prime factor also prime factor corollary assume compatible trees irreducible irreducible belongs deformation space proof lemma shows performing collapse irreducible simplicial tree yields irreducible tree point irreducible quotient graph groups circle every edge endpoint inclusion onto see subsection implies performing collapse tree yields minimal tree belonging deformation space point lemma follows compatible standard refinement constructed subsection metric tree viewed product shall define lcm simplicial trees understand difference two suppose obtained subdividing edge length function hand definition gcd consider two trees sirr length functions define sum length functions appear prime factors length function tree possibly point collapse call gcd define sum length functions appear prime factors lemma let compatible trees sirr tree sirr whose length function common refinement edge collapsed proof let common refinement modify follows collapse edge collapsed restrict minimal subtree remove redundant vertices resulting tree belongs sirr irreducible refines common refinement edge collapsed check correct length function finding prime factors since edge collapsed prime factor prime factor conversely prime factor associated orbit edges orbit lifts remark unlike standard refinement tree redundant vertices proposition let pairwise compatible trees sirr exists tree sirr whose length function sum length 
functions appear prime factor moreover tree sirr refines refines tree sirr compatible compatible subgroup elliptic elliptic dominates belongs deformation space proof first suppose show satisfies additional conditions refines refines every prime factor prime factor proves assertion pairwise compatible common refinement proposition theorem one exclude ascending hnn extensions refines assertion compatible assertion follows fact edge collapsed proof proposition fixes point elliptic fixes point preimage case follows easily induction assertion tree compatible define definition lcm call lcm compatible trees reading actions rips theory gives way understand stable actions relating actions simplicial trees therefore closely related jsj decompositions consider first jsj deformation space compatibility jsj tree simplicity assume reading jsj deformation space proposition let finitely presented let jsj tree family slender subgroups stable action whose arc stabilizers slender edge stabilizers elliptic recall arc stable pointwise fixes subarc also fixes action stable every arc stable proof limit simplicial trees slender edge stabilizers since universally elliptic edge stabilizer elliptic every passing limit deduce element elliptic since finitely generated elliptic remark generally suppose finitely presented stable extension finitely generated free abelian groups let jsj tree finitely generated edge stabilizers exists theorem stable arc stabilizers edge stabilizers elliptic recall theorem finitely presented flexible vertices slender jsj deformation space either slender slender fiber proposition let previous proposition exists obtained blowing flexible vertex action isometries line slender action dual measured foliation underlying resolves dominates following sense exists map piecewise linear every segment decomposed finitely many subsegments restriction preserves alignment proof using ellipticity respect argue proof proposition elliptic let fixed point elliptic let minimal subtree line slender vertex dual measured foliation underlying orbifold skora theorem applied covering surface remark arguments given may applied general situations instance assume finitely generated subgroups containing slender jsj tree slender subgroups whose flexible subgroups let slender arc stabilizers split subgroup stabilizer unstable arc tripod applying techniques see limit slender trees propositions apply reading compatibility jsj tree first author explained obtain small actions hyperbolic group jsj tree proof based bowditch construction jsj tree topology give different general approach based corollary saying compatibility closed condition results subsection describing compatibility jsj space universally compatible tco compatible limit simplicial illustrate idea simple case let finitely presented csa group assume abelian let tco compatibility jsj tree class abelian groups see definition subsection let action trivial tripod stabilizers abelian arc stabilizers limit simplicial since tco compatible compatible corollary let standard common refinement tco length function tco see subsection let fco tco maps preserving alignment dtco fco fco vertex edge tco correspond closed subtrees fco minimality arc containing branch point except maybe endpoints relation dtco shows restriction isometric embedding particular obtained changing length arcs possibly making length shall describe action note infinite action need minimal finitely supported see given edge tco containing denote endpoint belonging minimal convex hull set points extremal 
connected first suppose fixes point note extremal stabilizer arc contains infinite claim edges tco containing extremal intersection arc stabilizer contains hge abelian follows point fixed since tripod stabilizers trivial deduce fixes contradiction thus proved cone finite number orbits points fix point flexible abelian triviality tripod stabilizers implies line abelian surface group theorems skora theorem asserts minimal subtree dual measured lamination compact surface triviality tripod stabilizers disjoint union segments pointwise stabilizer segment index boundary subgroup follows particular analysis geometric see references roger alperin hyman bass length functions group actions combinatorial group theory topology alta utah pages princeton univ press princeton benjamin barrett computing jsj decompositions hyperbolic groups benjamin beeker compatibility jsj decomposition graphs free abelian groups internat algebra benjamin beeker abelian jsj decomposition graphs free abelian groups group theory mladen bestvina mark feighn bounding complexity simplicial group actions trees invent mladen bestvina mark feighn stable actions groups real trees invent brian bowditch cut points canonical splittings hyperbolic groups acta brian bowditch peripheral splittings groups trans amer math electronic brian bowditch relatively hyperbolic groups internat algebra inna bumagin olga kharlampovich alexei miasnikov isomorphism problem finitely generated fully residually free groups pure appl algebra mathieu carette automorphism group accessible groups lond math soc christophe champetier vincent guirardel limit groups limits free groups israel ruth charney introduction artin groups geom dedicata ian chiswell introduction world scientific publishing river edge matt clay contractibility deformation spaces algebr geom electronic matt clay deformation spaces automorphisms baumslagsolitar groups groups geom matt clay artin group split internat algebra matt clay max forester whitehead moves bull lond math daniel cohen combinatorial group theory topological approach volume london mathematical society student texts cambridge university press cambridge donald collins frank levin automorphisms hopficity certain groups arch math basel marc culler john morgan group actions proc london math soc marc culler karen vogtmann moduli graphs automorphisms free groups invent dahmani daniel groves isomorphism problem toral relatively hyperbolic groups publ math inst hautes dahmani vincent guirardel isomorphism problem hyperbolic groups geom funct dahmani nicholas touikan isomorphisms using dehn fillings splitting case thomas delzant quotients des groupes hyperboliques duke math thomas delzant sur acylindrique des groupes finie ann inst fourier grenoble warren dicks dunwoody groups acting graphs volume cambridge studies advanced mathematics cambridge university press cambridge dunwoody accessibility finitely presented groups invent dunwoody sageev finitely presented groups slender groups invent dunwoody swenson algebraic torus theorem invent martin dunwoody inaccessible group geometric group theory vol sussex pages cambridge univ press cambridge farb relatively hyperbolic groups geom funct benson farb lee mosher rigidity solvable groups invent max forester deformation rigidity simplicial group actions trees geom electronic max forester uniqueness jsj decompositions finitely generated groups comment math max forester splittings generalized groups geom dedicata fujiwara papasoglu finitely presented groups complexes groups geom funct 
francisco javier juan manuel homeotopy group non orientable surface genus three rev colombiana daniel groves michael hull abelian splittings artin groups vincent guirardel approximations stable actions comment math vincent guirardel dynamics boundary outer space ann sci norm sup vincent guirardel reading small actions hyperbolic group jsj splitting amer vincent guirardel nombre intersection pour les actions groupes sur les arbres ann sci norm sup vincent guirardel actions finitely generated groups ann inst fourier grenoble vincent guirardel gilbert levitt deformation spaces trees groups geom vincent guirardel gilbert levitt outer space free product proc lond math soc vincent guirardel gilbert levitt jsj decompositions definitions existence uniqueness jsj deformation space vincent guirardel gilbert levitt scott swarup regular neighbourhood tree cylinders pacific vincent guirardel gilbert levitt trees cylinders canonical splittings geom vincent guirardel gilbert levitt mccool groups toral relatively hyperbolic groups algebr geom vincent guirardel gilbert levitt splittings automorphisms relatively hyperbolic groups groups geom vincent guirardel gilbert levitt vertex finiteness splittings relatively hyperbolic groups israel vincent guirardel gilbert levitt rizos sklinos elementary equivalence commensurability hyperbolic groups christopher hruska relative hyperbolicity relative quasiconvexity countable groups algebr geom william jaco peter shalen seifert fibered spaces mem amer math klaus johannson homotopy equivalences boundaries volume lecture notes mathematics springer berlin olga kharlampovich alexei myasnikov effective jsj decompositions groups languages algorithms volume contemp pages amer math providence malik koubi croissance uniforme dans les groupes hyperboliques ann inst fourier grenoble kropholler analogue torus decomposition theorem certain duality groups proc london math soc kropholler roller relative ends duality groups pure appl algebra kropholler roller splittings duality groups iii london math soc gilbert levitt automorphisms hyperbolic groups graphs groups geom dedicata gilbert levitt characterizing rigid simplicial actions trees geometric methods group theory volume contemp pages amer math providence gilbert levitt paulin geometric group actions trees amer linnell accessibility groups pure appl algebra darryl mccullough andy miller symmetric automorphisms free products mem amer math miller iii walter neumann swarup examples hyperbolic groups geometric group theory canberra pages gruyter berlin john morgan peter shalen valuations trees degenerations hyperbolic structures ann math denis osin relatively hyperbolic groups intrinsic geometry algebraic properties algorithmic problems mem amer math panos papasoglu eric swenson boundaries jsj decompositions cat geom funct paulin gromov topology topology paulin outer automorphisms hyperbolic groups small actions arboreal group theory berkeley pages springer new york paulin sur des groupes libres sela perin elementary embeddings hyperbolic groups ann sci norm cornelius reinfeldt richard weidmann diagrams hyperbolic groups preprint http rips sela cyclic splittings finitely presented groups canonical jsj decomposition ann math peter scott geometries bull london math peter scott gadde swarup splittings groups intersection numbers geom electronic peter scott gadde swarup regular neighbourhoods canonical decompositions groups corrections available http peter scott terry wall topological methods group theory homological group theory proc 
durham pages cambridge univ press cambridge sela acylindrical accessibility groups invent sela structure rigidity gromov hyperbolic groups discrete groups rank lie groups geom funct sela endomorphisms hyperbolic groups hopf property topology zlil sela diophantine geometry groups diagrams publ math inst hautes serre arbres amalgames france paris avec collaboration hyman bass peter shalen dendrology groups introduction essays group theory pages springer new peter shalen dendrology applications group theory geometrical viewpoint trieste pages world sci publishing river edge richard skora splittings surfaces amer math john stallings topology finite graphs invent william thurston geometry topology princeton lecture notes nicholas touikan detecting geometric splittings finitely presented groups appear transactions wall duality dimension proceedings casson fest volume geom topol pages electronic geom topol coventry richard weidmann accessibility finitely generated groups index axis equivalence class allowed edge stabilizers aell universally elliptic groups infinite groups characteristic set groups complexity edge group core flipped core union squares cyclic dco compatibility jsj space djsj jsj deformation space edge set serre property free group generators vertex edge stabilizer relative structure incv inch incident structures translation length length function gcd two length functions lcm two length functions minimal subtree maximal virtually abelian group containing regular neighbourhood two trees sirr set irreducible simplicial trees scz stability conditions family small subgroups snvc groups cyclic tree cylinders collapsed tree cylinders tco compatibility jsj tree gcd two trees lcm compatible trees set actions tirr set irreducible actions vertex set virtually cyclic subgroups infinite center zmax maximal virtually cyclic subgroups infinite center abelian tree accessibility accessibility dunwoody acylindrical accessibility acylindrical tree splitting admissible equivalence relation alignment preserving map approximation point arc axes topology axis group boundary orbifold boundary subgroup bowditch bridge canonical tree characteristic set subgroups collapse map collapse tree collapsed tree cylinders commensurable commensurator common refinement commutative transitive compatibility jsj deformation space compatibility jsj tree compatible trees conical point core product two trees corner reflector csa group cyclic tree cylinder tree deformation space degenerate segment dihedral action dihedral group domination dual splitting elementary subgroup elliptic element subgroup elliptic respect tree end tree euler characteristic extended boundary subgroup fiber subgroup filling construction flexible vertex group stabilizer fold free splitting freely indecomposable relative fully hyperbolic tree quadratically hanging raag reduced tree redundant vertex refinement refining vertex regular neighbourhood relative finite generation presentation relative generating set relative tree splitting relatively hyperbolic group restriction rigid vertex group stabilizer group gcd two trees generalized group geodesic orbifold closed essential simple closed filling orbifold simple graph groups equivariant topology grushko decomposition deformation space sandwich closed family subgroups segment action tree serre property serre lemma slender group slender small domination small orbifold small small socket splitting dual family geodesics splitting stability condition stability condition stability condition scz stable action 
deformation space standard common refinement standard refinement horizontal edge core hyperbolic element subgroup incidence structure incident edge groups irreducible tree deformation space jsj decomposition tree deformation space lcm pairwise compatible trees length function map trees minimal action minimal subtree minuscule mirror morphism trees splitting relative orbifold small orbifolds finite mapping class group outer space topology axes topology equivariant topology totally flexible group translation length tree cylinders trivial action trivial deformation space universally compatible tree universally elliptic subgroup tree used boundary component parabolic peripheral structure prime factors tree vertical edge core virtual commutation abelian virtually cyclic cyclic vpc vincent guirardel institut recherche rennes rennes cnrs umr avenue leclerc rennes gilbert levitt laboratoire nicolas oresme lmno caen cnrs umr pour shanghai normandie univ unicaen cnrs lmno caen france levitt
| 4 |
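Editor's note: the record above sketches, in extraction-stripped form, the compatibility criterion for length functions of minimal irreducible trees and the lcm construction for pairwise compatible simplicial trees. The LaTeX fragment below is a compact restatement of those two statements; the notation (the length function written as \ell_T, the standard common refinement written as \hat{T}, the set of prime factors written as \mathcal{P}) is reconstructed from the unpunctuated text and is an editorial sketch rather than the paper's original typesetting.

```latex
% Editor's sketch of two statements from the record above.
% \ell_T denotes the translation length function of a minimal G-tree T.
\[
  T_1 \text{ and } T_2 \text{ are compatible}
  \iff
  \ell_{T_1} + \ell_{T_2} \text{ is a length function,}
  \qquad
  \ell_{\hat{T}} \;=\; \ell_{T_1} + \ell_{T_2},
\]
% where \hat{T} is the standard common refinement of T_1 and T_2.
% For pairwise compatible simplicial trees T_1, ..., T_n, the lcm is the tree
% whose length function keeps each distinct prime factor exactly once:
\[
  \ell_{\operatorname{lcm}(T_1,\dots,T_n)} \;=\; \sum_{S \in \mathcal{P}} \ell_S ,
\]
% with \mathcal{P} the set of distinct prime factors occurring among the T_i.
```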
compression recurrent neural networks application lvcsr acoustic modeling embedded speech recognition rohit ouais alsharif antoine bruguier ian mcgraw may google prabhavalkar oalsha tonybruguier imcgraw abstract study problem compressing recurrent neural networks rnns particular focus compression rnn acoustic models motivated goal building compact accurate speech recognition systems run efficiently mobile devices work present technique general recurrent model compression jointly compresses recurrent weight matrices find proposed technique allows reduce size long memory lstm acoustic model third original size negligible loss accuracy index model compression lstm rnn svd embedded speech recognition introduction neural networks nns multiple recurrent hidden layers emerged acoustic models ams automatic speech recognition asr tasks advances computational capabilities coupled availability large annotated speech corpora made possible train ams large number parameters great success speech recognition technologies continue improve becoming increasingly ubiquitous mobile devices voice assistants apple siri microsoft cortana amazon alexa google enable users search information using voice although traditional model applications recognize speech remotely large servers growing interest developing asr technologies recognize input speech directly promise reduce latency enabling user interaction even cases mobile data connection either unavailable slow unreliable main challenges regard disk memory computational constraints imposed devices since number operations neural equal contribution authors would like thank sak raziel alvarez helpful comments suggestions work chris thornton chen comments earlier draft networks proportional number model parameters compressing model desirable point view reducing memory usage power consumption paper study techniques compressing recurrent neural networks rnns specifically rnn acoustic models demonstrate generalization conventional matrix factorization techniques jointly compress recurrent weight matrices allows compress acoustic models third original size negligible loss accuracy focus acoustic modeling techniques presented applied rnns domains handwriting recognition machine translation inter alia technique presented paper encompasses traditional recurrent neural networks rnns well long memory lstm neural networks section review previous work focussed techniques compressing neural networks proposed compression technique presented section examine effectiveness proposed techniques sections finally conclude discussion findings section related work number previous proposals compress neural networks context asr well broader field machine learning summarize number proposed approaches section noted previous work large amount redundancy parameters neural network example denil show entire neural network reconstructed given values small number parameters caruana colleagues show output distribution learned larger neural network approximated neural network fewer parameters training smaller network directly predict outputs larger network approach termed model compression closely related recent distillation approach proposed hinton redundancy neural network also exploited hashnet approach chen imposes parameter tying network based set hash functions context asr previous approaches acoustic model compression focused mainly case feedforward dnns one popular technique based sparsifying weight matrices neural network example setting weights whose magnitude falls certain threshold zero 
based loss function optimal brain damage procedure fact seide demonstrate weights network set zero without incurring loss performance although techniques based sparsification decrease number effective weights encoding subset weights zeroed requires additional memory weight matrices represented dense matrices efficient computation parameter savings disk translate savings runtime memory techniques reduce number model parameters based changing neural network architecture introducing bottleneck layers matrix factorization layer also note recent work wang uses combination singular value decomposition svd vector quantization compress acoustic models methods investigated work similar previous work examined using svd reduce number parameters network context feedforward dnns describe section methods thought extension techniques proposed xue wherein jointly factorize recurrent weight matrices network model compression section present general technique compressing individual recurrent layers recurrent neural network thus generalizing methods proposed xue describe approach general setting standard rnn denote activations hidden layer consisting nodes time hlt inputs layer time turn activations previous layer input features denoted write following equations define output activations layers standard rnn hlt whl fig initial model figure compressed jointly factorizing recurrent whl wxl matrices using shared recurrent projection matrix figure weight matrices since proposed approach applied independently recurrent hidden layer describe compression operations particular layer jointly compress recurrent matrices corresponding specific layer determining suitable recurrent projection matrix denoted rank whl zhl wxl thus allowing hlt zhl zxl hlt zhl zxl compression process depicted graphically figure note sharing across recurrent interlayer matrices allows efficient parameterization weight matrices shown section result significant loss performance thus degree compression model controlled setting ranks projection matrices layers network determine recurrent projection matrix first computing svd recurrent weight matrix truncate retaining top singular values defl corresponding singular vectors noted respectively denoted whl uhl vhl zhl finally determine solution following problem represent bias vectors denotes activation function wxl whl denote weight matrices refer respectively recurrent zxl arg min wxl equations slightly complicated using lstm cells recurrent layer basic form remains see section kxkf denotes frobenius norm matrix pilot experiments found proposed initialization performed better training model recurrent projection matrices model architecture random initialization network weights applying technique lstm rnns generalizing procedure described context standard rnns case lstm rnns straightforward using notation note matrix whl case lstm concatenation four gate weight matrices obtained stacking vertically wim wom wcm represent respectively recurrent connections input gate output gate forget gate cell state similarly matrix wxl concatenation matrices wix wox wcx correspond input gate forget gate output gate cell state next layer definitions compression applied described section note compress weights since already narrow single column matrices contribute significantly total number parameters network experimental setup order determine effectiveness proposed rnn compression technique conduct experiments openended dictation task mentioned section one primary motivations behind investigating acoustic model 
compression build compact acoustic models deployed mobile devices recent work sak demonstrated deep ams trained predict either contextindependent phoneme targets phoneme targets approach performance speech tasks systems two important characteristics addition phoneme labels system also hypothesize blank label unsure identity current phoneme systems trained optimize connectionist temporal classification ctc criterion maximizes total probability correct label sequence conditioned input sequence details found following baseline model thus ctc model five hidden layer rnn lstm cells layer predicts phonemes plus blank point comparison also present results obtained using much larger model large deploy embedded devices nonethless serves performance models dataset model consists five hidden layers lstm cells per layer trained predict one phonemes plus blank systems trained using distributed asynchronous stochastic gradient descent parameter server systems first trained convergence optimize ctc criterion following discriminatively sequence trained optimize minimum bayes risk smbr criterion discussed section applying proposed compression scheme network first ctc criterion followed sequence discriminative training smbr criterion additional step found necessary achieve good performance particularly amount compression increased language model used work model trained sentences data entropybased pruning applied reduce size roughly mainly bigrams vocabulary since goal build recognizer run efficiently mobile devices minimize size decoder graph used recognition following approach outlined perform additional pruning step generate much smaller language model mainly unigrams composed lexicon transducer construct decoder graph perform rescoring larger resulting models compressed use total thus enabling run many times faster recent mobile devices parameterize input acoustics computing log energies range computed every windowed speech segments system uses features computed range since resulted slightly improved performance following stabilize ctc training stacking together consecutive speech frames right context frames every third stacked frame presented input network training evaluation data systems trained anonymized utterances extracted google voice search traffic hours create training data synthetically distorting utterances simulate background noise reverberation using room simulator noise samples extracted youtube videos environmental recordings everyday events distorted examples created utterance training set systems additionally adapted using smbr criterion set anonymized dictation utterances extracted google traffic processed generate training data described improves performance dictation task results reported set anonymized utterances extracted google traffic dictation domain results experiments seek determine impact proposed joint compression technique system performance particular interested determining system performance varies function degree compression controlled setting ranks recurrent projection matrices described section notice since proposed compression scheme applied hidden layers baseline system numerous settings ranks projection matrices layer result number total parameters compressed network order avoid ambiguity set various projection ranks using following criterion given threshold layer set rank corresponding projection matrix corresponds retaining fraction explained variance truncated svd whl specifically singular values sorted set order arg max choosing projection ranks using allows 
control degree compression thus compressed model size varying single parameter pilot experiments found scheme performed better setting ranks equal layers given total parameter budget projection ranks determined various projection matrices compressed models first optimizing ctc criterion followed sequence training smbr criterion adaptation data described section results experiments presented table seen table baseline system predicts phoneme targets relative worse larger system although half many parameters since ranks chosen retain given fraction explained variance svd operation also note earlier hidden layers network appear lower ranks later layers since variance accounted smaller number singular values seen table word error rates increase amount compression increased although performance system server baseline projection ranks params wer table word error rates test set function percentage explained variance retained svds recurrent weight matrices whl hidden layers rnn compressed systems close baseline moderate compression using value enables model compressed third original size small degradation accuracy however performance begins degrade significantly future work consider alternative techniques setting projection ranks order examine impact system performance conclusions presented technique compress rnns using joint factorization recurrent weight matrices generalizing previous work proposed technique applied task compressing lstm rnn acoustic models embedded speech recognition found could compress baseline acoustic model third original size negligible loss accuracy proposed techniques combination weight quantization allow build small efficient speech recognizer run many times faster recent mobile devices references seide conversational speech transcription using deep neural networks proc interspeech hinton deng dahl mohamed jaitly senior vanhoucke nguyen sainath kingsbury deep neural networks acoustic modeling speech recognition shared views four research groups ieee signal processing magazine vol sak senior beaufays long memory recurrent neural network architectures large scale acoustic modeling proc interspeech sainath vinyals senior sak convolutional long memory fully connected deep neural networks proc icassp deng deep learning methods applications foundations trends signal processing vol schalkwyk beeferman beaufays byrne chelba cohen kamvar strope word command google search voice case study advances speech recognition springer lei senior gruenstein sorensen accurate compact large vocabulary speech recognition mobile devices proc interspeech xue gong restructuring deep neural network acoustic models singular value decomposition proc interspeech xue seltzer gong singular value decomposition based speaker adaptation personalization deep neural network proc icassp graves liwicki bertolami bunke schmidhuber novel connectionist system unconstrained handwriting recognition ieee transactions pattern analysis machine intelligence vol sutskever vinyals sequence sequence learning neural networks proc nips denil shakibi dinh ranzato freitas predicting parameters deep learning proc nips caruana model compression proc acm sigkdd international conference knowledge discovery data mining caruana deep nets really need deep proc nips hinton vinyals dean distilling knowledge neural network arxiv preprint chen wilson tyree weinberger chen compressing neural networks hashing trick proc icml lecun denker solla optimal brain damage proc nips fousek optimizing features lvcsr proc icassp march sainath 
kingsbury sindhwani arisoy ramabhadran matrix factorization deep neural network training output targets proc icassp wang gong highperformance deep neural speech recognition using proc icassp nakkiran alvarez prabhavalkar parada compressing deep neural networks using rankconstrained topology proc interspeech sak senior rao graves beaufays schalkwyk learning acoustic frame labeling speech recognition recurrent neural networks proc icassp sak senior rao beaufays fast accurate recurrent neural network acoustic models speech recognition proc interspeech graves gomez schmidhuber connectionist temporal classification labelling unsegmented sequence data recurrent neural networks proc icml dean corrado monga chen devin mao ranzato senior tucker yang large scale distributed deep networks proc nips kingsbury optimization sequence classification criteria acoustic modeling proc icassp sak vinyals heigold senior mcdermott monga mao sequence discriminative distributed training long memory recurrent neural networks proc interspeech mcgraw prabhavalkar alvarez gonzalez arenas rao rybach alsharif sak gruenstein beaufays parada personalized speech recognition mobile devices proc icassp
| 9 |
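Editor's note: the record above describes compressing an LSTM layer by jointly factorizing its recurrent weight matrix and the following inter-layer matrix through a shared low-rank projection, with the rank chosen so that a fraction of the "explained variance" of the truncated SVD is retained. The numpy sketch below illustrates one plausible reading of that scheme; the names W_h, W_x, A_h, A_x, Z_h and the parameter tau are the editor's, the rank rule interprets "explained variance" as the fraction of the summed singular values, and the least-squares fit for the inter-layer factor is an assumption, not the authors' exact implementation.

```python
import numpy as np

def truncated_rank(singular_values, tau):
    """Smallest rank r whose top-r singular values reach a fraction tau of
    the summed singular values (one reading of the rank-selection rule)."""
    total = singular_values.sum()
    cumulative = np.cumsum(singular_values)
    return int(np.searchsorted(cumulative, tau * total) + 1)

def jointly_compress_layer(W_h, W_x, tau):
    """Jointly factor the recurrent matrix W_h (n x n) and the inter-layer
    matrix W_x (m x n) through a shared projection Z_h (r x n), so that
    W_h ~= A_h @ Z_h and W_x ~= A_x @ Z_h.  Sketch under stated assumptions:
    Z_h comes from the truncated SVD of W_h; A_x is a least-squares fit."""
    U, s, Vt = np.linalg.svd(W_h, full_matrices=False)
    r = truncated_rank(s, tau)
    Z_h = Vt[:r, :]                 # shared r x n recurrent projection
    A_h = U[:, :r] * s[:r]          # n x r recurrent factor
    # Least-squares solution of A_x @ Z_h ~= W_x (solved in transposed form).
    A_x = np.linalg.lstsq(Z_h.T, W_x.T, rcond=None)[0].T
    return A_h, A_x, Z_h

# Tiny usage example with random matrices standing in for trained weights.
rng = np.random.default_rng(0)
W_h, W_x = rng.standard_normal((8, 8)), rng.standard_normal((8, 8))
A_h, A_x, Z_h = jointly_compress_layer(W_h, W_x, tau=0.6)
print(A_h.shape, A_x.shape, Z_h.shape)
```

The parameter count of the layer drops from roughly 2 n^2 to (2 n + n) r, so choosing tau (and hence r) trades accuracy against model size, which is the trade-off reported in the record's results table.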
towards statistical reasoning description logics finite domains full version rafael nico jun krdb research centre free university italy university germany npotyka abstract present probabilistic extension description logic alc reasoning statistical knowledge consider conditional statements proportions domain interested consequences proportions introducing general reasoning problems analyzing properties present first algorithms complexity results reasoning fragments statistical alc introduction probabilistic logics enrich classical logics probabilities order incorporate uncertainty probabilistic logics classified three types differ way handle probabilities type logics enrich classical interpretations probability distributions domain well suited reasoning statistical probabilities includes proportional statements like population suffer particular type logics consider probability distributions possible worlds better suited expressing subjective probabilities degrees belief instance medical doctor might say sure diagnosis type logics combine type type logics allow reason kinds uncertainty one basic desiderata probabilistic logics generalize classical logic probabilistic interpretation formulas probability agree classical interpretation however given logic undecidable probabilistic logic satisfies basic desiderata necessarily undecidable order overcome problem instance restrict herbrand interpretations fixed domain consider decidable fragments like description logics probabilistic type extensions description logics previously studied unpublished appendix work type extension alc presented along proof sketch corresponding satisfiability problem type extension enriches classical interpretations probability distributions domain suggested consider similar restrictive setting interested alc extension allows statistical reasoning however impose probability distribution domain instead interested reasoning proportions population satisfying given properties instance given statistical information relative frequency certain symptoms diseases relative frequency symptoms given diseases one ask relative frequency disease given particular combination symptoms therefore consider classical alc interpretations finite domains interested relative proportions true interpretations hence interpretations framework regarded subset interpretations namely finite domains uniform probability distribution domain interpretations indeed sufficient purpose particular considering strictly less interpretations may able derive tighter answer intervals queries approach bears resemblance random world approach however authors consider possible worlds fixed domain size interested limit proportions goes infinity interested finite possible worlds satisfy certain proportions ask statistical statements must true worlds begin introducing statistical alc section together three relevant reasoning problems namely satisfiability problem problem problem section discuss logical properties statistical alc section present first computational results fragments statistical alc statistical alc start revisiting classical description logic alc given two disjoint sets concept names role names alc concepts built using grammar rule one express disjunction universal quantification subsumption usual logical equivalences like semantics focus finite interpretations alc interpretation consist finite domain interpretation function maps concept names sets roles names binary relations two alc concepts equivalent iff interpretations consider probabilistic extension 
alc statistical alc knowledge bases consist probabilistic conditionals built alc concepts definition conditionals statistical probabilistic alc conditional expression form alc concepts rational numbers statistical alc knowledge base set probabilistic alc conditionals brevity usually call probabilistic alc conditionals simply conditionals example let kflu kflu states percent patients flu fever percent patients flu intuitively conditional expresses relative proportion elements also belong order make precise consider finite alc interpretation alc concept denote cardinality interpretation satisfies written iff either satisfies statistical alc knowledge base iff satisfies conditionals case call model write denote set models mod usual consistent mod inconsistent otherwise call two knowledge bases equivalent write iff mod mod example consider kflu example let interpretation individuals flu flu fever mod kflu classical alc knowledge bases defined set general concept inclusions gcis express subconcept interpretation satisfies iff shown next gcis seen special kind conditionals hence statistical alc kbs generalization classical alc kbs proposition statistical alc interpretations iff proof hence otherwise conversely assume otherwise hence given statistical alc knowledge base first problem interested deciding consistency define satisfiability problem statistical alc knowledge bases usual satisfiability problem given knowledge base decide whether mod example consider knowledge base kflu example conditional implies models mod kflu implies therefore hence adding conditional renders kflu inconsistent consistent interested deriving implicit probabilistic conclusions think different reasoning problems context first define entailment relation analogously logical entailment probabilistic conditional iff mod mod case write context type probabilistic conditionals entailment relation also called logical consequence problem given knowledge base conditional decide whether example consider kflu example explained example holds models mod therefore follows kflu statistical information suggests least patients fever example consider domain birds penguins flying animals let kbirds note conditional actually equivalent furthermore mod kbirds implies therefore hence kbirds statistical information suggests birds population penguins usual satisfiability problem reduced problem proposition inconsistent iff proof inconsistent mod conversely assume interpretations hence mod since must mod well often want check whether specific conditional entailed rather deduce tight probabilistic bounds statement problem often referred probabilistic entailment problem probabilistic logics see instance consider query form alc concepts define problem similar probabilistic entailment problem type probabilistic logics problem given knowledge base query find minimal maximal solutions optimization problems inf subject sup bounded since objective function infimum maximum whenever model mod case say write context type probabilistic conditionals entailment relation also called tight logical consequence mod problem infeasible exists solution example example found kbirds bound actually tight since always lower bound showed upper bound suffices give examples interpretations take bounds lower bound let interpretation individuals individuals birds birds fly penguins model kbirds satisfies construct letting birds penguins another model kbirds satisfies hence also kbirds one might ask whether values actually taken model whether large gaps probabilistic 
entailment problem type logics show models indeed yield dense interval noting convex combination models model applying intermediate value theorem real analysis however framework consider probability distributions possible worlds worlds discrete nature therefore apply tools however two models yield different probabilities query find another model takes probability middle probabilities lemma bisection lemma let two arbitrary alc concepts exist mod mod proof given interpretation construct interpretation follows set make different copies domain set set induction shape alc concepts show concepts let conditionals let least common multiple values assume different domains rename elements one domain necessary let interpretation obtained taking union domains concept role interpretations consider rin last equality shows convex combination since satisfy conditional satisfies tional well case well conditional still satisfied see second inequality conditional still satisfied case analogous course hence mod choices let show completely analogously letting show value lower upper bound given find model gives probability arbitrarily close value proposition intermediate values let every denotes open interval mod proof since must exist mod mod consider following bisection algorithm let ing let model obtained explained bisection lemma done otherwise let otherwise let construction maintain invariant hence log iterations model proves claim logical properties discuss logical properties statistical alc already noted statistical alc generalizes classical alc proposition furthermore yields tight dense proposition answer interval queries whose condition satisfied models knowledge base let also note statistical alc language invariant increasing language adding new concept role names change semantics alc seen immediately observing interpretation conditionals depends concept role names appear conditional statistical alc also representation invariant sense concepts hence changing syntactic representation conditionals change semantics particular entailment results independent changes satisfy following independence property whether depends conditionals connected query may simplify answering query reducing size order make precise need additional definitions arbitrary alc concept sig denotes set concept role names appearing conditionals directly connected written sig sig sig sig two conditionals directly connected iff share concept role names let denote transitive closure say connected iff restriction conditionals connected set using analogous definition queries qualitative conditionals get following result proposition independence consistent iff iff proof claims suffices show model model vice versa model let restriction concept role names still model particular conversely let model consistency model let interpretation defined disjoint union since share concept role names definition connectedness satisfies conditionals iff conditionals iff hence model particular holds another interesting property probabilistic logics continuity intuitively continuity states minor changes knowledge base yield major changes derived probabilities however demonstrated courtney paris condition strong reasoning maximum entropy model knowledge base problem arises probabilistic entailment problem example logics considered subjective probabilities problem occurs setting statistical probabilities demonstrate example consider knowledge base interpretation model consistent particular since interpreted whole domain know explained proposition deterministic 
conditionals correspond concept inclusions imply models therefore let denote knowledge base obtained decreasing upper bound first conditional arbitrarily small way satisfy first two conditionals interpreting empty set indeed interpretation interprets concept names empty set model consistent hence minor change probabilities knowledge base yield severe change entailed probabilities means relation consider continuous way either alternative strong notion continuity paris proposed measure difference kbs blaschke distance models blaschke continuity says kbs close respect blaschke distance entailed probabilities close blaschke continuity satisfied probabilistic logics maximum entropy probabilistic entailment probabilistic interpretations probability distributions finite number classical interpretations distance two interpretations distance corresponding probability vectors apply definition interpret conditionals means classical interpretations clear reasonable definition distance two classical interpretations leave search reasonable topology space classical interpretations future work statistical proposition fact reasoning alc show reasoning problems however find upper bounds complexity reasoning alc far therefore focus fragments alc begin focus sublogic alc allow negation universal quantification formally concepts constructed grammar rule statistical statistical alc conditionals restricted concepts notice due upper bounds conditionals statistical kbs capable expressing weak variants negations instance statement restricts every model contain least one element thus contrary classical statistical kbs may inconsistent example consider since every model must satisfy clearly contradiction thus inconsistent interestingly though possible simulate valuations finite set propositional formulas wit help conditional statements thus satisfiability problem least even statistical theorem satisfiability problem statistical proof provide reduction problem decidw ing validity formula let formula conjunction three literals construct statistical follows let set variables appearing every use two concept names addition every clause introduce concept name create additional concept name consider holds valid iff inconsistent hand consistency decided exponential time reduction integer programming describing reduction detail introduce simplifications recall proposition conditionals form equivalent classical gci thus following often express statistical kbs pairs classical tbox finite set gcis set conditionals statistical said normal form gcis form conditionals form informally normal form one constructor used gci conditionals atomic concept names every transformed equivalent one original signature linear time using normalization rules introducing new concept names complex concepts appearing conditionals precisely replace conditional form statement two fresh concept names extend tbox axioms main idea behind consistency algorithm partition finite domain model different types define use integer programming verify logical conditional constraints satisfied let denote set concept names appearing call subset type intuitively type represents elements domain interpreted belong concept names concept name denote set types simplify presentation following treat concept name belongs types given statistical normal form consider integer variable every type variables express number domain elements belong corresponding type addition used represent total size domain build system linear inequalities variables follows first require variables 
value least sizes types add exactly size domain ensure conditional statements satisfied adding statement constraint finally must ensure types satisfy logical constraints introduced tbox gci states every element belongs must also belong means types containing excluding populated thus introduce inequality dealing existential restrictions requires checking different alternatives solve creating different linear programs gci implies whenever exists element must also exist least one elementp thus satisfy axiom either empty hence every existential restriction form define set deal gcis form follow similar approach together ideas completion algorithm classical every pair existential restrictions define set intuitively whenever exists element belongs case gcis belong tbox must exist element belongs call hitting sets choices program integer program containing inequalities choice get following result lemma consistent iff exists program satisfiable proof direction since inequalities sound semantics statistical kbs focus direction given solution integer program construct interpretation follows create domain elements partition every type class containing exactly elements every nonempty class select representative element interpretation function maps every concept name set given class let type every notice must exist solution must satisfy least one restriction define set remains shown model notice two concept names holds hence given conditional statement since solution must satisfy inequality holds gci inequality follows every type containing hence every every construction element finally construction exists type axiom every gci implies hence means notice construction produces exponentially many integer programs uses exponentially many variables measured size since satisfiability integer linear programs decidable polynomial time size program obtain exponential time upper bound deciding consistency statistical kbs theorem consistency statistical kbs ime reasoning open minded kbs order regain tractability restrict statistical kbs disallowing upper bounds conditional statements call knowledge bases open minded definition open minded kbs statistical open minded iff conditional statements scope section consider open minded kbs first obvious consequence restricting class kbs negations simulated fact every open minded consistent classical satisfied simple universal model theorem every open minded consistent proof consider interpretation interpretation function maps every concept name every role name easy see interpretation holds every concept hence satisfies gcis addition implies conditionals also satisfied recall intuitively conditionals specify proportion population satisfies given properties one interesting special case question likely observe individual belongs given concept table rules deciding add add add definition let open minded concept problem consists deciding whether show problem solved polynomial time previous section assume normal form additionally conditional statements latter assumption made since conditional statement equivalently replaced gci see proposition moreover checking complex concept equivalent deciding new concept name thus following consider problem deciding concept name normal form algorithm extends completion algorithm classification tboxes addition keep track lower bounds necessity relevant concept names algorithm keeps data structure set tuples form intuitively express tbox entails subsumptions respectively additionally keep function maps every element number intuitively expresses 
algorithm initializes structures structures updated using rules table case rule applied execution extends available knowledge either extended include one tuple lower bound increased latter case larger value kept function first three rules table standard completion rules classical remaining rules update lower bounds likelihood relevant concept names taking account logical relationship explained next rule applies obvious inference associated conditional statements individuals belong states least belong also thus assuming lowest proportion elements possible proportion elements must least expresses every element must also belong must least many elements finally deals fact two concepts proportionally large must necessarily overlap example individuals belong belong least must belong otherwise together would cover whole domain algorithm executes rules saturation rule applicable saturated decide function follows iff showing correctness algorithm show important property notice likelihood information never transferred roles reason existential restriction guarantee existence one element belonging concept proportionally number elements belong tends example consider construct interpretation rin easy see model thus best lower bound correctly given algorithm theorem correctness let function obtained application rules saturation iff proof sketch easy see rules sound proves direction converse direction consider finite domain interpretation concept names rules satisfied interpretation obtained recursively considering last rule application updated assume domain large enough number concept names appearing easy see interpretation satisfies conditional statements gcis every concept name create new domain element extend interpretation iff given role name define interpretation satisfies thus algorithm correctly decide given concept name remains shown process terminates polynomially many rule applications guarantee impose ordering rule applications first apply classical rules rules applicable update function rules case rule update largest possible value applied first known polynomially many classical rules size applied deciding bound rule apply next requires polynomial time number concept names moreover since largest update applied first value changed every concept name hence linearly many rules applied overall means algorithm terminates polynomially many rule applications yields following result theorem deciding related work years various probabilistic extensions description logics investigated see instance one closest approach type extension alc proposed appendix briefly introduces probabilistic constraints form alc concepts correspond conditionals respectively conversely conditional rewritten probabilistic constraint however subtle fundamental difference semantics definition allows probability distributions arbitrary domains consider uncertainty domain comes allowing finite domains uniform distribution domain approach restricts class models one fundamental difference two approaches proposition hold reason conditional satisfied interpretation contains element probability difference main reason ime algorithm proposed lutz transferred setting suffice consider satisfiable types independently implicit subsumption relations may depend conditionals example consider statistical follows every element must also belong hence every domain element must element however defines satisfiable type interpreted model generated approach conclusions introduced statistical alc new probabilistic extension description logic alc 
statistical reasoning analyzed basic properties logic introduced reasoning problems interested first step towards effective reasoning statistical alc focused sublogic alc classical form allows reasoning showed upper bounds conditional constraints make satisfiability problem statistical gave ime algorithm decide satisfiability showed tractability regained disallowing strict upper bounds conditional statements going provide algorithms complete picture complexity reasoning statistical alc fragments future work combination integer programming principle may fruitful design first algorithms reasoning full statistical alc references baader brandt lutz pushing envelope kaelbling saffiotti eds proc int joint conf artificial intelligence ijcai beierle finthammer potyka extending completing probabilistic knowledge beliefs without bias intelligenz ceylan bayesian ontology language autom reasoning grove halpern koller random worlds maximum entropy logic computer science lics proceedings seventh annual ieee symposium ieee halpern analysis logics probability artificial intelligence hansen jaumard probabilistic satisfiability kohlas moral eds handbook defeasible reasoning uncertainty management systems vol springer netherlands klinov parsia pronto practical probabilistic description logic reasoner uncertainty reasoning semantic web springer koller levy pfeffer tractable probablistic description logic lukasiewicz probabilistic logic programming conditional constraints acm trans comput logic jul lukasiewicz straccia managing uncertainty vagueness description logics semantic web jws lutz probabilistic description logics subjective uncertainty proc aaai press niepert noessner stuckenschmidt description logics ijcai nilsson probabilistic logic artificial intelligence february paris uncertain reasoner companion mathematical perspective cambridge university press potyka probabilistic reasoning description logic alcp principle maximum entropy international conference scalable uncertainty management springer potyka thimm probabilistic reasoning inconsistent beliefs using inconsistency measures ijcai riguzzi bellodi lamma zese probabilistic description logics distribution semantics semantic web
| 2 |
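Editor's note: the row above reduces consistency of a statistical-ALC knowledge base to (integer) linear constraints over the counts of domain "types". The minimal sketch below is the editor's illustration of that idea, not code from the paper; the concept names A and B, the marker TOP for the whole domain, the brute-force search over small domains, and the bound max_domain are all assumptions made only for this example.

from itertools import product

# Types over concept names A, B; a finite interpretation is summarised by
# how many domain elements realise each type.
TYPES = [frozenset(), frozenset({"A"}), frozenset({"B"}), frozenset({"A", "B"})]

def has(t, name):
    return name == "TOP" or name in t        # "TOP" stands for the whole domain

def satisfies(counts, conditionals):
    # conditionals: list of (C, D, l, u) read as (C|D)[l,u],
    # i.e.  l * |D| <= |C and D| <= u * |D|
    for c, d, l, u in conditionals:
        d_card = sum(cnt for t, cnt in counts.items() if has(t, d))
        cd_card = sum(cnt for t, cnt in counts.items() if has(t, d) and has(t, c))
        if not (l * d_card <= cd_card <= u * d_card):
            return False
    return True

def consistent(conditionals, max_domain=12):
    # search all domain sizes up to max_domain for a witnessing count vector
    for n in range(1, max_domain + 1):
        for split in product(range(n + 1), repeat=len(TYPES) - 1):
            if sum(split) > n:
                continue
            counts = dict(zip(TYPES, list(split) + [n - sum(split)]))
            if satisfies(counts, conditionals):
                return counts
    return None

print(consistent([("B", "A", 0.9, 1.0)]) is not None)                   # True
print(consistent([("B", "A", 0.9, 1.0), ("B", "A", 0.0, 0.1),
                  ("A", "TOP", 0.5, 1.0)]) is not None)                 # False

The second call mirrors the flu example in the row above: once the conditioned concept is forced to be non-empty, two incompatible bounds on the same conditional render the knowledge base inconsistent.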
finding optimal nets kirigami oct departamento faculdade universidade lisboa lisboa portugal centro computacional universidade lisboa lisboa portugal department physics university aveiro aveiro portugal ioffe institute petersburg russia shells synthesized spontaneous templates interconnected panels called nets yield maximized following sequentially two design rules maximum number vertices cut minimum radius gyration net previous methods identify optimal net based random search thus limited simple shell structures guaranteeing unique solution show optimal net found using deterministic algorithm map connectivity shell shell graph nodes links graph represent vertices edges shell respectively applying design rule corresponds finding set maximum leaf spanning trees shell graph applied straightforwardly method allows designing much larger shell structures also apply additional design rules complete catalog maximum leaf spanning trees obtained pacs numbers synthesis polyhedral shells micron nano scales key encapsulation drug delivery inspired japanese art kirigami hollowed structures obtained cutting folding sheet paper lithographic methods developed form shells twodimensional nanometer templates interconnected panels potential enormous wide range shapes sizes obtained ideally unfolded templates nets spontaneously target structure reduce production costs achieve parallel production many nets fold structure time effectiveness pathways may differ orders magnitude finding optimal net maximizes yield simple process depends geometry net physical properties interactions surrounding medium experiments pandey suggest maximum yield obtained entire set nets one first picks nets maximum number vertices cut nets lowest radius gyration vertex cut one whose number adjacent faces net polyhedral shell implementation global search ref implies considering possible cuts shell inefficient procedure time consuming technically demanding shapes impossible since show number possible cuts rapidly grows number edges example twelve edges cube possible cuts dodecahedron thirty edges million possible cuts consequently previous methods identify optimal net sufficiently large shell based random searches configuration space consider subset actually small subset possible nets guarantee unique globally optimal solution propose deterministic procedure identify optimal net requires generating subset possible cuts let exemplarily consider case cubic shell shown fig structure shell mapped shell graph black nodes links figure nodes represent vertices links represent edges second graph also defined face graph whose nodes faces links connect pairs adjacent faces blue graph every net cubic shell corresponds spanning tree face graph connected includes nodes minimum number links see fig nets obtained set cuts along edges shell graph constraint set nodes face graph remains connected cut defined shell graph contains removed links cut edges represented red figs consists nodes spanning tree shell graph main advantage mapping proposed makes possible implement systematic deterministic way two design rules discuss apply mapping identify cuts maximize number vertices cut rank increasing radius gyration first criterion vertices cut nodes unitary degree cut known leaves since cut spanning tree shell graph cuts maximize number vertices cut maximum leaf spanning trees mlst shell graph identify full set mlsts first identify minimum connected archimedean solids shell edges obtained dependence consistent predicted linear dependence solid line simple 
relation one estimate upper bound number mlsts nmlst function exact number spanning trees nst given kirchhoff theorem states total number labeled spanning trees given product eigenvalues laplacian matrix nst number vertices excluded product nevertheless also get upper bound nst function accordingly show supplemental material estimate upper bound ratio nmlst given nmlst fig net cubic shell cubic shell mapped shell graph black nodes links graph shell vertices edges respectively net obtained sequence edge cuts red links cut defined shell graph consisting set removed links nodes red face graph defined nodes faces cube links connect pairs adjacent faces blue nodes links cut net spanning trees shell face graphs respectively inant sets shell graph minimum sets nodes nodes either part set directly connected minimum sets identified obtaining mlsts straightforward remaining nodes leaves see details supplemental material algorithm identify mlst simpler efficient ones see ref find full list mlst others figures show three examples threedimensional shell structures one nets corresponding cuts mlsts examples number spanning trees mlsts increase number shell edges see figure caption values however fraction spanning trees mlst decays exponentially number shell edges shown fig understand result first estimate number leaves mlst scales shown detail supplemental material assuming shell convex polyhedral regular faces simplified approximated procedure identify mlst predict figure shows function platonic also consistent orders magnitude observed exponential dependence shown fig fast decay reinforces necessity deterministic method since chances obtaining mlst random search given ratio example dodecahedron twelve faces less million spanning trees mlsts obtain mlst one would need randomly sample configurations average largest shell considered number larger thus identifying optimal net random methods practically impossible large shells strategy proposed also consider open structures without one faces ones shown figs shells might relevant several applications involving example encapsulation drug delivery mechanism shell graph structures equal corresponding polyhedron however every cut includes edges adjacent missing face connecting two nodes face identify optimal net follow procedure constraint edges missing face polyhedron always cut note case cut longer tree edges missing face form loop possible loop cut split net pieces see supplemental materials details second criterion second criterion proposed ref select among possible cuts maximum number vertices cut one corresponding net lowest radius gyration apply criterion label individual nodes mlst irrelevant need identify first subset cuts fig five examples shells one nets corresponding cut maximum leaf spanning trees tetrahedron four faces nine edges four maximum leaf spanning trees one dodecahedron twelve faces thirty edges maximum leaf spanning trees small rhombicuboctahedron faces edges maximum leaf spanning trees open cubic shell five faces twelve edges one maximum leaf spanning tree small rhombicuboctahedron top nine faces removed faces edges nodes remaining maximum leaf spanning trees red circles nets indicate vertices cut platonic archimedean nmlst nst platonic archimedean fig fraction spanning trees nst maximum leaf spanning trees nmlst function number shell edges ratio calculated total shells including platonic solids archimedean solids shell edges see table supplemental material solid line corresponds estimation given subset cuts one get one another 
relabeling nodes faces much smaller subset example cubic shell mlsts identified four identify rely concept adjacency matrix aij graph nodes defined matrix aij either unity nodes connected zero otherwise two graphs isomorphic adjacency matrix one made equal fig number leaves function number shell edges number calculated total shells including platonic solids archimedean solids shell edges see table supplemental material solid line corresponds estimation given set line column swaps note swap corresponds relabeling nodes thus swap needs done lines columns figure shows study truncated icosahedron also known soccer ball buckyball vertices faces edges shell possible cuts mapping shell graph identify cuts maximum rgmin rgmin rank rank fig truncated icosahedron also known soccer ball buckyball spectrum radii gyration mlsts mlsts ordered increasing radius gyration mlst optimal rank intermediate rank largest rank radius gyration ber leaves leaves identify nonisomorphic corresponding nets calculate radius gyration net first obtain centroid defined total area shell faces calculate radius gyration respect centroid figure shows spectrum radii gyration nets truncated icosahedron ranked increasing radius gyration figures show nets ones lowest optimal intermediate maximum radius gyration rapidly increases position rank less optimal net fig radius gyration higher one optimal net fig previous methods identify optimal net based random search using methods even obtained net corresponds one maximum number vertices cut probability radius gyration differs less optimal one see inset fig since timescale yield depends strongly radius gyration efficiency approximated solution obtained previous methods based random search likely far optimal conclusions proposed method identify optimal net spontaneously closed open polyhedral shell structures method consists mapping shell structure shell graph sequentially apply two design rules identify cuts maximize number vertices cut among one minimum radius gyration adapting concepts methods graph theory show optimal solution obtained deterministic systematic manner previous methods based random search thus providing unique solution showed fraction cuts identified rule decays exponentially number edges polyhedron reinforcing necessity deterministic method also method proposed since complete list possible cuts respecting rule obtained design rules alternative rule implemented straightforwardly conjectured nets convex shells necessary condition obtaining template examples studied optimal shells nets concave shells likely overlap fact conjecture necessary optimal net identified tested selected net overlaps one proceed rank increasing radius gyration pick first net nets maximum leaf spanning trees obtained rule overlap one proceed iteratively considering spanning trees less less leaves found first overlap identifying maximum leaf spanning tree problem numerical complexity still grow rapidly number shell vertices number vertices large straightforward implementation deterministic algorithm spirit approach approximated algorithms used identify spanning trees number leaves close maximum suggest deterministic algorithm variations could used searching optimal design production even complex selfassembling systems acknowledge financial support portuguese foundation science technology fct contract grant nmaraujo fernandes gracias polymeric containers encapsulation delivery drugs adv drug deliv rev shim perdigou chen bertoldi reis encapsulation structured elastic shells pressure proc natl acad 
sci filippousi altantzis stefanou betsiou bikiaris angelakeris pavlidou zamboulis van tendeloo polyhedral iron oxide nanoparticles biodegradable polymeric matrix preparation characterization application magnetic particle hyperthermia drug delivery rsc adv sussman cho castle gong jung yang kamien algorithmic lattice kirigami route pluripotent materials proc natl acad sci zhang yan nan xiao liu luan wang yang wang ren liu yang wang guo luo wang huang rogers mechanically driven form kirigami route mesostructures proc natl acad sci lamoureux lee shlian forrest shtein dynamic kirigami structures integrated solar tracking nat commun collins science culture kirigami technology cut fine figure together proc natl acad sci jacobs frenkel structures addressable complexity chem soc whitesides grzybowski scales science leong lester koh call gracias surface polyhedra langmuir azam laflin jamal fernandes gracias micropatterned polymeric containers biomed microdevices pandey ewing kunas nguyen gracias menon algorithmic design polyhedra proc natl acad sci see supplemental material fernau kneis kratsch langer liedloff raible rossmanith exact algorithm maximum leaf spanning tree problem theor comput sci fujie exact algorithm maximum leaf spanning tree problem comput oper res lucena maculan simonetti reformulations solution algorithms maximum leaf spanning tree problem comput manag sci ravi approximating maximum leaf spanning trees almost linear time algorithms bonsma lowski algorithm finding spanning tree maximum number leaves algorithmica finding optimal nets kirigami supplemental material costa dorogovtsev mendes correspondence nets spanning trees net polyhedron corresponds cut along edges spanning tree shell graph unfold polyhedral shell net cut must reach every vertex shell graph must connected polyhedron faces remain connected single component cut contain loops subgraphs span every vertex connected contain loops spanning trees therefore maximizing number cut vertices net equivalent maximizing number leaves spanning tree maximum leaf spanning tree maximum leaf spanning tree mlst problem extensively studied scope graph theory computer science consist finding spanning tree largest possible number leaves given undirected unweighted graph finding mlst determining number leaves mlsts generic graph well known problem describe simple exact algorithm find full set labeled mlsts arbitrary undirected unweighted graph notice algorithms mlst problem typically find single optimal tree algorithm provides possible labeled mlsts number spanning trees mlsts grow exponentially graph size computation time algorithm lists mlsts grows least quickly number mlsts dominating set graph subset vertices vertices graph either belong set connected least one vertex set addition vertices together edges among form connected subgraph set called connected dominating set clearly vertices spanning tree form connected dominating set therefore finding maximum number leaves spanning tree equivalent determining minimum size connected dominating set rest section subtree graph whose vertices form connected dominating set called dominating subtree note vertices subtree connected definition algorithm order list mlsts use search algorithm finding full set dominating subtrees exactly vertices subtrees vertices connected dominating set graph start checking dominating subtrees vertices exist single vertex connected vertices original graph tree iteratively increase search search stops set dominating subtrees vertices minimum dominating subtrees 
interiors mlsts mlsts without leaf vertices respective edges finalize construction mlsts attach remaining vertices obtained minimum dominating subtree vertices leaves mlst every leaf vertex one edge original graph connecting dominating subtree one possible mlst particular interior subtree however leaf vertices may multiple edges original graph linking dominating subtree multiple mlsts interior subtree cases chose one possibilities leaf vertex since choices independent total number mlsts share particular interior subtree equals product numbers possibilities leaf vertex number different ways connecting leaf vertices particular dominating subtree algorithm recursively grows subtrees vertices edges graph starting single root vertex enumerates mlsts root vertex set vertices say algorithm uses roots one time consists specific arbitrary vertex neighbors dominating subtree vertices original graph must either subtree least one neighbor subtree find mlsts sufficient consider roots present work use vertex smallest degree neighbors set roots given size subtrees algorithm performs separate search root vertex need additional constraint avoid multiple counts mlsts one vertices introduce set vertices vexcl explicitly forbidden joining dominating subtree first search rooted first vertex say set vexcl empty algorithm enumerates mlsts vertex since vertex must leaf mlsts still found add vexcl second search rooted vertex never included dominating subtree insuring algorithm returns mlsts leaf vertex added vexcl third search made given graph vertices labeled let denote edge connects vertices eij following algorithms set vertices connected original graph denoted set provides complete information graph algorithm enumeration mlsts procedure initializes necessary variables calls function recursive algorithm time called recursive enumerates mlsts leaves supplied root vertex vertices vexcl leaves procedure list mlsts input list neighbors every vertex arbitrary vertex example vertex minimum degree append lst lst vexcl eri lst vexcl lst append lst lst vexcl append vexcl end end end procedure procedure list mlsts shown algorithm initializes set roots described current size searched subtrees initialized set lst store collection found mlsts starts empty set search mlsts larger smaller number leaves proceed set lst remains empty list vertices vexcl stores roots already used particular value initialized empty set every time incremented search performed recursive function recursive algorithm considered function recursive called procedure list mlsts root function recursive algorithm starts single vertex root recursively grows subtrees predetermined size vertices keeping track elements already tree border let lists vertices edges respectively currently list edges connected least one vertex exterior boarder algorithm currently exploring furthermore vexcl specific set vertices forbidden participate subtree vertices roots previous searches reaches target size vertices stops increasing vertices form dominating set algorithm enumerates possible ways joining vertex outside one one vertex edge different way making last connections represents different labeled spanning tree whose leaves vertices outside first stage number vertices growing subtree function recursive considers possibilities next edge addition subtree set adjacent edges possibilities recursive called updated note edge connects two vertices already added tree would close loop order keep vertices set vexcl outside edges lead vertices added update way recursive finds 
configurations subtrees vertices include root exclude vertices vexcl second stage finally recursive checks vertices form dominating set finishes construction spanning trees connecting leaves every possible way otherwise returns empty set search algorithm presented consider connected sets vertices subtrees check dominating furthermore consider subtrees include either specific arbitrary vertex least one neighbors subtree dominating every vertex graph fulfills requirement combination two strategies drastically reduces configurational space computation time algorithm recursive search function function generates possible subtrees vertices include root vertex exclude vertices vexcl dominating subtree function lists possible spanning trees obtained joining remaining vertices graph function recursive vexcl input list neighbors every vertex lists vertices edges currently list edges currently adjacent list vertices vexcl excluded final number vertices output list spanning trees least leaves spanning tree returned list edges add one edge ejk ejk else else continue end append ejk append eil vexcl append eil end end recursive vexcl append end return else dominating set expand corresponding spanning trees vertices vertices trees append eij end end end return else return end end function order edges picked outermost search recursive specified algorithm order produce result implementation function recursive picked edges using method effectively performed search instance edges picked fashion algorithm perform search instead algorithm described section listing mlsts defined labeled graphs finds set labeled mlsts original graph automorphisms relabeling results labeled graph set labeled mlsts may multiple copies unlabeled mlst different labelings isomorphic mlsts work consider nets polyhedra high regularity platonic archimedean solids vertices equivalent polyhedral graphs polyhedra vertices equivalent sets equivalent vertices automorphisms however geometrically distinct net entirely determined cut polyhedral graph along edges unlabeled spanning tree determine number distinct nets need disregard labels mlsts search isomorphisms set mlsts found algorithm check two labeled mlsts isomorphic simply check automorphic relabeling original graph maps one labeled mlst see section details typically algorithms mlst problem designed find number leaves mlst full set mlsts algorithms include multiple stages optimization include heuristics use approximated algorithms make initial guesses sophisticated approaches due complexity problem best knowledge fastest existing algorithms find one mlst graphs roughly vertices work find one mlst list also full set optimal nets unlabeled mlsts polyhedral graphs vertices number different optimal nets strongly depends details polyhedral graph instance truncated icosahedron truncated dodecahedron vertices edges faces however numbers optimal nets respectively table summarizes exact results obtained work algorithm presented namely number leaves mlst number optimal nets polyhedron considered table also shows figure optimal net minimizes radius gyration polyhedra finally mentioned several approximated algorithms find spanning trees high number leaves close maximum possible polyhedral graph large solve exact methods approximated algorithms find nopt nets tetrahedron octahedron cube octogonal piramid octogonal dipiramid truncated cube solution linear almost linear time shells holes consider problem finding optimal nets shells contain holes shells consisting faces polyhedron except one graph 
vertices edges shell polyhedral graph complete polyhedron difference edges adjacent missing face every cut effectively detaching face rest net intended edge adjacent face edge two vertices face reason vertex adjacent missing face singleedge cut vertex net since always two edges included cut subgraph cut edges presence hole pure tree contains single loop formed edges adjacent hole shell unfold net cut subgraph must reach vertices spanning connected also shell faces remain connected single component cut subgraph loop apart one surrounding hole cut subgraph consists loop adjacent hole loopless branches connected use following algorithm maximize number cut vertices nets shells contain holes algorithm simple adaptation algorithm calls recursive function algorithm gradually increasing allowed number vertices recursive function remains unchanged procedure version algorithm addition lists adjacencies supply list vertices adjacent hole instead single root vertex search starts subgraph already containing vertices edges adjacent hole vertices edges must present cuts optimal first call function recursive set edges adjacent hole set initialized edges connected vertices algorithm sake clarity algorithm denote set edges optimal cuts lst algorithm denote cuts presence holes cuts contain loop surrounding hole longer trees algorithm calls function recursive passes empty set fifth input argument set list vertices forced leaves called vexcl recursive use algorithm icosahedron dodecahedron truncated tetrahedron cuboctahedron snub cube rhombicuboctahedron truncated octahedron icosidodecahedron truncated cuboctahedron truncated icosahedron soccer ball truncated dodecahedron rhombicosidodecahedron snub dodecahedron triakis icosahedron pentakis dodecahedron min table optimal nets number leaves mlsts number distinct optimal nets unlabeled mlsts nopt nets obtained polyhedron algorithm numbers vertices faces edges also shown well optimal net smallest radius gyration cases nopt nets red circles indicate cut vertices algorithm enumeration optimal cuts shells hole procedure initializes necessary variables calls recursive function algorithm set cuts stores spanning subgraphs include edges adjacent hole maximize number leaves procedure list cuts hole input lists neighbors vertex vertices hole cuts cuts eij eij recursive cuts append cuts end end procedure estimations maximum number leaves aim section estimate number leaves mlst terms simplest possible polyhedral parameters preferably function number vertices edges exact number depends details graph determination requires actually solve maximum leaf spanning tree problem however obtain simple estimate considering local optimization algorithm finding approximated maximum leaf spanning trees algorithm iteratively grows tree progressively attaching vertices graph tree becomes spanning reaches vertices polyhedral graph following way seed iterative process connect highest degree vertex neighbors current tree spanning tree algorithm reaches end otherwise vertices already tree select one highest number neighbors still tree connect neighbors repeat step algorithm except highest degree vertex step every vertex added tree starts leaf attached vertex neighbors nonleaf vertices intermediate trees guaranteed tree well furthermore number leaves final spanning tree niter niter total number iterations due initial step number niter depends many vertices added tree iteration show following number vertices added tree iteration turns close regardless details polyhedral shell graph polyhedron 
triangular faces vertex degree selected step contributes new leaves vertex tree least one neighbor also tree however faces triangular two vertices connected share two common neighbors case selected vertex least neighbors already tree namely parent vertex plus two vertices common neighbors parent vertex therefore contributes new leaves convex polyhedra regular faces equilateral triangles squares regular pentagons etc sum internal angles attached vertex must smaller strongly constrains types faces attached vertex degree particular number triangles notice smaller feasible even triangles case sum angles equal let consider number new vertices added tree vertex selected step degree vertex mainly surrounded triangles least faces must triangles selected step times contribute new leaves tree vertex must triangles among faces contribute new leaves depending particular configuration faces finally vertex number triangles one hand triangles vertex never selected step neighbors still tree hand vertex triangular faces attached contribute new leaves due geometrical constraints average number new vertices added tree iteration essentially independent size local details polyhedral graph close furthermore expect arguments qualitatively hold also irregular convex polyhedra even internal angles face different sum average internal angles regular face assuming iteration adds approximately new vertices average growing tree tree becomes spanning niter iterations write highest degree polyhedral graph simplicity let use replacing niter get formula fits well general trend observed shown inset fig platonic archimedean spanning trees nst terms number edges labeled polyhedral graph numbers grow quickly size graph uniquely determined depend details graph obtain simple estimate calculate upper bounds nst nmlst use ratio estimator spanning tree edges total edges therefore upper bound number spanning trees nst number possible edges given binomial coefficient similarly maximum leaf spanning tree leaves total vertices upper bound nmlst number combinations vertices obtain estimation nmlst take ratio upper bounds replace approximations eqs respectively fig number leaf vertices maximum leaf spanning tree number edges polyhedra table different symbols represent different sets polyhedra solid line estimation provided inset shows number vertices dashed line estimation provided interestingly main panel fig shows dispersion points significantly smaller plot let recall euler polyhedron formula numbers vertices faces edges polyhedron respectively given different polyhedra different combinations one hand faces triangles case hand faces polyhedron many edges implies wide internal angles vertices attached three faces since sum angles must smaller convex polyhedra case taking account consider variety polyhedra different types faces use middle point estimation number vertices polyhedron edges replace finally obtain used stirling approximation figure clearly shows ratio nmlst decay number edges polyhedron simple form fits well decay nmlst nst platonic archimedean fig ratio total number spanning trees nst number maximum leaf spanning trees nmlst number edges polyhedra table different symbols represent different sets polyhedra ratio nmlst decays exponentially polyhedron size predicted formula plotted main panel fig solid line shows remarkable agreement results nmlst ratio nmlst estimate ratio nmlst numbers maximum leaf spanning trees nmlst cuts determine optimal nets polyhedron labels need find set distinct unlabeled mlsts however algorithm 
section distinguishes node label consider symmetries may exist polyhedral graph automorphisms automorphism labeled graph relabeling results graph polyhedral graph contains automorphisms set mlsts may contain isomorphisms multiple copies differently labeled unlabeled mlst isomorphic cuts correspond nets indistinguishable radius gyration second criterion optimization therefore need one member set isomorphic mlsts list optimal cuts determine two mlsts isomorphic employ adjacency matrix approach note approach valid isomorphic subgraphs mlsts adjacency matrix convenient representation labeled graphs element aij vertices labels connected edge otherwise aij switch labels pair vertices say simply switch rows columns matrix permutation labels mapped series switches pairs vertices number vertices find complete set automorphisms polyhedral graph comparing relabeled adjacency matrix original matrix relabeling automorphic similarly polyhedral graph represent mlst adjacency matrix element bij vertices connected edge mlst otherwise two mlsts cuts isomorphic automorphism polyhedral graph maps one mlst two cuts adjacency matrices unfold net automorphic relabelings map apply previously obtained automorphisms one matrices say compare relabeled matrix one relabeling gives two cuts isomorphic may discard one systematically compare mlsts remaining list others eliminate isomorphisms obtain full set nopt net distinct optimal nets polyhedron number labeled spanning trees kirchhoff theorem allows calculate exact number spanning trees labeled graph nst terms spectrum laplacian matrix laplacian matrix graph defined degree matrix diagonal matrix entry dii equal degree vertex adjacency matrix graph vertices matrix eigenvalues smallest theorem states total number spanning trees nst given product eigenvalues laplacian matrix nst excluded product connected graphs values nst used plot points fig fig main text obtained values nmlst obtained algorithm described section tetrahedron cube octahedron icosahedron dodecahedron nnets nst table comparison exact number nets nnets estimation nst regular convex polytopes top rows bottom rows side column table relative difference nnets nst numbers vertices edges denoted respectively include polytopes table show approaches quickly size polytope increases note everywhere else paper consider shells nets number nets shown ref wrong due mistake calculation graph spectrum correcting mistake gives exactly nets particular case actual number nets coincide exactly number cuts number cuts nets nnets polyhedron automorphisms equal nst polyhedral graph automorphisms number distinct nets actually smaller nst see section case exact number nets nnets obtained using approach ref involves detailed case case analysis polyhedron ref nnets obtained five platonic solids nevertheless estimate nnets polyhedron automorphisms archimedean solids high precision taking ratio nst number automorphisms graph naut ratio nst fact nnets ratio assumes unlabeled spanning tree contributes naut differently labeled copies set labeled spanning trees however spanning trees smaller number isomorphic copies due existence symmetries branches since naut found linear time algorithms calculation nst straightforward table clearly shows actually approaches nnets quickly large graphs happens fraction spanning trees symmetry structure quickly approaches size graph increases figure demonstrates number distinct nets nnets nst grows exponentially size polyhedral graph nst naut platonic archimedean nopt nets naut nst duced supplemental 
material fig generate nets method avoids sampling isomorphic cuts completely thus reducing search space probability randomly sampled net maximum number cut vertices nopt nets would still essentially nmlst figure shows probability net sampled random set nets maximum number cut vertices nopt nets nopt nets naut observe significant differences figs means searches unlabeled labeled configurations similar performances small fraction configurations sampled platonic archimedean fig precise estimations number distinct nets nnets nst fraction optimal nets nopt nets nopt nets naut number edges polyhedra table number nnets grows exponentially shows remarkably low level dispersion solid line plot panel fraction unlabeled optimal cuts nopt nets essentially equal fraction labeled optimal cuts nmlst see fig probability randomly sampled labeled spanning tree mlst equal ratio nmlst shown fig main text nmaraujo garey johnson computers intractability vol freeman company new york rosamond max leaf spanning tree encyclopedia algorithms springer fernau kneis kratsch langer liedloff raible rossmanith exact algorithm maximum leaf spanning tree problem theor comput sci fujie exact algorithm maximum leaf spanning tree problem comput oper res lucena maculan simonetti reformulations solution algorithms maximum leaf spanning tree problem comput manag sci bonsma lowski algorithm finding spanning tree maximum number leaves algorithmica ravi approximating maximum leaf spanning trees almost linear time algorithms buekenhout parker number nets regular convex polytopes dimension discrete math doob sachs spectra graphs theory applications wiley new york hopcroft wong linear time algorithm isomorphism planar graphs preliminary report proc annual acm symp theory computing acm
| 8 |
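Editor's note: the row above counts labelled spanning trees of a shell graph with Kirchhoff's matrix-tree theorem, these trees being the candidate cuts of the shell. The sketch below is the editor's illustration (not the authors' code) of the cofactor form of that theorem applied to the cube's shell graph; the bit-flip construction of the cube graph and the use of numpy are assumptions of the example only.

import numpy as np
from itertools import combinations

def cube_graph():
    """Cube (shell) graph: vertices are 3-bit integers, edges join vertices
    differing in exactly one bit."""
    vertices = list(range(8))
    edges = [(u, v) for u, v in combinations(vertices, 2)
             if bin(u ^ v).count("1") == 1]
    return vertices, edges

def spanning_tree_count(n, edges):
    """Kirchhoff / matrix-tree theorem: the number of labelled spanning trees
    equals any cofactor of the graph Laplacian."""
    lap = np.zeros((n, n))
    for u, v in edges:
        lap[u, u] += 1
        lap[v, v] += 1
        lap[u, v] -= 1
        lap[v, u] -= 1
    return int(round(np.linalg.det(lap[1:, 1:])))

verts, edges = cube_graph()
print(len(edges), spanning_tree_count(len(verts), edges))
# prints: 12 384  -- twelve edges and 384 labelled spanning trees

Under the correspondence described in the row above (cuts of the cubic shell are spanning trees of its shell graph), the value 384 is the number of possible cuts of the cube.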
mar classification linearly reductive finite subgroup schemes mitsuyasu hashimoto department mathematics okayama university okayama japan dedicated professor ngo viet trung occasion sixtieth birthday abstract classify linearly reductive finite subgroup schemes algebraically closed field positive characteristic conjugation corollary prove correspondence isomorphism class gorenstein complete local rings coefficient field correspondence sym introduction classification finite subgroups dor section see theorem group corresponds dynkin diagram type singularity gorenstein rational quotient singularity finite subgroup singularities also called kleinian singularities classified via subgroups see dur indeed singularity mathematics subject classification primary secondary key words phrases group scheme kleinian singularity invariant theory gorenstein rational quotient singularity finite subgroup known characteristic version rational singularity precisely algebra field characteristic zero rational singularities modulo reduction almost prime numbers har complete local gorenstein rings algebraically closed field characteristic classified using dynkin diagrams based artin classification rational double points art see might well ask whether ring obtained invariant subring finite subgroup considering question consider several things first finite subgroup small sense element called pseudoreflection rank important studying ring invariants small finite subgroup recovered completion sym sense fundamental group spec unique closed point moreover category maximal modules canonically equivalent category yos however case char indeed finite subgroup may transvection called transvection nilpotent even subgroup may formal power series ring see proposition next even finite subgroup ring invariants sym may indeed singh sin proved alternating group acting canonically sym strongly char divide order generally yasuda yas proved small subgroup ring invariants sym strongly regular char divide order want classify subgroups order divisible char easy see must small classification known see theorem result except small divides order allowed precisely type must divide must divide must type respectively however restriction classification gorenstein complete local rings different arbitrary type respectively purpose paper show gap occuring type comes group schemes shown theorem corollary show gorenstein complete local rings algebraically closed coefficient field appear ring invariants action linearly reductive finite subgroup scheme see corollary already pointed artin art type trivial order group restriction new paper case moment author know recover group scheme although classification gorenstein singularities classification seems nontrivial author result recover sense correspondence key proof sweedler theorem theorem states connected linearly reductive group scheme field positive characteristic abelian author thanks professor watanabe valuable advice preliminaries let field denote ring say affine algebraic scheme linearly reductive semisimple lemma let exact sequence affine algebraic schemes linearly reductive linearly reductive proof prove part spectral sequence jan degenerates assumption thus required prove part first given short exact sequence hmodules also short exact sequence restriction assumption hence thus short exact sequence linearly reductive next prove linearly reductive let finite dimensional spectral sequence indg see jan affine indg jan linearly reductive assumption thus thus linearly reductive let element said swe set 
elements denoted note linearly independent let algebra antipode note subgroup unit group denote spec subgroup scheme spec note represents group rth roots unity reduced scheme char divides rest paper let algebraically closed affine algebraic group scheme let denote group characters representations note canonically identified see wat lemma let affine algebraic scheme following equivalent abelian product commutative linearly reductive linearly reductive simple diagonalizable closed subgroup scheme torus gnm coordinate ring coalgebra group ring finite direct product proof follows easily swe take finite dimensional faithful possible wat take basis kvi onedimensional embedding factors kvn gnm let laurent polynomial ring laurent monomial generated elements property obviously inherited quotient hopf algebra done apply fundamental theorem abelian groups easy category finitely generated abelian groups category diagonalizable schemes contravariantly equivalent equivalences spec diagonalizable scheme identified space nothing diagonalizable group scheme spec closed subgroup schemes correspondence quotient groups correspondence spec particular closed subgroup schemes since quotient groups following due sweedler theorem let connected linearly reductive affine algebraic kgroup scheme algebraically closed field positive characteristic abelian group hence diagonalizable isomorphisms form grm let affine algebraic scheme note spec gred gred gred reduced hence unit map spec inverse gred product gred gred factor gred gred closed subgroup scheme thus gred denote identity component connected component containing identity element gred homeomorphism gred connected component irreducible isomorphic spec irreducible easy see unit map inverse product factor hence closed open subgroup irreducible component image map given gng contained thus normal subgroup scheme map given gng factors inclusion gred surjective open immersion gred open subscheme gred finite semidirect product gred classification throughout section let algebraically closed field characteristic purpose section classify linearly reductive finite subgroup schemes conjugation starting point reduced case unfortunately author know proof theorem exactly stated proof dor section also works case positive characteristic see also chapter section theorem let algebraically closed field characteristic finite nontrivial subgroup assume order divisible conjugate one following denotes primitive rth root unity cyclic group generated binary dihedral group generated binary tetrahedral group generated binary octahedral group generated binary icosahedral group generated conversely resp zero resp defined linearly reductive finite subgroup order let linearly reductive finite subgroup scheme sequence gred exact gred linearly reductive lemma first consider case abelian vector representation direct sum two say hence may assume diagonalized thus also closed immersion assume abelian trivial gred classification case done theorem assume diagonalized since linearly reductive connected hence also abelian theorem consider case contained group scalar matrices case maschke theorem order gred odd according classification theorem gred must type cyclic shows abelian contradiction contained group scalar matrices note image easy see centralizer contained subgroup diagonal matrices assume abelian clearly cred index two gred shows order gred divided maschke theorem exists matrix gred taking conjugate obtain group scheme type see theorem appropriate conclusion following theorem let algebraically 
closed field arbitrary characteristic prime number let linearly reductive finite subgroup scheme conjugation agrees one following denotes primitive rth root unity group scheme lying subgroup scheme generated binary tetrahedral group generated binary octahedral group generated binary icosahedral group generated conversely linearly reductive finite subgroup scheme different type gives group scheme finite scheme define dimk theorem respectively independent hence case corollary let algebraically closed field positive characteristic let gorenstein complete local ring coefficient field linerly reductive finite subgroup scheme completion sym respect irrelevant maximal ideal isomorphic conversely group scheme completion sym gorenstein complete local ring coefficient field proof follows theorem list example let standard basis list theorem let case nothing deg deg degree component respect grading set easy see obviously quotient normal domain dimension two type case set group scheme type note quotient normal domain isomorphism hence normal easy see finite birational normal thus type cases constant groups omit proof remark note converse corollary also checked theoretically linearly reductive sym direct summand subring sym hence strongly thus completion also strongly see example gorenstein property consequence nevertheless moment author know theoretical reason recovered isomorphism class true seen result classification remark let set sym let completion respect irrelevant maximal ideal infinitesimal purely inseparable spec simply connected spec spec galois covering galois group gred fundamental group spec gred linearly reductive stated art references art artin coverings rational double points characteristic complex analysis algebraic geometry dedicated kodaira baily shioda eds cambridge dor dornhoff group representation theory part ordinary representation theory dekker dur durfee fifteen characterizations rational double points simple critical points enseignement math har hara characterization rational singularities terms injectivity frobenius maps amer math hashimoto equivariant twisted inverses foundations grothendieck duality diagrams schemes lipman hashimoto lecture notes math springer hashimoto homomorphisms strong comm algebra huneke leuschke two theorems maximal modules math ann leuschke wiegand representations ams jan jantzen representations algebraic groups second edition ams karagueuzian symonds module structure group action polynomial ring finiteness theorem amer math soc sin singh failure certain rings invariants illinois math smith rings rational singularities amer math swe sweedler hopf algebras benjamin sweedler connected fully reducible affine group schemes positive characteristic abelian math kyoto univ wat waterhouse introduction affine group schemes springer yas yasuda pure subrings regular local rings endomorphism rings frobenius morphisms algebra watanabe yoshida multiplicity inequality multiplicity colength algebra yos yoshino modules rings cambridge
| 0 |
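The row above classifies linearly reductive finite subgroup schemes of SL_2 in positive characteristic and relates them, through rings of invariants, to Gorenstein strongly F-regular rational double points. A minimal worked instance of the type-A branch of that correspondence is sketched below in LaTeX; the diagonal embedding, the symbols x, y, u, v, w and the integer r are illustrative conventions chosen here, not notation fixed by the row, and the claim shown is the standard computation for the diagonalizable group scheme of r-th roots of unity.

```latex
% Type A_{r-1} instance: the group scheme of r-th roots of unity,
% linearly reductive in every characteristic (including p | r),
% embedded diagonally in SL_2 and acting on V = kx + ky.
\[
  \mu_r \;=\; \operatorname{Spec}\, k[t]/(t^{r}-1)
  \;\hookrightarrow\; SL_2,
  \qquad
  t\cdot(x,y) \;=\; (t\,x,\; t^{-1}y).
\]
% The invariant subring of Sym(V) = k[x,y] is generated by x^r, y^r, xy,
% subject to the single relation (x^r)(y^r) = (xy)^r:
\[
  k[x,y]^{\mu_r} \;=\; k[\,x^{r},\; y^{r},\; xy\,]
  \;\cong\; k[u,v,w]/(uv - w^{r}).
\]
% Completing at the irrelevant maximal ideal gives the A_{r-1} rational
% double point: a Gorenstein, strongly F-regular complete local ring with
% coefficient field k, regardless of whether the characteristic divides r.
```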
arxiv jun combining inclusion polymorphism parametric polymorphism sabine glesner karl stroetmann institut programmstrukturen und datenorganisation lehrstuhl goos karlsruhe postfach karlsruhe tel fax glesner siemens tel fax abstract show question whether term typable decidable type systems combining inclusion polymorphism parametric polymorphism provided type constructors unary prove result first reduce typability problem problem solving system type inequations result obtained showing solvability resulting system type inequations decidable introduction common agreement flexible type system needs contain inclusion well parametric polymorphism unfortunately flexibility type system causes type inference become hard even undecidable paper investigate problem checking terms presence inclusion polymorphism combined parametric polymorphism show case typability decidable provided type constructors unary result stated result paper used design new type systems combine inclusion polymorphism parametric polymorphism type systems kind interest programming languages particular result applicable programming language java allow parametric polymorphism probably future versions another area result applicable logic programming number type systems designed area systems implemented far either offer inclusion polymorphism impose stronger restrictions type system would based result stands result applied functional programming languages languages allow binary type constructor takes two types returns type functions mapping general tiuryn urzyczyn shown type inference type system combines inclusion polymorphism parametric polymorphism undecidable types hand type inference inclusion polymorphism combined nullary type constructors decidable presents algorithm called match solves type inequations case inequations nullary type constructors allowed fuh mishra introduce similar algorithm solve problem logic programming language based type system paper organized follows section contains definition type language section define terms moreover show question whether term reduced problem solving system type inequations solvability systems shown decidable section section concludes type language section first introduce language describing types since types behave many ways like terms also notion substitution notion defined subsection types types constructed type constructors type parameters set type constructors partially ordered ordering extended types definition ordered type alphabet ordered type alphabet tuple type alphabet finite set type constructors elements denoted partial order function assigning arity every type constructor definition types define types assume ordered type alphabet set type parameters given set types defined inductively write instead types denoted parameters denoted monotype type constructed without type parameters type par denotes set type parameters next extend relation set types definition subtype relation let ordered type alphabet let set types constructed subtype relation defined inductively holds iff min without provisons partial order shown counter example given next example assume ordering defined following chain inequations however problem caused incompatibility arity function ordering type alphabet definition compatible assume type alphabet given arity compatible ordering iff following condition satisfied type constructors min convention rest paper assume following ordered type alphabet given compatible lemma ordered type alphabet partial order proof need show relation reflexive antisymmetric 
transitive order prove reflexivity show types done via trivial induction prove antisymmetry assume show proof proceeds induction parameter know must since partial order induction hypothesis yields immediate prove transitivity assume given need prove proof proceeds induction parameter similarly obviously assumption yields min similarly assumption yields min since partial order induction hypothesis shows min since arity compatible min therefore min min immediate parameter substitutions types behave many ways like terms therefore also notion substitution since type parameters substituted rather variables substitutions called parameter substitutions parameter substitutions denoted capital greek letters definition parameter substitution parameter substitution finite set pairs form distinct parameters types interpreted function mapping type parameters types otherwise function extended types homomorphically use postfix notation denote result evaluating type write instead domain defined dom set parameters appearing range parameter substitution defined par par dom parameter substitution called parameter renaming iff form permutation set parameter substitutions composition defined holds type parameters parameter substitutions respect ordering lemma parameter substitution proof proof done induction following definition case obvious min using induction hypothesis relevant therefore terms define set terms first subsection subsection reduce question whether term solvability system type inequations definition terms assume set functions symbols set variables given every function symbol supposed arity definition terms set terms defined inductively set variables occurring term defined obvious inductive definition denoted var set empty called closed term set closed terms denoted definition signature signature string types signature communicated writing following assume every function symbol signature signature appropriate function symbol iff exists parameter substitution definition type assignment type annotation pair written term type type annotation called variable annotation variable finite set variable annotations variables pairwise distinct call type assignment type assignment regard function domain mapping variables types dom definition term notion term defined via binary relation taking first argument type assignment second argument type annotation definition done inductively appropriate term iff exist type assignment type read entails call type judgement type checking subsection reduce question whether term solvability system type inequations type inequation pair types written parameter substitution solves type inequation denoted system type inequations set type inequations parameter substitution solves system type inequations denoted iff solves every type inequation assume type assignment type annotation var dom define function ineq induction ineq system type inequations parameter substitution solve ineq iff inductive definition ineq given follows ineq assume signature given type parameters appropriately renamed new new parameters may occur neither signatures used construct ineq ineq ineq starting proofs soundness completeness transformation state definitions type assignment type annotation called hypothetical type judgement parameter substitution solves hypothetical type judgement iff holds type constraint either type inequation hypothetical type judgement parameter substitution solves set type constraints iff solves every type inequation every hypothetical type judgement written define rewrite 
relation sets type constraints least transitive relation assume signature given type parameters appropriately renamed new hypothetical type judgement given two rewrite rules used repeatedly set ineq derived easily seen induction furthermore rewrite relation satisfies following invariants proving invariants show suffice verify soundness completeness transformation theorem soundness transformation assume type assignment type annotation ineq proof since assumption ineq know ineq invariant shows definition implies theorem completeness transformation assume type assignment type annotation parameter substitution extended parameter substitution solution ineq proof implies since ineq invariant shows extended parameter substitution ineq proof according definition rewrite relation suffices consider following two cases assumption therefore showing according assumption therefore yields claim prove invariant need following lemma follows directly defs lemma suppose iff parameter substitution proof suffices consider following two cases corresponding definition relation assumption therefore define assume type parameters occurring signature occur dom since type parameters signature renamed according assumption latter implies lemma shows parameter substitution assume dom contains type parameters occurring signature dom dom define checking whether term want compute type assignment type holds end define general type assignment general type let var variables define distinct new type parameters claim set type constraints ineq solvable proof assume exists type assignment type define parameter substitution setting var therefore since ineq invariant shows exists parameter substitution ineq hand ineq theorem shows holds therefore problem whether term reduced problem solving systems type inequations solving systems type inequations section assume type constructors unary given ordered type alphabet show decidable whether system type inequations solvable end present algorithm effectively tests possible instantiations type parameters type inequations fact type constructors unary enables guarantee three important properties instantiation process create additional parameters increase overall number inequations depth terms type inequations increase therefore generate finitely many systems instantiated type inequations one systems solvable construct solution definitions start definitions necessary formulate algorithm checking solvability systems type inequations solvability equivalence type inequations system type inequations solvable denoted iff parameter substitution two type inequations equivalent denoted iff parameter substitution solves solves def type inequation equivalent true denoted true iff every parameter substitution solves equivalent false denoted false iff parameter substitution solves two systems type inequations equivalent denoted iff parameter substitution solves solves def next system type inequations equivalent set systems type inequations denoted iff solvable system solvable def proceed define depth type inductively depth type parameters depth nullary type constructors depth depth depth type inequation defined taking maximum depth max depth depth furthermore define depth true depth false function depth extended systems type inequations depth max depth depth parameter substitution defined depth max depth dom define depth empty parameter substitution system inequations solvable depth denoted iff closed parameter substitution depth definition function takes type inequation input either produces equivalent 
type inequation yields true false function defined inductively every type parameter true iff false else true iff false else true iff false else iff false else easy see holds every inequation extend function systems type inequations first define auxiliary function nfaux nfaux true function defined false false nfaux nfaux otherwise easy see system type inequations definition allparsubst next define function allparsubst input allparsubst finite set type parameters output set parameter substitutions dom depth par therefore allparsubst equal set dom depth par function allparsubst following properties allparsubst finite true type alphabet assumed finite therefore given finite set type parameter finitely many types depth par allparsubst must finite parameter substitution depth par exist parameter substitutions allparsubst dom depth prove assume depth must type constructor type depth depth assume depth depth define claim obvious allparsubst par assume dom par previous property shows written allparsubst par conversely substitution allparsubst par allparsubst par depth depth assume inequation maximal depth first assume going inequation either disappears form depth inequation greater depth original inequation next parameter must either forms going inequation either disappears form depth inequation greater depth original inequation remaining cases similar definition inst function inst transforms single system type inequations equivalent set systems type inequations defined inst allparsubst par false function inst following properties inst finite inst inst inst inst par par inst depth depth properties immediate consequences definition inst properties function allparsubst deciding type inequations present algorithm solving refuting systems type inequations algorithm maintains two sets systems inequations call theses sets serves memory systems type inequations already encountered contains systems type inequations derived application function inst algorithm initializes singleton system type inequations solved initialization algorithm enters loop loop compute inst update follows inst apply inst systems discard systems appear already memory solvable algorithm halts success becomes empty algorithm halts failure otherwise update reenter loop figure specifies algorithm formally lemma termination algorithm given figure terminates proof every system inequations number inequations less equal number inequations par par depth depth since type alphabet finite size must therefore bounded assume algorithm given figure terminate set never empty therefore every time loop executed statement increases number elements set size would increase beyond every bound input system type inequations solved loop inst return true return false goto loop figure algorithm deciding solvability type inequations lemma soundness assume proof proof given induction since must claim trivial assume inst implies lemma assume minimal property proof proof done induction obvious assume minimal property inst assume since therefore lemma shows since contradicts minimality shows assumption wrong proof complete theorem algorithm given figure correct proof assume solvable lemma find holds algorithm returns true assume solvable algorithm would return true since lemma would give therefore algorithm return true since terminates must return false conclusion paper presented type system supports inclusion polymorphism parametric polymorphism able prove type system typability decidable provided use unary type constructors practice many interesting type 
constructors either nullary unary unary type constructors occur naturally dealing container types types interpreted sets lists bags convenient able cast example lists sets done type system proposed mitchell possible type system introduced paper previously know type inference decidable system restricts inclusion polymorphism nullary type constructors negative side tiuryn urzyczyn shown type inference problem types undecidable shown paper typability decidable type systems unary type constructors still open question whether typability decidable case binary type constructors acknowledgement authors would like thank pawel urzyczyn pointing technical weaknesses earlier version paper references krzysztof apt elena marchiori reasoning prolog programs modes types assertions formal aspects computing christoph beierle concepts implementation applications typed logic programming language christoph beierle lutz editors logic programming formal methods practical applications chapter pages elsevier science christoph beierle type inferencing polymorphic logic programs leon sterling editor proceedings international conference logic programming mit press fuh prateek mishra type inference subtypes theoretical computer science patricia hill john lloyd programming language mit press hill topor semantics typed logic programs pfenning pages andrew myers joseph bank barbara liskov parameterized types java proceedings symposium principles programming languages pages acm press john mitchell coercion type inference annual acm symposium principles programming languages pages john mitchell type inference simple subtypes journal functional programming martin odersky philip wadler pizza java translating theory practice proceedings symposium principles programming languages pages acm press frank pfenning editor types logic programming mit press zoltan somogyi fergus henderson thomas conway mercury efficient purely declarative logic programming language proceedings australian computer science conference pages glenelg australia february jerzy tiuryn pawel urzyczyn subtyping problem secondorder types undecidable proceedings ieee symposion logic computer science lics pages eyal yardeni thom ehud shapiro polymorphically typed logic programs pfenning pages
| 6 |
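The row above reduces typability, for inclusion plus parametric polymorphism with only nullary and unary type constructors, to solving systems of type inequations, and then decides solvability with the nf/inst saturation loop of its final section. As a small illustration of the ground relation everything is built on, here is a hedged Java sketch of the structural subtype check over an ordered type alphabet; the toy alphabet (nat <= int, list <= set), the class names, and the omission of a transitive closure are assumptions made only for this example, and the sketch does not implement the paper's full decision procedure.

```java
import java.util.*;

final class TypeTerm {
    final String constructor;   // type constructor name, e.g. "int" or "list"
    final TypeTerm argument;    // null for nullary constructors, the single argument otherwise
    TypeTerm(String constructor, TypeTerm argument) {
        this.constructor = constructor;
        this.argument = argument;
    }
    static TypeTerm nullary(String c)           { return new TypeTerm(c, null); }
    static TypeTerm unary(String c, TypeTerm t) { return new TypeTerm(c, t); }
}

final class OrderedAlphabet {
    // maps each constructor to the set of constructors it is <= to
    // (reflexive by construction; transitive closure omitted for brevity,
    //  the demo below only uses directly declared pairs)
    private final Map<String, Set<String>> above = new HashMap<>();

    void declare(String c) {
        above.computeIfAbsent(c, k -> new HashSet<>(Set.of(c)));
    }
    void order(String smaller, String bigger) {   // declare smaller <= bigger
        declare(smaller);
        declare(bigger);
        above.get(smaller).add(bigger);
    }
    boolean leqConstructor(String c, String d) {
        return above.containsKey(c) && above.get(c).contains(d);
    }
    // Structural subtype relation for an alphabet of nullary and unary
    // constructors: f(s) <= g(t) iff f <= g and s <= t, arguments being
    // compared only up to the smaller of the two arities.
    boolean subtype(TypeTerm s, TypeTerm t) {
        if (!leqConstructor(s.constructor, t.constructor)) return false;
        if (s.argument == null || t.argument == null) return true;
        return subtype(s.argument, t.argument);
    }
}

public class SubtypeDemo {
    public static void main(String[] args) {
        OrderedAlphabet alphabet = new OrderedAlphabet();
        alphabet.order("nat", "int");    // nat <= int          (nullary)
        alphabet.order("list", "set");   // list(t) <= set(t')  (unary)

        TypeTerm listOfNat = TypeTerm.unary("list", TypeTerm.nullary("nat"));
        TypeTerm setOfInt  = TypeTerm.unary("set",  TypeTerm.nullary("int"));

        System.out.println(alphabet.subtype(listOfNat, setOfInt)); // true
        System.out.println(alphabet.subtype(setOfInt, listOfNat)); // false
    }
}
```

The decision algorithm of the row would sit on top of such a check: normalize each inequation with nf, instantiate remaining type parameters with inst over substitutions of bounded depth, record visited systems in a memory set, and iterate until either a solved system is reached (solvable) or the work set is exhausted (unsolvable).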
imag arxiv dec institut informatique grenoble lsr laboratoire logiciels rapport recherche jartege tool random generation unit tests java classes catherine oriat juin saint martin heres cedex france centre national recherche scientifique institut national polytechnique grenoble joseph fourier grenoble jartege tool random generation unit tests java classes catherine oriat grenoble email rapport jartege outil qui permet tests unitaires pour des classes java jml jml java modeling language est langage pour java qui permet des invariants pour des classes ainsi que des des pour des comme dans outil nous utilisons les jml une part pour des cas test non pertinents autre part comme oracle test jartege des cas test qui consistent une appels constructeurs des classes sous test aspect outil peut associant des poids aux classes aux nombre instances pour chaque classe sous test utilisation pratique jartege est par une petite cas test test unitaire cas test java jml abstract report presents jartege tool allows random generation unit tests java classes specified jml jml java modeling language specification language java allows one write invariants classes postconditions operations tool use jml specifications one hand eliminate irrelevant test cases hand test oracle jartege randomly generates test cases consist sequence constructor method calls classes test random aspect tool parameterized associating weights classes operations controlling number instances created class test practical use jartege illustrated small case study keywords testing unit testing random generation test cases java jml jartege tool random generation unit tests java classes catherine oriat grenoble email introduction main validation technique software engineering program testing aims ensuring program correct conforms specifications input domain program usually large infinite exhaustive testing consists testing program possible inputs general impossible objective testing thus rather improve software quality finding faults testing important activity software development whose cost usually estimated total cost software development exceeding cost code writing test campaign program requires several steps design development test sets execution results examination oracle considering cost testing interesting automate steps java programs junit framework jun allows developer write oracle test case automatically execute test sets junit particular permits automatically regression test several test sets formal specification available translated assertions checked runtime thus serve test oracle instance daists system compiles algebraic axioms abstract data type consistency checks rosenblum app allows programmer write assertions programs eiffel design contract approach integrates assertions programming language also interesting automate development tests distinguish two groups strategies produce test sets random systematic strategies systematic strategies functional testing structural testing consist decomposing input domain program several subdomains often called partitions many systematic strategies propose derive test cases formal specification instance dick faivre method consists constructing finite state automaton formal specification selecting test cases paths automaton used basis approaches particular casting bztt bztt uses specifications generate test cases consist placing system boundary state calling operation boundary value contrast systematic methods random testing generally use program specification produce test sets may use operational 
profile program describe program expected used utility random testing controversial testing community usually presented poorest approach selecting test data however random testing advantages make think could good complement systematic testing random testing cheap rather easy implement particular produce large large test sets detect substantial number errors low cost moreover operational profile program available random testing allows early detection failures likely appear using program used evaluate program reliability however partition testing much effective finding failures especially strategy defines small subdomains high probability cause failures among best practices extreme programming continuous testing code refactoring writing code developers write corresponding unit tests using testing framework junit unit testing often presented support refactoring gives developer confidence changes introduced new errors however code refactoring often requires change corresponding unit tests well class tested intensively amount code corresponding tests often exceeds amount code class practices therefore hard conciliate report proposes use random generation tests facilitate countinuous testing code refactoring context extreme programming presents jartege tool random generation unit tests java classes specified jml aims easily producing numerous test cases order detect substantial number errors low cost rest report organized follows section introduces approach section presents case study consists modeling bank accounts case study serve illustrate use testing tool section presents tool jartege used test bank account case study section introduces advanced features jartege allow one parameterize random aspect section shows errors detected test cases generated jartege case study section presents compares related approaches section discusses points random generation tests draw future work intend undertake around jartege approach jml java modeling language specification language java inspired eiffel vdm larch designed gary leavens colleagues jml several teams currently still working jml design tools around jml jml allows one specify various assertions particular invariants classes well postconditions methods jml compiler jmlc translates jml specifications assertions checked runtime assertion violated specific exception raised context given method call jml compiler makes useful difference entry precondition precondition given method internal precondition precondition operation called level given method tool generates junit test cases java program specified jml using jml compiler translate jml specifications test oracles test cases produced test fixture parameter values supplied user approach inspired tool propose generate random tests java programs specified jml using specification test oracle jmljunit way aim produce big number tests low cost order facilitate unit testing implement ideas started developing prototype tool called jartege java random test generator jartege designed generate unit tests java classes specified jml unit tests mean tests operations single class small cluster classes context test case java method consists sequence operation constructor method calls tool use jml specification assist test generation two ways permits rejection test sequences contain operation call violates operation entry precondition consider test sequences interesting detect errors correspond fault test program although sequences could used detect cases precondition strong goal would prevent producing long 
sequences calls thus choose trust specification rather code note operation uses method whose precondition strong produce internal precondition error sequence rejected specification also used test oracle test detects error another assertion invariant postcondition internal precondition violated error corresponds fault java program jml specification work influenced lutess tool aims deriving test data synchronous programs various generation methods particular purely random generation generation guided operational profiles good results obtained lutess best tool award first feature interaction detection contest encouraged consider random generation tests viable approach case study section present case study illustrate use jartege case study defines bank accounts operations accounts informal specification bank accounts describe informal specification bank accounts account contains certain available amount money balance associated minimum amount account may contain minimum balance possible credit debit account debit operation possible enough money account one several last credit debit operations may cancelled minimum balance account may changed bank account modeling represent accounts define class account two attributes balance min respectively represent balance minimum balance account three methods credit debit cancel order implement cancel operation associate account history linked list previous balances account class history one attribute balance represents balance associated account last credit debit operation history associated preceding history figure shows uml class diagram bank accounts hist account history balance min balance credit amount int debit amount int cancel prec figure uml diagram case study jml specification java implementation implement class account history java specify jml choose write lightweight public specifications emphasize specifications destined clients need complete order forbid uncontrolled modifications attributes declare private define associated access methods getbalance getmin gethist class account getbalance getprec class history methods specified pure methods jml side effect free allows use jml assertions class account invariant specifies balance account must always greater minimum balance class bank accounts public class account invariant class account public invariant getbalance getmin private int balance balance account private int min minimum balance private history hist history list account balance account public pure int getbalance return balance history list account public pure history gethist return hist minimum balance account public pure int getmin return min constructor class account constructs account specified balance specified minimum balance precondition asserts specified balance greater specified minimum balance constructs account specified balance minimum balance requires balance min public account int balance int min balance min null minimum balance min private attribute introduce method change value method setmin int min sets minimum balance specified value precondition asserts balance greater specified minimum value sets minimum balance specified value requires getbalance min public void setmin int min min method credit int amount credits account specified amount precondition requires amount positive postcondition asserts new balance former balance augmented specified amount new history created balance former balance account previous history former history account exceptional postcondition asserts method never terminate abruptly credits 
account specified amount requires amount ensures getbalance getbalance amount gethist gethist getbalance gethist gethist signals exception false public void credit int amount hist new history balance gethist balance balance amount debit operation similar credit operation detailed additional precondition balance decreased specified amount greater minimum balance method cancel cancels last credit debit operation precondition requires history null means least one operation credit debit taken place since account created postcondition ensures balance history account updated former values cancels last credit debit operation requires gethist null ensures gethist gethist getbalance gethist signals exception false public void cancel balance hist end class account define jml assertion class history class histories public class history private int balance balance history private history prec preceding history constructs history specified balance preceding history public history int balance history prec balance prec balance history public pure int getbalance return balance preceding history public pure history getprec return prec end class history jartege jartege java random test generator framework automatic random generation unit tests java classes specified jml approach consists producing test programs composed test cases test case consisting randomly chosen sequences method calls class test generated test program executed test classes later either corrected faults regression test tool designed produce unit tests tests composed calls methods belong classes noticed complex dependences exist classes programs usually possible test method class complete isolation jartege thus able generate test cases allow integration several classes practical use jartege suppose wish generate tests classes account history write following java program import jartege jartege test cases generator classes account history class testgen public static void main string args creates class tester classtester new classtester adds specified classes set classes test account history generates test class testbank made test cases test case tool tries generate method calls testbank main class jartege framework classtester class must instantiated allow creation test programs method addclass string classname adds class classname set classes test example wish generate tests classes account history method generate string classname int numberoftests int numberofmethodcalls generates file contains class called classname class composed numberoftests test cases test case tool makes numberofmethodcalls attempts generate method call one classes test using accessible constructors tool constructs objects serve parameters method calls program testgen executed produces file contains main program main program calls successively test methods test method contains method calls program generated jartege executes fly operation call allows eliminate calls violate operation precondition precondition strong may happen tool succeed generating call given method explains method calls generated test programs produced jartege test program produced jartege class main method consists calling sequentially generated test cases test case consists sequence constructor method calls classes test example test case test case number public void throws exception try account new account history new history history null history new history history int catch throwable except error except test method jml exception raised error message coming jmlc printed test program terminates 
printing assessment test example excerpt printed generated program error detected class testbank method method credit class account assertions specified number tests number errors number inconclusive tests program detected errors first error detected comes violation invariant class account specified line happened credit operation test program also indicates number inconclusive tests test case inconclusive allow one conclude whether program behaviour correct test program generated jartege indicates test case inconclusive contains operation call whose entry precondition violated jartege designed eliminate operation calls situation may arise code specification one classes test modified test file generated high number inconclusive tests indicates test file longer relevant controlling random generation leave everything chance jartege might produce interesting sequences calls jartege thus provides possibilities parameterize random aspect features useful stress testing instance want test intensively given method generally allow define operational profile classes test describe classes likely used components weights class operation class associated weight defines probability class chosen operation class called particular possible forbid call operation associating null weight default weights equal weight modified weight change method particular changeallmethodsweight string classname double weight changes weight methods class classname specified weight changemethodweight string classname string methodname double weight changes weight specified method class classname specified weight changemethodweight string classname string methodname string signature double weight changes weight method name signature specified weight creation objects objects creation commanded creation probability functions define probability creating new object according number existing objects class reusing already created object probability low jartege likely reuse already created object construct new one allows user either create predefined number instances given class opposite create numerous instances class example bank accounts interesting create many accounts possible test class account efficiently example creating unique account applying numerous method calls function changecreationprobability string classname creationprobability creationprobabilityfunction changes creation probability function associated specified class specified creation probability function interface creationprobability contains unique method double thefunction int nbcreatedobjects must satisfy condition thefunction thefunction class thresholdprobability allows one define threshold probability functions whose value threshold thefunction thefunction otherwise threshold probability function threshold allows one define instances given class instance forbid creation one instance account adding following statement test generator account new thresholdprobability parameter generation primitive types method strong precondition probability jartege without indication generate call method violate precondition low primitive types jartege provides possibility define generators parameters given method example precondition debit operation requires parameter amount range getbalance getmin let suppose range small words balance closed minimum balance jartege chooses amount debited entirely randomly amount likely satisfy method precondition jartege provides way generating parameter values primitive types operations define class jrt account follows import public class jrt 
account private account theaccount current account constructor public jrt account account theaccount theaccount generator first parameter operation debit int public int jrt debit int return class jrt account must contain private field type account contain current object operation class account applied constructor allows jartege initialize private field class also contains one parameter generation method parameter specify generation values example specify generation first parameter operation debit int amount define method int jrt debit int use signature operation name method allow overloading method int min int max chooses random integer range min max fixtures want generate several test cases operate particular set objects write test fixture similar way junit test fixture class contains attributes corresponding objects test operates optional setup method defines preamble test case typically constructs objects optional teardown method defines postamble test case applying jartege case study test cases generated jartege showing failures revealed three different errors one error caused credit operation two errors caused cancel operation extracted shorter sequence calls resulted failure obtained following results also changed parameter values added comments readability error credit operation produce balance inferior previous balance integer overflow public void account new account produces negative balance minimum balance error cancel operation produce incorrect result preceded setmin operation changes minimum balance account value superior balance cancellation public void account new account restores balance value inferior minimum balance error third error detected combination overflow debit operation similar error comes overflow credit operation second error public void account new account produces positive balance restores balance value inferior minimum balance feeling three errors detected test cases generated jartege totally obvious could easily forgotten manually developed test suite errors particular require three method calls executed specific order particular parameter values must noted case study originally written show use jml undergraduate students without aware faults comparison related work work widely inspired approach tool generates test cases method consist combination calls method various parameter values tester must supply object invoking method parameter values approach interesting values could easily forgotten tester moreover test case consists one method call possible detect errors result several calls different methods last approach compels user construct test data may require call several constructors approach thus advantage automatic able detect potential errors korat tool also based approach allows exhaustive testing method objects bounded size tools automatically construct non isomorphic test cases execute method test case korat therefore advantage able construct objects invoke method test however test cases constructed korat consist one object construction one method invocation object tobias combinatorial testing tool automatic generation test cases derived test pattern abstractly describes test case tobias first designed produce test objectives tgv tool adapted produce test cases java programs specified jml programs specified vdm main problem tobias combinatorial explosion happens one tries generate test cases consist couple method calls jartege designed allow generation long test sequences without facing problem combinatorial explosion discussion future work jartege 
infancy lot work remains done primitive values generation methods parameters currently done manually writing primitive parameters generating methods code methods could automatically constructed jml precondition method could consist extracting range constraints method precondition automatically produce method could generate meaningful values primitive parameters jartege easily constructs test cases consist hundreds constructors methods calls would useful develop tool extracting minimum sequence calls results given failure developed jartege java specified classes jml applied jartege classes produce test cases allowed experiment tool larger case study detect errors found much easier produce tests jartege write unit tests junit intend continue work specifying jartege jml testing classes hope real case study help evaluate effectiveness scalability approach comparison work testing strategies still remains done expect systematic methods using instance boundary testing bztt able produce interesting test cases goal certainly pretend tests produced randomly replace tests produced sophisticated methods carefully designed test set written experienced tester first goal developing jartege help developer write unit tests unstable java classes thus debug unit testing would also interesting use jartege evaluate reliability stable component released jartege provides features define operational profile component allow statistical testing however definition correct operational profile especially context programming difficult task moreover relation test sets generated jartege reliability component requires theoretical work one difficult point take account state component conclusion report presents jartege tool random generation unit tests java classes specified jml aim tool easily produce numerous test cases order detect substantial number errors without much effort designed produce automated tests part replace tests written developer using instance junit think automatic generation unit tests facilitate continuous testing well code refactoring context extreme programming jml specifications used one hand eliminate irrelevant test cases hand test oracle think additional cost specification writing compensated automatic oracle provided jml compiler long wish intensively test classes moreover approach advantage supporting debugging specification along corresponding program allows developer increase confidence specification use specification tools test generation methods deterministic approach statistical wish oppose approaches thinking advantages drawbacks combination could fruitful last found jml good language start learning formal methods syntax makes easy learn java programmers jml specifications included java source code comments easy develop debug java program along specification moreover automatic test oracles well automatic generation test cases good reasons using specification languages jml references lionel van aertryck marc benveniste daniel casting formally based software test generation method proceedings first ieee internatinal conference formal engineering methods icfem hiroshima japan pages november lilian burdy yoonsik cheon david cok michael ernst joe kiniry gary leavens rustan leino erik poll overview jml tools applications technical report department computer science university nijmegen march kent beck embracing change extreme programming ieee computer october kent beck extreme programming explained addison wesley kent beck erich gamma test infected programmers love writing tests java reports 
chandrasekhar boyapati safraz khurshid darko marinov korat automated testing based java predicates proceedings international symposium software testing analysis issta rome pages july yoonsik cheon gary leavens runtime assertion checker java modeling language jml hamid arabnia youngsong mun eds international conference software engineering research practice serp las vegas nevada pages csrea press yoonsik cheon gary leavens simple practical approach unit testing jml junit way boris magnusson european conference programming ecoop malaga spain number lecture notes computer science pages springerverlag june lydie bousquet farid ouabdesselam richier nicolas zuanon lutess testing environment synchronous software international conference software engineering icse los angeles usa acm press may lydie bousquet nicolas zuanon overview lutess tool testing synchronous software ieee international conference automated software engineering ase pages october jeremy dick alain faivre automating generation sequencing test cases specifications proceedings fme number lncs pages april joe duran simeon ntafos evaluation random testing ieee transactions software engineering phyllis frankl richard hamlet bev littlewood lorenzo strigini evaluating testing methods delivered reliability ieee transactions software engineering august gaudel testing formal proceedings tapsoft aarhus denmark number lncs pages springerverlag may john gannon paul mcmullin richard hamlet implementation specification testing acm transactions programming languages systems july richard hamlet random testing marciniak editor encyclopedia software engineering pages wiley dick hamlet ross taylor partition testing inspire confidence ieee transactions software engineering december thierry pierre morel test generation derived modelchecking proceedings international conference computer aided verification cav trento italy number lncs pages july jml java modeling language jml home page http jun junit home page http gary leavens albert baker clyde ruby preliminary design jml behavioral interface specification language java department computer science iowa state university yves ledru tobias test generator adaptation ase challenges position paper workshop state art automated software engineering ics technical report university california irvine usa gary leavens erik poll curtis clifton yoonsik cheon clyde ruby jml reference manual draft april bruno legeard fabien peureux mark utting automated boundary testing proceedings fme formal methods europe copenhaguen denmark number lncs pages springerverlag july yvan labiche pascale waeselynck durand testing levels software proceedings international conference software engineering icse limerick ireland pages acm june bertrand meyer software construction prentice hall bertrand meyer applying design contract ieee computer october olivier maury yves ledru pierre bontron lydie bousquet using tobias automatic generation vdm test cases third vdm workshop fme copenhaguen denmark july myers art software testing john wiley sons new york simeon ntafos comparisons random partition proportional partition testing ieee transactions software engineering july ioannis parissis test logiciels synchrones lustre phd thesis grenoble france septembre david rosenblum towards method programming assertions international conference software engineering icse ieee computer society press elaine weyuker bingchiang jeng analyzing partition testing strategies ieee transactions software engineering july
| 2 |
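The Jartege row above walks through a generator program for the Account/History case study and an API for weighting operations and bounding object creation, but the listings are flattened in this extract. Below is a hedged Java reconstruction of that generator program; the class and method names (ClassTester, addClass, generate, changeMethodWeight, changeCreationProbability, ThresholdProbability) are the ones the report describes, while the package layout, exact signatures and the concrete numbers used here are assumptions made for illustration and may differ from the actual tool.

```java
// Hedged reconstruction of the Jartege test-generator program described in
// the report; names follow the text, signatures and values are assumed.
import jartege.*;

class TestGen {
    public static void main(String[] args) {
        // Creates a class tester.
        ClassTester t = new ClassTester();

        // Adds the specified classes to the set of classes under test.
        t.addClass("Account");
        t.addClass("History");

        // Optional parameterization of the random aspect:
        // give the cancel() method a higher weight for stress testing...
        t.changeMethodWeight("Account", "cancel", 5.0);
        // ...and restrict generation to (at most) one Account instance,
        // so that many calls are applied to the same object.
        t.changeCreationProbability("Account", new ThresholdProbability(1));

        // Generates a test class TestBank made of 100 test cases; for each
        // test case the tool attempts 50 constructor/method calls.
        t.generate("TestBank", 100, 50);
    }
}
```

Run as the row describes, such a program would emit a TestBank class whose test cases are random sequences of Account and History calls: calls that would violate a JML entry precondition are filtered out during generation, and any other JML assertion violation (invariant, postcondition, internal precondition) is reported as an error when TestBank is executed.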
sequences codes mar mladen work motivated problem error correction channels input constraints successive required separated least zeros bounds size optimal codes correcting fixed number derived upper bound obtained packing argument lower bound follows construction based family integer lattices several properties sequences may independent interest established well particular capacity noiseless channel inputs characterized results relevant magnetic optical storage systems rfid channels communication models errors dominant sequences used modulation index channel peak shift timing errors code integer compositions manhattan metric asymmetric distance magnetic recording inductive coupling ntroduction hift timing errors dominant type noise several communication information storage scenarios examples include magnetic optical recording devices inductively coupled channels rfid channel parallel asynchronous communications various types timing channels etc designing codes able correct types errors studying fundamental limits therefore important applications addition interesting theoretical challenge problem complicated fact many mentioned applications particularly magnetic optical emerging dna storage systems codewords required satisfy modulation constraints introduced alleviate interference impairing effects perhaps example constraints runlength constraints minimum maximum number zero symbols two consecutive symbols specified motivated communication settings study present paper errorcorrection problem channels shift timing errors runlength input constraints precise channel model mind contributions described following two subsections date march author department electrical computer engineering national university singapore singapore emails work supported singapore ministry education moe tier grant network communication synchronization errors fundamental limits codes grant number channel model fix assume channel inputs binary strings length composed blocks set string zeros words set inputs usual notation set string denote hamming weight convenient introduce special notation set channel inputs given weight words contains input strings composed exactly blocks set definition ensures consecutive input string separated least zeros every input string starts string zeros length ends property defining property sequences however boundary conditions universally adopted literature shall nevertheless find convenient work definition evident later different boundary conditions would affect analysis significant way given input string channel outputs another binary string length weight think shifted channel number positions left right original position thus producing output say occurred channel resp position resp example consider input string corresponding output string think obtained shifting first one position left second two positions left third one position right say total number occurred channel emphasize output string always assumed length weight corresponding input string note however may general violate main results main object study present paper errorcorrecting codes channel model particular shall derive explicit bounds cardinality optimal codes focus asymptotic behavior regime growing blocklength despite sizable body literature channel related bounds best knowledge obtained even single consider two scenarios first one analyzed section iii corresponds situation shifts right shifts left treated independently separate requirements correctability imposed precisely codes case required capability correcting 
leftshifts fixed arbitrary second scenario analyzed section corresponds situation treated symmetric way codes required capability correcting shifts regardless direction individual shift section also demonstrate several properties code space particular characterize capacity noiseless channel constantweight inputs results needed prove bounds cardinality optimal codes sections iii also possible interest applications sequences channel models notation denotes integers integers log logarithm write understood ranges possible values clear context given two real sequences adopt following asymptotic notation means means lim inf means log log exponents asymptotic behavior means lim means denote vector indicating positions string meaning position example mapping clearly injective correspondence mind define convention hence different representation set channel inputs length weight namely corresponding representation set channel inputs length namely space case seems convenient describing constructions codes channel combinatorial description asymptotics denote see definition code space fact number number parts restricted set number expressed recursive form initial conditions follows log unique positive solution constant obtained similarly number compositions number exactly parts part belonging set quantity expressed recursive form ode pace section demonstrate properties set used derivations follow also potential interest applications also describe another representation space course equivalent one given may preferable depending problem analyzed equivalent representation another representation set channel inputs useful analyzing errors based specifying positions input string see initial conditions following lemma characterize asymptotic behavior simplicity fixed write instead ignoring fact former necessarily integer define function lemma log composition integer tuple positive integers called parts summing study compositions parts restricted subset see positive real polynomial unique let also fixed exponent strictly concave function attains maximal value value log unique positive solution proof asymptotics easily determined using machinery analytic combinatorics generating function bivariate sequence obtained wherefrom one verifies sequence riordan satisfies thm denoting conditions polynomial appearing inator generating function conclude thm unique positive solution proves second part claim obtained carefully analyzing involved functions root function weight implicitly defined relative differentiating equation also find log log implying concave weight maximizes exponent one quantity defined maximal information rate capacity achieved noiseless channel inputs relative weight refinement result states capacity noiseless channel inputs weight constraints log log namely latter statement recovered lemma noting log following lemma strengthening result asserts input strings weight approximately equal account bivariate sequences generating functions form called generalized riordan arrays see sec space informally say typical input strings relative weight lemma exists sublinear function proof follows lemma particular fact uniquely maximized contribution termsp sum asymptotically negligible fixed shall also need sequel estimate number blocks fixed typical input strings purpose formally stating result denote number input strings consisting blocks exactly equivalently number compositions number parts parts taking values exactly value every lemma fix denote unique positive solution every moreover exists sublinear function 
every proof let take denote simplicity proof general analogous following relation valid value distributed among parts ways remaining parts form composition number therefore since sum polynomially many terms know grows exponentially exponent one summands using stirling approximation exponent expressed form log log binary entropy function function implicitly defined calculating derivatives exponent function side respect one finds uniquely maximized implies proves also implies every part sum asymptotically negligible compared remaining part exponentially smaller fact proves words input strings length typically contain blocks runs zeros notice since expected total typical number blocks input strings length derivations asymptotic bounds following two sections shall restrict analysis input strings weight runs zeros length lemmas enable show restriction incurs loss generality iii odes orrecting symmetric hifts turn analysis channel input constraints scenario consider section channel shifts right possible interested codes enabling receiver reconstruct transmitted string whenever total shift exceed specified threshold particular derive bounds cardinality optimal codes setting shall make attempt optimize bounds every rather focus put asymptotic behavior geometric characterization suppose transmitted vector corresponding received vector see section shifted right positions channel therefore channel seen run zeros block contiguous zeros maximal length delimited sides either end string additive noise channel input alphabet shifted positions total say code corrects two different codewords produce output impaired arbitrary patterns symbols every noise vectors code said optimal code correcting remark shifts straightforward show code correct correct words correct every noise vectors suchp therefore results section apply also channels allowed difference respect model shall analyze section two types errors treated independently separate requirements correctability imposed consider following metric max distance importance theory codes asymmetric channels hence subscript vectors different dimensions corresponding strings different weights define minimum distance code respect metric denoted following proposition gives metric characterization codes proposition code correct proof suppose two distinct codewords define max max maximum taken last two inequalities together equivalent assumption means correct direction similar correct rightshifts exist two distinct codewords two noise vectors implies max note balls metric space uniform size ball radius depends center reason studying properties codes sometimes convenient represents average number lattice points per one point ambient space quantity interest present context maximum density lattice minimum distance required satisfy namely max lattice lemma every fixed fig space representing set binary strings length weight satisfying code correcting codewords depicted black dots gray regions illustrate balls metric radius around codewords consider unrestricted metric space effect example expression cardinality ball radius arbitrary ball radius upper bounded construction bounds resp cardinality proof shown thm every sublattice corresponds sidon order cardinality abelian group vice versa consequently largest possible density sublattice expressed cardinality smallest abelian group containing sidon set order cardinality statement follows constructions sidon sets singer arbitrary imply theorem let every fixed unique positive solution example unconstrained case bounds reduce 
optimal code resp correcting symbol superscript indicates allowed model since channel affectpthe weight transmitted string lower bound given theorem obtained constructing family codes done using good codes restricting reason best known lower bound codes given first lemma definitions needed state precisely say sublattice subgroup density defined quotient group see sec iii study geometry proof first derive lower bound consider class codes obtained following way take lattice minimum distance let arbitrary code minimum distance hence correct rightshifts give lower bound cardinality notice different translates disjoint whose union least one establishes existence code minimum distance cardinality lemma conclude sidon set order abelian group subset property distinct order summands see kwt finally get desired lower bound write optimizing weight given lemma sublinear function lemma follows code construction follows lemma follows lemma fact turn derivation upper bound approach essentially packing argument however due structure code space fact balls uniform sizes care needed making argument work let optimal code correcting equivalently shifts see remark consider codeword weight let resp denote number followed resp preceded exactly zeros resp number followed resp preceded run zeros whose length let also denote number preceded exactly zeros followed exactly zeros number preceded exactly zeros followed run zeros whose length similarly next show number strings obtained impaired least see first count number strings obtained shifting one position right words pick shift one position right notice choices result string belongs code space namely preceded exactly zeros followed exactly zeros last symbol string would result either string violates length excluding satisfying leaves least choose gives term righthand term obtained analogous way counting number strings obtained picking shifting one position left difference choosing first step exclude additional second step namely chosen first step excluded second step would potentially result string started resp resp could result run zeros length resp thus left least choose yields term proves claim expression lower bound number strings produce impaired let give asymptotic form fixed shall need conclude proof know lemma typical strings implies given preceded block probability words blocks fraction preceded block follows typical strings therefore expression following asymptotic form used fact fixed finally due assumption corrects rightshifts sets outputs obtained way two different codewords disjoint implies proves upper bound used fact asymptotic analysis safely ignore inputs see lemma derivation lower bound well theorem find asymptotic scaling redundancy optimal codes correcting corollary every fixed log log log odes orrecting ymmetric hifts section discuss slightly different one usually considered literature allowed model treated symmetric way object study codes enabling receiver reconstruct transmitted string whenever total shift exceed specified threshold regardless direction individual shifts geometric characterization suppose transmitted codeword corresponding received vector shifted positions therefore think channel additive noise channel input alphabet shifted positions total say code correct shifts two different codewords produce output arbitrary patterns shifts symbols every noise vectors code said optimal code correcting let denote manhattan distance vectors dimensions strings corresponding different weights define minimum distance code respect metric 
denoted proposition code correct shifts proof analogous proof proposition fig space representing set binary strings length weight satisfying constraints code correcting shift codewords depicted black dots gray regions illustrate balls metric radius around codewords balls metric space uniform size ball radius center unrestricted metric space effect occur following expression cardinality ball radius arbitrary ball radius upper bounded construction bounds let resp denote cardinality optimal code resp correcting shifts asymmetric case channel affect weight transmitted string analogy define maximum density lattice minimum distance max lattice lemma every fixed proof consider lattice defined let densest sublattice metric space fact isometric thm density expressed lattice unit vector words comprises infinitely many translates separated multiple follows construction therefore recalling lemma conclude lower bound given improved fact optimal density known exactly every namely follows existence perfect codes radius every indeed one construct codes periodically extending codes torus correcting errors lee periodic extension berlekamp codes lee metric gives best knowledge known construction codes lee metric gives lower bound density better one obtained except example berlekamp construction see also gives construction roth siegel gives smallest prime greater equal theorem let every fixed defined unique positive solution example unconstrained case bounds reduce proof proof analogous proof theorem asymmetric case difference proving lower bound need use metric instead see proposition following steps get result follows applying lemma upper bound let optimal code correcting shifts consider codeword notice every pattern shifts impair channel consists rightshifts reasoning identical used proof theorem conclude number strings produced impaired shifts least see equation paragraph following recalling find asymptotics expression form since corrects shifts assumption must proves upper bound corollary every fixed log log log bounds appearing literature aware however bounds explicit difficult compare oncluding emarks conclude paper remarks error models related studied paper applications reasonable assume shifts limited sense input string shifted positions precisely transmitted vector received vector assumption asymmetric case allowed model symmetric case models lower bound cardinality optimal shiftcorrecting codes possibly improved using known constructions codes errors see example asymmetric case lower bound improved factor using construction codes correcting asymmetric errors sec note used implicitly assumption derivation upper bounds therefore upper bounds likely improved models using approach used context one may also interested codes correcting possible patterns shifts shift bounded codes usually called codes studied several related settings channels timing channels parallel asynchronous communications etc many cases optimal codes found capacity corresponding channel determined acknowledgment author would like thank vincent tan nus detailed reading helpful comments preliminary version work mehul motani nus several discussions model related one studied paper anshoo tandon nus helpful discussions constrained codes related notions eferences weber bounds constructions block codes ieee trans inform theory vol may anantharam bits queues ieee trans inf theory vol berlekamp algebraic coding theory revised edition world scientific singapore bose chowla theorems additive theory numbers comment math vol cassuto schwartz 
bohossian bruck codes asymmetric errors application multilevel flash memories ieee trans inform theory vol apr chiang wolf channels codes lee metric inform control vol engelberg keren reliable communications across parallel asynchronous channels arbitrary skews ieee trans inform theory vol ferreira lin error erasure control block codes ieee trans inform theory vol golomb welch perfect codes lee metric packing polyominoes siam appl vol mar heubach mansour compositions parts set congr vol hilden howe weldon shift error correcting modulation codes ieee trans vol immink sequences proc ieee vol immink siegel wolf codes digital recorders ieee trans inform theory vol immink cai design constrained codes storage systems ieee commun vol error correcting codes asymmetric channel technical report dept informatics university bergen updated codes correcting single zero single ieee trans inform theory vol popovski capacity class timing channels ieee trans inform theory vol note parallel asynchronous channels arbitrary skews ieee trans inform theory vol tan capacity shift channels fifo queues ieee trans inform theory vol tan improved bounds sidon sets via lattice packings simplices siam discrete vol tan codes space coding permutation channels impairments ieee trans inform theory appear vol published online https krachkovsky bounds capacity inputconstrained channel ieee trans inform theory vol jul kuznetsov han vinck coding scheme single correction channels ieee trans inform theory vol jul levenshtein han vinck perfect capable correcting single ieee trans inform theory vol mar nakano eckford haraguchi molecular communication cambridge university press bryant complete annotated bibliography work related sidon sequences electron pemantle wilson analytic combinatorics several variables cambridge university press rosnes barbero ytrehus coding inductively coupled channels ieee trans inform theory vol roth siegel bch codes application constrained channels ieee trans inform theory vol jul shamai shitz zehavi bounds capacity bitshift magnetic recording channel ieee trans inform theory vol may siegel recording codes digital magnetic storage ieee trans vol siegel wolf modulation coding information storage ieee commun vol singer theorem finite projective geometry applications number theory trans amer math vol stanley enumerative combinatorics vol cambridge university press tang bahl block codes class constrained noiseless channels inform control vol ytrehus upper bounds block codes ieee trans inform theory vol may
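A brief illustration of the counting mechanism that the asymptotic lemmas above rely on: the number of compositions of n with every part confined to a fixed set obeys a simple recursion, and it grows exponentially at a rate governed by the unique positive root of the associated polynomial equation. The sketch below only demonstrates that mechanism; the concrete constraint set, the weight restrictions and the paper's symbols cannot be recovered from this extraction, so the set S = {1, 2, 3} and all function names here are hypothetical stand-ins, not the authors' notation.

from math import log2

S = {1, 2, 3}  # hypothetical part set, purely for illustration

def compositions(n, parts):
    # C(m) = sum_{s in parts} C(m - s), C(0) = 1: ordered compositions of m
    c = [0] * (n + 1)
    c[0] = 1
    for m in range(1, n + 1):
        c[m] = sum(c[m - s] for s in parts if m >= s)
    return c

def positive_root(parts, tol=1e-12):
    # unique x0 in (0, 1] with sum_{s in parts} x0**s = 1, found by bisection
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if sum(mid ** s for s in parts) < 1.0:
            lo = mid  # the sum increases in x, so the root lies above mid
        else:
            hi = mid
    return (lo + hi) / 2

n = 60
c = compositions(n, S)
x0 = positive_root(S)
print("empirical exponent :", log2(c[n]) / n)   # approaches the predicted value as n grows
print("predicted exponent :", log2(1.0 / x0))   # log2 of the growth rate 1/x0

On this toy set the two printed numbers agree to within a small correction term, which is the content of the asymptotic lemma above: the exponent of the code space equals the logarithm of the reciprocal root, and the capacity statements are then obtained by optimizing that exponent over the admissible relative weight.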
| 7 |
ant colony system algorithm particle swarm optimization ranasinghe university colombo school computing reid avenue colombo sri lanka email dnr departamento operativa universitat valencia calle doctor moliner burjassot email lunacab colony system acs distributed agentbased algorithm widely studied symmetric travelling salesman problem tsp optimum parameters algorithm found trial error use particle swarm optimization algorithm pso optimize acs parameters working designed subset tsp instances first goal perform hybrid algorithm single instance find optimum parameters optimum solutions instance second goal analyze sets optimum parameters relation instance characteristics computational results shown good quality solutions single instances though high computational times may sets parameters work optimally majority instances introduction euristics algorithms higher forms metaheuristics widely used find reasonable good solutions problems performance heuristics based optimum set parameters problem parameters also hard problem unavoidable task needs correct design experiments statistical experimental design analysis heuristics see heuristics calibra designed procedures finding optimal parameters understood find set parameters performing well wide set instances designed algorithm using particle swarm optimization pso framework optimize parameters acs algorithm working single symmetric travelling salesman problem tsp instance instance algorithm computes optimal set acs parameters second goal analyze jointly sets parameters performance instances related instance characteristics related instances purpose finding correlations ant colony optimization aco generic framework optimization heuristic algorithms aco algorithms ants agents tsp case construct tours moving city city graph problem ants sharing information using pheromone trail first aco algorithm called proposed full review aco algorithms applications found acs version ant system modifies updating pheromone trail see chosen acs algorithm work theoretical background found see previous research parameters pso swarm intelligence method global optimization given domain defined function assigning point fitness value pso population swarm individuals named particles moving domain adjusting trajectory previous best position previous best position neighbourhood use global pso version considers neighbourhood swarm review pso example pso applications found chosen pso easy implementation integer real parameters genetic algorithms performs blind search possible sets parameters algorithm domain pso possible sets parameters acs position particle compute fitness running acs algorithm parameters given position tsp instance section describes acs pso algorithms section iii describes algorithm parameters used pso initial population particles set feasible parameters acs algorithm computational results given section finally section conclusions set pso acs algorithms let introduce notation tsp graph given instance denote set vertices set edges shortest paths vertices cost traversing edge ant colony system acs acs works follow population ants let denote arc graph initial heuristic value initial pheromone value originally set inverse cost traversing edge initially set edge lnn equal inverse tour length computed algorithm let real values integer values vertex neighbour set defined among nearest vertices given ant let set vertices denote set vertices among neighbour set given vertex given ant iteration ant constructs tour solution constructions phase works follows ant 
initially set randomly vertex step entire ants make movement vertex given ant actual position vertex computed reference value visiting vertex pkrs otherwise formula includes small modification respect original acs algorithm including exponent pheromone level allow deeper research effects possible combinations parameters sample random value computed visit city maximum exploitation knowledge otherwise acs follows rule biased exploration vertex neighbour vertex extend vertices visited ant included neighbour visits vertex maximum arc inserted route new vertex visited pheromone trail updated phase called local update inserted reduces pheromone level visited arcs exploration set possible tours increased tours computed global update phase done edge pertaining found lgb length found original ant algorithm later versions pheromone global update performed edges acs updates pheromone level set edges pertaining best tour consider trial performance iterations lowest length tour found iterations finished best solution found trial feasible set parameters running acs combination feasible number ants concrete neighbour definition particle swarm optimization pso described eberhert kennedy pso adaptative algorithm based social environment set particles called population visiting possible positions given domain position fitness value computed iteration particles move returning stochastically toward previous best fitness position population best fitness position particles population sharing information best areas search let denote set parameters let define population particles iteration xfp vfp denotes respectively actual position actual velocity parameter particle movement particles defined following equations integer values named cognitive social respectively sample random values real values named respectively inertia weight constriction factor bgp value parameter pertaining best set parameters found population social knowledge blfp value parameter pertaining best parameters set found particle first factor refers previous velocity second third factors related respectively distance best set parameters found particle distance iii algorithm pso parameters initial particles fitness value algorithm run time single set parameters acs define point pso domain number ants previously explained section denotes percentage vertices included vertex given ranges parameter shown table parameter pertains related minimum maximum table range acs parameters minimum maximum parameters used acs explained beginning section iii dpso domain pso define fitness value given position point length best tour computed acs using related parameters given instance comparing two different positions length value computing time considered consider better parameters minimize length tour secondly time computing computing fitness given position first integer parameters rounded shown fig secondly algorithm runs five trials acs algorithm using rounded parameters returns best value obtained trials point pso domain reflects set parameters modified values run acs fig modification pso points bold modified values set initially gradually decreasing pso iteration maximum number iterations set due computing time constraints day computing time necessary algorithm algorithm follows select initialize particles set particles perform trial acs particle parameters newvalue end end end compute update best parameters found particle update best parameters found population compute velocity movement particles end return set parameters related best tour length found 
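Because the extraction has flattened the formulas, it may help to restate the two ingredients of the hybrid in ordinary notation before the experiments are discussed. All symbols below (tau, eta, alpha, beta, rho, q0, and the parameter names in the code) are chosen here as plausible readings of the garbled text, not copied from the paper. First, the ACS decision and pheromone-update rules in the standard Dorigo–Gambardella form, with the additional exponent on the pheromone level that the text describes; a single evaporation constant rho is used for both updates for brevity, although ACS traditionally uses two:

\[
  s \;=\;
  \begin{cases}
    \arg\max_{u \in J_k(r)} \,[\tau(r,u)]^{\alpha}\,[\eta(r,u)]^{\beta}, & q \le q_0 \quad\text{(exploitation)},\\[2pt]
    \text{sampled with } P(r,s) \propto [\tau(r,s)]^{\alpha}\,[\eta(r,s)]^{\beta}, & \text{otherwise (biased exploration)},
  \end{cases}
\]
\[
  \text{local update: } \tau(r,s) \leftarrow (1-\rho)\,\tau(r,s) + \rho\,\tau_0,
  \qquad
  \text{global update (best tour): } \tau(r,s) \leftarrow (1-\rho)\,\tau(r,s) + \rho\, L_{gb}^{-1}.
\]

Second, a minimal runnable sketch of the outer loop just listed in pseudocode: each particle is a candidate ACS parameter vector, its fitness is the best tour length obtained over a handful of ACS trials with those parameters, and a constricted PSO update moves the swarm. The parameter ranges, the PSO constants and the run_acs stub are assumptions made for illustration, not the authors' implementation.

import random

PARAM_BOUNDS = {            # assumed ranges, only for illustration
    "beta":  (1.0, 10.0),   # weight of heuristic information
    "rho":   (0.01, 0.99),  # pheromone evaporation rate
    "q0":    (0.0, 1.0),    # exploitation probability
    "ants":  (2, 30),       # integer parameter: number of ants
}
INTEGER_PARAMS = {"ants"}

def run_acs(params, instance):
    # Stub standing in for "one ACS trial on the TSP instance, returning the
    # best tour length found"; replace with a real ACS implementation.
    return sum(abs(v) for v in params.values()) + random.random()

def fitness(position, instance, trials=5):
    # integer parameters are rounded before the trials, as in the paper
    params = {k: (round(v) if k in INTEGER_PARAMS else v)
              for k, v in position.items()}
    return min(run_acs(params, instance) for _ in range(trials))

def pso(instance, n_particles=10, iterations=20, chi=0.729, c1=2.05, c2=2.05):
    keys = list(PARAM_BOUNDS)
    swarm = [{k: random.uniform(*PARAM_BOUNDS[k]) for k in keys}
             for _ in range(n_particles)]
    vel = [{k: 0.0 for k in keys} for _ in range(n_particles)]
    pbest = [dict(p) for p in swarm]
    pbest_f = [fitness(p, instance) for p in swarm]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = dict(pbest[g]), pbest_f[g]
    for _ in range(iterations):
        for i, p in enumerate(swarm):
            for k in keys:
                r1, r2 = random.random(), random.random()
                # constricted update: chi*(v + c1 r1 (pbest - x) + c2 r2 (gbest - x))
                vel[i][k] = chi * (vel[i][k]
                                   + c1 * r1 * (pbest[i][k] - p[k])
                                   + c2 * r2 * (gbest[k] - p[k]))
                lo, hi = PARAM_BOUNDS[k]
                p[k] = min(max(p[k] + vel[i][k], lo), hi)
            f = fitness(p, instance)
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = dict(p), f
                if f < gbest_f:
                    gbest, gbest_f = dict(p), f
    return gbest, gbest_f

if __name__ == "__main__":
    best_params, best_length = pso(instance=None)  # instance unused by the stub
    print(best_params, best_length)

The paper additionally breaks ties between two positions of equal tour length by computing time; that secondary criterion is omitted here for brevity, but it would only change the comparison inside the pbest/gbest updates.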
tour length algorithm based pso framework particles initialized iteratively moving though domain set parameters goal algorithm given instance compute tour lowest length compute set acs parameters among dpso gets best acs performance final parameters related selected best set parameters found population particle population initial velocity set randomly half population initial position set using predefined parameters assuring every parameter particle containing value covering full range positions half initial population set randomly parameters pso see section set following inertia weight fig average time swarm first iterations instances average time iteration given fixed number particles number iteration algorithm fig first iterations algorithm related minimum tour obtained iteration related average fitness values related instance related examples typical behaviors first iterations algorithm table sets parameters acs tested value related values bold mean optimums fitness fitness computational results algorithm coded pentium ram algorithm run six widely used computational results given four parts behavior optimum values obtained best set parameters comparison among sets performance computationally iteration shows clear convergence optimum defined algorithm number ants nearly fixed computational time also fixed see fig less iterations algorithm computes optimum integer parameters iterations small differences among optimum found position real parameters fig see evolution algorithm first iterations average fitness swarm given iteration decreasing global tendency iteration see average fitness kept fixed range size range variable shown minimum value obtained swarm given iteration computational results show beginning increasing decreasing phases particles exploring local optimums moving also global one near iteration minimum table minimum tour length found instance table iii average performance instance expected best value best value obtained among new sets parameters values bold best values obtained maintained frequently visited fast convergence advantage well drawback lead fast convergence set reasons fast convergence pso framework used mainly method evaluating set parameters stochastic algorithm probability bad set parameters could perform well particles move area number iterations area increases leading probably good solutions cause algorithm remain nonoptimal area table shows optimum set parameters found running one instances selected set parameters obtained running instance similarly pacs set parameters proposed respectively considering several values parameter defines neighbourhood fitness minimal length tour obtained algorithm related instance clear cut rule define optimal parameters even recommended ranges guessed also normally bigger always bigger percentage vertices neighbour expected best value best value obtained values bold mean optimums pacs set parameters means normally less equal number ants normally set rule set parameters following guidelines performing better rest instances optimum found relevant enormous quantity time expended sometimes half day number times acs algorithm run given instance run one obtained set parameters previous selected instances check efficiency parameters acs performed trials iterations table iii show performance based average tour length minimum tour length found respectively sets parameters instances expect parameters related instance compute best results instance comparing hypothesis tables iii computational results show fact false instead working 
reduced set instances set parameters considered best overall one performing optimally table time comparison best sets parameters avg avg time best pacs best avg average time considering refereed instances used finding best solution proposed algorithm considering time trial solution computed avg time average computational time algorithm running proposed instances bold best times pacs lowest times possible values used named respectively best pacs best references instances efficient average value comparing sets parameters pacs one observe pacs performing better bigger instances smaller ones performs better average computational times pacs set parameters compared table pacs perform best measures worst set parameters performing better acs instances greater computational time conclusions computational results seem show uniquely optimal set acs parameters yielding best quality solutions tsp instances nevertheless able find set acs parameters work optimally majority instances unlike others known far algorithm works well across different instances adapts instance characteristics high computational overhead future work try modify algorithm framework reduce cost also fast convergence lead bad set parameters may due two reasons first specific pso framework used modifying expect obtain better results secondly way sets parameters evaluated may reviewed bad set parameters could lead convergence acknowledgment thanks university colombo school computing support given sri lanka contribution partially supported avcit generalitat valenciana ref laguna algorithms using fractional experimental designs local search appear operations research barr golden kelly resende stewart designing reporting computational experiments heuristic methods journal heuristics mauro birrattari thomas luis paquete klaus varrentrapp racing algorithm configuring metaheuristics langdon editors gecco proceedings genetic evolutionary computation conference morgan kaufmann publishers san francisco usa coy golden runger wasil using experimental design find effective parameter settings heuristics journal heuristics vol dorigo optimization learning natural algorithms italian phd thesis dip elettronica politecnico milano dorigo gambardella ant colony system cooperative learning aproach travelling salesman problem ieee transactions evolutionary computation dorigo maniezzo colorni positive feedback search strategy technical report dip elettronica politecnico milano dorigo maniezzo colorni ant system optimization colony cooperating agents ieee transactions systems man cybernetics part dorigo ant colony optimization mit press eberhart simpson robbins computational intelligence tools academic press professional boston gambardella dorigo reinforcement learning approach symmetric asymmetric travelling salesman problems proceedings ieee international conference evolutionary computation icec pages morgan kaufmann gies particle swarm optimization reconfigurable phase differentiated array design microwave optical technology letters vol kennedy eberhart particle swarm optimization proc ieee international conference neural networks piscataway usa kennedy eberhart swarm intelligence morgan kaufmann publishers laskari parsopoulos vrahatis particle swarm optimization integer programming proceedings ieee congress evolutionary computation pilat white using genetic algorithms optimize proceedings third international workshop ant algorithms pages reinelt travelling salesman problem library tsp applications orsa journal computing
| 9 |
ijacsa international journal advanced computer science applications vol systematic integrative analysis proteomic data using bioinformatics tools rashmi rameshwari prasad asst professor dept biotechnology manav rachna international university faridabad india dean university faridabad india analysis interpretation relationships biological molecules done help networks networks used ubiquitously throughout biology represent relationships genes gene products network models facilitated shift study evolutionary conservation individual gene gene products towards study conservation level pathways complexes recent work revealed much chemical reactions inside hundreds organisms well universal characteristics metabolic networks shed light evolution networks however characteristics individual metabolites neglected network current paper provides overview bioinformatics software used visualization biological networks using proteomic data main functions limitations software metabolic network protein interaction network visualization tools introduction molecular interaction network visualization one features developed simulation process biological interactions drawing molecule example protein may seem easy generating protein types conformation attain interactions simulate process quite difficult context one greatest manually produced molecular structures time done kurt kohn map cell cycle control protein interactions network visualization deals territory similar interaction prediction differs several key ways proteomics data often associated pathways protein interactions easily visualized networks even types data normally viewed networks microarray results often painted onto signaling metabolic pathways protein interaction networks visualization analysis visualization analysis tools commonly used interact proteomic data visualization tools developed simply illustrating big picture represented interaction data expression data qualitative assessment necessarily quantitative analysis prediction expression interaction experiments tend large scale difficult analyze indeed grasp meaning results analysis visual representation large scattered quantities data allows trends difficult pinpoint numerically stand provide insight specific avenues molecular functions interactions may worth exploring first bunch either confirmation rejection later significance insignificance research problem hand recent exceptions visualization tools designed intent used analysis much show workings molecular system clearly visualization tools also actually predict molecular interactions characteristics contrary visualization tools create graphical representation already known literature molecular interaction repositories gene ontology displaying interaction networks treating proteins members general family groupings going interactions different tissues analogues tissues organisms apparent display molecular interaction inferred may may actually observed documented something could misconstrued predicted interaction significant difference molecular interaction network visualization molecular interaction prediction nature information provides interaction prediction characterized concern proteins interact interact conditions interact parts necessary interaction characteristics governed physical chemical properties proteins molecules involved may actual molecules described extensive proteomics experiments hypothetical silico generated species investigated pharmaceutical applications interaction network visualization tools given knowledge 
physical chemical properties proteins interact result information inadvertently impart concerns whether certain proteins putatively interact certain proteins interact interact interact ijacsa international journal advanced computer science applications vol biological systems comprehensive data protein interactions also suitable systems level evolutionary analysis many commercial software ingenuity pathway analysis figure metacore pathway studio designed visualize data context biological networks biological networks modular organization scale free networks degree distribution follows means small number nodes called hubs highly connected hubs usually play essential roles biological systems hand groups proteins similar functions tend form clusters modules network architecture many commercial software network visualization follow law metacore integrated knowledge database software suite pathway analysis experimental data gene lists based manually curetted databases human proteinprotein protein compound interactions package includes easy use intuitive tools search data visualization mapping exchange biological networks interactomes figure figure comparative visualization drawn different tools like pajek cytoscape fig genes represented nodes interaction edges background related work software tool known pathway studio based mammalian database named resnet mammalian generated text mining pubmed database full text journals advantages using tool increases depth analysis data generation experiments like microarray gene expression proteomics metabolomics enables data sharing common analysis environment tool simplifies keeping date literature brings knowledge analysis environment also enables visualization gene expression values status context protein interaction networks pathways however free software like pathway voyager figure genmapp cytoscape also available pathway voyager applies flexible approach uses kegg database pathway mapping genmapp figure also designed visualize gene expression data maps representing biological pathways gene groupings genmapp option modify design new pathways apply complex criteria viewing gene expression data pathways recent advances technologies software tools developed visualize analyze data paper deals various visualization techniques proteomic data major emphasis network graph generated interaction many tools used purpose based different algorithm example pajek cytoscape use force directed layout algorithm produces graph computing force pairs nodes iteration optimization process networks visualized indicated fig fig protein interactions data also helps study related evolutionary analysis new era necessary understand components involve biological systems various biological data knowledge components molecular level reveals structure biological systems lead ontological comprehension figure family gtpases kinase cascades image generated ingenuity pathway analysis ijacsa international journal advanced computer science applications vol clustered graphs common occurrences biological field examples graphs include clustering proteins genes based biological functionality structural geometry expression pattern chemical property figure network generated metacore software iii limitations future developments completion human genome project genes discovered potential synthesize proteins less genes assigned putative biological function basis sequence data advancement technology many software designed explore biological networks like protein interactions network interactions based 
databases like dip mint mammalian database tools represented paper applicable wide range problems distinct features make suitable wide range applications figure actin cytoskeleton regulation network generated pathway studio omic analysis two essential concepts must applied understand biological functions systems level first integrate different levels information second view cells terms underlying network structure information biological entity scattered different databases hence information retrieval diverse databases done bit time consuming current databases good analysis particular protein small interaction networks useful integration complex information cellular regulation pathways networks cellular roles clinical data lack coordination ability exchange information multiple data sources need software integrate information different database well diverse sources analyze data present many software like ingenuity pathway analysis metacore pathway studio work owner curetted database time high price make unaffordable academic institute use time cytoscape many properties visualization high throughput data alternative users however network constructed cytoscape sometimes liable show errors need improve quality available curetted databases also develop integrative knowledge bases especially designed construct biological networks figure interactive kegg pathway display screenshot illustrates kegg pathway mapping glycolysis gluconeogenesis pathway using predicted orfeome gamola annotated acidophilus ncfm genome query template ijacsa international journal advanced computer science applications vol figure myometrial relaxation contraction pathway image generated genmapp metabolic network reliable source information reconstruction grns largely promoted advances technologies enable measure global response biological system specific interventions instance gene expression monitoring using dna microarrays popular technique measuring abundance mrnas however integration different types data genomics proteomics metabolomic studies undertaken although metabolic network important features drug discovery use case human limited proteomics may yield crucial information regulation biological functions mechanism diseases sense highly promising area drug discovery hence additional efforts required metabolic network reconstruction analysis conclusion tools concluded metabolic pathways stored directed acyclic graphs considered basic concept visualization tool metabolic pathway respect visualization single network views provide little brief glimpses large datasets visualization tools need support many different types views network view different level detail dynamic navigation one view another key showing connection different views navigating one time series point another instance could involve view showing differences two time points time points consecutive number differences tend quite small similar approach could applied localization information well adequately address issues active cooperation required variety research fields including graph drawing information visualization network analysis course biology though mentioned tool differs significantly approach pathway reconstructions hence future tool needed describe pathways make cell interact system overall physiology organism required figure pathway voyager mapping procedure references lenzerini data integration theoretical perspective pods gerber lee rinaldi yoo robert gordon jaakkola young gifford computational discovery gene module regulatory networks nat 
biotechnol fields song novel genetic system detect interactions nature spirin mirny pnas jeong mason barabasi oltvai lethality centrality protein networks nature batagelj mrvar pajek program large network analysis connections gene ontology consortium gene ontology tool unification biology nat genet shannon markiel ozier baliga wang ramage amin schwikowski ideker cytoscape software environment integrated models biomolecular interaction networks genome res hermjakob lewington mudali kerrien orchard vingron roechert roepstorff valencia margalit armstrong bairoch cesareni sherman apweiler intact open source molecular interaction database nucleic acids res database alfarano andrade anthony bahroos bajec bantoft betel bobechko boutilier burgess biomolecular interaction network database related tools update nucleic acids res rashmi rameshwari prasad survey various interaction tools proc national conference ijacsa international journal advanced computer science applications vol advances knowledge management ncakm university faridabad barabasi oltvai network biology understanding functional organization nat rev genet han bertin hao goldberg berriz zhang evidence dynamically organized modularity yeast proteinprotein interaction network nature dahlquist salomonis vranizan lawlor conklin genmapp new tool viewing analyzing microarray data biological pathways nat genet shannon markiel ozier baliga wang ramage cytoscape software environment integrated models biomolecular interaction networks genome res daraselia yuryev egorov novichkova nikitin mazo extracting human protein interactions medline using fullsentence parser bioinformatics altermann klaenhammer pathwayvoyager pathway mapping using kyoto encyclopedia genes genomes kegg database bmc genomics overington lazikani hopkins many drug targets nat rev drug discov lander linton birren nusbaum zody baldwin initial sequencing analysis human genome nature tucker gera uetz towards understanding complex protein networks trends cell biology funahashi celldesigner process diagram editor biochemical networks biosilico kell systems biology metabolic modelling metabolomics drug discovery development drug discovery today kitano using process diagrams graphical representation biological networks nat nikitin pathway studio analysis navigation molecular networks bioinformatics altermann klaenhammer gamola new local solution sequence annotation analyzing draft finished prokaryotic genomes omics goesmann haubrock meyer kalinowski giegerich pathfinder reconstruction dynamic visualization metabolic pathways bioinformatics ogata goto sato fujibuchi bono andkanehisa kegg kyoto encyclopedia genes genomes nucleic acids alfarano biomolecular interaction network database related tools update nucleic acids kohn molecular interaction maps bioregulatory networks general rubric systems biology mol biol cell zanzoni mint molecular interaction database febs xenarios dip database interacting proteins nucleic acids metacore pathway analysis software available authors profile rashmi rameshwari received two degree one bhagalpur bihar jamia hamdard new delhi area biotechnology bioinformatics respectively currently associated manav rachna international university assistant professor dept biotechnology research interests include systems biology proteomics microarray technology chemoinformatics etc prasad received degree computer science nagarjuna university india doctoral degree jamia milia islamia university new delhi india years academic professional experience deep interest planning 
executing major projects pursuing research interest bioinformatics authored publications reputed journals conferences also authored books also held respectable positions deputy director bureau indian standards new delhi areas interest include bioinformatics artificial intelligence consciousness studies computer organization architecture member reputed bodies like indian society remote sensing computer society india apbionet international association engineers etc ijacsa international journal advanced computer science applications vol table comparative statement various network visualization analysis software features ingenuity pathway analysis metacore pathway studio genmapp cytoscape pathway voyager pathfinder developed ingenuity systems genego ariadne genomics gladstone institutes description database biological networks created millions relationships proteins genes complexes cells tissues drugs diseases manually curetted database human proteinprotein interaction interactions transcriptional factors signaling metabolism bioactive molecules archived maps drawn based textbooks articles public pathway databases generated public database maintained gene ontology project availability based database commercial ingenuity pathways knowledge base commercial human database public kegg public kegg public kegg public rdbms usec along kegg web access platform special features enabled java higher solution given pharmaceutical academics enabled java graphics tools constructing modifying pathways used analyzing microarray data include statistical filters pattern finding algorithms hierarchical clustering enabled java mrna expression profiles gene annotations gene ontology kegg incorporates statistical analysis enabled dedicated hardware software necessary analyze given datasets enabled aim comparing pathways microscopic level therefore used dynamic visualizations metabolisms whole genome perspective drawbacks limited human mouse rat canine enabled java unique ability concurrently visualize multiple types experimental data gene expression proteomic metabolomics sage mpss snp hcs hts microrna clinical phenotypic data specific server required databases collection eukaryotic molecular interactions generated medscan text knowledge suit using entire pubmed database full text journals also works public database signaling biochemical pathways commercial mammalian database resnet resnet plant database kegg bind hprd enabled python analyze proteomic metabolomics high throughput data eric altermann todd klaenhammer utilizes kegg online database pathway mapping partial whole prokaryotic genomes alexander goesmann paul shanon andrew markiel software integrating biomolecular interaction networks expression data molecular states unified conceptual framework generally used explore microarray data focuses networks lowlevel models components interactions addressed ongoing projects ecell tomita virtualcell mechanisms bridging highlevel interactions lower level certain selectable pathways ribosomal reference pathway kegg yet support organism independent marking practical reasons hits displayed pathways uses rdbms based internet application integrated locally developed genome annotation system gendb extended functionality offers wizard interface creating simple network data queries biological networks provides language interface expressing queries tool dynamic visualization metabolic pathways based annotation data ijacsa international journal advanced computer science applications vol features ingenuity pathway analysis 
genmapp cytoscape pathway voyager pathfinder gamola gendb integrated storing result data format based ingenuity product present gene medscan mapp present present present present present present specific specific resnet exchange xml formats fasta files embl genbank graph comparison species graphical user interface visualization technique ease use report generation graphical representati classificatio technique present present present csv gpml wikipathways mapp present present present present present present present present present present present present present present present present present present excellent excellent excellent excellent good good good present present present present present present present compare affected pathways phenotypes across time dose patient population disease tissue species sub cellular localization interactions metabolites internet explorer higher interrogate different species multiple genomes hierarchical clustering none hierarchical clustering chunks subway internet explorer higher internet explorer higher internet explorer higher internet explorer higher internet explorer higher linux red hat windows windows window windows unix windows ools http web browser internet explorer higher memory operating system supported reference url minimum recommended vista window macintosh references pathway studio models specific biological processes required numerous metacore
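As a note on how the workflow surveyed above looks outside the commercial packages: the open-source stack networkx plus matplotlib reproduces the basic steps — load an interaction edge list, rank nodes by degree to locate hub proteins (the scale-free behaviour discussed earlier), and render the graph with a force-directed layout of the kind Pajek and Cytoscape use. The sketch below is purely illustrative; the edge list is invented and does not come from DIP, MINT, KEGG or any other database mentioned in this survey.

import networkx as nx
import matplotlib.pyplot as plt

# made-up interaction pairs standing in for a curated PPI edge list
edges = [
    ("P53", "MDM2"), ("P53", "BAX"), ("P53", "CDK2"),
    ("CDK2", "CCNE1"), ("MDM2", "UBC"), ("BAX", "BCL2"),
    ("BCL2", "BAD"), ("UBC", "CDK2"),
]

G = nx.Graph()
G.add_edges_from(edges)

# hubs: nodes ranked by degree, highest first
print("degree ranking:", sorted(G.degree, key=lambda kv: kv[1], reverse=True))

pos = nx.spring_layout(G, seed=42)       # force-directed layout
nx.draw_networkx(G, pos, node_color="lightgray", edge_color="gray")
plt.axis("off")
plt.savefig("interaction_network.png", dpi=150)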
| 5 |
von neumann problem locally compact groups feb friedrich martin schneider abstract note generalization whyte geometric solution von neumann problem locally compact groups terms borel clopen piecewise translations strengthens result paterson existence borel paradoxical decompositions locally compact groups along way study connection geometric properties coarse spaces certain algebraic characteristics wobbling groups introduction seminal article von neumann introduced concept amenability groups order explain paradox occurs dimension greater two proved group containing isomorphic copy free group two generators amenable converse question whether every group would subgroup isomorphic first posed print day became known von neumann problem sometimes von problem original question answered negative however interesting positive solutions variants von neumann problem different settings geometric solution whyte solution gaboriau lyons generalization locally compact groups gheysens monod well baire category solution marks unger whyte geometric version reads follows theorem theorem uniformly discrete metric space uniformly bounded geometry admits partition whose pieces uniformly lipschitz embedded copies tree particular applies cayley graphs finitely generated groups turn yields geometric solution von neumann problem aim present note extend whyte relaxed version von neumann conjecture realm locally compact groups purpose need view result slightly different perspective given uniformly discrete metric space wobbling group group bounded displacement defined sym wobbling groups attracted growing attention recent years since tree isomorphic standard cayley graph one easily date february mathematics subject classification primary research supported funding german research foundation reference schn well funding excellence initiative german federal state governments friedrich martin schneider reformulate whyte terms subgroups let recall subgroup sym said element fixed point corollary theorem uniformly discrete metric space uniformly bounded geometry isomorphic subgroup finitely generated group metrics generated two finite symmetric generating sets containing neutral element equivalent hence give rise wobbling group easy see group piecewise translations bijection belongs exists finite partition furthermore note requirement statement dropped fact van douwen showed contains isomorphic copy despite amenable turns embeds wobbling group coarse space positive asymptotic dimension see proposition remark going present natural counterpart corollary general locally compact groups let locally compact group call bijection clopen piecewise translation exists finite partition clopen subsets every member map agrees left translation easy see set clopen piecewise translations constitutes subgroup homeomorphism group topological space mapping embeds regular transitive subgroup similarly bijection called borel piecewise translation exists finite partition borel subsets likewise set borel piecewise translations subgroup automorphism group borel space contains subgroup locally compact group reasonable analogues wobbling group yet mere existence embedding semiregular subgroup even prevent amenable fact many examples compact thus amenable groups admit subgroup hence subgroup example since residually finite embeds compact group formed product finite quotients therefore seek topological analogue amounts short discussion remark let set subgroup sym exists necessarily surjective map obviously latter implies former see converse let orbit 
action every since unique readily implies desired purpose note show following von neumann problem locally compact groups theorem let locally compact group following equivalent amenable exist homomorphism borel measurable map exist homomorphism borel measurable map remark map theorem injective view discussion also note finitely generated discrete groups statement theorem reduces whyte geometric solution von neumann problem specifically existence map may thought borel variant embedding condition corollary general arrange continuous exist connected locally compact groups theorem may considered relaxed versions containing discrete subgroup according result feldman greenleaf metrizable closed countable discrete subgroup locally compact group right coset projection admits borel measurable hence map borel measurable particularly applies discrete proof theorem combines result rickert resolving original von neumann problem almost connected locally compact groups theorem slight generalization whyte result coarse spaces theorem turn refines argument paterson proving existence borel paradoxical decompositions locally compact groups fact theorem implies paterson result corollary paterson locally compact group admits borel paradoxical decomposition exist finite partitions borel subsets note organized follows building preparatory work concerning coarse spaces done section prove theorem section since approach proving theorem involves wobbling groups recent interest groups furthermore include complementary remarks finitely generated subgroups wobbling groups section revisiting whyte result proof theorem make use whyte argument form corollary precisely slightly generalize result metric spaces arbitrary coarse spaces however require minor adjustments include proof sake completeness convenience let recall terminology coarse geometry may found relation set let coarse space pair consisting set collection subsets called entourages diagonal belongs friedrich martin schneider also coarse space said bounded geometry finite uniformly bounded geometry among important examples coarse spaces metric spaces metric space obtain coarse space setting sup another crucial source examples coarse spaces given group actions indeed group acting set obtain coarse space uniformly bounded geometry finite note coarse structure induced finitely generated group acting left translations coincides coarse structure generated metric associated finite symmetric generated subset containing neutral element come amenability adopting notion metric coarse geometry call coarse space bounded geometry amenable finite easily seen equivalent saying finite definition compatible existing notion amenability group actions proposition recall action group set amenable space bounded functions admits mean exists positive linear functional proposition rosenblatt action group set amenable coarse space amenable proof generalizing work amenable groups rosenblatt showed action group set amenable finite finite easily seen equivalent amenability let turn attention towards theorem straightforward adaptation whyte original argument readily provides following slight generalization theorem binary relation denote associated undirected graph furthermore let map proof theorem utilize simple observation map graph forest contains cycles periodic points means empty theorem let coarse space bounded geometry nonamenable forest von neumann problem locally compact groups proof due standard fact isoperimetric constants regular trees example symmetric tree every finite subset 
course property passes forests readily settles desired implication suppose amenable symmetric entourage every finite consider symmetric relation since every every finite subset hall harem theorem theorem asserts exists function notice fixed points since set elements partitions set may choose subset furthermore choose functions follows injective functions disjoint ranges define setting even odd otherwise observe particular therefore moreover follows however every exists smallest conclude hence readily implies thus particular fixed points furthermore even odd otherwise thus hence forest theorem corresponds corollary translate theorem equivalent statement wobbling groups given coarse space define wobbling group group bounded displacement sym since tree isomorphic standard cayley graph free group two generators obtain following consequence theorem friedrich martin schneider corollary coarse space bounded geometry isomorphic subgroup note corollary group actions applied already though without proof recent work author thom corollary topological version whyte result general necessarily locally compact topological groups terms perturbed translations established present note corollary used prove theorem generalizes whyte result locally compact groups means clopen borel piecewise translations turn quite different corollary proving main result section prove theorem sake clarity recall locally compact group said amenable mean space bounded continuous functions positive linear map preparation proof theorem note following standard fact whose straightforward proof omit lemma let subgroup locally compact group consider usual action set left cosets mean mean mean given fact see section locally compact group considered together left haar measure amenable exists mean positive linear map easy calculation provides following lemma let locally compact group mean let locally compact group let homomorphism borel measurable amenable proof clearly implies prove converse suppose let let finite partition borel subsets every desired let mean define easy see mean furthermore asserts case ambiguity invariance shall always mean left invariance von neumann problem locally compact groups hence note lemma readily settles implication theorem remaining part proof theorem rely structure theory locally compact groups importantly following remarkable result rickert building recall locally compact group said almost connected quotient connected component neutral element compact theorem theorem almost connected locally compact group discrete subgroup isomorphic everything prepared prove main result proof theorem evidently implies subgroup furthermore implies due lemma let locally compact group follows classical work van dantzig locally compact group contains almost connected open subgroup see proposition choose almost connected open hence closed subgroup distinguish two cases depending upon whether amenable amenable according theorem contains discrete subgroup isomorphic result feldman greenleaf right coset projection admits borel measurable exists borel measurable map idf clearly map borel measurable readily settles first case maps desired amenable since amenable lemma implies action set amenable proposition means coarse space amenable due corollary exists embedding thus definition exists finite subset hence find finite partition along every consider projection since open subgroup quotient topology topology induced discrete finite partition clopen subsets therefore may define setting consider unique homomorphism satisfying since 
follows every appealing remark find mapping since quotient space friedrich martin schneider discrete map continuous therefore borel measurable finally note desired completes proof let deduce paterson result theorem proof corollary clear let locally compact group theorem exist homomorphism borel measurable map consider paradoxical decomposition given taking common refinement suitable finite borel partitions corresponding elements obtain finite borel partition along mappings borel measurable refinements finite borel partitions thus data constitute borel paradoxical decomposition remarks wobbling groups going conclude additional remarks wobbling groups consider noteworthy complements corollary van douwen result shows presence subgroup wobbling group imply coarse space turns containment witness positive asymptotic dimension proposition let recall terminology asymptotic dimension asdim coarse space defined infimum every exist concept asymptotic dimension first introduced metric spaces gromov later extended coarse spaces roe refer thorough discussion asymptotic dimension related results examples aim describe positive asymptotic dimension algebraic terms unravel case following lemma let denote equivalence relation set generated given binary relation von neumann problem locally compact groups lemma let coarse space asdim every proof let without loss generality assume contains asdim exists assertion implies two distinct members disjoint hence gives partition induced equivalence relation contains thus follows let straightforward check desired properties hence asdim proof proposition rely upon following slight modification standard argument residual finiteness free groups element let denote length respect generators smallest integer represented word length letters lemma let let exists homomorphism sym proof let course let let define map sym first let define sym case analysis follows even odd even odd analogously let define sym case analysis follows even odd friedrich martin schneider even odd easy check permutations moreover considering unique homomorphism sym observe thus also sake clarity recall group locally finite finitely generated subgroups finite subset group denote hsi subgroup generated proposition let coarse space uniformly bounded geometry following equivalent asdim locally finite embeds proof denote coarse structure let recall general fact finite group set group locally finite indeed considering finite subset induced equivalence relation observe finite due finite map induces homomorphism evidently contained finite group hsi suppose asdim consider finite subset aim show hsi finite end first observe sym constitutes embedding since belongs lemma asserts note hence every due uniformly bounded geometry exists thus every let follows group sym isomorphic subgroup sym virtue since finitely generated sym locally finite remark implies finite trivial suppose asdim lemma exists loss generality may assume hence let define claim every every finite subset exists proof claim let let finite put since conclude let every von neumann problem locally compact groups follows applying pigeonhole principle find hence desired since countable may recursively apply claim choose family every two distinct let define due lemma exists homomorphism sym since follows disjoint distinct may define homomorphism sym setting otherwise construction embedding furthermore every hence image contained desired remark assumption uniformly bounded geometry theorem needed prove implies fact similar argument proof involving lemma though 
shows wobbling group coarse space uniformly bounded geometry contains isomorphic copy sym hence one might wonder whether proposition could deduced readily van douwen result embedding however exist uniformly discrete metric spaces uniformly bounded geometry positive asymptotic dimension whose wobbling group contain isomorphic copy see example clarify situation proposition usual group called residually finite embeds product finite groups group called locally residually finite finitely generated subgroups residually finite let recall map two coarse spaces bornologous every entourage set entourage proposition let coarse space following equivalent bornologous injection locally residually finite contains subgroup isomorphic remark groups difference positive asymptotic dimension existence bornologous injection group asymptotic dimension locally finite group locally finite admits bornologous injection standard compactness argument see however arbitrary coarse spaces even uniformly bounded geometry situation slightly different see example one may equivalently replace item proposition one hand inclusion map constitutes bornologous injection friedrich martin schneider hand bornologous bijection given unless explicitly stated otherwise always understand equipped coarse structure generated usual euclidean metric iii bornologous injection two coarse spaces induces embedding via otherwise hence groups mutually embed thus may equivalently replaced item proposition proof proposition due remark iii suffices show locally residually finite result gruenberg states finite group restricted wreath product lamplighter group residually finite abelian action sym given sym defines embedding sym sym image contained every sym since embedded lamplighter groups finitely generated residually finite follows locally residually finite let denote coarse structure bounded geometry exist infinite thus existing injection bornologous hence may without loss generality assume bounded geometry hand must exist infinite otherwise would locally residually finite finite subset since homomorphism sym would embed product finite groups let infinite without loss generality may assume therefore conclude thus chain closed subsets compact topological space since member bornologous injection implies remark example let partition finite intervals consider metric space given max otherwise easy see uniformly bounded geometry moreover lemma unboundedness assumption interval lengths follows positive asymptotic dimension hand essentially finiteness considered intervals bornologous injection due proposition readily implies embed von neumann problem locally compact groups interplay certain geometric properties coarse spaces one hand algebraic peculiarities wobbling groups subject recent attention would interesting results direction understand specific positive values asymptotic dimension may characterized terms wobbling groups acknowledgments author would like thank andreas thom interesting discussions whyte variant von neumann conjecture well warren moors jens helpful comments earlier versions note references tullio michel coornaert hall harem theorem cellular automata groups springer monographs mathematics springer berlin heidelberg tullio rostislav grigorchuk pierre harpe amenability paradoxical decompositions pseudogroups discrete metric spaces proc steklov inst math yves cornulier irreducible lattices invariant means commensurating actions math david van dantzig zur topologischen algebra iii brouwersche und cantorsche gruppen compositio math mahlon 
day amenable semigroups illinois math eric van douwen measures invariant actions topology applications jacob feldman frederick greenleaf existence borel transversals groups pacific math erling groups full banach mean value math scand damien gaboriau russell lyons solution von neumann problem invent math frederick greenleaf invariant means topological groups applications van nostrand mathematical studies van nostrand reinhold new kate juschenko nicolas monod cantor systems piecewise translations simple amenable groups ann math kate juschenko mikael salle invariant means wobbling group bull belg math soc simon stevin andrew marks spencer unger baire measurable paradoxical decompositions via matchings adv math maxime gheysens nicolas monod fixed points bounded orbits hilbert spaces august arxiv appear annales scientifiques normale michail gromov asymptotic invariants infinite groups geometric group theory vol sussex london math soc lecture note ser cambridge univ press cambridge karl gruenberg residual properties infinite soluble groups proc london math soc pierre harpe topics geometric group theory chicago lectures mathematics university chicago press chicago john von neumann die analytischen eigenschaften von gruppen linearer transformationen und ihrer darstellungen math alexander question existence invariant mean group uspekhi mat nauk theodore palmer banach algebras general theory vol encyclopedia mathematics applications cambridge university press cambridge alan paterson nonamenability borel paradoxical decompositions locally compact groups proc amer math soc neil rickert amenable groups groups fixed point property trans amer math soc friedrich martin schneider neil rickert properties locally compact groups austral math soc john roe lectures coarse geometry university lecture series american mathematical society providence joseph rosenblatt generalization condition math scand friedrich schneider andreas thom sets topological groups august arxiv kevin whyte amenability equivalence von neumann conjecture duke math institute algebra dresden dresden germany current address department mathematics university auckland private bag auckland new zealand address
| 4 |
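The row ending above (label 4) reasons about the asymptotic dimension of a coarse space and repeatedly uses its zero-dimensional case, but the stopword-stripped extraction has lost the inline definition. As a hedged aid only, the LaTeX fragment below restates the standard Gromov/Roe formulation the row appears to rely on; it is the textbook definition, not a quotation from the row itself.

```latex
% Asymptotic dimension of a coarse space (X, \mathcal{E}); standard formulation.
\[
  \operatorname{asdim}(X,\mathcal{E}) \le n
  \;\Longleftrightarrow\;
  \forall E \in \mathcal{E} \;\; \exists\, \mathcal{U}_0,\dots,\mathcal{U}_n :
  \;\; X = \bigcup_{i=0}^{n} \bigcup \mathcal{U}_i ,
\]
\[
  \text{each family } \mathcal{U}_i \text{ uniformly bounded and } E\text{-disjoint, i.e. }
  U \neq V \in \mathcal{U}_i \;\Rightarrow\; (U \times V) \cap E = \emptyset .
\]
% The zero-dimensional case used in the row: asdim(X) = 0 means that for every
% entourage E the equivalence relation generated by E partitions X into
% uniformly bounded classes.
```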
modal analysis laminated glass usability simplified methods enhanced effective thickness jan alena jan zeman janda jaroslav schmidt michal department mechanics faculty civil engineering czech technical university prague abstract paper focuses modal analysis laminated glass beams multilayer elements stiff glass plates connected compliant interlayers behavior aim study assess whether approximate techniques accurately predict behavior laminated glass structures propose easy tool modal analysis based enhanced effective thickness concept galuppi purpose consider four approaches solution related nonlinear eigenvalue problem solver based newton method modal strain energy method two effective thickness concepts comparative study free vibrating laminated glass beams performed considering different geometries boundary conditions material parameters interlayers two ambient temperatures viscoelastic response polymer foils represented generalized maxwell model show simplified approaches predict natural frequencies acceptable accuracy examples however considerable scatter predicted loss factors enhanced effective thickness approach adjusted modal analysis leads lower errors quantities compared two simplified procedures reducing extreme error loss factors one half compared modal strain energy method one quarter compared original dynamic effective thickness method keywords free vibrations laminated glass complex dynamic modulus dynamic effective thickness enhanced effective thickness modal strain preprint submitted arxiv january energy method newton method introduction laminated glass multilayer composite made glass layers plastic interlayers typically polymers foils improve behavior originally brittle glass elements increase damping therefore allow applications prohibited traditional glass transparent structures avoiding resonance reducing noise vibrations laminated glass components thus substantial building structures also car ship design process others thus reliable prediction natural frequencies damping characteristics associated vibration mode essential issue design dynamically loaded structures case laminated glass free vibration analysis leads viscoelastic behavior foil eigenvalue problem complex eigenvalues eigenvectors corresponding natural angular frequencies mode shapes addition nonlinearity due response polymer interlayer adds complexity analysis several approaches analyzing vibrations viscoelastically damped layered composites found literature paper broadly divide methods three groups numerical approaches solving complex eigenvalue problem directly simplified numerical approximations dealing real eigenmode problem iii analytical methods effective thickness methods derived analytical models comparison selected solvers problems shows converge towards eigenvalues computational time number iterations differ computational cost reduced using simplified numerical methods deal real eigenvalue problem corresponding delayed elasticity take account real part complex stiffness core damping parameters obtained eigenvalues eigenmodes using modal strain energy method discussed later paper structures simple boundary conditions geometry analytical solutions derived frequencydependent behavior polymer foil provide natural frequencies loss factors using iterative algorithm recently dynamic effective thickness approach laminated glass beams proposed pelayo using complex flexural stiffness introduced concept extended towards plates multilayer laminated glass beams validation dynamic effective thickness method 
results experimental testing shows using approach natural frequencies predicted good accuracy high scatter loss factors therefore want analyze accuracy response effective thickness simplified approaches investigate usability modal analysis laminated glass elements propose improvements specifically perform comparative study free vibrating beams using selected solvers representing three groups introduced propose easy tool modal analysis based enhanced effective thickness concept galuppi best knowledge comparison complex approximate models performed laminated glass far methods compared beams symmetric asymmetric different ambient temperatures viscoelastic behavior polymer foil described generalized maxwell model several sets parameters chain taken literature used case study evaluate discuss effect various materials used laminated glass structures also different maxwell chain parameters type interlayer structure paper follows geometry laminated glass beam material characterization glass polymer layers outlined section approaches based finite element methods newton method modal strain energy method introduced section formula natural frequencies presented section combined effective thickness concept using dynamic effective thickness enhanced effective thickness adjust modal analysis results case study presented analyzed section finally summarize findings section characterization laminated glass structures configuration laminated glass beam paper common configuration two face glass plies one polymer interlayer see figure handled simplicity however extension towards multilayer elements possible discussed approaches slipping interface glass ply polymer foil assumed glass polymer glass figure basic composition laminated glass sandwich materials constitutive behavior glass polymer layers remains presented methods glass treated elastic material whereas polymer behavior assumed linear viscoelastic supposed damping glass small comparison interlayer thus negligible therefore behavior glass layer described three parameters young modulus poisson ratio density resp face layers different types glass variety interlayer materials laminated glass broad ranging across common polyvinyl butyral pvb acetate eva thermoplastic polyurethane tpu stiffest ionoplast sgp sentryglas plus density interlayer material assume poisson ratio constant see viscoelastic behavior polymers commonly described generalized maxwell model fractional derivative model study generalized maxwell model used two reasons laminated glass description common literature according comparison hamdaoui requires less computational effort fractional derivative model eigenvalue problems figure generalized maxwell chain consisting viscoelastic units one elastic spring schematic representation maxwell model figure corresponds relaxation function provided prony series page exp exp stands current time shear modulus denotes shear modulus unit relaxation time related viscosity elastic shear modulus whole chain frequency domain shear modulus given complex valued quantity composed real part complex part dependent angular frequency according real part storage modulus refers elastic behavior whereas imaginary part loss modulus represents energy dissipation effects decomposition useful formulation free vibration problem unlike elastic shear modulus whole chain set real part instead shear modulus avoid computational difficulties chains maxwell chain parameters different interlayer materials found instance selected five representative sets prony series related 
parameters reported appendix loss modulus mpa storage modulus mpa sgpm tpum pvb pvb pvb frequency frequency figure dependence real part storage modulus imaginary part loss modulus shear modulus pvb tpu sgp frequency maxwell chain parameters taken evident figure data differ even polymer type different test methods may result slightly different parameters interlayer properties different content additives plasticizers used manufacturing also partially contributes discrepancy besides authors mostly specify frequency range prony series determined except pvba effect temperature accounted using superposition principle given ambient temperature relaxation times shifted factor derived equation log material constants correspond reference temperature finite element models refined beam element laminated glass small thickness interlayer assume shear deformation viscoelastic foil responsible damping transverse compressive strain negligible thus treat layer beam element numerical analysis assume planar individual layers whole composite three layers laminated glass beam constrained together compatibility equations constraint taken account using lagrange multipliers additional unknown nodal forces holding adjacent layers together static condensation dependent generalized displacements second option used study delamination assumed modal analysis master dofs slave dofs glass polymer glass figure master slave degrees freedom dofs refined beam element horizontal central displacements deflections rotations according timoshenko beam theory nine unknowns per beam horizontal vertical centerline displacements rotations layer see figure four unknowns eliminated using four compatibility conditions assuming perfect adhesion horizontal vertical directions elimination together outline element stiffness mass matrices found appendix note use linear basis functions evaluation stiffness mass matrices selective integration scheme avoid shear locking governing equations nonlinear eigenvalue problem discretization free vibration problem described governing equation eigenvalues eigenvectors solving problem represent squared angular frequencies associated mode shapes mass matrix constant frequency independent whereas complex stiffness matrix depends complex frequency system similarly stiffness matrix decomposed elastic matrix includes contributions glass faces stiffness matrix interlayer corresponding instantaneous shear modulus frequency dependent part constant matrix free vibration problem rewritten introduced obtain problem see mode shape eigenvector solving problem damping thus assumed auxiliary problem stiffness matrix constant solve used solver based implicitly restarted arnoldi method natural frequencies loss factors natural frequency fhz modal loss factor associated mode shape determined relevant squared frequencies according thus fhz note decided omit index referring given mode shape avoid profusion notation later section compare natural frequencies loss factors corresponding first three mode shapes solver using newton method cnm newton method applied extended system obtain pair eigenvalue eigenvector approach belongs class methods iterating individual eigenpair independently starting initial eigenpairs solving express approximations searched frequency mode shape previous approximations increments form linearization evaluation jacobian matrix newton method leads system linear equations page operator derivative provided initial pair obtained realeigenvalue problem stopping criterion defined norm residual weighted norm 
relevant mode shape user defined tolerance proposed eigenvalue solver verified model mead results model adopted laminated glass beam free ends response good agreement errors frequencies loss factors less minor discrepancies caused different assumptions model zero young modulus interlayer different stopping tolerance iterative solvers specified ambiguity prony series conversion young modulus shear modulus unclear assumption constant value poisson ratio bulk modulus see solver modal strain energy method mse especially structures direct evaluation complexvalued solution expensive therefore natural frequencies often approximated simplification problem recall due approximation stiffness matrix estimates eigenpairs real squared angular frequencies real mode shape vectors consider approximate stiffness corresponds real part whole stiffness matrix evaluated relevant frequency therefore stiffness matrix updated iteratively convergence angular frequency estimate loss factors converged real eigenpairs modal strain energy method introduced ungar kerwin later popularized finite element models johnson kienholz method assumes changes occur damped mode shapes therefore suitable lightly damped structures modal loss factor individual mode determined according derivation formula rayleigh quotient described iterative procedure provides accurate values natural frequencies loss factors original approximation technique different ways determining approximate stiffness matrix found literature see original one starts constant initial value interlayer shear modulus adjusts computed loss factor correction factor taking account change material properties due frequency shift effective thickness approaches expression natural angular frequencies beams effective thickness formulations found literature laminated glass beams plates static loading whereas best knowledge one effective thickness approach exists dynamic problems general effective thickness methods based calculating constant thickness monolithic element width length gives response laminated glass beam identical loading boundary conditions example thickness thicknesses defined laminated glass structures static bending obtain extreme values deflection stresses paper effective thickness concept applied vibrating laminated glass beams dynamic effective thickness used modal parameters calculation analytical expressions natural angular frequencies monolithic beam given wavenumber young modulus glass mass per unit length width beam hef dynamic effective thickness two expressions effective thickness introduced next two sections note wavenumbers usually expressed form length beam depends boundary conditions mode shape chapter due dependency dynamic effective thickness frequency search requires iterations convergence achieved according criterion frequency fhz loss factor determined complex valued eqs dynamic effective thickness concept det dynamic effective thickness introduced pelayo iterative algorithm extended plates effective thickness derived formula effective stiffness ross beam purely elastic face layers linearly viscoelastic core however analytical model also used different boundary conditions using known relevant wavenumbers beams assumption glass parameters glass layers expression dynamic effective thickness holds hef geometric parameter depends thicknesses individual layers shear parameter additionally includes wavenumber young modulus glass complex shear modulus polymer recall enhanced effective thickness adjusted modal analysis eet one effective thickness 
approaches layered beams static loading enhanced effective thickness method galuppi royercarfagni galuppi derived variational formulation assumption previous section enhanced effective thickness deflection expressed hef coefficient shear cohesion ratio glass interlayer stiffnesses shape coefficient dependent normalized shape deflection curve homogeneous beam parameters follow areas second moments area itot defined relations recall figure itot bhi two intuitive adjustments method made use modal analysis shear modulus interlayer used coefficient shape function deflection static loading replaced one corresponding mode shape monolithic beam given boundary conditions chapter resulting parameters corresponding three basic boundary conditions summarized table beam mode table shape coefficients three basic boundary conditions first three mode shapes beam length effective thickness also det approach previous section natural frequencies damping evaluated according note procedure extended plate structures similarly static variant leave future work case study section usability three methods introduced assessed laminated glass beams section divided two parts first introducing selected test examples second discussing results effect input data usability modal strain energy method two effective thickness approaches examples examined collection examples results combinations input data clampedclamped beams table attribute supports interlayer sgpm tpum pvbm pvbs pvba temperature pvbs pvba units table input data boundary conditions length width laminated beam constant whereas three configurations considered assess effect layout thickness interlayer see figure material properties glass three interlayer materials three different sources figure summarized appendix calculations carried two ambient temperatures room temperature cases elevated temperature two parameters specified finally sake completeness table contains relation wavenumbers used effective thickness approaches results discussion natural frequencies loss factors first three determined according detailed simplified algorithms described beam first three modes corresponding rigid body motion skipped solvers considered comparison beam mode table wavenumbers three basic boundary conditions first three mode shapes beam length sections set tolerance methods finite element solvers beam layer discretized elements compared results obtained using refined discretization elements per length largest errors tested examples natural frequencies loss factors modal response obtained simplified methods compared reference method based solver using newton method recall section method discussed separate section mse cnm figure visualize errors natural frequencies loss factors obtained using mse method results reference method cnm red mark inside box indicates median bottom top edges box indicate percentiles respectively boxplot whiskers standard maximum length times interquartile range remaining data points outliers plotted individually natural frequencies error examples less specifically error less tested cases average error less thus solver provide reasonable approximation natural frequencies mostly sufficiently accurate design purposes substantial difference errors three tested boundary conditions errors slightly decrease higher mode shapes however loss factors differ significantly reference solution errors stay configurations increase cases error decreases increasing number kinematic boundary constraints higher modes highest errors loss factors correspond first mode 
shape errors decrease error loss factor error frequency mode shape mode shape figure errors natural frequencies loss factor modal strain energy method mse reference solution cnm first three mode shapes simplysupported beams frequency mse beam beam pcc beam pcc pcc loss factor mse frequency cnm pcc pcc pcc loss factor cnm sgpm tpum pvb pvb pvb pvb pvb figure plot first mode shape natural frequencies loss factors response simplified method mse plotted reference solution cnm along pearson correlation coefficient pcc maxwell chain parameters pvb tpu sgp taken second third mode shape consideration plots figure show values natural frequencies loss factors simplified method mse reference solution cnm corresponding first mode shape cases table values frequencies loss factors strongly influenced effect temperature interlayer type also highest errors loss factors appears pvbm foil errors remain foils samples pvba foil show entirely different response two pvb foils corresponding eigenfrequencies fall outside frequency range prony series det cnm comparison modal response dynamic effective thickness method det reference method cnm similarly previous section shown terms errors natural frequencies loss factors figure plots figure clear method gives accurate results simplysupported beams errors frequencies loss factor predictions quite accurate well errors boundary conditions method provide satisfactory approximations especially loss factors even used adjusted wavenumber given boundary conditions also seen quantilequantile plot figure errors frequencies exceed percentile errors loss factors errors decreasing higher mode shapes eet cnm analogy previous sections errors associated eet method reference solution cnm plotted figure plots appear figure simply supported beam errors remain frequencies loss factors case det method boundary conditions errors frequencies loss factors lower case det approach specifically frequencies loss factors highest errors appear beams lowest beam method provides good approximations natural frequencies error loss factor error frequency mode shape mode shape figure errors natural frequencies loss factor dynamic effective thickness method det reference solution cnm first three mode shapes simplysupported beams frequency det beam beam pcc beam pcc pcc loss factor det frequency cnm pcc pcc pcc loss factor cnm sgpm tpum pvb pvb pvb pvb pvb figure plot first mode shape natural frequencies loss factors response dynamic effective thickness method det plotted reference solution cnm along pearson correlation coefficient pcc maxwell chain parameters pvb tpu sgp taken error loss factor error frequency mode shape mode shape figure errors natural frequencies loss factor adjusted enhanced effective thickness method eet reference solution cnm first three mode shapes beams frequency eet beam beam pcc beam pcc pcc loss factor eet frequency cnm pcc pcc pcc loss factor cnm sgpm tpum pvb pvb pvb pvb pvb figure plot first mode shape natural frequencies loss factors response adjusted enhanced effective thickness method eet plotted reference solution cnm along pearson correlation coefficient pcc maxwell chain parameters pvb tpu sgp taken three boundary conditions best estimates loss factors tested simplified methods comparison simplified approaches beam mse det set errors cnm beam beam pvb pvb pvb pvb pvb tpum pvb sgpm pvb pvb pvb tpum pvb sgpm pvb pvb pvb pvb tpum sgpm pvb figure summary errors natural frequencies obtained simplified methods reference solution cnm corresponding first mode shape tested 
cases simplified approaches modal strain energy mse dynamic effective thickness det enhanced effective thickness eet methods finally errors quantities predicted three methods summarized figures tested examples first mode shape present study thus shows natural frequencies modal strain energy effective thickness methods give good approximations mse eet det beam mse det set errors cnm beam beam pvb pvb pvb pvb pvb tpum pvb sgpm pvb pvb pvb tpum pvb sgpm pvb pvb pvb pvb pvb tpum sgpm figure summary errors loss factors obtained simplified methods reference solution cnm corresponding first mode shape tested cases simplified approaches modal strain energy mse dynamic effective thickness det enhanced effective thickness eet methods effective thickness approaches provide excellent results simplysupported laminated glass beams error less however methods predict cases loss factors laminated glass beams errors tens percent eet mse det therefore provide informative estimate damping beams effective thickness methods deliver loss factors errors less room temperature evaluated temperature typical exterior facade panel summer algorithm based eet provides best estimates loss factors tested simplified methods symmetry asymmetry geometry influence level errors worth mentioning beams enhanced effective method adjusted dynamics eet gives exactly results original dynamic effective thickness method det see tables shape coefficients squared wavenumbers simplysupported beam replaced used shape coefficient derived table squared wavenumbers table would obtain results methods boundary conditions detailed study unexpected connection goes beyond current work performed separately conclusions four methods modal analysis laminated glass structures introduced paper numerical eigensolver based newton method eigensolver complemented modal strain energy method original enhanced dynamic effective thickness method aim paper assess usability last three practical methods comparing predictions complexvalued eigensolver enhanced effective thickness method galuppi proposed presented extension current method modal analysis laminated glass following conclusions made present study study underlines importance careful predicting damping laminated glass loss factor sensitive quantity affects errors approximations provided simplified methods particular natural frequencies predicted errors less using suitable method hold loss factors level errors approximated quantities depends boundary conditions effective thickness approaches parameters generalized maxwell model methods material parameters interlayers literature leads different natural frequencies loss factors even type polymer enhanced effective thickness approach adjusted modal analysis provides approximations quantities natural frequencies loss factors lower errors compared two simplified procedures simplified methods approaches reduce computational time cost important place design engineering structures limitations study confirms suitability modal analysis laminated glass structures acknowledgments publication supported czech science foundation project references andreozzi bati fagone ranocchiai zulli dynamic torsion tests characterize properties polymeric interlayers laminated glass construction building materials bedon kalamar low velocity impact performance investigation square hollow glass columns via experiments finite element analyses composite structures bennison qin davies laminated glass structurally efficient glazing innovative structures sustainable facades hong kong bilasse 
daya azrar linear nonlinear vibrations analysis viscoelastic sandwich beams journal sound vibration biolzi cagnacci orlando piscitelli rosati long term response joints composites part engineering christensen theory viscoelasticity introduction elsevier second edition clough penzien dynamics structures computers structures daya numerical method nonlinear eigenvalue problems application vibrations viscoelastic structures computers structures duser jagota bennison analysis butyral laminates subjected uniform pressure journal engineering mechanics galuppi manara practical expression design laminated glass composites part engineering galuppi effective thickness laminated glass plates journal mechanics materials structures galuppi effective thickness laminated glass beams new expression via variational approach engineering structures giovanna zulli andreozzi fagone test methods determination interlayer properties laminated glass journal materials civil engineering asce hamdaoui akoussan daya comparison nonlinear eigensolvers modal analysis frequency dependent laminated sandwich plates finite elements analysis design hooper blackman dear mechanical behaviour poly vinyl butyral different strain magnitudes strain rates journal materials science huang qin chu damping mechanism sandwich structures composite structures johnson kienholz finite element prediction damping structures constrained viscoelastic layers aiaa journal larcher solomos casadei gebbeken experimental numerical investigations laminated glass subjected blast loading international journal impact engineering pelayo frequency response laminated glass elements analytical modeling effective thickness applied mechanics reviews pelayo dynamic effective thickness beams plates composites part engineering mead measurement loss factors beams plates constrained unconstrained damping layers critical assessment journal sound vibration mead markus forced vibration damped sandwich beam arbitrary boundary conditions journal sound vibration mead markus loss factors resonant frequencies encastr damped sandwich beams journal sound vibration mohagheghian wang jiang zhang guo yan kinloch dear bending low velocity impact performance monolithic laminated glass windows employing chemically strengthened glass european journal mechanics mohagheghian wang zhou guo yan charalambides dear deformation damage mechanisms laminated glass windows subjected high velocity soft impact international journal solids structures pelayo natural frequencies damping ratios laminated glass beams using dynamic effective thickness journal sandwich structures materials pelayo hermans fraile modal scaling laminated glass plate international operational modal analysis conference pages radke matlab implementation implicitly restarted arnoldi method solving eigenvalue problems phd thesis rice university rao recent applications viscoelastic damping noise control automobiles commercial airplanes journal sound vibration rikards chate barkanov finite element analysis damping vibrations laminated composites computers structures ross ungar kerwin damping plate flexural vibrations means viscoelastic laminae proc colloq structural damping american society mechanical engineers pages schreiber nonlinear eigenvalue problems methods nonlinear rayleigh functionals phd thesis technischen universitat berlin shitanoki bennison koike practical nondestructive method determine shear relaxation modulus behavior polymeric interlayers laminated glass polymer testing treviso van genechten mundo tournour 
damping composite materials properties models composites part engineering ungar kerwin loss factors viscoelastic systems terms energy concepts journal acoustical society america williams landel ferry temperature dependence relaxation mechanisms amorphous polymers glassforming liquids journal american chemical society zeman comparison viscoelastic finite element models laminated glass beams international journal mechanical sciences appendix material properties glass interlayers appendix summarize used material properties glass three different polymers taken literature tables glass density young modulus elasticity poisson ratio gpa table glass properties density poisson ratio sgpm tpum pvbm pvbs pvba table interlayer properties sgpm tpum pvbm instantaneous shear moduli mpa mpa coefficients relaxation times table parameters generalized maxwell model sgpm tpum pvbm pvbs pvba shear moduli shear moduli relaxation times mpa mpa parameters temperature shifting table parameters generalized maxwell model pvbs pvba appendix numerical aspects finite element discretization briefly presented section following exposition rikards study deals laminated glass beam made three layers discretization unknowns per element unknowns per cross section recall figure therefore kinematics element specified vector nodal displacements rotations ufull using timoshenko beam theory express compatibility conditions corresponding perfect horizontal vertical adhesion interface therefore decompose vector generalized nodal displacements follows uslave independent unknowns uslave dependent ones compatibility conditions written compact form ufull transformation matrix using elimination unknowns element mass matrix initial stiffness matrix matrix recall corresponding independent generalized nodal displacements follow full full full therefore matrices derived original one use consistent mass matrix stiffness matrices derived using selective integration scheme avoid shear locking
| 5 |
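The laminated-glass row ending above (label 5) models the polymer interlayer with a generalized Maxwell chain: a Prony-series relaxation function, frequency-dependent storage and loss shear moduli, and a WLF shift of the relaxation times for the ambient temperature. Below is a hedged sketch of those standard viscoelasticity relations only — the function and argument names are illustrative rather than the paper's notation, and the Prony coefficients would have to be taken from the appendix tables the row mentions.

```python
import numpy as np

def wlf_log_shift(T, T_ref, C1, C2):
    """log10 of the WLF shift factor a_T at ambient temperature T."""
    return -C1 * (T - T_ref) / (C2 + (T - T_ref))

def maxwell_complex_shear(omega, G_inf, G_prony, tau_prony,
                          T=None, T_ref=None, C1=None, C2=None):
    """Storage (G') and loss (G'') shear moduli of a generalized Maxwell chain.

    Relaxation function: G(t) = G_inf + sum_i G_prony[i] * exp(-t / tau_prony[i]),
    hence G*(omega) = G_inf + sum_i G_prony[i] * (1j*omega*tau_i) / (1 + 1j*omega*tau_i).
    """
    G = np.asarray(G_prony, dtype=float)
    tau = np.asarray(tau_prony, dtype=float)
    if T is not None:  # time-temperature superposition: shift the relaxation times
        tau = tau * 10.0 ** wlf_log_shift(T, T_ref, C1, C2)
    wt = omega * tau
    storage = G_inf + np.sum(G * wt**2 / (1.0 + wt**2))  # real part of G*
    loss = np.sum(G * wt / (1.0 + wt**2))                 # imaginary part of G*
    return storage, loss
```

The ratio loss/storage then gives the interlayer's material loss factor at a trial frequency, which is the frequency-dependent quantity that the iterative eigensolvers described in the row re-evaluate on every update.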
online representation learning single hebbian networks image classification yanis bahroun andrea soltoggio jan loughborough university computer science department leicestershire united kingdom abstract unsupervised learning permits development algorithms able adapt variety different data sets using underlying rules thanks autonomous discovery discriminating features training recently new class local unsupervised learning rules neural networks developed minimise similarity matching costfunction shown perform sparse representation learning study tests effectiveness one learning rule learning features images rule implemented derived nonnegative classical multidimensional scaling applied single architectures features learned algorithm used input svm test effectiveness classification established image dataset algorithm performs well comparison unsupervised learning algorithms networks thus suggesting validity design new class compact online learning networks keywords classification competitive learning feature learning hebbian learning online algorithm neural networks sparse coding unsupervised learning introduction biological synaptic plasticity hypothesized one main phenomena responsible human learning memory one mechanism synaptic plasticity inspired hebbian learning principle states connections two units neurons strengthened simultaneously activated artificial neural networks implementations hebbian plasticity known learn recurring patterns activations use extensions rule oja rule generalized hebbian rule also called sanger rule permitted development algorithms proved particularly efficient tasks online dimensionality reduction two important properties models namely competitive learning sparse coding performed using hebbian learning rules properties achieved inhibitory connections extend capabilities learning rules beyond simple extraction principal component input data continuous local update dynamics hebbian learning also make suitable learning continuous stream data algorithm take one image time memory requirements independent number samples online representation learning hebbian networks study employs learning rules derived similarity matching applies perform online unsupervised learning features multiple image datasets rule proposed applied first time online features learning image classification single architectures quality features assessed visually performing classification linear classifier working learned features simulations show simple hebbian network outperform complex models sparse autoencoders sae restricted boltzmann machines rbm image classifications tasks applied architectures rule learns additional features study first kind perform sparse dictionary learning based similarity matching principle developed apply image classification network derived similarity matching rule implemented network used work derives adaptation classical multidimensional scaling cmds cmds popular embedding technique unlike dimensionality reduction techniques pca cmds uses input matrix similarity inputs generate set embedding coordinates advantage mds kind distance similarity matrix analyzed however simplest form cmds produces dense features maps often unsuitable considered image classification therefore adaptation cmds introduced recently used overcome weakness model implemented nonnegative classical multidimensional scaling three properties takes similarity matrix input produces sparse codes implemented using new biologically plausible hebbian model rule introduced given follows set inputs 
concatenation inputs defines input matrix output matrix encodings element corresponds sparse overcomplete representation input embedding input objective function proposed arg min frobenius norm gram matrix inputs corresponds similarity matrix solving directly requires storing increases time making online learning difficult thus instead online learning version expressed arg min components solution found using coordinate descent yit max wit mit online representation learning hebbian networks wijt yit xtj mijt yit yit yjt yit found using recursive formulations wij wij wij yit green arrows blue arrows interpreted respectively feedforward synaptic connections input hidden layer lateral synaptic inhibitory connections within hidden layer weight matrices fixed sizes updated sequentially makes model suitable online learning architecture network represented figure input layer hidden layer output layer connections lateral synaptic connections fig network lateral connections derived model learn features images new model presented study input data vectors composed patches taken randomly training dataset images every new input presented model first computes sparse activity second synaptic weights modified based local learning rules requiring current neuronal activities model seen sparse encoding followed recursive updating scheme well suited solve online problems svm classifies pictures using output vectors obtained simple pooling feature vectors obtained input images trained network particular given input image neuron output layer produces new image called feature map pooled quadrants form terms input vector svm online representation learning hebbian networks neural network proposed approach layers network stacked similarly convolutional dbn hierarchical network weights first layer second layer continuously updated unlike cnns used layer due positivity constraint combination rectified linear unit activation function interneuronal competition model combines powerful architecture convolutional neural networks using relu activation interneuronal competition synaptic weights updated using online local learning rules layers average pooling used downsample feature maps overcompleteness representation part evaluation new model important assess performance different sizes hidden layers number neurons exceeds size input representation called overcomplete overcompleteness may beneficial requires increased computation particularly deep networks number neurons grow exponentially order keep property one motivation overcompleteness may allow flexibility matching output structure input however learning algorithms learn take advantage overcomplete representations behaviour algorithm analysed transition undercomplete overcomplete representations although model might benefit large number neurons practical perspective increase number neurons challenge models due number operations required coordinate descent order limit computational cost training large network still benefiting overcomplete representations study proposes train simultaneously three neural networks different receptive field sizes pixels thus variation model tested composed three different networks architecture parallel networks different receptive field sizes requires less computational time memory model one receptive field size total number neurons synaptic weights connect neurons within neural network model called following parameters preprocessing architecture used following tunable parameters receptive field size neurons number neurons parameters 
standard cnns influence online model needs investigated computer vision models understanding influence input preprocessing critical importance biological plausibility practical applicability recent findings confirm partial decorrelation input signal retinal ganglion cells influence input decorrelation applying whitening investigated online representation learning hebbian networks results effectiveness algorithm assessed measuring performance image classification task acknowledge classification accuracy best implicit measure evaluating performance representation learning algorithms provides standardised way comparing following single multilayer neural networks combined standard svm trained dataset evaluation model first experiment tested performance model without whitening input data although exist hebbian networks perform online whitening offline technique based singular value decomposition applied experiments figure show features learned network raw input whitened input respectively features learned raw data neither sharp localised filters slightly capture edges whitened data features sharp localised resemble gabor filters observed primary visual cortex fig sample features learned raw whitened input classification accuracy raw whitened input features learned raw data features learned whitened data accuracy using raw data accuracy using whitened data neurons neurons neurons neurons accuracy accuracy neurons neurons neurons neurons receptive field size receptive field size second set experiments performance network tested varying receptive field sizes varying network sizes neurons results show performance peaks receptive field size pixels begins decline property common unsupervised learning algorithms showing difficulty learning spatially extended features online representation learning hebbian networks ures also show every configuration performance algorithm largely uniformly improved whitening applied input comparison performances online training various unsupervised learning algorithms tested dataset spherical particular proved outperform autoencoders restricted boltzmann machines providing simple efficient solution dictionary learning image classification thus spherical used benchmark evaluate performance network unsupervised learning algorithms increasing number output neurons reach overcompleteness also improved classification performance although singlelayer neural network higher degree sparsity proposed results shown appear performance optimal configurations classification accuracy network training shown graph suggests features learned network time help system improve classification accuracy significant demonstrates first time effectiveness features learned minimisation obvious priori online optimisation sparse similarity matching produces features suitable image classification fig proposed model classification accuracy online training optimal setup hebbian network number neurons accuracy accuracy hebbian network number input streamed shown table network outperforms single resolution network algorithm reaching accuracy model shows better performance requiring less computation memory single resolution model also outperforms single layer nomp sparse tirbm complex models outperformed combined models models three layers online representation learning hebbian networks algorithm accuracy single resolution neurons neurons neurons layers neurons sparse rbm convolutional dbn sparse tirbm neurons combined transformations neurons single layer nomp neurons nomp layers neurons combining table 
comparison network unsupervised learning algorithms evaluation model single resolution neural network different numbers neurons layer trained similarly network previous section table correspond respectively features learned first second layer results show alone less discriminative indicated fig however combined model achieves better performance layer considered separately nevertheless preliminary results indicate sizes two layers unevenly affect performance network future test may investigate architecture outperform largest shallow networks neurons layer neurons layer neurons layer table classification accuracy network conclusion work proposes neural network exploiting rules learn features image classification network trained image dataset prior feeding linear classifier model successfully learns online discriminative representations data number neurons number layers increase overcompleteness representation critical learning relevant features results show minimum unsupervised learning time needed optimise network leading better classification accuracy finally one online representation learning hebbian networks key factor improving image classification appropriate choice receptive field size used training network findings prove neural networks trained solve problems complex sparse dictionary learning hebbian learning rules delivering competitive accuracy compared encoder including deep neural networks makes deep hebbian networks attractive building image classification systems competitive performances suggests model offer alternative batch trained neural networks ultimately thanks architecture learning rules also stands good candidate memristive devices moreover decaying factor added proposed model might result algorithm deal complex datasets temporal variations distributions references pehlevan chklovskii retinal ganglion cells project natural scenes principal subspace whiten arxiv preprint coates lee analysis networks unsupervised feature learning aistats vol cox cox multidimensional scaling crc press krizhevsky hinton convolutional deep belief networks unpublished manuscript krizhevsky hinton learning multiple layers features tiny images lin kung stable efficient representation learning nonnegativity constraints proceedings international conference machine learning mairal koniusz harchaoui schmid convolutional kernel networks advances neural information processing systems oja neural networks principal components subspaces international journal neural systems olshausen emergence receptive field properties learning sparse code natural images nature pehlevan chklovskii normative theory adaptive dimensionality reduction neural networks advances neural information processing systems pehlevan chklovskii network derived online nonnegative matrix factorization cluster discover sparse features asilomar conference signals systems computers ieee poikonen laiho online linear subspace learning analog array computing architecture cnna rumelhart zipser feature discovery competitive learning cognitive science sanger optimal unsupervised learning linear feedforward neural network neural networks sohn lee learning invariant representations local transformations proceedings international conference machine learning
| 1 |
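The Hebbian-network row ending above (label 1) trains a single hidden layer online: for each input patch a sparse nonnegative code is obtained by coordinate descent, y_i = max(W_i x − M_i y, 0), and the feedforward (W) and lateral inhibitory (M) weights are then adjusted with local rules scaled by each neuron's cumulative squared activity. The sketch below shows one common form of these similarity-matching updates, in the style of the Pehlevan–Chklovskii rules the row cites; the exact step-size convention in the paper may differ, and all names are illustrative.

```python
import numpy as np

def nsm_online_step(x, W, M, y2_cum, n_iter=100, tol=1e-6):
    """One online step of a nonnegative similarity-matching Hebbian layer.

    x      : input patch, shape (d,)
    W      : feedforward weights, shape (k, d)
    M      : lateral inhibitory weights, shape (k, k), zero diagonal
    y2_cum : running sums of squared activities, shape (k,)
    """
    k = W.shape[0]
    y = np.zeros(k)
    b = W @ x
    # coordinate descent for the sparse nonnegative code
    for _ in range(n_iter):
        y_prev = y.copy()
        for i in range(k):
            y[i] = max(b[i] - M[i] @ y, 0.0)  # M[i, i] == 0, so no self-inhibition
        if np.max(np.abs(y - y_prev)) < tol:
            break
    # local Hebbian (W) and anti-Hebbian (M) updates with per-neuron step sizes
    y2_cum = y2_cum + y**2
    lr = 1.0 / np.maximum(y2_cum, 1e-12)
    W = W + lr[:, None] * (np.outer(y, x) - W * (y**2)[:, None])
    M = M + lr[:, None] * (np.outer(y, y) - M * (y**2)[:, None])
    np.fill_diagonal(M, 0.0)
    return y, W, M, y2_cum
```

Feature maps produced by such a layer are then pooled and passed to a linear SVM, which is how the row evaluates the learned representation.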
attaching leaves picking cherries characterise hybridisation number set dec simone charles abstract throughout last decade seen much progress towards characterising computing minimum hybridisation number set rooted phylogenetic trees roughly speaking minimum quantifies number hybridisation events needed explain set phylogenetic trees simultaneously embedding phylogenetic network mathematical viewpoint notion agreement forests underpinning concept almost results related calculating minimum hybridisation number however despite various attempts characterising number terms agreement forests remains elusive paper characterise minimum hybridisation number arbitrary size consists necessarily binary trees building previous work sequences first establish new characterisation compute minimum hybridisation number space networks subsequently show characterisation extends space rooted phylogenetic networks moreover establish particular hardness result gives new insight limitations agreement forests key words agreement forest sequence minimum hybridisation phylogenetic networks reticulation networks ams subject classifications introduction quest faithfully describing evolutionary histories currently witnessing shift representation ancestral histories phylogenetic evolutionary trees towards phylogenetic networks latter represent speciation events also like events hybridisation horizontal gene transfer played important role throughout evolution certain groups organisms example plants fish paper focus problem related reconstruction phylogenetic networks called minimum hybridisation formally stated end section problem first introduced baroni minimum hybridisation historically motivated attempting quantify hybridisation events broadly regarded tool quantify like events collectively refer reticulation events pictorially speaking minimum hybridisation aims reconstruction phylogenetic network simultaneously embeds given set phylogenetic trees minimising number reticulation events represented vertices network whose least two formally problem based following underlying question given collection rooted phylogenetic trees set taxa correctly reconstructed different parts species genomes smallest number reticulation events needed explain last ten years seen significant progress characterising computing minimum number see however except heuristic approaches less known due submitted editors december funding thank new zealand marsden fund financial support department computer science university auckland new school mathematics statistics university canterbury new zealand zealand simone linz charles semple fact notion agreement forests underlies almost results related minimum hybridisation appears ungeneralisable two trees previously together humphries introduced sequences characterised restricted version minimum hybridisation binary arbitrary size instead minimising number reticulation events needed explain space rooted phylogenetic networks restricted version considers binary temporal networks networks binary intersection classes temporal networks networks introduced moret cardona respectively disadvantageously restriction strong even guaranteed solution may network explaining figure advance work sequences establish two new characterisations quantify amount reticulation events needed explain set necessarily binary phylogenetic trees first characterisation solves problem space networks unlike temporal networks show every collection rooted phylogenetic trees solution trees simultaneously embedded network subsequently 
extend characterisation space rooted phylogenetic networks hence provide first characterisation minimum hybridisation general form characterisations based computing sequence latter characterisation makes also use operation attaches auxiliary leaves trees addition two new characterisations return back agreement forests investigate seem limited use solve minimum hybridisation arbitrary size set rooted phylogenetic trees roughly speaking given one compute particular type agreement forest smallest size one component contributes exactly one minimum number reticulation events needed explain hand contribution component minimum number much less clear motivated drawback agreement forests consider set rooted binary phylogenetic trees well agreement forest induced formally defined section phylogenetic network explains minimises number reticulations events ask whether computationally hard calculate minimum number reticulation events needed explain call associated decision problem scoring optimum forest problem first mentioned authors conjecture scoring optimum forest using machinery sequences show scoring optimum forest one considers smaller space networks paper organised follows remainder introduction contains definitions preliminaries phylogenetic networks section state two new characterisations terms sequences first optimises minimum hybridisation within space networks second optimises minimum hybridisation within space phylogenetic networks second characterisation extension first additionally allowing attachment auxiliary leaves establish proofs characterisations section well formal description analogous algorithm section establish upper bound number auxiliary leaves given collection phylogenetic trees needed characterise minimum hybridisation minimum hybridisation space rooted phylogenetic networks lastly section formally state problem scoring optimum forest show finish paper concluding remarks section throughout paper denotes finite set phylogenetic network rooted acyclic digraph parallel edges satisfies following properties unique root two set set vertices zero one iii vertices either one two least two one technical reasons additionally allow consist single vertex set leaf set vertices called leaves sometimes denote leaf set two vertices say parent child edge furthermore vertices one two tree vertices vertices least two one reticulations edge directed reticulation called reticulation edge edge called tree edge say binary reticulation exactly two lastly directed path ending leaf tree path every intermediate vertex tree vertex phylogenetic network tree child vertex parent tree vertex leaf example two networks given bottom figure note phylogenetic network obtained deleting leaf labelled suppressing resulting vertex results network tree child rooted phylogenetic rooted tree vertices except possibly root degree least two leaf set consists single vertex phylogenetic networks set called leaf set denoted addition binary apart root degree two interior vertices degree three since interested rooted phylogenetic trees rooted binary phylogenetic trees paper refer trees simply phylogenetic trees binary phylogenetic trees respectively phylogenetic consider two types subtrees let subset minimal subtree connects leaves denoted moreover restriction denoted phylogenetic obtained suppressing vertices apart root lastly two phylogenetic say refinement obtained contracting possibly empty set internal edges addition binary refinement binary let phylogenetic phylogenetic network displays suppressing vertices exists 
binary refinement obtained deleting edges leaves resulting vertices zero case call resulting acyclic digraph embedding collection phylogenetic displays tree displayed example two phylogenetic networks bottom figure display four trees shown top part figure let phylogenetic network vertex set root hybridisation simone linz charles semple fig top set four phylogenetic bottom two networks displaying number denoted value denotes example phylogenetic networks shown figure hybridisation number respectively observe tree vertex leaf contributes zero sum reticulation contributes furthermore set phylogenetic denote htc respectively values min network displays min phylogenetic network displays remark definition phylogenetic network restricted networks whose tree vertices exactly two note results paper also hold networks tree vertices whose least two particularly set phylogenetic displayed phylogenetic network whose tree vertices least two refining vertices obtain phylogenetic network whose tree vertices exactly two displays thus generality lost restriction next formally state two decision problems paper centred around minimum hybridisation instance set phylogenetic positive integer question exist network displays minimum hybridisation instance set phylogenetic positive integer minimum hybridisation question exist phylogenetic network displays see end section given set phylogenetic minimum hybridisation solution exists network displays shown minimum hybridisation even consists two rooted binary phylogenetic see minimum treechild hybridisation also computationally hard consider restricted version problem recall following observation first mentioned derived slightly modifying proof theorem observation let collection two binary phylogenetic exists phylogenetic network displays also exists network displays next theorem whose straightforward proof omitted follows observation fact given network binary phylogenetic tree checked polynomial time whether displays theorem decision problem hybridisation end section showing every collection phylogenetic trees displayed network let unique binary phylogenetic tree two leaves say whose root vertex end pendant edge adjoined original root positive integer obtain adding edge joins new vertex new leaf tree edge subdividing new vertex adding edge call universal network leaves note unique relabelling leaves theorem let universal network tree child displays binary phylogenetic proof construction straightforward check tree child tree child see displays binary phylogenetic use induction clearly displays unique binary phylogenetic tree two leaves assume universal network displays binary phylogenetic observe obtained deleting parent incident edges suppressing resulting vertices let binary phylogenetic let furthermore let subset consists descendant leaves parent displays exist embedding edge set descendants precisely tree edge easily checked displays construction hand reticulation edge let unique edge directed note tree child tree vertex tree edge subdivided new vertex construction follows displays completes proof theorem next corollary immediate consequence theorem fact simone linz charles semple every phylogenetic tree binary refinement leaf set corollary let set phylogenetic exists network displays every collection phylogenetic displayed network simple counting argument shows analogous result true binary networks specifically binary network reticulations proposition displays distinct binary phylogenetic large enough many distinct binary phylogenetic related results refer 
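As a concrete illustration of the quantity just defined, the short Python sketch below computes the hybridisation number of a single network, assuming the network is stored as a dictionary that maps every non-root vertex to the list of its parents (the representation and the toy example are ours, purely for illustration):

```python
def hybridisation_number(parents):
    """Sum of (in-degree - 1) over all reticulations of a rooted network.

    `parents` maps every non-root vertex to the list of its parents; tree
    vertices and leaves have one parent and contribute zero, reticulations
    have at least two parents and contribute their in-degree minus one.
    """
    return sum(len(p) - 1 for p in parents.values() if len(p) >= 2)

# Toy example (not taken from the paper's figures): a network on leaves
# {a, b, c} with a single reticulation r whose child is leaf b.
example = {
    "u": ["root"], "v": ["root"],
    "r": ["u", "v"],            # reticulation: two parents
    "a": ["u"], "b": ["r"], "c": ["v"],
}
assert hybridisation_number(example) == 1
```

Reticulations are exactly the vertices with two or more parents, and each contributes its in-degree minus one, which matches the definition above.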
interested reader characterisations section state two cherrypicking characterisations whose proofs given next section let phylogenetic root leaf denote operation deleting incident edge parent suppressing resulting vertex note parent denotes operation deleting incident edge deleting incident edge observe phylogenetic tree subset cherry parent clearly every phylogenetic tree least two leaves contains cherry paper typically distinguish leaves cherry case write ordered pair depending roles let set phylogenetic sequence ordered pairs sequence following algorithm returns set phylogenetic trees consists single vertex algorithm picking cherries input set phylogenetic sequence output set phylogenetic trees step set tree set set step set set phylogenetic trees obtained performing exactly one following two operations tree cherry set else set step increment one repeat step otherwise return say obtained picking furthermore say ordered pair essential weight denoted value observe sequence minimum hybridisation element must appear first element ordered pair particular type sequence underlies characterisation htc end let set phylogenetic sequence called sequence let sequence call sequence smallest value sequences smallest value denoted stc follow results next section lemma every collection phylogenetic trees sequence stc well defined referring figure sequence weight four trees shown top figure remark noted introduction sequences introduced paper difference follows instead cherrypicking sequence consisting set ordered pairs sequence consists ordering elements moreover ordering additional property step analogous step picking cherries part cherry every tree step deleted tree iterative process continues weighting sequence based across number different cherries part difficult see could interpreted special type sequence first new characterisations next theorem given set phylogenetic writes htc terms sequences theorem let set phylogenetic htc stc state second characterisation require additional concept let phylogenetic consider operation adjoining new leaf one following three ways subdivide edge new vertex say add edge view root vertex adjacent original root add edge iii add edge interior vertex refer operation attaching new leaf generally finite set elements empty attaching operation attaching turn element eventually obtain phylogenetic tree refer set auxiliary leaves lastly attaching set phylogenetic operation attaching tree let set phylogenetic sequence leaf added sequence set phylogenetic trees obtained simone linz charles semple fig two sets phylogenetic trees obtained attaching parts adapted figure attaching set auxiliary leaves denote minimum weight amongst sequences course stc inequality also strict illustrate consider two sets phylogenetic trees shown figure sequence weight fact follows htc see section details hand sequence weight since obtained attaching follows sequence given set phylogenetic next theorem characterises terms sequences theorem let set phylogenetic worth noting set phylogenetic follows theorems htc determined without constructing phylogenetic network proofs theorems section prove theorems work proving theorem begin showing htc stc lemma let set phylogenetic let sequence exists network displays satisfying following properties minimum hybridisation tree vertex parent reticulation leaves end tree paths starting children respectively element tree vertex parent reticulation leaves end tree paths starting respectively proof let sequence proof induction tree consists single vertex immediately 
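The picking-cherries loop just described can be sketched in a few lines of Python. The sketch below assumes each (not necessarily binary) tree is stored as a dictionary from internal vertices to lists of children, with leaves being labels that never occur as keys; an empty dictionary stands for a tree reduced to a single leaf, and other edge cases are handled only minimally:

```python
from itertools import combinations

def cherries(tree):
    """Unordered leaf pairs {a, b} that share a parent in `tree`."""
    found = set()
    for children in tree.values():
        leaves = [c for c in children if c not in tree]
        found.update(frozenset(p) for p in combinations(leaves, 2))
    return found

def pick(tree, a, b):
    """Delete leaf a from the cherry (a, b) and suppress a resulting degree-two vertex."""
    new = {n: list(ch) for n, ch in tree.items()}        # work on a copy
    parent = next(n for n, ch in new.items() if a in ch and b in ch)
    new[parent].remove(a)
    if len(new[parent]) == 1:                            # parent became degree two
        child = new[parent].pop()
        del new[parent]
        for ch in new.values():                          # reattach the remaining child
            if parent in ch:
                ch[ch.index(parent)] = child
    return new

def apply_sequence(trees, pairs):
    """Apply each ordered pair only to the trees in which it is currently a cherry."""
    for a, b in pairs:
        trees = [pick(t, a, b) if frozenset((a, b)) in cherries(t) else t
                 for t in trees]
    return trees
```

A sequence of ordered pairs is then a cherry-picking sequence for the input collection precisely when this loop reduces every tree to a single leaf, as in the algorithm above.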
follows choosing phylogenetic network consisting single vertex establishes lemma suppose lemma holds sequences sets phylogenetic trees leaf set whose length let let set phylogenetic trees obtained picking first assume tree leaf set namely sequence induction network displays satisfies since tree leaf set cherry tree therefore displays binary refinement tree network obtained subdividing edge directed new vertex adding edge displays furthermore satisfies relative easily seen satisfies relative assume every tree leaf set let denote subset trees whose leaf set since exists ordered pair whose first coordinate note otherwise ordered pair whose second coordinate sequence let first ordered pair let tree using consider applying iterations picking cherries let denote subset leaves deleted process observe second coordinate next add obtain phylogenetic sequence let unique vertex closest root property descendant leaf child path descendant leaves let binary phylogenetic obtained adding edge show sequence suppose sequence let parent amongst first ordered pairs ordered pair form essential using picking cherries applied descendant leaf descendant leaf contradicting choice repeating placement tree obtain set phylogenetic let observe sequence therefore induction network displays satisfies simone linz charles semple let denote parent reticulation let phylogenetic network obtained subdividing edge directed new vertex adding edge since tree child displays follows tree child displays furthermore additionally also follows satisfies relative satisfies relative thus may assume tree vertex let denote child reticulation satisfies contains cherry second coordinate first ordered pair tree child never second coordinate ordered pair contradiction therefore either tree vertex leaf satisfies ordered pair second coordinate follows contains ordered pair say leaf end tree path starting let phylogenetic network obtained subdividing edges directed new vertices respectively adding edge since tree child easily seen tree child furthermore displays well therefore thus displays see satisfies relative suffices show satisfies indeed two ordered pairs verify respectively completes proof lemma next corollary immediately follows lemma corollary let set phylogenetic htc stc proof converse corollary begin additional lemma let phylogenetic network let two leaves generalising cherries phylogenetic networks say cherry common parent moreover call reticulated cherry parent say parent say joined reticulation edge case say reticulation leaf relative next define two operations first reducing cherry operation deleting one two leaves suppressing resulting vertex second reducing reticulated cherry operation deleting reticulation edge joining parents suppressing resulting vertices proof next lemma similar analogous result binary networks lemma omitted lemma let network following hold contains either cherry reticulated cherry obtained reducing either cherry reticulated cherry network lemma let set phylogenetic htc stc proof let network displays corollary network exists establish lemma explicitly constructing sequence let denote root let denote reticulations let denote leaves end tree paths minimum hybridisation starting respectively observe paths pairwise vertex disjoint construct sequence ordered pairs follows step set empty sequence set step consists single vertex set concatenation return step cherry one say equates reticulation set concatenation otherwise set concatenation set network obtained deleting thereby reducing cherry increase one step step 
else reticulated cherry say reticulation leaf set concatenation set network obtained reducing reticulated cherry increase one step first note easily checked construction well defined returns sequence ordered pairs moreover iteration construction follows lemma tree child next show cherry equate respectively elements exactly one reticulation see reticulations vertex disjoint contradiction hand suppose neither reticulations without loss generality may assume first cherry holds since tree child therefore tree vertex parent two reticulations iteration cherry concatenated cherry contradict construction choice also contradict construction hence may assume remainder proof exactly one reticulation let sequence returned construction prove induction sequence whose weight consists single vertex construction correctly returns sequence suppose consider first iteration construction either cherry reticulated cherry cherry cherry tree instance let denote set phylogenetic obtained picking observe network displays assume reticulated cherry reticulation leaf let subset trees displayed let note simone linz charles semple cherry tree tree delete edge incident suppress resulting vertex reattach rest tree containing subdividing edge new vertex adding edge joining vertex resulting phylogenetic displayed easily seen always possible let denote resulting collection trees obtained instance let observe displays complete induction suffices show sequence whose weight sequence whose weight first assume cherry cherry tree follows induction sequence furthermore tree child also tree child since appears first coordinate ordered pair assume reticulated cherry reticulation leaf without loss generality let denote associated reticulation since tree cherry sequence follows sequence next show tree child least three exists construction appear second coordinate ordered pair well therefore least three tree child suppose two establish tree child assume contrary appears second coordinate ordered pair let denote first ordered pair iteration either cherry reticulated cherry cherry since step iteration construction ordered pair contradiction hand reticulated cherry reticulation leaf construction one parents parent two reticulations namely reticulation construction tree path starting reticulation ending contradiction tree child hence tree child furthermore completes proof lemma proof theorem combining corollary lemma establishes theorem next establish theorem proof theorem first show let set phylogenetic trees obtained attaching set empty stc follows theorem network displays stc observe displays let phylogenetic network obtained deleting every vertex minimum hybridisation directed path root leaf suppressing resulting vertex degree two noting deleted vertex used display phylogenetic tree easily checked root one displays furthermore therefore theorem stc particular prove converse let phylogenetic network displays let phylogenetic network obtained attaching new leaf reticulation edge reticulation edge subdivide new vertex add new edge easily checked tree child let denote set new leaves attached tree let denote binary refinement displayed let binary phylogenetic tree leaf set displayed obtained attaching note binary refinement tree obtained attaching set note htc tree child displays since tree binary refinement tree obtained tree attaching stc thus theorem stc htc end section pseudocode construct constructs network sequence specifically given sequence set phylogenetic construct network returns network displays construction used prove lemma 
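For reference, the two reductions used in this construction, reducing a cherry and reducing a reticulated cherry, can be recognised with a few lines of Python. The sketch assumes the network is stored as a dictionary from each non-leaf vertex to its list of children (an illustrative representation, not taken from the paper):

```python
def reticulations(net):
    """Vertices with in-degree at least two in a child-list representation."""
    indeg = {}
    for children in net.values():
        for c in children:
            indeg[c] = indeg.get(c, 0) + 1
    return {v for v, d in indeg.items() if d >= 2}

def parent_of(net, leaf):
    """The unique parent of a leaf (leaves have in-degree one)."""
    return next(n for n, ch in net.items() if leaf in ch)

def is_cherry(net, x, y):
    """Leaves x and y form a cherry when they share a parent."""
    return parent_of(net, x) == parent_of(net, y)

def is_reticulated_cherry(net, x, y):
    """x's parent is a reticulation that is also a child of y's parent (x is the reticulation leaf)."""
    px, py = parent_of(net, x), parent_of(net, y)
    return px in reticulations(net) and px in net.get(py, [])
```

Reducing a cherry then deletes one of the two leaves, and reducing a reticulated cherry deletes the reticulation edge between the two parents and suppresses the resulting degree-two vertices, as in the construction above.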
proof correctness given algorithm construct network input set phylogenetic sequence output network displays step set phylogenetic network consisting single vertex return otherwise set phylogenetic network consisting single edge set step depending holds exactly one following three steps parent reticulation obtain subdividing edge directed new vertex adding new edge parent reticulation obtain subdividing edge directed new vertex subdividing edge new vertex adding new edge simone linz charles semple else obtain subdividing edge directed new vertex adding new edge step set network obtained deleting unique edge incident return otherwise decrement one step let sequence set phylogenetic exists set phylogenetic trees obtained attaching sequence straightforward check network resulting calling construct network subsequently restricting vertices edges path root leaves described first direction proof theorem displays bounding maximum number auxiliary leaves light theorem natural question ask many auxiliary leaves need attached given set phylogenetic order calculate attaching auxiliary leaves necessary whenever htc provide upper bound number auxiliary leaves terms htc start introducing two operations repeatedly applied transform phylogenetic network displays network without increasing displays set binary phylogenetic trees obtained attaching auxiliary leaves let phylogenetic network let edge reticulations obtain phylogenetic network contracting resulting pair parallel edges repeatedly deleting one two edges parallel suppressing resulting vertex say obtained contraction lemma let collection phylogenetic let phylogenetic network displays let phylogenetic network obtained contraction displays proof let edge incident two reticulations contracted process obtaining furthermore let vertex results identifying reticulations correspond reticulation follows let tree used display also used display easily seen displays without using hand used display exactly one parent say used display clear displays furthermore exactly one parent say used display regardless whether also parent case suppressed obtaining follows displays hence displays tree lemma follows call phylogenetic network edges whose end vertices reticulations stack free follows repeated applications lemma collection phylogenetic network displays second operation let phylogenetic network let tree vertex whose two children reticulations furthermore let minimum hybridisation fig phylogenetic networks bottom obtained respective networks top contraction operation reticulation edge let obtain phylogenetic network subdividing edge new vertex adding new edge say obtained operation figure illustrates contraction operation position establish main result section theorem let collection phylogenetic exists set auxiliary leaves following two properties htc collection binary phylogenetic trees leaf set obtained attaching htc proof let network displays lemma exists obtain phylogenetic network minimum number repeated applications operation tree vertex resulting network least one child tree vertex leaf clearly moreover since operation results new edge incident two reticulations stack free follows tree child let construction size equal number tree vertices whose children reticulations let set phylogenetic trees obtained attaching displays since displays set always exists construction htc moreover tree restriction tree follows htc thereby establishing part theorem using construction previous paragraph establish part theorem let set reticulation edges let set tree vertices 
whose children reticulations recall next make two observations first vertex incident two edges second edge incident one vertex summary implies furthermore set reticulations therefore htc follows establishes part theorem htc simone linz charles semple scoring optimum forest collection binary phylogenetic forests characterise consists exactly two trees indeed many algorithms theoretical results deal minimum hybridisation two trees notion forests section establish particular hardness result contributes explanation forests appear however little use solve minimum hybridisation two trees result particular instance conjecture purpose upcoming definitions regard root binary phylogenetic vertex labelled end pendant edge adjoined original root furthermore view element leaf set thus let two binary phylogenetic agreement forest partition following conditions satisfied trees subtrees respectively let agreement forest let directed graph vertex set arc precisely iii root ancestor root root ancestor root call forest directed cycle moreover contains smallest number elements forests say maximum forest case denote number minus one baroni established following characterisation collection binary phylogenetic contains exactly two trees theorem let collection two binary phylogenetic xtrees let phylogenetic network root displays set binary phylogenetic regard vertex end pendant edge adjoined original root obtain forest deleting reticulation edges repeatedly contracting edges one degree one deleting isolated vertices lastly suppressing vertices one one say forest induced moreover said optimum network htc example regarding new vertex adjoined original root two phylogenetic networks shown figure induced forest respectively authors investigate minimum hybridisation three trees conjecture given set three binary phylogenetic induced forest phylogenetic network displays minimum hybridisation determine affirmatively answer conjecture context networks precisely using sequences show following decision problem scoring optimum forest instance integer collection binary phylogenetic optimum forest induced network displays question htc observation htc maximum acyclicagreement forest htc hence scoring optimum forest polynomial time however general problem theorem problem scoring optimum forest remainder section consists proof theorem establish result use reduction particular instance problem shortest common supersequence let finite alphabet let finite subset words word common supersequence word subsequence shortest common supersequence scs instance integer finite alphabet finite subset words question supersequence words letters timkovskii theorem established next theorem orbit letter set occurrences words note word uses letter say twice word contributes two occurrences orbit theorem decision problem scs even word letters size orbits consequence theorem next corollary corollary decision problem scs even word letters size orbits word contains letter twice proof let instance scs word letters orbits size let subset consists words letter occurs twice observe occurs twice word word contains furthermore regards word letters size orbits word contains letter twice let denote number distinct letters occur two distinct words note construction computation done time polynomial corollary follow theorem showing scs parameters supersequence length scs parameters supersequence length suppose scs parameters supersequence length iteratively extend sequence follows let simone linz charles semple contains two occurrences letter say let denote third 
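Since the hardness argument reduces from shortest common supersequence, a small helper for the supersequence test may help intuition (plain Python; the example words are ours):

```python
def is_supersequence(s, w):
    """True when w is a subsequence of s, i.e. s is a supersequence of w."""
    it = iter(s)
    return all(letter in it for letter in w)   # `in` advances the iterator

def is_common_supersequence(s, words):
    return all(is_supersequence(s, w) for w in words)

assert is_common_supersequence("abcab", ["aba", "cab", "bab"])
```

The decision problem then asks whether some common supersequence with at most a given number of letters exists.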
letter note occurs word occurs exactly one word first assume occurs word depending whether first second third letter extend adding end adding beginning end adding beginning respectively resulting sequence supersequence second assume occur word occurs word let denote letter occurring twice extend adding beginning resulting sequence add two occurrences add one occurrence one occurrence two occurrences depending whether first second third letter respectively resulting sequence supersequence taking resulting sequence repeating process remaining word respectively eventually obtain supersequence moreover length converse suppose supersequence length let sequence obtained deleting occurrence letter occurs twice word deleting exactly one occurrence letter occurs two distinct words hence occur word easily checked supersequence furthermore since deletions first type deletions second type follows length completes proof corollary decision problem described statement corollary one use reduction proving theorem let instance scs word letters size orbits word contains letter twice without loss generality may assume word containing letter contained word let let also let denote fixed ordering letters denote sequence obtained removing three letters construct instance scoring optimum forest rooted caterpillar binary phylogenetic tree whose leaf set ordered say cherry denotes parent edge denote rooted caterpillar let denote rooted caterpillar let note trees leaf set minimum hybridisation constructed time polynomial size next establish lemma reveals relationship weight sequence length supersequence let sequence set binary phylogenetic say corresponds trees cherry exactly trees obtained picking lemma let positive integer sequence weight supersequence length proof first suppose common supersequence length let let denote sequence since supersequence easily seen sequence moreover suppose sequence weight without loss generality may assume ordered pair essential first show positive integer iterations picking cherries applied trees cherry consisting two elements word either wij wij ordered pair considering word containing wij associated tree easily seen elements appear least twice first element ordered pair contradicting assumption consider first ordered pairs tree exactly three ordered pairs whose first second elements picking cherries picks three elements since size orbits ordered pair corresponds two trees next construct sequence ordered pairs obtained start modifying first ordered pairs follows amongst first ordered pairs replace ordered pair form sequence obtained completed sequentially move along sequence ordered pair replacing ordered pair form one following ways corresponds exactly one tree replace depending whether respectively earlier ordered pair simone linz charles semple corresponds two trees say order letters replace depending whether respectively earlier ordered pair iii corresponds two trees say order letters replace occurs ordered pair earlier sequence otherwise occurs ordered pair earlier sequence replace modification completed let denote subsequence first ordered pairs whose coordinates let denote subsequence let denote concatenation considering tree together corresponding ordered pairs associated ones routine check shows first iterations picking cherries applied sets tree corresponding rooted caterpillar particular subsequence next extend weight consider denotes subsequence ordered pairs corresponding elements well least one element appears first coordinate ordered pair element may counted twice 
appears first coordinate two ordered pairs follows sequence ordered pairs concatenation sequence whose weight let sequence since subsequence tree common supersequence moreover follows supersequence length complete proof theorem let partition next show optimum forest induced network root displays let common supersequence minimum length suppose length since minimum hybridisation orbits size minimum length letter appears twice let rooted caterpillar let network root obtained follows identify leaves labelled adjoin new pendant edge identified vertex labelled since common supersequence easily checked displays furthermore theorem lemma htc induced follows optimum forest given arbitrary phylogenetic network verified polynomial time whether tree child displays hybridisation number induces hence scoring optimum forest theorem follows combining corollary theorem lemma concluding remarks paper generalised concept sequences introduced shown generalisation used characterise minimum number reticulation events needed explain set phylogenetic space networks well space phylogenetic networks see two minima different fixed set phylogenetic trees consider set trees presented figure shown six phylogenetic networks displays hybridisation number three however none six phylogenetic networks tree child moreover using sequences straightforward check shows htc furthermore shown scoring optimum forest hence given optimum forest computationally hard compute htc set binary phylogenetic contrasts case scoring optimum forest solvable hints agreement forests limited use beyond case course restricting collections binary phylogenetic trees one could generalise definition forest two binary phylogenetic trees two trees obvious way one requires conditions iii definition forest hold tree arbitrarily large collection binary phylogenetic generalisation mind observing number components forest induced network equal number reticulations plus one one might conjecture given set binary phylogenetic number components maximum forest minimum number components optimum forest see true refer back figure let forest induced let forest induced since maximum forest three elements moreover since tree child htc indeed checked htc moreover network displays induces optimum forest also maximum forest consequently approach exploits maximum forests set binary phylogenetic trees compute htc computing maximum forest subsequently scoring way reflects number edges directed reticulation vertex network simone linz charles semple induces unlikely give desired result lastly computational viewpoint introduction forests triggered significant progress towards development ever faster algorithms solve minimum hybridisation input contains exactly two phylogenetic trees see look forward seeing similar development solving minimum hybridisation arbitrarily many phylogenetic trees using sequences turn likely benefit biologists often wish infer evolutionary histories entirely data sets usually consists two phylogenetic trees references albrecht scornavacca cenci huson fast computation minimum hybridization networks bioinformatics baroni moulton semple bounding number hybridization events consistent evolutionary history journal mathematical biology bordewich semple computing minimum number hybridization events consistent evolutionary history discrete applied mathematics bordewich semple computing hybridization number two phylogenetic trees tractable transactions computational biology bioinformatics bordewich semple determining phylogenetic networks distances journal 
mathematical biology cardona valiente comparison phylogenetic networks transactions computational biology bioinformatics chen wang algorithms reticulate networks multiple phylogenetic trees transactions computational biology bioinformatics chen wang ultrafast tool minimum reticulate networks journal computational biology collins linz semple quantifying hybridization realistic time journal computational biology drezen gauthier josse herniou huguet foreign dna acquisition invertebrate genomes journal invertebrate pathology doi humphries linz semple complexity computing temporal hybridization number two phylogenies discrete applied mathematics humphries linz semple cherry picking characterization temporal hybridization number set phylogenies bulletin mathematical biology van iersel kelk whidden zeh hybridization number three rooted binary trees ept siam journal discrete mathematics van iersel semple steel locating tree phylogenetic network information processing letters kelk personal communication kelk van iersel linz scornavacca stougie cycle killer que est comparative approximability hybridization number directed feedback vertex set siam journal discrete mathematics mallet besansky hahn reticulated species bioessays marcussen sandve heier spannagl pfeifer international wheat genome sequencing consortium jakobsen wulff steuernagel mayer olsen ancient hybridizations among ancestral genomes bread wheat science moret nakhleh warnow linder tholse padolina sun timme phylogenetic networks modeling reconstructibility accuracy transactions computational biology bioinformatics minimum hybridisation piovesan kelk simple fixed parameter tractable algorithm computing hybridization number two necessarily binary trees ieee transactions computational biology bioinformatics semple networks preparation simpson tree display networks phd thesis university canterbury preparation soucy huang gogarten horizontal gene transfer building web life nature reviews genetics timkovskii complexity common subsequence supersequence problems related problems cybernetics wang fast computation exact hybridization number two phylogenetic trees international symposium bioinformatics research applications springer close lower upper bounds minimum reticulate network multiple phylogenetic trees bioinformatics
| 8 |
nov perl package alignment tool phylogenetic networks francesc research institute health science university balearic islands palma mallorca spain gabriel cardona department mathematics computer science university balearic islands palma mallorca spain gabriel valiente algorithms bioinformatics complexity formal methods research group technical university catalonia barcelona spain january abstract phylogenetic networks generalization phylogenetic trees allow representation evolutionary events acting population level like recombination genes hybridization lineages lateral gene transfer phylogenetics tools implement wide range algorithms phylogenetic trees exist applications work phylogenetic networks libraries either order improve situation developed perl package relies bioperl bundle implements many algorithms phylogenetic networks also developed java applet makes use aforementioned perl package allows user make simple experiments phylogenetic networks without develop program perl script perl package accepted part bioperl bundle downloaded url http webbased application available url http perl package includes full documentation features background briefly recall definitions results phylogenetic networks phylogenetic network set taxa rooted directed acyclic graph whose leaves nodes without outgoing edges bijectively labeled set let phylogenetic network node said tree node one incoming edge otherwise called hybrid node phylogenetic network phylogenetic network every node either leaf least one child tree node let set leaves define node vector number different paths leaf multiset called provided phylogenetic network turns completely characterize isomorphisms among phylogenetic networks allows define distance set phylogenetic networks two given networks symmetric difference defines true distance phylogenetic trees coincides partition distance representation also allows define optimal alignment two treechild phylogenetic networks say given two networks sake simplicity assume alignment injective mapping weight alignment stands manhattan norm vector tree nodes hybrid nodes one tree node one hybrid node optimal alignment alignment minimal weight extended newick format enewick extended newick string defining phylogenetic network appeared packages phylonet netgen related phylogenetic networks differences former encodes phylogenetic network hybrid nodes series trees newick format latter encodes single tree newick format repeated nodes whereas perl module introduce accepts formats input complete standard enewick implemented based mainly netgen following suggestions huson morin among others make complete possible adopted standard practical advantage encoding whole phylogenetic network single string also includes mandatory tags distinguish among various hybrid nodes network procedure obtain enewick string representing phylogenetic network goes follows let set hybrid nodes ordered fixed way hybrid node say parents children split different nodes let first copy child children let copies children one children label copies label type tag parameters label optional string providing labelling node type optional string indicating node corresponds hybridization indicated lateral gene transfer indicated lgt event note types considered future figure phylogenetic network left tree right associated computing enewick string figure representation lateral gene transfer event left hybrid node phylogenetic network right tag mandatory integer identifying node optional number giving length branch copy consideration parent way 
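As a rough illustration of the µ-representation and the distance mentioned above, the sketch below counts, for every vertex of a network given as a child-list dictionary, the number of directed paths to each leaf, and compares two networks by the symmetric difference of the resulting multisets. The representation and function names are ours and are not the package's API; `leaves` must list every leaf of the network:

```python
from collections import Counter

def mu_vectors(children, leaves):
    """mu(v)[i] = number of directed paths from vertex v to leaves[i]."""
    index = {leaf: i for i, leaf in enumerate(leaves)}
    cache = {}

    def mu(v):
        if v not in cache:
            if v not in children:                       # a leaf
                cache[v] = tuple(int(i == index[v]) for i in range(len(leaves)))
            else:                                       # sum the children's vectors
                cols = zip(*(mu(c) for c in children[v]))
                cache[v] = tuple(sum(col) for col in cols)
        return cache[v]

    return {v: mu(v) for v in set(children) | set(leaves)}

def mu_distance(mus1, mus2):
    """Size of the symmetric difference of the two multisets of mu-vectors."""
    c1, c2 = Counter(mus1.values()), Counter(mus2.values())
    return sum(((c1 - c2) + (c2 - c1)).values())
```

For tree-child networks this multiset of vectors characterises the network up to isomorphism, which is what makes the symmetric-difference comparison behave as a true distance there.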
get tree whose set leaves set leaves original network together set hybrid nodes possibly repeated newick string obtained tree note internal nodes labeled leaves repeated enewick string phylogenetic network leftmost occurrence hybrid node enewick string corresponds full description network rooted node although node labels optional labeled occurrences hybrid node enewick string must carry label consider example phylogenetic network depicted together decomposition figure enewick string network would internal nodes labeled leftmost occurrence hybrid node latter string corresponds full description network rooted node obviously procedure recover network enewick string simple recovering tree identifying nodes labeled hybrid nodes identifier notice gene transfer events represented unique way hybrid nodes consider example lateral gene transfer event depicted figure gene transferred species species divergence species species enewick string describes phylogenetic network program interpreting enewick string use information node types different ways instance render tree nodes circled hybridization nodes boxed lateral gene transfer nodes arrows edges perl module perl module bio implements data structures needed work phylogenetic networks well algorithms reconstructing network enewick string different flavours reconstructing network exploding network set induced subtrees computing network two networks computing optimal alignment two networks computing tripartitions tripartition error two networks testing network time consistent case computing temporal representation underlying data structure graph object extra data instance network makes use perl module bio network implements basic arithmetic operations two extra modules bio bio provided sequential random generation respectively phylogenetic networks given set taxa web interface java applet web interface available http allows user input one two phylogenetic networks given enewick strings perl script processes strings uses bio package compute available data including plot networks downloaded format plots generated application graphviz companion perl package given two networks set leaves also computed well optimal alignment algorithm compute alignment relies hungarian algorithm sets leaves topological restriction set common leaves first computed followed optimal alignment java applet displays networks side side whenever node selected corresponding node network respect optimal alignment highlighted provided exists also extended edges similarities networks thus evident glance since weight matched node also shown easy see differences authors contributions authors conceived method prepared manuscript contributed discussion approved final manuscript implemented software also implemented part software acknowledgements research described paper partially supported spanish cicyt project tin grammars spanish dgi projects comgrio references mihaela baroni charles semple mike steel hybrids real time syst gabriel cardona francesc gabriel valiente comparison phylogenetic networks http gabriel cardona francesc gabriel valiente tripartitions always discriminate phylogenetic networks math press bernard moret luay nakhleh tandy warnow randal linder anna tholse anneke padolina jerry sun ruth timme phylogenetic networks modeling reconstructibility accuracy ieee comput morin moret netgen generating phylogenetic networks diploid hybrids bioinformatics munkres algorithms assignment transportation problems siam rice university bioinformatics group phylonet phylogenetic networks 
toolkit, available at http. Robinson, Foulds: Comparison of phylogenetic trees. Mathematical Biosciences.
| 5 |
journal latex class files vol december deep metric learning practical person jul dong zhen lei member ieee stan fellow ieee features metric learning methods prevail field person compared methods paper proposes general way learn similarity metric image pixels directly using siamese deep neural network proposed method jointly learn color feature texture feature metric unified framework network symmetry structure two connected cosine function deal big variations person images binomial deviance used evaluate cost similarities labels proved robust outliers compared existing researches practical setting studied experiments training test different datasets cross dataset person intra dataset cross dataset settings superiorities proposed method illustrated viper prid index deep metric learning convolutional network cross dataset ntroduction task person judge whether two person images belong subject practical applications two images usually captured two cameras disjoint views performance person closely related many applications cross camera tracking behaviour analysis object retrieval algorithms proposed field also overlapped fields pattern recognition recent years performance person increased continuously increase essence person similar biometric recognition problems face recognition core find good representation good metric evaluate similarities samples compared biometric problems person challenging due low quality high variety person images person usually need match person images captured surveillance cameras working mode therefore resolution person images low around pixels lighting conditions unstable furthermore direction cameras pose persons arbitrary factors cause person images surveillance scenarios two distinctive properties large variations intra class ambiguities inter classes summary challenges person come following aspects dong national laboratory pattern recognition institute automation chinese academy sciences beijing china email dyi manuscript received june revised june camera view change pose variation deformation unstable illumination low resolution however another challenge less studied existing work cross dataset person practical systems usually collect large datasets first train model trained model applied datasets videos person call training datasets source domain test datasets target domain source target datasets totally different usually captured different cameras different environments different probability distribution practical person algorithm good generalization respect dataset changes therefore cross dataset person important rule evaluate performance algorithms practice since pixels person images unstable effective representations important needed person reidentification end existing methods borrow many sophisticated features fields hsv histogram gabor hog based features direct matching discriminative learning used evaluate similarity existing methods mainly focus second step learn metric discriminate persons many good metric learning methods proposed context kissme rdc majority existing methods include two separate steps feature extraction metric learning features usually come two separate sources color texture designed hand learned finally connected fused simple strategies contrary paper proposes new method combine separate modules together learning color feature texture feature metric unified framework called deep metric learning dml main idea dml inspired siamese neural network originally proposed signature verification given two person images want use siamese deep 
neural network assess similarity specific original work dml first abstracts siamese network two connection function cost function see figure carefully design architecture person images way dml adapt well person denoting connection function similarity equation dml written denote two dml depending specific applications need share share journal latex class files vol december parameters compared existing person methods dml following advantages dml learn similarity metric image pixels directly layers dml optimized common objective function effective features traditional methods filters learned dml capture color texture information simultaneously reasonable simple fusion strategies traditional methods feature concatenation sum rule structure dml flexible switch view specific general person tasks whether sharing parameters dml tested two popular person datasets viper prid using common evaluation protocols results show dml outperforms par methods appeal practical requirements evaluate generalization dml also conduct challenging cross dataset experiments training cuhk campus testing viper prid results cross dataset experiments significantly better existing methods similar experimental settings knowledge first work conduct strict cross dataset experiment field person finally fuse features learned multiple datasets improve performance viper prid elated ork work uses deep learning learn metric person related works four aspects reviewed section feature representation metric learning person siamese convolutional neural network cross dataset related methods early papers mainly focus construct effective feature representation numerous features used proposed person features divided two categories color based texture based features popular features include hsv color histogram lab color histogram sift lbp histogram gabor features fusion among features color contribution final results recent advance aspect color invariant signature combing color histogram covariance feature segmentation information achieved viper hand proved using silhouette symmetry structure person improve performance significantly therefore color texture features usually extracted predefined grid finely localized parts proposed mcmc based method part localization person simultaneously obtained viper work prove performance improved significantly know geometric configuration person explicitly implicitly compared part based method salience based methods proposed zhao relaxed spatial constraint could deal larger pose variations similar history face recognition future direction feature representation must based precise body parts segmentation person alignment pose normalization based extracted features naive feature matching unsupervised learning methods usually got moderate results results achieved supervised methods boosting rank svm pls metric learning among methods metric learning main stream due flexibility compared standard distance measures norm learned metric discriminative task hand robust large variations person images across view papers used holistic metric evaluate similarity two samples first divided samples several groups according pose learned metrics group using pose information explicitly obtains highest performance viper recently proposed novel human loop style method illustrated performance person improved drastically human intervention although results paper hard reproduce supplies benchmark reflect performance human viper closing gap human performance target researchers early siamese neural network proposed evaluate 
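In code, this abstraction amounts to composing two branch networks with a connection function. The NumPy sketch below is illustrative only; `branch1` and `branch2` stand for arbitrary feature extractors such as the CNNs described later:

```python
import numpy as np

def cosine(u, v):
    """Connection function: cosine similarity of two feature vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def siamese_similarity(x, y, branch1, branch2=None, connection=cosine):
    """S(x, y) = C(B1(x), B2(y)).

    branch2=None reuses branch1 for both inputs (shared parameters); passing a
    second network keeps the two branches separate.
    """
    branch2 = branch1 if branch2 is None else branch2
    return connection(np.asarray(branch1(x)), np.asarray(branch2(y)))
```

Calling `siamese_similarity(img_a, img_b, branch1=cnn)` gives the general mode with shared parameters, while passing a second, camera-specific network as `branch2` gives the view-specific mode.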
similarity two signature samples year neural network similar structure proposed fingerprint verification different traditional neural networks siamese architecture composed two sharing parameters subnetwork convolutional neural network siamese neural network used face verification research group best property siamese neural network unified clear objective function guided objective function neural network learn optimal metric towards target automatically responsibility last layer siamese neural network evaluate similarity output two form norm cosine although good experimental results obtained disadvantages lacking implementation details lacking comparison methods paper design siamese neural network person image apply person problem paper implementation details described extensive comparisons reported young field cross dataset problem attracted much attention majority researchers best improve performance within single dataset training viper test viper long ago started concern issue authors proposed transfer rank svm dtrsvm adapt model trained source domain prid target domain viper image pairs source domain negative image pairs target domain used training different dtrsvm cross dataset experiments proposed network trained journal latex class files vol december input cnn connection function cost function label cnn fig structure siamese convolutional neural network scnn composed three components cnn connection function cost function parameters sharing convolution source domain performance tested target domain convolution normalization max pooling full connection normalization max pooling fig structure cnn used method iii eep etric earning joint influence resolution illumination pose changes ideal metric person may highly nonlinear deep learning exact one effective tools learn nonlinear metric function section introduces architecture parameters cost function implementation details proposed convolutional network deep metric learning architecture pattern recognition problems neural network works standalone mode input neural network sample output predicted label mode works well handwritten digit recognition object recognition classification problems labels training set test set person problem subjects training set generally different test set therefore sample label style neural network apply deal problem construct siamese neural network includes two working sample pair label mode flowchart method shown figure given two person images sent siamese convolutional neural network scnn two images scnn predict label denote whether image pair comes subject many applications need rank images gallery based similarities probe image scnn outputs similarity score instead structure scnn shown figure composed two convolutional neural networks cnn two cnns connected connection function existing siamese neural networks constraint two share parameters weights biases studied previous work constraint could removed conditions without parameters sharing network deal view specific matching tasks naturally parameters sharing network appropriate general task cross dataset person call two modes general view specific scnn cross dataset problem main concern paper focus general scnn convolutional neural network cnn paper see figure composed convolutional layers max pooling layers full connected layer shown figure number channels convolutional pooling layers output cnn vector dimensions every pooling layer includes normalization unit convolution input data padded zero values therefore output size input filter size layer filter 
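The size bookkeeping referred to here is standard convolution arithmetic; a small helper (ours, for illustration, with an arbitrary filter size) makes the zero-padding remark explicit:

```python
def conv_output_size(input_size, filter_size, stride=1, padding=0):
    """Spatial output size of a convolution layer."""
    return (input_size + 2 * padding - filter_size) // stride + 1

# With "same" zero padding of (filter_size - 1) // 2 and stride 1, the output
# keeps the input size, which is the behaviour described in the text.
assert conv_output_size(48, 7, stride=1, padding=3) == 48
```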
size layer relu neuron used activation function layer capture different statistical properties body parts train cnn part based way previous work person images cropped three overlapped parts three networks trained independently differently use faster scheme paper three parts trained jointly first three parts share layer second part layer help learn filters third high level features parts fused layer sum rule similarity fused features evaluated connection function driven common cost function three parts contribute training process jointly overall two main differences proposed network figure network layer shared three parts contribution three parts fused feature level score level parameter sharing low level reduce complexity network fusion feature level make three parts train jointly improve performance slightly moreover training test single network convenient efficient using three independent networks cost function learning learning parameters scnn revisit structure shown figure structure scnn journal latex class files vol december abstracted three basic components two subnetworks connection function cost function connection function used evaluate relationship two samples cost function used convert relationship cost choose connection function cost function closely related performance scnn many distance similarity functions used candidates connect two vectors euclidian distance cosine similarity absolute difference vector concatenate formulas seuc scos sabs scon equations negate distance functions make consistent similarity advantage euclidian distance derivation simple form output unbounded could make training process unstable absolute difference points cosine function bounded invariant magnitude samples good property cosine function used widely many pattern recognition problems choose connection function cost function given analysis square loss exponential loss binomial deviance chose binomial deviance final cost function discuss another popular cost function pattern recognition fisher criterion given training set corresponding similarity matrix mask matrix binomial deviance fisher criterion formulated jdev matrix product sij mij wij pij output cnn output connection function sij sij xti xtj differentiating cost function respect get bij bij xtj xti xti dij dij bij repmat aij cij cij cij bij isher pairs mean binomial deviance numerator eqn class divergence similarity matrix denominator total variance minimizing eqn eqn learn network separating positive negative pairs far possible comparing eqn eqn see binomial deviance cost focus false classified samples samples near boundary fisher criterion focus every elements similarity matrix equally intuition binomial deviance cost make network trained mainly hard samples likely get good model verified experiments following sections fix connection function cosine cost function binomial deviance connection cost functions determined used learn parameters scnn plugging eqn eqn get forward propagation function calculate cost training set jdev sij positive pair negative pair mij neglected pair positive pair negative pair wij neglected pair positive pair negative pair pij neglected pair repmat aij dij sij denotes similarity sample mij denotes whether come subject count positive pairs count negative repmat function create matrix tiling vector many times function matlab one refer appendix detail derivation eqn trains network pairwise way paper formulates cost gradient totally matrix form network trained stochastic gradient descent sgd new formulation process 
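The candidate connection functions compared above can be written down directly; negating the two distances turns them into similarities, and concatenation defers the comparison to later layers (NumPy sketch, illustrative only):

```python
import numpy as np

def s_euclidean(x, y):
    return -np.linalg.norm(x - y)           # negated L2 distance, larger = more similar

def s_cosine(x, y):
    return float(x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))

def s_abs_diff(x, y):
    return -np.sum(np.abs(x - y))           # negated element-wise absolute difference

def s_concat(x, y):
    return np.concatenate([x, y])           # leaves the scoring to subsequent layers
```

Among these, the cosine form is bounded and invariant to the magnitude of the two feature vectors, which is why it is the one retained in the remainder of the paper.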
sample pairs one batch size batch set batch includes positive negative sample pairs paper batch generate journal latex class files vol december pairs makes network scans training data faster saves training time based eqn eqn learn parameters scnn sgd algorithm general neural network error backward propagated top single path contrary error specific scnn backward propagated two branches eqn eqn respectively described practice also assign asymmetry cost label mij positive negative pairs tune performance network positive pairs negative pairs effect asymmetry cost performance discussed experiments xperiments many popular datasets built person reidentification viper prid cuhk campus among datasets evaluation protocols viper prid version clearest two therefore compare method methods experiments done two settings intra dataset experiments training test viper training test prid cross dataset experiments training test viper training test prid training cuhk campus test viper training cuhk campus test prid intra dataset experiments conducted illustrate basic performance proposed method cross dataset experiments illustrate generalization ability intra dataset except papers conduct experiments setting training test dataset viper includes subjects images per subject coming different camera views camera camera split viper disjoint training subjects testing set subjects randomly repeat process times first split dev view used parameter tuning number training epoch learning rate weight decay splits test view used reporting results similar viper prid captured cameras camera camera camera shows subjects camera shows subjects first subjects appear cameras follow testing protocols randomly select subjects first subjects training remain subjects testing test set composed probe gallery information follows probe set remain subjects first subjects camera except training subjects gallery set remain subjects camera except training subjects whole process repeated times first split used parameter tuning splits used reporting results training stage training images camera camera merged randomly shuffled sent generic scnn corresponding mask matrix generated according label samples pairs subjects assigned different subjects assigned mask matrix symmetric set elements lower triangular part avoid redundant computation testing stage one image subject used gallery one used probe evaluate test view use dev view investigate three important factors affect performance network data augmentation asymmetric cost positive negative sample pairs cost function besides set parameters cost function eqn data augmentation data augmentation widely used trick training neural network experiments mirror person images double training test sets although trick used effect performance analyzed compare performance dev view viper without data augmentation training set number images subject increased testing stage original images mirrored version generate similarity scores final score fused sum rule results table see significant improvements brought data augmentation especially top ranks indicates scale dataset crucial train good networks know geometry information person image pose person augment datasets guided pose generate virtual images improve performance sense human pose estimation important direction person following years asymmetric cost described section generating mask matrix labels training samples number negative sample pairs far positive pairs however practice usually split training samples many batches first positive negative sample pairs 
generated within batch therefore negative sample pairs batches covered training process may cause negative pairs prone balance weight positive negative sample pairs assign asymmetric costs fixing cost positive pair tune cost negative pair asymmetric cost apply easily eqn eqn setting positive pair negative pair mij neglected pair table shows relationship negative cost journal latex class files vol december table rank rank recognition rates view vip without data augmentation rank aug without training set test set training set test set cost cost epoch epoch fig curves training test set dev view viper using binomial deviance left fisher criterion right cost function fig similarity score distribution viper computed based human labelled attributes attributes binary similarity scores evaluated hamming distance recognition rate matter using cost function cost training set always drop continually contrary cost test set drops significantly beginning training process gradually becomes converged tens epochs figure see cost gap training test set small using binomial deviance fisher criterion gap obviously bigger binomial deviance reflects fisher criterion easily overfiting training set inspecting curves set based experience following experiments fig similarity score distributions training test set dev view viper using binomial deviance left fisher criterion right cost function recognition rate dev view viper highest overall performance achieved illustrates negative pairs paid attention training batch binomial deviance fisher criterion next compare two cost functions discussed section binomial deviance eqn fisher criterion eqn like two prior two experiments comparison conducted dev view viper comparison experiments mirror images double dataset set cost negative pairs connection function fixed cosine differences binomial deviance fisher criterion evaluated three aspects curve similarity score distribution recognition rate figure shows curves dev view viper low cost reflects high performance approximately although explicit relationship cost shown figure similarity distributions two cost functions different distribution generated attributes viper also given figure reference experiment shown using attributes hamming distance achieve high recognition rate attributes labelled human seen baseline human performance distribution figure seen ideal distribution viper binomial deviance distribution negative similarity scores significantly wider positive scores coincide ideal distribution generated attributes fisher criterion distributions positive negative scores standard gaussian variance although nearly perfect results obtained training set using cost function performance test set different see table iii performance see binomial deviance suitable person reidentification problem mainly focus samples near boundary less affected distributions positive negative samples fisher criterion ideal distributions highly heteroscedastic conflicted assumption fisher maybe modifications made fisher criterion solve problem leave work future journal latex class files vol december table recognition rate view vip different negative costs rank table iii rank rank recognition rates view vip using different cost functions rank binomial deviance fisher criterion results analysis parameter tuning test performance network test view viper prid following configuration augment training set set mirror samples fused score sum set negative cost set number use cosine connection function binomial deviance cost function experiments repeated times 
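A sketch of how the mask and weight matrices with an asymmetric negative cost can be built from the identity labels of one batch is given below (NumPy, illustrative; zeroing one triangle mimics the convention of keeping the symmetric mask from counting a pair twice, and the default negative cost is only an example value, not the tuned one):

```python
import numpy as np

def batch_mask_and_weights(labels, neg_cost=10.0):
    """Mask M in {+1, -1, 0} and weights W for all pairs inside one batch.

    neg_cost is the asymmetric cost assigned to negative pairs while the
    positive cost stays fixed at 1; the default is an example setting.
    """
    labels = np.asarray(labels)
    M = np.where(labels[:, None] == labels[None, :], 1.0, -1.0)
    M = np.triu(M, k=1)                     # keep one triangle, neglect the rest
    W = np.zeros_like(M)
    n_pos, n_neg = np.sum(M > 0), np.sum(M < 0)
    if n_pos:
        W[M > 0] = 1.0 / n_pos
    if n_neg:
        W[M < 0] = neg_cost / n_neg
    return M, W
```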
mean recognition rates list table table distinguish modified dml original one rename proposed method paper improved dml compared pairwise version improved dml training speed significant performance improvement notably recognition rate increases results compared methods copied original papers results unavailable leaved table see proposed method outperforms compared methods viper including current methods method outperforms remarkably except recognition rate higher slightly among methods method nearly simple elegant one need pose information segmentation bottom top layers network every building blocks contribute common objective function optimized algorithm simultaneously prid get similar results viper proposed method outperforms art method rpml however superiority method prid smaller viper reason scale prid dataset small train good network using sample pairs network outperforms compared methods illustrate power deep metric learning compared viper quality prid poorer size gallery bigger therefore recognition rates prid overall lower viper cross dataset section conduct many experiments cross dataset setting coincide practical table ross dataset experiment comparison proposed method dtrsvm vip methods dml dtrsvm dtrsvm improved dml improved dml improved dml set cuhk prid cuhk cuhk tions following cuhk campus used training set viper prid used test set captured airport indoor environment cuhk campus captured campus viper prid captured street due totally different capture environments devices cross dataset experiments challenging previous experiments different use samples viper prid adapt classifiers target domains dataset people total images captured multiple cameras average images person many images large illumination changes subject occlusions scale smaller viper prid resolution images varied cuhk campus large scale dataset captured two cameras sessions includes subjects images subject images camera views resolution cuhk campus cross dataset networks trained parameters previous experiments training resize images cuhk campus convenience test set use setting intra dataset experiments viper half subjects images randomly selected construct test set includes subjects images prid test set includes subjects images test process repeated times average recognition rate reported results first use cuhk campus training set respectively test performance trained networks viper recognition rates viper shown table results dtrsvm journal latex class files vol december table ntra dataset experiment comparison proposed method state art methods vip rank method elf rdc ppca salience rpml laft dml improved dml table ntra dataset experiment comparison proposed method state art methods prid rank method descr model rpml improved dml table vii ross dataset experiment comparison proposed method dtrsvm prid methods dtrsvm dtrsvm improved dml improved dml improved dml set viper cuhk cuhk viper prid listed comparison results see method outperforms original dml slightly training method par dtrsvm recognition rates better dtrsvm slightly training cuhk campus performance improved significantly possible reason compared quality aspect ratio images cuhk campus similar viper combining similarity scores two networks sum performance improved even approaches performance methods intra dataset setting elf rdc test two trained networks prid give results table vii using training set method better dtrsvm significantly different results viper training cuhk campus decreases recognition rates remarkably performance decline caused big 
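The rank-k recognition rates reported in the tables can be computed from a probe-gallery similarity matrix as in the sketch below; the single-shot identity alignment (gallery j is the true match of probe i iff i == j) and the fusion of two networks' scores by summation are the conventions assumed here.

```python
import numpy as np

def rank_k_rates(score, ks=(1, 5, 10, 20)):
    """Cumulative match rates: score[i, j] is the similarity of probe i to
    gallery j, and the true match of probe i is assumed to be gallery i."""
    order = np.argsort(-score, axis=1)                        # best match first
    hits = order == np.arange(score.shape[0])[:, None]
    rank_of_match = np.argmax(hits, axis=1)                   # 0-based rank
    return {k: float(np.mean(rank_of_match < k)) for k in ks}

# Fusing the similarity scores of two trained networks by summation
# (score_a and score_b are hypothetical matrices of equal shape):
# fused = score_a + score_b
# print(rank_k_rates(fused))
```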
difference cuhk campus prid datasets fusing similarity scores chuk campus increases performance especially recognition raises compared intra dataset experiments performance cross dataset experiments decline sharply recognition rate number viper prid drop indicates trained models hard generalize across datasets due distinctive properties dataset figure shows filters learned cuhk fig filters first layer cnn top learned viper prid cuhk campus respectively size filters order sorted hue component best view different datasets see distributions color texture datasets diverse besides experimental results diverse filters also prove model learned dataset hardly adapt another one transfer model target domain fully use multiple heterogeneous datasets hand improve performance target domain important research topics future onclusions paper proposed deep metric learning method using siamese convolutional neural network structure network training process described journal latex class files vol december detail extensive intra dataset cross dataset person reidentification experiments conducted illustrate superiorities proposed method first work apply deep learning person problem also first work study person problem fully cross dataset setting experimental results illustrated network deal cross view cross dataset person problems efficiently outperformed methods significantly future apply dml applications explore way network investigate embed geometry information network improve robustness pose variations moreover continue research train general person matching engine good generalization across view dataset substituting eqn eqn eqn eqn substituting eqn eqn dev get final formulation aij bij aij aij cij aij dij defined eqn defined eqn ppendix radients iew pecific scnn although paper use specific asymmetric scnn experiment give gradients cost function reference denote output two number samples number samples dimensions corresponding similarity matrix mask matrix following derivation process appendix gradients jdev respect follows fij fij xti yjt gij gij fij xti xti repmat eij gij bij cij dij defined eqn eqn eqn respectively derivatives xti xti xtj respect follows cij dij seen weight sample pair derivative similarity sij respect expand cosine similarity sij eqn get xti xtj xti xtj xti xti general scnn two share parameters therefore output sample input training set denote output cnn number samples cost produced training set calculated eqn gradient eqn respect derived follows ppendix radients eneral scnn xti yjt repmat eij hij hij hij fij journal latex class files vol december acknowledgment work supported chinese national natural science foundation projects national science technology support program project chinese academy sciences project jiangsu science technology support program project authenmetric funds eferences kostinger hirzer wohlhart roth bischof large scale metric learning equivalence constraints computer vision pattern recognition cvpr ieee conference zheng gong xiang reidentification relative distance comparison pattern analysis machine intelligence ieee transactions vol bromley guyon lecun shah signature verification using siamese time delay neural network nips gray brennan tao evaluating appearance models recognition reacquisition tracking ieee international workshop performance evaluation tracking surveillance rio janeiro hirzer beleznai roth bischof person descriptive discriminative classification image analysis ser lecture notes computer science heyden kahl eds springer berlin heidelberg vol 
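Since the appendix differentiates the cost through the cosine similarity, a small self-check of that gradient is given below; the analytic expression ds/dx = y/(|x||y|) - s x/|x|^2 is standard and is verified here by finite differences.

```python
import numpy as np

def cosine_and_grad(x, y):
    """Cosine similarity s = <x, y> / (|x||y|) and its gradient w.r.t. x,
    the quantity propagated back through the cosine connection function."""
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    s = x @ y / (nx * ny)
    return s, y / (nx * ny) - s * x / nx**2

rng = np.random.default_rng(0)
x, y = rng.normal(size=5), rng.normal(size=5)
s, g = cosine_and_grad(x, y)
eps = 1e-6
g_num = np.array([(cosine_and_grad(x + eps * e, y)[0] - s) / eps for e in np.eye(5)])
assert np.allclose(g, g_num, atol=1e-4)   # analytic and numeric gradients agree
```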
wang locally aligned feature transforms across views computer vision pattern recognition cvpr ieee conference zhao ouyang wang person salience matching proceedings ieee international conference computer vision zheng gong xiang person probabilistic relative distance comparison computer vision pattern recognition cvpr ieee conference yuen domain transfer support vector ranking person without target camera label information proceedings ieee international conference computer vision javed shafique shah appearance modeling tracking multiple cameras computer vision pattern recognition cvpr ieee computer society conference vol vol farenzena bazzani perina murino cristani person accumulation local features computer vision pattern recognition cvpr ieee conference zhao ouyang wang unsupervised salience learning person computer vision pattern recognition cvpr ieee conference kviatkovsky adam rivlin color invariants person reidentification pattern analysis machine intelligence ieee transactions vol lin zheng liu human matching compositional template cluster iccv zhang gao face recognition across pose review pattern recognition vol gray tao viewpoint invariant pedestrian recognition ensemble localized features computer vision eccv ser lecture notes computer science forsyth torr zisserman eds springer berlin heidelberg vol prosser zheng gong xiang person support vector ranking bmvc liu loy gong wang pop person optimisation international conference computer vision baldi chauvin neural networks fingerprint recognition neural computation vol chopra hadsell lecun learning similarity metric discriminatively application face verification cvpr lei liao deep metric learning person proceedings international conference pattern recognition krizhevsky sutskever hinton imagenet classification deep convolutional neural networks nips qin zhang tsai wang liu loss functions information retrieval information processing management vol nguyen bai cosine similarity metric learning face verification computer vision accv ser lecture notes computer science kimmel klette sugimoto eds springer berlin heidelberg vol friedman tibshirani hastie elements statistical learning data mining inference prediction ser springer series statistics new york lecun bottou bengio haffner learning applied document recognition proceedings ieee vol hirzer roth bischof relaxed pairwise learned metric person computer vision eccv ser lecture notes computer science fitzgibbon lazebnik perona sato schmid eds springer berlin heidelberg vol meier masci gambardella schmidhuber flexible high performance convolutional neural networks image classification proceedings international joint conference artificial volume two aaai press layne hospedales gong reidentification person springer mignon jurie pcca new approach distance learning sparse pairwise constraints computer vision pattern recognition cvpr ieee conference dong received degree electronic engineering degree communication information system wuhan university wuhan china received degree pattern recognition intelligent systems casia beijing china research areas unconstrained face recognition heterogeneous face recognition deep learning authored acted reviewer tens articles international conferences journals developed face biometric algorithms systems immigration control project beijing olympic games place photo zhen lei received degree automation university science technology china ustc degree institute automation chinese academy sciences casia assistant professor research interests computer vision pattern 
recognition image processing face recognition particular published papers international journals conferences journal latex class files vol december stan received degree hunan university changsha china degree national university defense technology china degree surrey univerplace sity surrey currently professor photo director center biometrics security research cbsr institute automation chinese academy sciences casia worked microsoft research asia researcher prior associate professor nanyang technological university singapore research interest includes pattern recognition machine learning image vision processing face recognition biometrics intelligent video surveillance published papers international journals conferences authored edited eight books
| 9 |
low noise sensitivity analysis oversampled systems feb haolei weng arian maleki abstract class least squares lqls considered estimating noisy linear observations performance schemes studied asymptotic setting dimension signal grows linearly number measurements asymptotic setting phase transition diagrams often used comparing performance different estimators specifies minimum number observations required certain estimator recover structured signal sparse one noiseless linear observations although phase transition analysis shown provide useful information compressed sensing fact ignores measurement noise limits applicability many application areas also may lead misunderstandings instance consider linear regression problem signal exactly sparse measurement noise ignored systems regularization techniques lqls seem irrelevant since even ordinary least squares ols returns exact solution however much larger regularization techniques improve performance ols response limitation analysis consider sensitivity analysis show analysis framework reveals advantage lqls ols captures difference different lqls estimators even iii provides fair comparison among different estimators high ratios application framework show mild conditions lasso outperforms lqls even signal dense finally simple transformation connect sensitivity framework classical asymptotic regime characterize regularization techniques offer improvements ordinary least squares regularizer gives improvement sample size large key words linear model least squares ordinary least squares lasso phase transition asymptotic mean square error expansion classical asymptotics introduction problem statement modern data analysis one fundamental models extensively studied linear model length signal order cases larger number observations since ordinary least squares ols estimate accurate regime researchers proposed wide range regularization techniques recovery algorithms beyond ols existence variety algorithms regularizers turn called platforms provide fair comparisons among one popular platforms phase transitions analysis intuitively speaking phase transition diagram measures minimum number observations algorithm estimator requires recover understand limitations study performance following least squares lqls bridge estimators frank friedman arg min usual norm tuning parameter family covers lasso tibshirani ridge hoerl kennard two well known estimates statistics compressed sensing phase transition analysis studies asymptotic mean square error amse asymptotic setting considers calculates smallest inf paper consider situations exactly sparse intuitively expected discussed later paper phase transition analysis implies every inf inf simple application reveals limitations phase transition analysis phase transition analysis concerned sparse lqls different values phase transition hence clear whether regularization improve performance ordinary least squares ols regularizer best expect choice regularizer matter add noise measurements phase transition diagram sensitive magnitudes elements intuitively speaking seems major impact performance different estimators noise present system paper aims present generalization phase transition called sensitivity analysis assumed small amount noise present measurements calculates amse estimate framework following two main advantages phase transition analysis reveals certain phenomena important applications captured analysis instance one immediately sees impact regularizer magnitudes elements amse furthermore relations expressed 
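For reference, the bridge (LQLS) estimator discussed above can be written in standard notation as below; the factor 1/2 on the quadratic term is the usual convention and is an assumption of this restatement.

$$
\hat{\beta}(\lambda, q) \;=\; \arg\min_{\beta \in \mathbb{R}^p} \; \tfrac{1}{2}\,\lVert y - X\beta \rVert_2^2 \;+\; \lambda \sum_{i=1}^{p} \lvert \beta_i \rvert^{q}, \qquad 1 \le q \le 2,
$$

with $q = 1$ giving the LASSO and $q = 2$ ridge regression.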
explicitly interpreted easily provides bridge phase transition analysis proposed compressed sensing classical large asymptotic discuss implications connection classical asymptotics section consequence low noise sensitivity analysis enables present fair comparison among different lqls reveal different factors affect performance organization rest paper section discuss related works highlight differences section formally introduce asymptotic framework adopted analyses section present discuss main contributions details section prove main results related work phase transition analysis compressed sensing evolved series papers donoho tanner donoho donoho tanner donoho tanner characterized phase transition curve lasso variants inspired donoho tanner breakthrough many researchers explored performance different algorithms asymptotic settings reeves donoho stojnic amelunxen thrampoulidis karoui karoui donoho montanari donoho donoho montanari bradic chen donoho foygel mackey zheng rangan krzakala bayati montanri bayati montanari reeves gastpar reeves pfister paper use message passing analysis developed series papers donoho maleki bayati montanri bayati montanari maleki characterize asymptotic mean square errors lqls result calculation obtain equations whose solution specify amse estimate unfortunately complexity equations allow interpret results obtain useful information hence develop machinery simplify amse formulas turn explicit informative quantities note amse formulas derive theorem calculated framework developed thrampoulidis furthermore phase transition formulas derive theorem derived framework amelunxen well low noise sensitivity analysis use paper also used weng analysis weng concerned sparse signals paper avoids sparsity assumption perhaps surprisingly proof techniques developed weng sparse signals come short characterizing higher order terms dense cases response limitation propose delicate chaining argument paper offers much accurate characterization higher order terms study dense signals provides much complete understanding bridge estimators particular sparse settings weng reveals monotonicity lqls performance lasso optimal closer better lqls performs however paper show comparison optimality characterization lqls becomes much subtle signals general detailed treatment different types dense signals given section finally also emphasize machinery required sensitivity analysis dense signals different much complicated one developed sparse signals paper performs asymptotic analysis lqls many researchers used frameworks purpose among lqls lasso best studied one past decade witnessed dramatic progress towards understanding performance lasso tasks parameter estimation donoho donoho tanner donoho bickel raskutti variable selection zhao wainwright meinshausen reeves gastpar prediction greenshtein refer reader van geer eldar kutyniok complete list references ones related work donoho candes tao candes plan first two papers authors considered constraint ith largest component decays papers derived optimal logarithmic factor upper bounds mean square error lasso however paper characterize performance lasso generic derive conditions lasso outperforms bridge estimators also emphasize thanks asymptotic settings unlike two papers able derive exact expressions amse sharp constants finally candes plan studied fixed signal obtained oracle inequality tuning chosen explicit function results general bounds suffer loose constants sufficient provide sharp comparison lasso lqls moreover tuning parameter case set optimal 
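The AMSE characterization above reduces to a scalar fixed-point equation involving the proximal operator of |.|^q; the sketch below iterates such an equation by Monte Carlo. The particular coupling chi * sigma^(2-q) between the tuning parameter and the fixed-point variable mirrors the threshold scaling used in the text but should be read as an assumption of this sketch, not a verbatim restatement of the theorem.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def prox_lq(z, chi, q):
    """eta_q(z; chi) = argmin_u  (u - z)^2 / 2 + chi * |u|^q  (scalar prox)."""
    if q == 1.0:                      # soft thresholding (LASSO)
        return np.sign(z) * np.maximum(np.abs(z) - chi, 0.0)
    if q == 2.0:                      # linear shrinkage (ridge)
        return z / (1.0 + 2.0 * chi)
    solve = lambda s: minimize_scalar(
        lambda u: 0.5 * (u - s) ** 2 + chi * abs(u) ** q,
        bounds=(-abs(s) - 1.0, abs(s) + 1.0), method="bounded").x
    return np.vectorize(solve)(z)     # generic q is slow; reduce n if needed

def amse_fixed_point(sample_B, delta, sigma_w, chi, q, iters=40, n=20_000, seed=1):
    """Monte-Carlo iteration of the scalar fixed-point equation; its value at
    convergence estimates the AMSE of the bridge estimator at tuning chi."""
    rng = np.random.default_rng(seed)
    b, z = sample_B(rng, n), rng.standard_normal(n)
    sigma2 = sigma_w ** 2 + np.mean(b ** 2) / delta
    for _ in range(iters):
        sigma = np.sqrt(sigma2)
        err = prox_lq(b + sigma * z, chi * sigma ** (2.0 - q), q) - b
        sigma2 = sigma_w ** 2 + np.mean(err ** 2) / delta
    return np.mean(err ** 2)

# example: dense Gaussian signal, delta = 1.5, low noise, ridge penalty (fast path)
# amse_fixed_point(lambda r, n: r.standard_normal(n), 1.5, 0.05, chi=0.1, q=2.0)
```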
one minimizes amse every lqls paves way accurate comparison different lqls finally performance lqls classical asymptotic setting fixed studied knight author obtained convergence lqls estimates derived asymptotic distributions results used calculate amse lqls optimal tuning show equal however demonstrate section analysis accurate comparison performances different lqls possible particular lasso shown outperform others certain type dense signals also mention idea obtaining higher order terms large sample scenarios first introduced wang however results wang concerned sparse signals paper consider dense signals discussed main challenges switching sparse signals dense signals earlier section asymptotic framework main goal section formally introduce asymptotic framework study lqls current section write vectors matrices emphasize dependence dimension similarly may use substitute first define specific type sequence known converging sequence definition borrowed papers donoho bayati montanri bayati montanari minor modifications recall linear model definition sequence instances called converging sequence following conditions hold empirical distribution converges weakly probability measure bounded second moment converges second moment empirical distribution converges weakly zero mean distribution variance furthermore elements iid distribution problem instances converging sequence solve lqls problem obtain estimate goal evaluate accuracy estimate define asymptotic mean square error asymptotic measures performance definition let sequence solutions lqls converging sequence instances asymptotic mean square error defined following almost sure limit amse lim note amse depends factors like suppressed notations simplicity performance lqls defined affected tuning parameter paper consider value gives minimum amse let denote value minimizes amse given arg min amse lqls solved specific value arg min best performance lqls value achieve terms amse use corollary weng obtain precise formula optimal amse theorem given consider converging sequence suppose solution lqls optimal tuning defined amse amse defined amse min two independent random variables distributions respectively proximal operator function kqq unique solution following equation min theorem provides first step analysis lqls first calculate incorporating solution gives result amse given distribution variance number observations normalized number predictors error straightforward write computer program numerically find solution calculate value amse however needless say approach shed much light performance different lqls estimates since many factors involved computation affects result fashion paper perform analytical study solution obtain explicit characterization amse high ratio regime expressions derived offer accurate view lqls quantify impact distribution performance different lqls clear definition converging sequence theorem main property affects amse probability measure rest paper assume point mass zero use notation denote one dimensional random variable distributed according present findings high ratio regime noise level either zero small sections respectively discuss implications analysis framework classical asymptotics section turns optimal value estimated accurately asymptotic settings discussed paper see mousavi information proximal operator kqq defined arg minz information functions please refer lemma section main contributions phase transition suppose noise linear model first goal phase transition analysis find minimum value amse next theorem 
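In the asymptotic framework just introduced, the performance measure can be restated compactly as

$$
\mathrm{AMSE}(\lambda, q, \sigma_w) \;=\; \lim_{p \to \infty} \frac{1}{p}\,\lVert \hat\beta(\lambda, q) - \beta \rVert_2^2 \quad \text{(almost surely)},
$$

with the ratio $\delta = n/p$ held fixed along the converging sequence; the optimally tuned error is then the infimum of this quantity over $\lambda \ge 0$.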
characterizes phase transition theorem let amse result also derived several different frameworks including statistical dimension framework amelunxen derive simple byproduct results section discuss proof result surprising since none coefficients zero exact recovery impossible also note even ordinary least squares capable recovering hence result phase transition analysis provide additional information performance different regularizers even capable showing advantage regularization techniques standard least squares algorithm due fact result theorem holds noiseless case intuitively speaking practical settings existence measurement noise inevitable expect different lqls behave differently instance even though signal study sparse large mass around zero approximately sparse expect sparsity promoting lasso offer better performance lqls however distribution effect phase transition diagram motivated concerns next section investigate performance lqls noisy setting study noise sensitivity noise level small new analysis offer informative answers noise sensitivity analysis amse immediate generalization phase transition analysis study performance different estimators presence small amount noise formally derive asymptotic expansion amse every discussed later generalization phase transitions presents delicate analysis lqls start study amse ordinary least squares ols result ols later used comparison purposes lemma consider region ols estimate amse prove lemma section note proof presented used independence noise elements often assumed analysis ols discuss lqls optimal choice defined turns distribution impacts amse subtle way analysis purposes first study signals whose elements bounded away zero theorem study distributions theorem theorem consider region suppose positive constant amse amse positive number smaller proof found section observe first dominant term expansion amse exactly values including equal also amse ols may consider term phase transition term since zero nutshell first term expansion provides phase transition information however able derive second order term amse term gives beyond phase transition analysis impact signal distribution regularizer omitted diagram revealed second order term result compare performance lqls different values low noise regime compare second order terms first note regularizers studied theorem improve performance ols distribution coefficients bounded away significant gain obtained lasso since second dominant term expansion amse exponentially small however rate second order term exhibits interesting transition exponential polynomial decay increases fact seems bridge regularizers offer substantial improvements ols even though lasso suboptimal clear value provides best performance among lqls optimality determined constant involved second order term orders simplify discussions define arg max lqls perform best provide insights focus special family distributions lemma consider mixture denotes probability measure putting mass proof clear thus consider denote write define would like show large enough give hence finishes proof show note log therefore sufficiently large lemma implies ridge regularizer optimal two point mixture components coincide optimal value shift towards ratio two points goes infinity intuitively speaking one would expect ridge penalize large signals aggressively hence cases signal large dynamic range ridge penalizes large signal values expected outperform values note mixture signals optimal value arbitrarily close however lasso never optimal second order term 
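A quick Monte Carlo illustration of the low-noise regime for a two-point mixture signal is sketched below: with the signal bounded away from zero and sigma_w small, both OLS and optimally tuned ridge have errors of order sigma_w^2, and the improvement from regularization only shows up beyond the leading term. The mixture values, dimensions and tuning grid are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma_w = 600, 300, 0.05                  # delta = n / p = 2, low noise
beta = rng.choice([1.0, 3.0], size=p)           # two-point mixture, away from 0
A = rng.standard_normal((n, p)) / np.sqrt(n)
y = A @ beta + sigma_w * rng.standard_normal(n)

beta_ols = np.linalg.lstsq(A, y, rcond=None)[0]
mse_ols = np.mean((beta_ols - beta) ** 2)

# oracle-tuned ridge: beta_ridge = (A'A + 2*lam*I)^{-1} A'y
mse_ridge = min(
    np.mean((np.linalg.solve(A.T @ A + 2 * lam * np.eye(p), A.T @ y) - beta) ** 2)
    for lam in np.logspace(-6, 0, 40))

print(mse_ols, mse_ridge)   # both O(sigma_w^2); the ridge gain is higher order
```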
exponentially small theorem studied distributions bounded away zero next study informative practical case distribution mass around zero theorem consider region assume given suppose positive constant amse amse suppose log log log times amse natural number proof presented section discussing implications result let mention points conditions imposed theorem note condition necessary otherwise appearing second order term unbounded intuitively speaking every even though probability density function pdf still infinity zero however condition requires infinity fast would like explain interesting implications theorem compared results theorem see expansion amse theorem remains general rate second order term lasso changes polynomial exponential means lasso sensitive distribution lqls second order term lasso becomes smaller decreases note decreases mass distribution around zero increases hence theorem implies lasso performs better probability mass signal concentrates around zero well explained sparsity promoting feature lasso case first dominant term hence compare second order term given suppose lasso ignore second term amse order logarithmic factor since terms negative conclude lasso performs better lqls value performs worse observation important implication behavior distribution around zero important factor comparison lasso lqls pdf zero zero use lasso goes infinity lasso performs better lqls least values theorem applicable regrading case probability density function finite positive zero calculations lasso sharp enough give accurate comparison lasso lqls however comparison lqls different values shed light performance different regularizers case following consider two popular families distributions present accurate comparison among lemma consider density function normalization constant best bridge estimator one uses max proof simple integration parts yields hence first consider due inequality obtain regrading case let independent copy result derive conclude max note lemma studies family distributions whose probability density function exists nonzero zero exhbit different tail behaviors confirmed lemma case tail behavior influence performance lqls particular lqls optimal distriq butions exponential decay tail since considered maximum posterior estimate map result suggests map offers best performance low noise regime among bridge estimators general true see zheng counterexample large noise cases also interesting observe tail becomes heavier laplacian distribution optimal approaches observation consistent fact ridge often penalizes large signal values aggressively estimators hence tail distribution light like gaussian distributions ridge offers best performance otherwise values offer better results next lemma support claim considering special family distributions light tails lemma consider follows uniform distribution density function location parameter proof clear hence arg arg implications classical asymptotics section would like show results derived previous sections perhaps surprisingly interesting connections classical asymptotic setting fixed analysis far focused setting furthermore assumed noise variance small classical asymptotics assumed ratio observation fixed note measurements intuitive level equivalent less noise hence expect sensitivity implications classical asymptotics goal formalize connection explain implications analysis framework classical asymptotics towards goal consider scenarios sample size much larger dimension analytically let infinity calculate expansions amse terms large similar 
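The claim that the LASSO gains over the other bridge estimators when the signal puts substantial mass near zero can be probed empirically; the sketch below compares oracle-tuned LASSO and ridge on a dense but approximately sparse signal under small noise. The signal model, tuning grids and the use of scikit-learn's solvers are illustrative choices only.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(1)
n, p, sigma_w = 600, 300, 0.05
# dense signal with most coordinates tiny and a few large ones
beta = np.where(rng.random(p) < 0.9,
                0.01 * rng.standard_normal(p),
                rng.standard_normal(p) + 2.0)
A = rng.standard_normal((n, p)) / np.sqrt(n)
y = A @ beta + sigma_w * rng.standard_normal(n)

def oracle_mse(model_cls, grid):
    """Best achievable MSE over a grid of tuning parameters (oracle tuning)."""
    return min(np.mean((model_cls(alpha=a, fit_intercept=False).fit(A, y).coef_
                        - beta) ** 2) for a in grid)

print("lasso:", oracle_mse(Lasso, np.logspace(-5, -1, 25)),
      "ridge:", oracle_mse(Ridge, np.logspace(-6, 0, 25)))
```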
section low noise section write amse amse make clear expansion derived terms getting results clarify important issue recall definition converging sequence definition straightforward confirm ratio measurement snr hence take snr measurement zero inconsistent classical asymptotic setting snr general assumed fixed fix inconsistency scale noise term consider scaled linear model follows converging sequence specified definition snr remained positive constant model well aligned classical setting comparison purposes start ordinary least squares estimate lemma consider model ols estimate amse proof lemma simple application lemma model lemma shows amse expansion easily verified discuss bridge estimators theorem consider model suppose positive constant amse proof found section since theorems concerned signals bounded away zero compare results lqls first dominant term however large sample regime second order term lasso order lqls interestingly comparison constant second order term consistent low noise case hence obtain conclusions mixture distributions instance bridge outperforms ols optimal mass concentrated one point see lemma information comparison discuss implications theorem classical asymptotics classical setting fixed performance lqls studied knight particular lqls estimates shown regular convergence setting first let apply theorem knight straightforward calculation asymptotic variance give first dominant term amse words classical asymptotic result lqls knight provides information regarding mean square error values optimal tuning virtue asymptotic framework offer second order term used evaluate compare lqls accurately similar results derived signals mass around zero presented next theorem theorem consider model introduced assume given suppose positive constant amse amse suppose log log log amse times natural number proof presented section theorem compared theorem see expansion remains general signals second order term lasso becomes smaller signals put mass around zero given clear lasso outperforms lqls implies even case much larger underlying signal many elements small values regularization improve performance characterized second order analysis available convergence result regrading distributions tail see comparison among low noise regime carries fact regularization improve performance maximum likelihood estimate ols context linear regression gaussian noise seems contradictory classical results imply mle asymptotically optimal mild regularity conditions however note optimality mle concerned asymptotic variance equivalently first order term estimate results show many estimators share first order term actual performance might different second dominant terms provide much accurate information cases proofs main results notations preliminaries throughout proofs random variable probability measure appears definition converging sequence refer standard normal random variable also use denote density function represent cumulative distribution function define following useful notations arg min independent recall proximal operator function since using extensively later proofs present useful properties next lemma explicit forms focus case notational simplicity may use represent partial derivative respect ith argument lemma function satisfies following properties sign iii sign function differentiable respect proof please refer lemmas weng proofs next write stein lemma stein apply several times proofs stein lemma suppose function weakly differentiable proof lemma since well defined probability sufficiently 
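The classical large-sample regime can be emulated directly by fixing p and letting n grow; the leading term of the OLS error then scales like sigma^2 p / n (exactly sigma^2 p / (n - p - 1) in expectation for a Gaussian design), which is the first-order term shared by all optimally tuned bridge estimators, so any differences between them must appear in higher-order terms. A minimal check, with an unnormalized standard-normal design for simplicity:

```python
import numpy as np

def ols_mse(n, p=20, sigma=1.0, reps=200, seed=0):
    """Empirical squared error of OLS with p fixed and n growing."""
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(reps):
        A = rng.standard_normal((n, p))
        beta = rng.standard_normal(p)
        y = A @ beta + sigma * rng.standard_normal(n)
        errs.append(np.sum((np.linalg.lstsq(A, y, rcond=None)[0] - beta) ** 2))
    return np.mean(errs)

for n in (100, 400, 1600):
    print(n, ols_mse(n), 20 / (n - 20 - 1))   # empirical vs sigma^2 p/(n-p-1)
```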
large first derive amse ridge estimate obtain amse ols letting according theorem weng known given amse solution following equation calculations obtain amse clearly amse derive amse ols according identity utilize result let smallest singular values hard confirm khx khx khx since bai yin belong converging sequence defined conclude moreover obtain almost surely results imply note term left hand side depend therefore letting sides finishes proof proof theorem roadmap since proof several long steps lay roadmap help readers navigate details according lemma part iii theorem know amse unique solution note equation function regime show fact combined tells order derive expansion amse function sufficient characterize convergence rate purpose first study convergence rate enables obtain convergence rate utilize result derive rate give proof sections respectively proof case due explicit form thus focus proof results section easily verified lemma let optimal threshold value defined proof proof essentially one lemma weng hence repeat arguments lemma suppose positive constant fixed positive constant proof aim derive convergence rate proof may write denote notational simplicity according lemma parts stein lemma following formula straightforward confirm following lim lim lim last equality obtained dominated convergence theorem dct condition dct holds due lemma part focus analyzing obtain dzdf dzdf consider separately dzdf regarding dct enables conclude lim lim lim note dct works small enough lemma parts implies combining together completes proof lemma shows choosing appropriate small enough less result used show converge zero fast utilize fact derive exact convergence rate done next lemma lemma suppose positive constant proof choosing lemma lim means sufficiently small hence conclude small enough moreover slight change arguments proof lemma summarized fact used several times lemma still holds sufficient bounding term depend sufficient obtain show lim exp fixed positive constant implies otherwise exists sequence result combined contradicts fact minimizer use two aforementioned properties showed far following proof notational simplicity rest proof may use denote whenever confusion caused firstly since finite value solution first order optimality condition written sign used lemma part derive obtain used following steps used lemma part conclude sign used expression derived lemma part employed stein lemma simplify note according lemma part differentiable respect first argument hence stein lemma applied evaluate three terms individually goal show following iii term apply dominated convergence theorem dct lim lim derive convergence rate dzdf dzdf dzdf first note dzdf last step due fact evaluate first derive following bounds small enough hence able apply dct obtain lim combining proves result use similar arguments show result iii finally utilize convergence results equation derive lim lim since know exact convergence order shows exact order position derive expansion amse according equation fact minimizes clear combined condition implies result enables conclude lim used lemma finally utilize lemma equations derive expansion amse following way amse completes proof theorem proof case lemma suppose positive constant proof first claim otherwise exists sequence limit finite suppose true since sign apply fatou lemma conclude lim inf lim inf contradicting fact calculate following limit lim lim lim last step due dominated convergence theorem dct condition dct verified based fact also choose positive constant smaller use 
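The version of Stein's lemma invoked repeatedly in the proofs is the standard Gaussian integration-by-parts identity: for $Z \sim N(0,1)$ and a weakly differentiable $g$ with $\mathbb{E}\lvert g'(Z)\rvert < \infty$,

$$
\mathbb{E}\,[Z\,g(Z)] \;=\; \mathbb{E}\,[g'(Z)].
$$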
similar argument obtain means large enough contradicting fact minimizes next derive following bounds obtain used stein lemma note weakly differentiable function inequality holds since equality result mean value theorem hence dent inequality straightforward verify choose small enough means optimal threshold finite value hence solution implies use represent simplicity last inequality holds small values due condition since first conclude turn use implies turn analyzing due holds result combined finishes proof position derive expansion amse similarly proof use lemma derive apply lemma obtain exp exp amse closes proof proof theorem similar proof theorem consider two cases prove separately follow closely roadmap illustrated section proof case results section proved easily consider start proof main result mention simple lemma used multiple times proof lemma let two nonnegative sequences property lim proof proof simple application scale invariance property lemma part iii lim lim last step result lemma part first goal show negative constant choosing appropriate however since proof long break several steps steps summarized lemmas lemma employ three results show lim lemma given suppose positive constant fixed number dzdf note arbitrary positive constant proof main idea proof break integral several pieces prove piece converges zero throughout proof choose small enough based value consider following intervals first find unique integer define following intervals denote log log log log log log see small enough intervals nested define dzdf dzdf using notations dzdf goal show since intervals different forms consider five different cases iii case show let denote lebesgue measure interval first term positive constant dzdf dzdf dzdf log log log log log log log log log log used condition obtain last inequality last statement holds choosing large enough next analyze term constant dzdf dzdf dzdf log log log log log log used fact since log log last step note according lemma log obtain log lim log result clear second term upper bound vanishes choosing sufficiently large regarding first term know log log log consider arbitrary show similarly bounding dzdf dzdf log log log log log log use lemma conclude lim log log condition lemma verified following log log last step due fact using result straightforward confirm chosen large enough second term goes zero first term log lim log lim log log lim log log used obtain far showed next step prove dzdf dzdf log log log log log log based clear large enough second term goes zero show first term goes zero well log lim log lim log lim log log log holds lemma due condition imposed ensures last remaining term prove log log dzdf dzdf using strategy bounding second integral zero chosen large enough first integral bounded log log log holds lemma condition lemma easily checked completes proof define lemma proved dzdf next lemma would like extend result show fact dzdf lemma given suppose conditions lemma hold fixed dzdf note defined proof proof lemma break integral smaller subintervals prove one goes zero consider following intervals arbitrarily small number arbitrary natural number note sequence nested intervals goal show following integrals zero dzdf dzdf define since obtain dzdf log dzdf dzdf log log log straightforward notice second term converges zero first term condition derive following bounds log log log discuss term similarly bounded log log dzdf dzdf second integral easily shown convergent zero focus first integral log dzdf log log log log used lemma obtain showed given 
natural number dzdf lim note goes infinity exponent interval goes choosing small enough sufficiently large make hence completing proof last two lemmas able prove dzdf result used characterize following limit lim dzdf mention simple lemma applied several times proofs lemma proof sufficient consider analyze two different cases according lemma part since know hence completes proof consider one main results section lemma given suppose conditions lemma hold lim proof follow roadmap proof lemma recall first term calculated way proof lemma lim focus analyzing first note restricting bounded away makes possible follow arguments used proof lemma obtain lim hence next consider event dzdf dzdf constant specify later first analyze note due lemma straightforward verify chosen close enough apply dominated convergence theorem dct obtain lim turn bounding according lemmas know dzdf define get given therefore dzdf hence bound sufficient bound dzdf dzdf dzdf dzdf log dzdf result lemma positive constant first bound following log log easily seen goes zero choosing large enough remaining term log log log log obtain last statement choose close enough zero close hence conclude combined results gives lim result together finishes proof stated roadmap proof first goal characterize convergence rate towards goal first show either large small particular lemmas show utilize result lemma conclude lemma suppose proof consider formula since straightforward apply dominated convergence theorem obtain lim know also note hence lemma suppose conditions lemma hold proof consider expression first note lim study behavior recall defined log log respectively straightforward use argument bounding proof lemma see derivations dzdf dzdf lim moreover since small enough lemma part implies therefore dzdf dzdf last statement holds dzdf already shown proof lemmas proved dzdf based result easily follow derivations bounding term proof lemma conclude dzdf lim furthermore analyses derive equation bound proof lemma adapted yield dzdf lim dzdf lim putting results together gives dzdf lim lim finishes proof collecting results lemmas upper lower bound optimal threshold value shown following corollary corollary suppose conditions lemma hold proof since minimizes know small enough last inequality due lemma appropriate choice positive constant note already know lemma contradict lemma contradict able derive exact convergence rate lemma given suppose conditions lemma hold proof proof use denote notational simplicity using notations equation know satisfies following equation first goal show define interval dzdf dzdf first show first term goes zero note lemma part iii thus dzdf dzdf dzdf dzdf log since already shown corollary straightforward see second integral bound negligible large enough first term know dzdf log next goal find limit second term order break integral several pieces recall intervals introduced lemmas consider two different cases case assume log hence dzdf dzdf dzdf dzdf dzdf log fact enables conclude second integral goes zero choosing large enough regarding first term know dzdf log log due consider another integral dzdf dzdf goal show integral goes zero well use following calculations dzdf dzdf dzdf dzdf chosen way define note similar calculations case lemmas key argument regarding used show term converges zero hence show current case proofs carry dzdf purpose make use lemma note since need confirm condition lemma log assumption case hence current case obtained dzdf furthermore clear follow line arguments deriving proof lemma obtain log 
case log exists value small enough break integral dzdf dzdf dzdf show two integrals goes zero subsequent arguments exactly ones case regarding first integral dzdf dzdf dzdf log log log since corollary clear second term upper bound goes zero choosing large enough regarding first term log log log log log definition holds even since due fact log log according choice second integral note hence implies arguments calculating second integral case hold well far able derive limit next analyze term show goes zero dzdf dzdf dzdf upper bound shown zero preceding calculations regarding first term furthermore note obtain dzdf dzdf last term shown converge zero analysis derived lim dzdf together fact follow line arguments proof lemma get dzdf lim finally direct application dominated convergence theorem gives hence able derive following lim lim derived convergence rate according lemma immediately obtain order convergence rate lemma derivation expansion amse one proof theorem proof case lemma suppose logm logm small enough logm log log log times four constants depending arbitrary integer number proof since proof steps similar lemma repeat every detail instead highlight differences write notational simplicity using proof steps lemma obtain following arguments proof lemma weng show logm logm number since bounds proved using result furthermore know dependent choose inequality straightforward see last step due bound note logm last inequality holds upper bound based results lemma deriving expansion amse done similar way proof theorem repeat proof theorem idea proof similar theorems make use result theorem amse since large sample regime function clear hence leads due fact able use convergence rate results proved lemmas equations lemma together yield amse case lemma know exponentially small firs term vanishes second term remains proof theorem theorem proved similar fashion theorem equation still holds equations lemma together give amse first term second one case proved exactly way theorem using lemma acknowledgment arian maleki supported nsf grant references melunxen otz ropp living edge phase transitions convex programs random data information inference journal ima url bai limit smallest eigenvalue large dimensional sample covariance matrix annals probability bayati ontanari lasso risk gaussian matrices ieee trans inform theory bayati ontanri dynamics message passing dense graphs applications compressed sensing ieee trans inform theory ickel itov sybakov simultaneous analysis lasso dantzig selector annals statistics radic hen robustness sparse linear models relative efficiency based robust approximate message passing arxiv preprint van eer statistics data methods theory applications springer science business media andes lan probabilistic ripless theory compressed sensing ieee transactions information theory romberg tao robust uncertainty principles exact signal reconstruction highly incomplete frequency information ieee transactions information theory andes tao signal recovery random projections universal encoding strategies ieee transactions information theory onoho underdetermined systems linear equations minimal approximates sparsest solution comm pure appl math onoho underdetermined systems linear equations minimal nearsolution approximates sparsest manuscript submitted publication url http stanford onoho centrally symmetric polytopes neighborliness proportional dimension discrete computational geometry onoho avish ontanari phase transition matrix recovery gaussian measurements matches minimax mse matrix 
denoising proceedings national academy sciences onoho aleki ontanari phase transition compressed sensing ieee transactions information theory onoho ontanari high dimensional robust asymptotic variance via approximate message passing probability theory related fields onoho ontanari variance breakdown huber arxiv preprint onoho tanner neighborliness randomly projected simplices high dimensions proceedings national academy sciences onoho tanner sparse nonnegative solution underdetermined linear equations linear programming proceedings national academy sciences onoho compressed sensing ieee transactions information theory onoho aleki ontanari algorithms compressed sensing proceedings national academy sciences onoho aleki ontanari noise sensitivity phase transition ieee trans inform theory aroui ean ickel robust regression highdimensional predictors proceedings national academy sciences ldar utyniok compressed sensing theory applications cambridge university press oygel ackey corrupted sensing novel guarantees separating structured signals ieee transactions information theory rank riedman statistical view chemometrics regression tools technometrics reenshtein itov persistence linear predictor selection virtue overparametrization bernoulli oerl ennard ridge regression biased estimation nonorthogonal problems technometrics aroui asymptotic behavior unregularized robust regression estimators rigorous results arxiv preprint night asymptotics estimators annals statistics rzakala ausset deborov reconstruction compressed sensing physical review aleki approximate message passing algorithms compressed sensing thesis stanford university aleki nitori yang baraniuk asymptotic analysis complex lasso via complex approximate message passing camp ieee transactions information theory einshausen graphs variable selection lasso annals statistics ousavi aleki baraniuk consistent parameter estimation lasso approximate message passing appear annals statistics angan oyal letcher asymptotic analysis map estimation via replica method compressed sensing advances neural information processing systems askutti wainwright minimax rates estimation linear regression ieee transactions information theory eeves onoho minimax noise sensitivity compressed sensing information theory proceedings isit ieee international symposium ieee eeves astpar sampling bounds sparse support recovery presence noise information theory isit ieee international symposium ieee eeves fister prediction compressed sensing gaussian matrices exact information theory isit ieee international symposium ieee tein estimation mean multivariate normal distribution annals statistics tojnic dependent thresholds compressed sensing arxiv preprint tojnic various thresholds compressed sensing arxiv preprint tojnic linear systems thresholds arxiv preprint hrampoulidis bbasi assibi precise error analysis regularized arxiv preprint ibshirani regression shrinkage selection via lasso journal royal statistical society series wainwright sharp thresholds noisy sparsity recovery using constrained quadratic programming lasso ieee transactions information theory wang eng aleki bridge estimator optimal variable selection arxiv preprint eng aleki heng overcoming limitations phase transition higher order analysis regularization techniques arxiv preprint hao model selection consistency lasso journal machine learning research heng aleki eng wang ong outperform ieee transactions information theory accepted
| 10 |
traffic models periodic control systems nov anqi manuel mazo control petc version control etc requires measure plant output periodically instead continuously work present construction timing models petc implementations capture dynamics traffic generate construction employ approach first partition state space finite number regions region behavior analyzed help lmis state transitions among different regions result computing reachable state set starting region within computed event time intervals index abstractions periodic control lmi formal methods reachability analysis ntroduction wireless networked control systems wncs control systems employ wireless networks feedback channels systems physically distributed components wireless nodes communicate via wireless network components designed great mobility nodes supported batteries besides component established updated easily therefore wncs great adaptability obtaining different control objectives attracting much attention however two major issues must considered designing system limited bandwidth energy supply often control tasks designed executed periodically periodic strategy also named control ttc regard system current state thus may waste bandwidth energy alternatively eventtriggered control etc strategies proposed reduce bandwidth occupation see references therein etc control tasks execute necessary performance indicator violated thus system tightfisted communication however validate eventtriggering conditions sensors required sample plant output continuously continuous monitoring consume large amounts energy reduce energy consumption naturally one may want replace continuously sampling discrete time sampling applying discrete time sampling compensate delay caused discretization one either design authors delft center systems control delft university technology delft netherlands work partly funded china scholarship council csc stricter condition based system dynamics modify lyapunov function heemels present periodic control petc mechanism petc implementation sensors required measure plant output validate event conditions periodically conditions satisfied fresh measurements employed recompute controller output therefore petc enjoys benefits cautious communication discrete time measurement compared event conditions less conservative reduce communications thus energy consumed bandwidth occupied reduced furthermore transmissions control input controller plant also included petc mechanism reduce resource consumption fully extract potential gains etc one also consider scheduling approaches efficiently scheduling listening times wireless communications medium access time general energy consumption wncs reduced bandwidth efficiently reused enable scheduling model traffic generated etc required kolarijani mazo propose type approximate power quotient system derive models capture timing behaviors etc systems applying triggering mechanism first partition state space finite cones cone analyze timing behavior methods see linear matrix inequality lmi methods reachability analysis see similarly order fully extract potential gains petc scheduling approaches model traffic generated petc necessary work present construction timing models petc implementations first modify petc mechanism giving upper bound time event happens within interval system forced generate event end constructing models approach two steps first divide state space finite number partitions system partition looks like dartboard construct set lmis compute output map transition relations among 
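The Hausdorff distance used to measure the precision of the quotient system can be computed for finite point sets as below; this is a direct transcription of the definition (in the paper the two sets are the output sets of the concrete and the quotient system).

```python
import numpy as np

def hausdorff(X, Y):
    """Hausdorff distance between finite point sets given as rows of X and Y:
    d_H(X, Y) = max( max_x min_y |x - y| , max_y min_x |x - y| )."""
    D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

# e.g. hausdorff(np.array([[0., 0.], [1., 0.]]), np.array([[0., 1.]]))  ->  sqrt(2)
```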
different regions derived computing reachable state set starting region compared work require perturbation vanish state converges instead assume perturbation paper organized follows section presents necessary notation definitions problem solved defined section iii section shows details construct power quotient system model traffic centralized petc implementation numerical example shown section section summarizes contributions paper discusses future work ease readability proofs collected appendix otation preliminaries denote euclidean space positive real numbers natural numbers including zero denoted zero included denote natural numbers set closed intervals set denotes set subsets power set set real valued matrices set symmetric matrices respectively symmetric matrix said positive negative definite denoted whenever means positive negative semidefinite matrix equivalence relation set denotes equivalence class denotes set equivalence classes locally integrable signal denote furthermore define space locally integrable signals finite space signals finite review notions field system theory definition metric consider set metric distance function following three conditions satisfied ordered pair said metric space definition hausdorff distance assume two subsets metric space hausdoorff distance given max sup inf sup inf definition system system sextuple consisting set states set initial states set inputs transition relation set outputs output map term system indicates finite infinite set system cardinality smaller equal one system said autonomous definition metric system system said metric system set outputs equipped metric definition approximate simulation relation consider two metric systems let relation simulation relation following three conditions satisfied implies satisfying denote existence simulation relation say simulates simulated whenever inequality implies resulting relation called exact simulation relation introduce notion power quotient system corresponding lemma later analysis definition power quotient system let system equivalence relation power quotient denoted system consisting lemma lemma let metric system equivalence relation let metric system power quotient system max hausdorff distance set simulates definition minkowski addition introduced computation reachable sets definition minkowski addition minkowski addition two sets vectors euclidean space formed adding vector vector denotes minkowski addition iii roblem definition centralized petc presented reviewed consider continuous linear lti plant form rnp denotes state vector rny denotes plant output vector denotes input applied plant perturbation plant controlled controller given plant rnv rnw denotes rnc denotes state vector controller rnv denotes controller output vector rny denotes input applied controller periodic sampling sequence given sampling interval define two vectors rnu rnu output implementation input implementation hold mechanism applied samplings input sampling time input applied implementation updated given constant reformulating event condition quadratic form event sequence defined cpt cct cct dct zero matrix proper dimension identity matrix appropriate dimension obvious according theorem hypothesis therein satisfied system globally exponential stable ges smaller equal corresponding solution system satisfies model timing behaviour petc system aim constructing power quotient system implementation remark uncertainty brought perturbation may happen perturbation compensates effect sampling helping state 
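The modified PETC mechanism above (periodic checking of a quadratic condition with a forced update after a bounded number of checking periods) can be sketched as the perturbation-free simulation loop below. The plant is reduced to discretised state feedback for brevity, so the matrices Ad, Bd, K, Q and the bound kbar are placeholders rather than the exact construction in the text.

```python
import numpy as np

def petc_intervals(Ad, Bd, K, Q, x0, h, kbar, steps=200):
    """Return the sequence of inter-event times (multiples of the checking
    period h) of a periodic event-triggered loop: the state is sampled every
    h seconds, the quadratic condition xi' Q xi > 0 with xi = (x, x_hat)
    triggers an update, and an update is forced after kbar checking periods."""
    x, xhat, k, intervals = x0.copy(), x0.copy(), 0, []
    for _ in range(steps):
        x = Ad @ x + Bd @ (K @ xhat)          # evolve one checking period
        k += 1
        xi = np.concatenate([x, xhat])
        if xi @ Q @ xi > 0 or k >= kbar:      # event, or forced update
            intervals.append(k * h)
            xhat, k = x.copy(), 0
    return intervals
```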
implementation converge therefore event condition may satisfied along timeline result may upper bound event intervals however upper bound necessary constructing useful power quotient system remark apply scheduling approaches online scheduler required model going construct meaning event system may end several possible regions regions defined terms means measurement always clear region system means online scheduler figure system simple output measurements therefore online scheduler able access region system assumption current state region eventtriggered time obtained real time observation remark use instead following event condition inf state region last sampling time regional maximum allowable event interval maei dependent according assumption obtainable value possible accessed triggering mechanisms one always employ global upper bound discuss computation later sections note petc implementation employing guarantee stability performance petc implementation employing guarantee stability performance consider period definition constant dependent input expressed let state evolution initial state computed eap eap figure example state space partition finite polyhedral cones finite homocentric spheres finite partition thus event condition computed min inf present main problem solved paper consider system iff system output set system contains possible amount sampling steps system may exhibit sampling time chosen event interval computed problem construct finite abstraction system capturing enough information scheduling inspired solve problem constructing power quotient systems based adequately designed equivalence relation defined state set constructed systems semantically equivalent timed automata used automatic scheduler design particular system constructed follows compared power quotient system constructed main difference since focus petc timing uncertainty onstruction quotient system state set results remark following fact remark excluding origin states lie line going origin identical triggering behavior also call following assumption assumption perturbation satisfies besides assume upper bound known base remark assumption propose statespace partition follows rnx scalars constructed matrix sequence scalars note rest bounded obvious somewhere partition combines partitioning statespace finite polyhedral cones named isotropic covering finite homocentric spheres see isotropic covering describes relation entries state vector transverse isotropic covering used capture relation norm state vector norm perturbations shown later theorem homocentric spheres omitted details isotropic covering found appendix figure shows example finite state system output map first free system dynamics uncertainty brought perturbation lemma consider system assumptions hold exist scalar symmetric matrix generated lower bounded inf rnp max dap dap next construct lmis bridge lemma partition theorem regional lower bound consider scalar regions hypothesis lemma hold exist scalars following lmis hold according remark applying isotropic covering enough define represent corollary regional lower bound consider scalar exist scalars following lmis hold defined inter event time system regionally bounded corollary regional upper bound let large enough scalar consider scalar exist scalars following lmis hold defined inter event time system regionally bounded remark choice follow remark apply line search approach increasing results global upper bound inter event time system obvious set regional maei defined lemma inter event times system 
regionally bounded regions regional lower bound remark theorem discuss situations since regions holds regions holds one easily validate feasibility lmi diagonal infinity making lmi infeasible however according property petc regional lower bound exists equal following similar ideas theorem present next lower upper bounds starting state partition consider following event condition inf remark since consider perturbations computing lower upper bound region transition relation subsection discuss construction transition relation reachable state set denote initial state set samplings without update reachable state set denoted according relation obtained obvious computed directly perturbation uncertain state region may convex therefore aim find sets compute take following steps partition dynamics according computed minkowski addition sets computed compute one compute polytope approximates computed optimization problem compute computation follows rnx eap eap eap max figure partition labeling region last inequation holds according system thus reachable set starting computed compute transitions one check intersection approximation reachable state set state regions specifically one check following feasibility problem state region holds figure computed result regional lower bound case main result summarize main result paper following theorem theorem metric system simulates max hausdorff distance umerical example section consider system employed plant given controller given plant chosen since easy show feasibility presented theory plots partition presented figure set convergence rate gain sampling time event condition checking lmi presented see exists feasible solution thus stability performance guaranteed result computed lower bound theorem shown figure figure shows version computed upper bound corollary shown figure resulting abstraction precision simulation results system evolution event intervals perturbations shown figure upper bound triggered events simulation note increasing number subdivisions lead less figure result regional lower bound figure system evolution event intervals sin state evaluation perturbance event intervals bounds figure computed result regional upper bound conserved lower upper bounds inter event time conservativeness also reduced decreasing reachable state regions starting region shown figure example reachable state region initial region shown figure also present simulation lower bound shown figure evolution system shown figure shows inter event intervals within computed bounds reachable state regions starting region shown figure onclusion paper present construction power quotient system traffic model petc implementations constructed models used estimate next event time state set next event occurs models allow design scheduling improve listening time wireless communications medium access time increase energy consumption bandwidth occupation efficiency figure reachable regions starting state region labeling figure paper consider output feedback system dynamic controller however state partition still based states system controller system state may always obtainable therefore estimate system state etc implementation output measurements important extension make work practical periodic asynchronous control paetc presented extension petc considering quantization one either treat quantization error part perturbations analyze part separately increase abstraction precision since dynamics quantization error dependent states also interesting future investigation another interesting 
extension reconstruction traffic models sensor node capture local timing behavior decentralized petc figure flow pipe indicating initial state set red reachable state set blue reachable regions cyan figure system evolution event intervals state evaluation event intervals computed bounds figure computed result regional lower bound implementation either global information even local information ppendix isotropic covering consider first present case let interval splitting interval one construct cone pointing origin sin sin sin sin cos cos remark shows behaviours therefore sufficient consider half figure reachable regions starting conic region labeling figure derive case define projection point coordinate polyhedral cone defined constructed matrix relation given row column entry matrix satisfy proof lemma decouple event triggering mechanism first last inequality comes lemma uncertainty part proof theorem first consider regions hypothesis theorem hold applying schur complement one applying holds also since scalars following inequality hypothesis theorem exists together jensen inequality inequality assumption bounded last inequality comes form definition inserting results together applying schur complement provides regional lower bound diagonal infinity thus lmi infeasible according eap eap condition indicates regional lower bound regions finishes proof proof corollary result easily obtained eap eap theorem considering jensen equality proof corollary result easily obtained analogously theorem considering hypothesis corollary hold according according definition holds dap together event condition provides regional upper bound bounded proof theorem result follows lemma construction described section eferences hypothesis theorem holds applying schur complement following inequality holds indicates therefore generated lower bounded generated ends proof alongkrit chutinan bruce krogh computing polyhedral approximations flow pipes dynamic systems decision control proceedings ieee conference volume pages ieee alongkrit chutinan bruce krogh computational techniques hybrid system verification ieee transactions automatic control marieke cloosterman laurentiu hetel nathan van wouw wpmh heemels jamal daafouz henk nijmeijer controller synthesis networked control systems automatica marieke cloosterman nathan van wouw wpmh heemels hendrik nijmeijer stability networked control systems uncertain delays ieee transactions automatic control mcf donkers wpmh heemels control improved decentralized eventtriggering automatic control ieee transactions mcf donkers wpmh heemels nathan van wouw laurentiu hetel stability analysis networked control systems using switched linear systems approach ieee transactions automatic control ewald combinatorial convexity algebraic geometry volume springer science business media christophe fiter laurentiu hetel wilfrid perruquetti richard state dependent sampling linear state feedback automatica christophe fiter laurentiu hetel wilfrid perruquetti richard robust stability framework lti systems sampling automatica anqi manuel mazo periodic asynchronous control corr rob gielen sorin olaru mircea lazar wpmh heemels nathan van wouw niculescu polytopic inclusions modeling framework systems delays automatica keqin jie chen vladimir kharitonov stability systems springer science business media wpmh heemels mcf donkers andrew teel periodic eventtriggered control linear systems automatic control ieee transactions laurentiu hetel jamal daafouz claude iung stabilization arbitrary switched linear 
systems unknown delays ieee transactions automatic control arman sharifi kolarijani dieky adzkiya manuel mazo symbolic abstractions scheduling control systems decision control cdc ieee annual conference pages ieee arman sharifi kolarijani manuel mazo formal traffic characterization lti control systems ieee transactions control network systems manuel mazo ming cao asynchronous decentralized eventtriggered control automatica manuel mazo anqi decentralized controller implementations control signal processing page manuel mazo paulo tabuada decentralized control wireless networks automatic control ieee transactions skaf stephen boyd analysis synthesis controllers timing jitter ieee transactions automatic control young soo suh stability stabilization nonuniform sampling systems automatica paulo tabuada scheduling stabilizing control tasks automatic control ieee transactions paulo tabuada verification control hybrid systems symbolic approach springer science business media charles van loan sensitivity matrix exponential siam journal numerical analysis xiaofeng wang michael lemmon distributed networked control systems automatic control ieee transactions xiaofeng wang michael lemmon event design eventtriggered feedback systems automatica
| 3 |
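The PETC traffic-model row above derives regional lower and upper bounds on inter-event times by partitioning the state space into cones and checking LMIs per region. The sketch below illustrates the same idea numerically rather than via LMIs: for a hypothetical second-order plant with a relative quadratic triggering rule checked every sampling period, it estimates the inter-event time as a function of the state direction inside one cone. The plant matrices, gain, threshold, forced bound and cone are placeholders, not the paper's example system.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical plant, feedback gain and PETC parameters (illustration only;
# these are not the numerical values used in the paper).
A = np.array([[0.0, 1.0], [-2.0, 3.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[1.0, -4.0]])
h = 0.05       # sampling period at which the event condition is checked
sigma = 0.1    # relative threshold: event when |x(kh) - xhat| > sigma*|x(kh)|
kbar = 20      # forced upper bound on the inter-event time (in samples)

Ad = expm(A * h)                                  # exact discretisation
Bd = np.linalg.solve(A, Ad - np.eye(2)) @ B       # valid because A is invertible

def inter_event_time(x0):
    """Sampling periods until the periodic event condition first fires."""
    x, xhat = x0.copy(), x0.copy()                # xhat = last transmitted state
    for k in range(1, kbar + 1):
        x = Ad @ x + Bd @ (K @ xhat)              # plant evolves with held input
        if np.linalg.norm(x - xhat) > sigma * np.linalg.norm(x):
            return k
    return kbar                                   # event forced at the upper bound

# For homogeneous linear dynamics with a relative rule, the inter-event time
# depends only on the direction of the state, which is why conic regions are
# natural: sample directions inside one 30-degree cone and record the bounds.
angles = np.linspace(0.0, np.pi / 6, 60)
taus = [inter_event_time(np.array([np.cos(a), np.sin(a)])) for a in angles]
print("sampled regional inter-event times: min =", min(taus), " max =", max(taus))
```

Replacing the sampled check with the paper's LMI feasibility tests turns these sampled regional bounds into guaranteed ones, and collecting the bounds plus region-to-region reachability gives the power quotient system used for scheduling.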
derandomization maximization feb hiroki oshima department mathematical informatics graduate school information science technology university tokyo abstract submodularity one important property combinatorial optimization generalization submodularity maximization function approximation algorithm studied monotone functions iwata tanigawa yoshida gave algorithm paper give deterministic algorithm derandomizing algorithm algorithm runs polynomial time introduction set function submodular submodularity one important properties combinatorial optimization rank functions matroids cut capacity functions networks submodular submodular functions seen discrete version convex functions submodular function minimization showed first algorithm combinatorial strongly polynomial algorithms shown hand submodular function maximization use approximation algorithms let input function maximization maximizer output algorithm approximation ratio algorithm defined deterministic algorithms randomized algorithms randomized version double greedy algorithms achieves showed requires exponential time value oracle queries implies double greedy algorithm one best algorithms terms approximation ratio showed derandomized version randomized double greedy algorithm algorithm achieves extension submodularity function defined definition let function called note submodular function called bisubmodular function maximization functions also approximation algorithm studied input problem nonnegative function note function function output problem let input function maximizer output algorithm define approximation ratio algorithm deterministic algorithms randomized algorithms bisubmodular functions showed algorithm submodular functions extended analyzed extension functions showed randomized algorithm max deterministic algorithm algorithm shown particular monotone functions gave algorithm also showed algorithm requires exponential time value oracle queries paper give deterministic approximation algorithm monok runs tone maximization satisfies algorithm derandomized version algorithm monotone functions also note derandomization scheme used double greedy algorithm preliminary define partial order follows def also define monotone function satisfies property written another form theorem theorem function orthant submodular pairwise monotone note orthant submodularity satisfy pairwise monotonicity satisfy analyze functions often convenient identify vector associated existing randomized algorithms algorithm framework section see framework maximize functions algorithm used specific distributions algorithm algorithm input nonnegative function output vector denote elements set probability distribution let chosen randomly end return algorithm used monotone functions however paper use monotone functions define variables see algorithm let optimal solution write iteration let variables follows algorithm satisfies following lemma lemma lemma let conditioning suppose holds randomized algorithm monotone functions randomized algorithm monotone functions algorithm shown algorithm algorithm input monotone function output vector denote elements else end lets chosen randomly end return algorithm runs polynomial time approximation ratio algorithm satisfies theorem theorem theorem let maximizer monotone ksubmodular function let output algorithm proof theorem see inequality lemma proved get monotonicity orthant submodularity hence inequality used inequality lemma satisfied inequality valid deterministic algorithm section give deterministic algorithm maximizing 
monotone functions algorithm algorithm algorithm derandomized version algorithm note derandomization scheme algorithm algorithm deterministic algorithm input monotone function output vector denote elements supp find extreme point solution following linear formulation supp supp construct new distribution supp end return arg algorithm construct distribution satisfies algorithm outputs best solution supp see right hand side algorithm expected value left hand side also left hand side expected value right hand side constructed distribution algorithm achieves approximation ratio algorithm theorem let maximizer monotone nonnegative function let output algorithm proof consider iteration get consider define variables follows monotonicity orthant submodularity get satisfies hence obtain summation get note supp supp max algorithm performs polynomial number value oracle queries theorem algorithm returns solution value oracle queries proof algorithm uses value oracle caluculate iteration number equals number consider iteration definition extreme point solution note get solution setting distribution algorithm supp also see feasible region bounded extreme point solution exists let rkm equalities inequalities tight extreme point solution inequalities inequalities least inequalities tight hence number also see number value oracle queries algorithm search extreme point solution solving objective function use algorithm number queries also number operations simplex method proved method however practical algorithm needs extreme point solution get basic solution enough use first phase simplex method find extreme point solution conclusion showed derandomized algorithm monotone maximizak tion algorithm one open problems faster method finding extreme point solution linear formulation submodular functions showed greedy methods effective formulation form fractional knapsack problem formulation similar seen form relaxation multidimensional knapsack problem however faster methods given general solutions number constraints formulation depends number iterations therefore difficult find extreme point faster constructing deterministic algorithm nonmonotone functions also important open problem nonmonotone functions pairwise monotonicity instead monotonicity situation negative however find try use derandomizing method number constraints linear formulation size exponential algorithm finish references buchbinder feldman deterministic algorithms submodular maximization problems proceedings annual symposium discrete algorithms pages siam buchbinder feldman seffi schwartz tight linear time approximation unconstrained submodular maximization siam journal computing feige mirrokni vondrak maximizing submodular functions siam journal computing schrijver ellipsoid method consequences combinatorial optimization combinatorica iwata fleischer fujishige combinatorial strongly polynomial algorithm minimizing submodular functions journal acm iwata tanigawa yoshida bisubmodular function maximization extensions technical report technical report metr university tokyo iwata tanigawa yoshida improved approximation algorithms ksubmodular function maximization proceedings annual symposium discrete algorithms pages siam schrijver combinatorial algorithm minimizing submodular functions strongly polynomial time journal combinatorial theory series ward maximizing functions beyond acm trans algorithms august
| 8 |
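The derandomized k-submodular algorithm in the row above hinges on one structural fact: at each element it only needs an extreme-point solution of a small linear program, and such a basic solution has at most as many nonzero entries as there are constraints, which keeps the number of value-oracle queries polynomial. The toy linear program below, solved with SciPy's dual-simplex HiGHS backend, is meant only to illustrate that support bound; the coefficients are made up and do not come from any k-submodular instance.

```python
import numpy as np
from scipy.optimize import linprog

# Stand-in for the per-element LP of the derandomized algorithm: pick a
# probability distribution p over k+1 = 6 colours subject to two coupling
# inequalities, maximising a linear surrogate of the expected gain.
gains = np.array([0.9, 0.1, 0.5, 0.3, 0.7, 0.2])
A_ub = -np.array([[0.2, 0.8, 0.1, 0.6, 0.3, 0.4],    # a^T p >= 0.3, rewritten
                  [0.7, 0.2, 0.5, 0.1, 0.4, 0.6]])   # as -a^T p <= -0.3
b_ub = -np.array([0.3, 0.3])
A_eq = np.ones((1, 6))                               # p must sum to one
b_eq = np.array([1.0])

res = linprog(-gains, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 6, method="highs-ds")
p = res.x
print("extreme-point distribution:", np.round(p, 3))
print("nonzero entries:", int(np.sum(p > 1e-9)), "(at most #constraints = 3)")
```

The same observation is what lets the derandomized algorithm carry only a polynomial number of (solution, probability) pairs from one element to the next instead of a full distribution over all k^n assignments.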
graph distances controllability networks aug yasin waseem abbas magnus egerstedt technical note study controllability diffusively coupled networks graph theoretic perspective consider networks external control inputs injected agents namely leaders main result relates controllability systems graph distances agents specifically present graph topological lower bound rank controllability matrix lower bound tight applicable systems arbitrary network topologies coupling weights number leaders algorithm computing lower bound also provided furthermore prominent application present proposed bound utilized select minimal set leaders achieving controllability even coupling weights unknown ntroduction networks diffusively coupled agents appear numerous systems sensor networks distributed robotics power grids social networks biological systems central question regarding networks whether desired global behavior induced directly manipulating small subset agents referred leaders network question motivated numerous studies controllability networks particular large interest relating network controllability structure interaction graph technical note relate controllability diffusively coupled agents single integrator dynamics distances nodes interaction graph various graph theoretic tools recently utilized provide topology based characterizations network controllability graph theoretic constructs widely employed purpose include equitable partitions maximum matching centrality based measures dominating sets recently graph distances used acquire insight network structure locations leaders influence network controllability graph distances rely purely shortest paths nodes provide computationally tractable perceptible characterization graph structure thus relationships reveal innate connections network topology network controllability main contribution technical note distancebased tight lower bound dimension controllable yasin laboratory information decision systems massachusetts institute technology yasiny waseem abbas institute software integrated systems vanderbilt university magnus egerstedt school electrical computer engineering georgia institute technology magnus paper presented part ieee conference decision control maui december see subspace theroem bound generic sense applicable systems arbitrary network topologies coupling weights number leaders based distances leaders followers first define vectors define certain ordering rule derive lower bound maximum length sequences vectors satisfy rule section iii algorithm compute proposed bound also presented section prominent attribute proposed bound unlike dimension controllable subspace depend coupling weights thus bound useful many applications particularly information network incomplete example section present bound used leader selection achieving controllability given network even coupling weights unknown finally conclusions provided section reliminaries graph theory graph consists node set edge set undirected graph edge represented unordered pair nodes let denote neighborhood path pair nodes sequence nodes pair consecutive nodes linked edge distance nodes dist equal number edges belong shortest path nodes graph connected exists path every pair nodes graph weighted corresponding weighting function adjacency matrix weighted graph defined aij otherwise adjacency matrix corresponding degree matrix defined aik otherwise graph laplacian defined difference degree matrix adjacency matrix networks network diffusively coupled agents represented graph nodes correspond 
agents weighted edges exist coupled agents network let dynamics agent denotes state represents strength coupling setting objective drive overall system injecting external control inputs nodes called leaders set leaders represented without loss generality leaders labeled network global state vector obtained stacking states nodes without loss generality let let control input injected leaders overall dynamics system expressed matrix following entries bij otherwise system controllable subspace consists states reached finite time via appropriate choice controllable subspace range space controllability matrix iii eader ollower istances ontrollability section present connection controllability networks distances nodes leaders interaction graph specifically utilize distances define tight lower bound dimension controllable subspace rank controllability matrix first provide definitions prior analysis definition vector node network leaders vector defined dist denotes entry denotes leader analysis utilize specific sequence define increasing sequence vectors vector sequence let ith vector sequence let denote entry definition increasing pmi sequence sequence vectors vector pmi sequence every exists condition simply means ith vector sequence entry corresponding entries subsequent vectors sequence greater example consider network shown fig network one build pmi sequence five vectors entry satisfies encircled instance consider note first entries subsequent vectors greater similarly second vector note example another pmi sequence five vectors could build fig network two leaders shown gray node vector given next connected set leaders let denote set pmi sequences corresponding vectors furthermore let denote length longest sequence max following analysis show weighting function rank resulting controllability matrix lower bounded lemma let connected graph weighting function dist dist dist distance proof using expanded denotes sum matrices expressed multiplication copies copies instance note matrix represented multiplication nonnegative entries since entries moreover diagonal matrix connected graph positive entries main diagonal alter signs entries multiplied matrix hence using since adjacency matrix weighted graph positive edge weights equal positive scalar times number walks length node node hence dist adist connected graph consequently together imply theorem let connected graph let set leaders weighting function controllability matrix satisfies rank defined proof connected set leaders let corresponding vectors let pmi sequence maximum length consider vectors form index per definition pmi sequence definition denotes column input matrix corresponds vector node result lemma ith entry vector also node entry vector equal zero using along definition pmi sequences conclude matrix full column rank since column contains leftmost entry rows note every since distance two nodes always smaller equal hence column matrix also column consequently rank since holds weighting function proposed lower bound closely related notion strong structural controllability network said structurally controllable exists resulting controllability matrix full rank furthermore network said strongly structurally controllable resulting controllability matrix full rank regard dimension controllable subspace sense strong structural controllability defined minimum ranks controllability matrices obtained using arbitrary weighting functions accordingly essentially lower bound dimension controllable subspace sense strong structural controllability also 
emphasize lower bound theorem tight exist rank cycle graph two adjacent nodes leaders path graph leaf node leader examples satisfy equality two examples rank follows fact cases lead total number nodes graph remainder section present connections proposed lower bound closely related measures namely maximum distance leaders number unique vectors networks single leader pmi sequence vectors consists one dimensional vectors monotonically increasing entries hence networks equal one plus maximum distance leader indeed proposed lower bound controllability matrix networks extension case multiple leaders later presented taking maximum value among leaders max dist relationship seen following observation one considers pmi sequences satisfy fixed entry longest pmi sequence length hence one strained set conclude following inequality holds leaderfollower network rank light two quantities equal singleleader networks proposed lower bound richer capturing controllability networks multiple leaders fact since maximum distance two nodes graph diameter graph definition always less equal one plus diameter graph even every node leader general difference depends graph topology leader assignment instance example fig yields numerical comparisons bounds graphs two leaders illustrated fig figure point plot corresponds average value randomly generated cases case randomly generated graph pair randomly assigned leaders results indicate provides significantly better utilization graph distances controllability analysis networks even network pair leaders proposed lower bound also closely related number unique vectors connected graphs graphs leaders pair assigned leaders rank would like conclude section remark regarding application presented results directed networks lower bound lower bound fig comparison lower bounds two randomly selected leaders random graphs nodes two nodes adjacent probability graph nodes new node connected existing nodes preferential attachment strategy leader set let number unique vectors note due vector pmi sequence entry strictly smaller corresponding entries following vectors sequence hence pmi sequence contain two identical vectors consequently proposed lower bound always less equal number unique vectors relationship facilitates deeper understanding potential applications proposed lower bound instance necessary condition hence possible conclude network completely controllable per proposed lower bound follower distinctive vector relationships may naturally yield question whether capture rank better following result show universal relationship rank proposition networks number unique vectors universal bound rank proof prove statement providing examples rank rank path graph uniform edge weights controllable single node number nodes power two note path graph single leader leader leaf node hence node leader path graph uniform edge weights rank cycle graph uniform edge weights controllable pair nodes number nodes prime number consider cycle graph nodes odd composite number graph two leaders clockwise counterclockwise paths leaders different lengths hence pair nodes equal distances one leaders different distances leader furthermore light exists pair nodes render system uncontrollable assigned remark directed networks formulation applicable directed networks interaction graph denotes influenced network powers adjacency matrix property dist adist dist length shortest directed path hence using corresponding vectors results lemma theorem extended strongly connected networks exists directed path every node every 
node lgorithm omputing lower bound section present algorithm compute proposed lower bound note main contribution work lower bound algorithm section provided facilitate practical use result let set vectors given network given vectors present iterative way generating longest pmi sequences let set vectors assigned pth element sequence according definitions vector assigned pth element sequence index satisfying chosen resulting obtained order obtain longer sequences iteration must continued however general many possible sequences obtained way feasible find maximum length pmi sequences searching among possibilities instead present necessary condition pmi sequence maximum possible length necessary condition significantly lowers number sequences consider lemma let pmi sequence vectors maximum possible length pth entry satisfies min proof sake contradiction assume true pmi sequence maximum length exists vector construction pmi sequence added sequence however added right keeping parts since selected satisfy resulting sequence hence possible obtain longer pmi sequence leads contradiction maximum possible length light far sequence length concerned important decision step building pmi sequence satisfying choice based observation propose algorithm computing lower bound algorithm initialize min element computed iteration loop left child node obtained deleting vectors whose first entries equal minimum value first entries among vectors obtained per similarly right child node obtained following procedure accordingly elements first three levels end end end end return proposition given vectors connected network algorithm returns proof combining line algorithm step algorithm builds possible correspond pmi sequences vectors satisfying hence every longer pmi sequence satisfies necessary condition lemma consequently algorithm always returns length longest pmi sequence note light remark algorithm also used compute proposed lower bound directed network strongly connected interaction graph sample run algorithm consider network fig example algorithm terminates fifth iteration loop resulting flow algorithm represented via tree diagram illustrated fig fig illustration flow algorithm network fig fig node given level corresponds algorithm variable multiset element resulting specific choices subject corresponding pmi sequences satisfying main loop iterates long exists longer pmi sequence satisfies note number leaders different corresponding particular choice algorithm computes generating elements istance eader election ontrollability one main attributes proposed lower bound independence edge weights contrast rank depends values edge weights unless constraint weights equal consequently computing typically requires significantly less information overall system minimality required information makes lower bound attractive many applications leader selection controllability leader selection problems typically require finding leader set optimizes system objective robustness mixing time controllability instance consider problem finding minimum number leaders render given network controllable resulting dynamics edge weights known known identical rank controllability matrix computed set leaders hence possible yet scalable way find minimal set leaders controllability execute exhaustive search note aside complexity issues rank computation applicable edge weights unknown arbitrary cases leader selection problem needs solved leveraging structural properties interaction graph one approach achieving controllability arbitrary coupling 
weights choose minimal achieves structural controllability structural controllability implies selected leaders provide complete controllability weighting functions hence approach may fall short applications especially constraints admissible edge weights instance edge weights network equal design known complete graph controllable single leader whereas single leader enough achieve structural controllability alternatively notion strong structural controllability employed leader selection regard proposed bound used ensure dimension controllable subspace smaller desired value formulating leader selection problem minimize subject light theorem element feasible set problem renders rank weighting function note problem always feasible given always exists indeed feasibility shown considering trivial case case vector entry contains distance corresponding node satisfies sequence corresponding vectors consequently sequence vectors pmi sequence note feasibility would guaranteed problem posed using instead since always upper bounded one plus graph diameter example leaders assigned solving illustrated fig example leaders selected exhaustive search first looking solution incrementing number leaders solution exists algorithm used compute candidate fig graph nodes minimal selection leaders shown gray network completely controllable via weighting function onclusion technical note presented distances leaders followers interaction graph contain fundamental information controllability networks particular used vectors derive tight lower bound dimension controllable subspace proposed bound applicable networks arbitrary interaction graphs weighting functions also provided connections proposed lower bound pair closely related measures namely maximum distance leaders number distinct vectors results presented undirected networks also showed extended directed networks furthermore presented algorithm computing lower bound proposed bound may find applications various networked control problems especially edge weights unknown prominent application presented utilized find minimal set leaders ensure controllability network given interaction graph weighting function eferences speranzon fischione johansson distributed collaborative estimation wireless sensor networks ieee conference decision control jadbabaie lin morse coordination groups mobile autonomous agents using nearest neighbor rules ieee transactions automatic control dorfler bullo synchronization transient stability power networks nonuniform kuramoto oscillators siam journal control optimization ghaderi srikant opinion dynamics social networks local interaction game stubborn agents american control conference controllability structural brain networks nature communications rahmani egerstedt controllability systems graph theoretic perspective siam control egerstedt martini cao camlibel bicchi interacting networks structure relate controllability consensus networks ieee control systems magazine liu slotine controllability complex networks nature lin structural controllability ieee transactions automatic control mayeda yamada strong structural controllability siam journal control optimization chapman mesbahi strong structural controllability networked systems constrained matching approach american control conference jarczyk svaricek alt strong structural controllability linear systems revisited ieee conference decision control slotine control centrality hierarchical structure complex networks plos one pan structural controllability controlling centrality temporal 
networks plos one nacher akutsu analysis critical redundant nodes controlling directed undirected complex networks using dominating sets journal complex networks zhang camlibel cao controllability diffusivelycoupled systems general distance regular coupling topologies ieee conference decision control zhang cao camlibel upper lower bounds controllable subspaces networks diffusively coupled agents ieee transactions automatic control abbas egerstedt tight lower bound controllability networks multiple leaders ieee conference decision control parlangeli notarstefano reachability observability path cycle graphs ieee transactions automatic control lin fardad jovanovic algorithms leader selection stochastically forced consensus networks ieee transactions automatic control clark alomair bushnell poovendran leader selection systems smooth convergence via fast mixing ieee conference decision control egerstedt leader selection network assembly controllability networks american control conference aguilar gharesifard graph controllability classes laplacian dynamics ieee transactions automatic control
| 3 |
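To make the distance-to-leader construction in the row above concrete, the sketch below computes the vectors of distances to the leaders for a small path graph, finds the longest PMI sequence by brute force (adequate only for a handful of distinct vectors; the paper's tree-search algorithm scales better), and compares the resulting lower bound with the rank of the controllability matrix under randomly drawn positive edge weights. The graph, leader set and weight range are illustrative choices, not the paper's benchmarks.

```python
import itertools
import numpy as np
import networkx as nx

G = nx.path_graph(5)                  # illustrative graph: 0 - 1 - 2 - 3 - 4
leaders = [0, 4]

# distance-to-leader vectors D(v) = (dist(v, l) for each leader l)
dist = {l: nx.single_source_shortest_path_length(G, l) for l in leaders}
vectors = sorted({tuple(dist[l][v] for l in leaders) for v in G.nodes})

def is_pmi(seq):
    # each vector needs one coordinate strictly smaller than that coordinate
    # of every vector that comes later in the sequence
    return all(any(all(seq[i][c] < seq[j][c] for j in range(i + 1, len(seq)))
                   for c in range(len(leaders)))
               for i in range(len(seq) - 1))

delta = max(len(p) for r in range(1, len(vectors) + 1)
            for p in itertools.permutations(vectors, r) if is_pmi(p))

# rank of the controllability matrix for Laplacian dynamics with random weights
rng = np.random.default_rng(1)
W = np.triu(nx.to_numpy_array(G) * rng.uniform(0.5, 2.0, (5, 5)), 1)
W = W + W.T
L = np.diag(W.sum(axis=1)) - W
B = np.zeros((5, len(leaders)))
for j, l in enumerate(leaders):
    B[l, j] = 1.0
ctrb = np.hstack([np.linalg.matrix_power(-L, k) @ B for k in range(5)])
print("distance-based lower bound:", delta,
      " rank of controllability matrix:", np.linalg.matrix_rank(ctrb))
```

Because the bound uses only shortest-path distances, the same computation goes through when the edge weights are unknown, which is exactly the situation exploited in the leader-selection formulation.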
continuous control clot norm minimization niharika challapalli masaaki nagahara mathukumalli vidyasagar nov department electrical engineering university texas dallas richardson usa institute environmental science technology university kitakyushu hibikino kitakyushu fukuoka japan nagahara department systems engineering university texas dallas richardson usa department electrical engineering indian institute technology hyderabad kandi telangana india abstract paper consider control via minimization clot combined two norm maximum control sparsest control among feasible controls bounded specified value transfer state given initial state origin within fixed time duration general maximum control control taking values many real applications discontinuity control desirable obtain continuous still relatively sparse control propose use clot norm convex combination norms show numerical simulation clot control continuous much sparser longer time duration control takes conventional elastic net control convex combination squared norms keywords optimal control convex optimization sparsity maximum control control introduction sparsity recently emerged important topic processing machine learning statistics etc specified equation underdetermined infinitely many solutions rank finding sparsest solution solution fewest number nonzero elements formulated min subject however problem hard shown natarajan therefore approaches proposed purpose area research known sparse one popular lasso tibshirani also referred basis pursuit chen replaced thus problem becomes min subject advantage lasso convex optimization problem therefore large problems solved efficiently example using package cvx grant boyd moreover mild technical assumptions solution nonzero components osborne however exact location nonzero components sensitive vector overcome deficiency another approach known elastic net proposed zou hastie norm lasso replaced weighted sum squared norms leads optimization problem min subject positive weights shown zou hastie theorem formulation gives grouping effect two columns matrix highly correlated corresponding components solution nearly equal values ensures solution overly sensitive small changes name elastic net meant suggest stretchable fishing net retains big fish past decade half another research area known compressed sensing witnessed great deal interest compressed sensing matrix specified rather user gets choose integer known number measurements well matrix objective choose matrix well corresponding decooder map unknown vector sparse measurement vector equals sufficiently sparse vectors generally measurement vector measurement noise vector nearly sparse exactly sparse recovered vector sufficiently close true unknown vector referred robust sparse minimizing among popular decoders see books elad eldar kutyniok foucart rauhut theory applications due similarity lasso formulation tibshirani approach compressed sensing also referred lasso recently situation lasso achieves robust sparse recovery compressed sensing achieve grouping effect sparse regression flip side achieves grouping effect known whether achieves robust sparse recovery recent paper ahsen sheds light problem shown ahsen achieve robust sparse recovery achieve grouping effect sparse regression well robust sparse recovery compressed sensing ahsen proposed clot combined two formulation min subject difference clot norm term squared norm clot pure norm slight change leads grouping effect robust sparse recovery shown ahsen parallel advances sparse regression 
recovery unknown sparse vectors sparsity techniques also applied control optimization applied networked control nagahara quantization errors data rate reduced time sparse representation control packets examples control applications include optimal controller placement casas clason kunisch fardad design feedback gains lin polyak state estimation charles name recently novel control called maximum handsoff control proposed nagahara systems maximum control control control minimum support length among feasible controls bounded fixed value transfer state given initial state origin within fixed time duration control effective reduction electricity fuel consumption vehicle shuts internal combustion engine control vehicle stopped speed lower preset threshold see chan example railway vehicles also utilize control often called coasting control cut electricity consumption see liu golovitcher details nagahara authors proved theoretical relation maximum control optimal control assumption normality also important properties maximum control proved ikeda nagahara convexity value function chatterjee existence discreteness general maximum control control taking values many real applications discontinuity property desirable obtain continuous still sparse control nagahara proposed use combined squared minimization like mentioned let call control control case vector optimization control often shows much less sparse larger norm maximum control proposed use clot norm convex combination norms call minimum control clot control show numerical simulation clot control continuous much sparser longer time duration control takes conventional control remainder article organized follows section formulate control problem considered paper section give discretization method numerically compute optimal control results numerical computations variety problems presented section examples illustrate advantages clot control compared maximum control control present conclusions section notation let signal time interval define norms respectively kukp sup denote set signals kukp define norm signal interval kernel function defined scalar norm represented supp supp support signal lebesgue measure problem formulation let consider linear system described assume initial state fixed given control objective drive state origin time limit control satisfy umax fixed umax system controllable final time larger optimal time minimal time exist control drives origin see hermes lasalle exists least one satisfies equations let call control feasible control feasible control satisfies eat first divide time interval subintervals discretization step sampling period assume state control constant subinterval discretization grid system described eah eat bdt define linear operator define set feasible controls define control vector note final state described problem maximum control described min subject problem hard solve since cost function discontinuous problem nagahara shown optimal control equivalent following optimal control min subject set approximately represented kud next approximate norm plant normal controllable matrix nonsingular let call optimal control lasso control plant normal lasso control general control piecewise constant taking values discontinuity lasso solution desirable real applications smoothed solution also proposed nagahara min subject design parameter smoothness let call control elastic net control nagahara proved solution continuous function control continuous shown numerical experiments control sometimes sparse analogy vectors 
achieve robust sparse recovery borrowing idea clot ahsen define clot optimal control problem min subject kud way obtain approximation norm kud finally optimal control problems approximated min subject min hkud subject min hkud subject optimization problems convex efficiently solved numerical software packages cvx matlab see grant boyd details call optimal control clot control numerical examples discretization since problems infinite dimensional approximate finite dimensional problems adopt time discretization section present numerical results applying clot norm minimization approach seven different plants compare results applying lasso plant poles figs norm state lasso clot table details various plants studied details various plants studied save space table plant zeros shown zero zero zeros remaining plants zeros plant numerator equals one plant zeros poles specified plant numerator denominator polynomials computed using matlab command poly transfer function computed state space realization computed ssdata maximum control amplitude taken control must satisfy save space use notation denote vector whose elements equal one note one case initial condition equals order plant fig state trajectory plant initial state control lasso clot fig control trajectory plant initial state norm state lasso clot note problems plants feasible meaning smaller minimum time needed reach origin took optimization problems solved discretizing interval well samples examine whether sampling time affects sparsity density computed optimal control reader convenience details various plants given table figure numbers show corresponding computational results found conventions adopted reduce clutter table described next plants form plots optimal state control trajectories plots euclidean norm state vector trajectory control signal examples shown next several plots begin plant integrator figures show state control trajectories system analyzed using smaller value one would expect resulting control signals would sparse smaller indeed case results shown figures based observation control signal fig state trajectory plant initial state becomes sparse plants analyzed figures display state trajectory control trajectories plant damped harmonic oscillator initial state figures norm state control lasso clot lasso clot fig control trajectory plant initial state norm state fig state trajectory plant initial state lasso clot control lasso clot fig state trajectory plant initial state control lasso clot fig control trajectory plant initial state norm state lasso clot fig control trajectory plant initial state show state control trajectories initial state seen intial state control signal changes sign frequently fig state trajectory plant initial state compare sparsity densities three control signals compute fraction time signal nonzero connection noted lasso control signal solution linear programming problem consequently components exactly equal zero many time instants contrast clot control signals solutions convex optimization problems consequently many time instants control signal small without smaller machine zero therefore compute sparsity density applied threshold treated component control signal zero control norm state lasso clot lasso clot control lasso clot fig state trajectory plant initial state norm state lasso clot norm state lasso clot lasso clot fig control trajectory plant initial state control fig state trajectory plant initial state fig control trajectory plant initial state fig control trajectory plant initial state 
magnitude smaller threshold convention sparsity densities various control signals shown table table seen control signal generated using clot norm minimization significantly lower sparsity density compared much higher fig state trajectory plant initial state lasso also expected sparsity density lasso change whereas sparsity densities clot decrease decreased reason examples present results control lasso clot fig control trajectory plant initial state norm state lasso clot clot table sparsity densities optimal controllers produced various methods three methods lasso clot advantage using sparsity density instead sparsity count absolute number nonzero entries sample time reduced sparsity count would increase whereas would expect sparsity density remain explained applied threshold computing sparsity densities various control signals table shows sparsity densities nine examples studied table order table seen clot control signal always sparse control signal indeed cases sparsity density clot control comparable lasso control lasso fig state trajectory plant initial state also increased number samples optimal values changed third significant figure almost examples three methods therefore figures table essentially equal lebesgue measure support set divided conclusions control lasso clot fig control trajectory plant initial state lasso clot table sparsity indices control signals various algorithms plant integrator initial state comparison sparsity densities subsection analyze sparsity densities fraction samples nonzero using article propose clot control minimizes weighted sum norms among feasible controls obtain continuous control signal sparser control introduced nagahara shown discretization method clot optimal control problem solved via convex optimization numerical experiments shown advantage clot control compared lasso controls future work includes analysis sparsity continuity clot control signal acknowledgements research supported part jsps kakenhi grant numbers research supported national science foundation award cancer prevention research institute texas cprit award grant department science technology government india references ahsen challapalli vidyasagar two new approaches compressed sensing exhibiting robust sparse recovery gourping effect journal machine learning research submitted preprint arxiv casas clason kunisch approximation elliptic control problems measure spaces sparse solutions siam control chan state art electric hybrid fuel cell vehicles proc ieee charles asif romberg rozell sparsity penalties dynamical system estimation annual conference information sciences systems ciss chatterjee nagahara quevedo rao characterization maximum control systems control letters chen donoho saunders atomic decomposition basis pursuit siam sci clason kunisch measure space approach optimal source placement comput optim appl elad sparse redundant representations springer eldar kutyniok compressed sensing theory applications cambridge university press fardad lin sparsitypromoting optimal control class distributed systems american control conference acc foucart rauhut mathematical introduction compressive sensing grant boyd cvx matlab software disciplined convex programming version http hermes lasalle functional analysis time optimal control academic press ikeda nagahara value function maximum control linear systems automatica lin fardad jovanovic design optimal sparse feedback gains via alternating direction method multipliers ieee trans autom control liu golovitcher operation rail vehicles 
transportation research part policy practice nagahara quevedo maximum control paradigm control effort minimization ieee trans autom control nagahara quevedo sparse packetized predictive control networked control erasure channels ieee trans autom control natarajan sparse approximate solutions linear systems siam osborne presnell turlach lasso dual journal computational graphical statistics polyak khlebnikov shcherbakov lmi approach structured sparse feedback design linear control systems european control conference ecc tibshirani regression shrinkage selection via lasso statist soc ser zou hastie regularization variable selection via elastic net journal royal statistical society series
| 3 |
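The discretized CLOT control problem in the row above is a finite-dimensional convex program, so it can be handed to any disciplined-convex-programming toolbox; the paper uses CVX under MATLAB, and the sketch below does the analogous thing in Python with CVXPY for a double-integrator plant. The plant, horizon, weight on the L2 term and the 1e-3 sparsity threshold are illustrative assumptions; none of the paper's seven benchmark plants is reproduced here.

```python
import numpy as np
import cvxpy as cp

# Double integrator discretised with zero-order hold over n samples.
T, n = 10.0, 200
h = T / n
Ad = np.array([[1.0, h], [0.0, 1.0]])
Bd = np.array([[h**2 / 2], [h]])
x0 = np.array([1.0, 1.0])

# x(T) = Ad^n x0 + sum_k Ad^(n-1-k) Bd u_k, collected into a 2 x n matrix Phi
Phi = np.hstack([np.linalg.matrix_power(Ad, n - 1 - k) @ Bd for k in range(n)])

u = cp.Variable(n)
alpha = 0.5                            # weight on the L2 part of the CLOT norm
objective = cp.Minimize(h * cp.norm1(u) + alpha * np.sqrt(h) * cp.norm2(u))
constraints = [np.linalg.matrix_power(Ad, n) @ x0 + Phi @ u == 0,  # reach origin
               cp.abs(u) <= 1.0]                                   # |u(t)| <= 1
cp.Problem(objective, constraints).solve()

# sparsity density: fraction of samples where the control is effectively nonzero
print("CLOT control sparsity density:", np.mean(np.abs(u.value) > 1e-3))
```

Swapping the objective for the L1 term alone gives the lasso (hands-off) control, while replacing the L2 term with a squared L2 term gives the elastic-net variant, so the three controllers compared in the experiments differ only in that single line.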
zero divisor graphs quotient rings aug rachael alvir abstract compressed graph associated commutative ring vertex set equal set equivalence classes whenever ann ann distinct classes adjacent paper explore compressed graph associated quotient rings unique factorization domains specifically prove several theorems exhibit method constructing one quotients principal ideal prove sufficient conditions two compressed graphs show conditions necessary unless one alters definition compressed graph admit looped vertices conjecture necessary sufficient conditions two compressed graphs loops isomorphic considering quotient ring unique factorization domain introduction coloring commutative rings beck introduces graph associated set commutative ring whose vertex set set two distinct adjacent graph establishes connection graph theory commutative ring theory hopefully turn mutually beneficial two branches mathematics axtell stickles remark general set lacks algebraic structure suggesting turning graph may reveal properties impose structure beck hopes certainly met classical paper motivated explosion research similar associated graphs past decade definition since modified emphasize fundamental structure set anderson definition given excludes vertex graph considered standard spiroff wickham formalized compressed graph vertex set equal set equivalence classes whenever ann ann distinct classes adjacent proved words multiplication equivalence classes graph represents succinct description activity spiroff compressed graph inspired mulay work also called mulay graph graph equivalence classes compressed graph many advantages traditional graph often finite cases set infinite reveals associated primes purpose paper study compressed graph discover way better describes core behavior set question becomes particularly tractable unique factorization domains building intuition built main theorem reveals overall structure activity quotient rings unique factorization domains determined finite set key elements call basis ring result observation able state sufficient conditions two compressed graphs rings conjecture necessity compressed graph admits looped vertices final section conjecture results may extended quotient ring unique factorization domain background one motivation use compressed graph observation new zerodivisors may trivially found every multiple also moreover multiples often behave exactly way parent annihilate elements phenomena expressed following lemma lemma let commutative ring proof let ann since follows negating definition therefore ann let ann ann useful considering graph thus appears literature although often weaker form unit observation leads consider determine behavior set behavior determined important naturally one wishes identify actually determine graph structure end define two equivalent ann ann denote equivalence class copycat add unnecessary visual information wish associate graph equivalence relation compressed graph becomes new object study situations expressed lemma annihilators two elements equal addressed following lemma lemma let commutative ring ideal proof may restate lemma explicitly relates graph follow standard notation graph theory given graph vertex set denoted edge set corollary let commutative ring ideal proof extra condition ensures vertex condition ensures edges exist distinct vertices using simple intuition springboard proceeed consider consequences relationship ideal set quotient ring results corollary indicates order investigate set quotient ring one must look factors 
elements kernel homomorphism every factor however principal ideal hni wish restrict attention factors therefore utilize following definition definition let unique factorization domain hni principal ideal zerodivisor basis set nontrivial divisors two distinct elements basis associates symbols unit associate associates claim factors ones determine behavior set every annihilator image irreducible divisor lemma let ufd ideal canonical homomorphism onto let gcd proof trivial one annihilator class done since onto image unit unit unit never lemma images associates therefore share equivalence classes however every gcd unique associates gcd theorem let ufd hni principal ideal canonical homomorphism onto gcd proof hni maximal trivial one annihilator class done nonzero unit let psi factorization irreducibles fix gcd done suppose may write pki kigcd lemma since gcd qpi hni gcd hni therefore suffices show equivalence pki gcd holds let suppose gcd since gcd gcd gcd implies gcd pki conclude pki let suppose pki min min pki since cancellation property holds ufd however definition minimum min min euclid lemma conclude words gcd implies therefore min min min desired show theorem let unique factorization domain hni nontrivial principal ideal let canonical homomorphism onto vertex equivalence class exactly one basis edge exists two distinct nodes hni proof hni maximal empty andqso also suppose hni maximal hni neither maximal trivial let psi already shown gcd thus every equivalence class equivalence class divisor since vertices contain classes every vertex equivalence class show next true exactly one suppose distinct elements basis unique associates may write basis units since definition basis associates without loss generality let consider psi clearly annihilates however pii pisi implies conclude ann ann edge exists distinct vertices hni follows immediately definition compressed graph corollary compressed graph always finite proof hni maximal trivial done compressed graph empty may expressed finite product irreducibles basis always finite theorem let two unique let tdomains sifactorization units let factorizations products irreducibles proof basis pvi basis given consider map whose domain contain equivalence classes defined pzi qizi theorem map vertex sets since theorem vertex corresponds exactly one element zerodivisor basis conclude correspondence suppose pfi pgi adjacent happens pfi pgi adjacent therefore isomorphism corollary let ufd hni principal ideal isomorphic perhaps appropriate mention requiring consequently simple seems caused disagreement among graph community axtell definition graph require edges exist distinct vertices refers explicitly loops graph despite fact claims anderson definition standard anderson must remind reader graph detect nilpotent elements redmond anderson definition posed potential problems requiring conditions make nilpotent elements easier locate paper also advocate changing standard definitions looped vertices allowed first graph loops models activity ring additionally although former theorem holds either definition conditions necessary unless stipulate contain loops counterexample note unlooped case believe sufficiency held admits loops sufficiency need shown corollary certain quotient rings next theorem gives necessary sufficient conditions compressed graph isomorphism regardless whether admit loops theorem let two ufd let irreducibles two associates respectively proof sufficiency already shown prove necessity contraposition first case suppose clear different numbers 
divisors associates bases two rings different cardinalities therefore vertex sets put correspondence graphs isomorphic second case suppose similarly clear vertex sets different cardinalities results generalize proved zpn conjectures would like discuss results may extended following conjectures sugges might possible conjecture let two commutative rings would also like extend results one quotients ufd ideal note first every ideal ring may written union principal ideals trivial union hxi suffices show true equality holds ideals closed multipliciation elements ring definition general definition basis follows definition let unique factorization domain let expression union principal ideals basis set divisors two distinct elements associates expressed minimally may able replace stipulation requirement divisor merely nontrivial conjecture let ufd ideal expressed minimal union canonical homomorphism onto gcd conjecture let unique factorization domain ideal minimal expression union principal ideals distinct nodes compressed graph given equivalence classes elements form diq element basis edge exists two nodes conjecture let unique factorization domains two ideals respectively neither maximal trivial suppose may write references anderson livingston graph commutative ring journal algebra vol axtell livingston graphs college mathematics journal vol beck coloring commutative rings journal algebra vol endean henry manlove graphs polynomial quotient rings undergraduate mathematics journal vol burton elementary number theory college mathematics journal hill edition gallian contemporary abstract algebra edition mulay cycles symmetries commutative algebra vol redmond generalizations graph ring doctoral dissertations university tenessee knoxville spiroff wickham graph determined equivalence classes commutative algebra vol
| 0 |
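The row above describes the compressed zero-divisor graph: vertices are annihilator-equivalence classes of zero-divisors, with distinct classes adjacent when their product is zero, and its main theorem identifies the vertices of the compressed graph of a quotient D/⟨n⟩ with the nontrivial divisors of n up to associates. Below is a minimal sketch (Python) that builds the compressed graph directly from the definition for the simplest such quotient, R = Z_n; the function name and the choice n = 12 are illustrative assumptions, not taken from the paper.

```python
from itertools import combinations
from math import gcd

def compressed_zero_divisor_graph(n):
    """Compressed zero-divisor graph of Z_n: vertices are annihilator-classes of
    nonzero zero-divisors; distinct classes [x],[y] are adjacent when x*y = 0 (mod n)."""
    zero_divisors = [x for x in range(1, n) if gcd(x, n) != 1]
    ann = {x: frozenset(y for y in range(n) if (x * y) % n == 0) for x in zero_divisors}
    classes = {}                                  # group elements with equal annihilators
    for x in zero_divisors:
        classes.setdefault(ann[x], []).append(x)
    # in Z_n the smallest element of each class is gcd(x, n), i.e. a nontrivial divisor of n
    verts = sorted(min(c) for c in classes.values())
    edges = [(u, v) for u, v in combinations(verts, 2) if (u * v) % n == 0]
    loops = [u for u in verts if (u * u) % n == 0]   # relevant if looped vertices are admitted
    return verts, edges, loops

verts, edges, loops = compressed_zero_divisor_graph(12)
print("class representatives:", verts)   # the nontrivial divisors of 12: [2, 3, 4, 6]
print("edges:", edges, "loops:", loops)  # edges [(2, 6), (3, 4), (4, 6)], loop at 6
```

Running it for n = 12 illustrates both points made in the row: the vertex set is exactly the set of nontrivial divisors of n, and the nilpotent class [6] is detected only if loops are allowed, which is the paper's argument for admitting looped vertices.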
control strategy anaesthetic drug dosage interaction among human physiological organs using optimal fractional order pid controller saptarshi sourav koushik school electronics computer science university southampton southampton email saptarshi department power engineering jadavpur university sector india email paper efficient control strategy physiological interaction based anaesthetic drug infusion model explored using fractional order proportional integral derivative pid controllers dynamic model composed several human organs considering brain response anaesthetic drug output drug infusion rate control input particle swarm optimisation pso employed obtain optimal set parameters controller structures proposed fopid control scheme much less amount system designed attain specific anaesthetic target also shows high robustness parametric uncertainty patient brain model drug dosage control fractional order pid controller physiological organs pso introduction control strategy formulation anaesthetic drug dosage crucial clinical surgery falls particular category biomedical system design known pharmacology pharmacology two major steps involved known pharmacokinetics pharmacodynamics anaesthetic drug injected patient body gets infused arterial blood flow arterial blood carrying drug reaches different physiological organs determining drug concentration arterial blood flow different tissues dosage concentration known pharmacokinetics interaction drug different physiological organs overall effect drug concentration effect known pharmacodynamics amongst many others anaesthetic drugs fentanyl widely used relief acute pain like cancer different surgeries drug dosage clinical surgery generally controlled electroencephalogram eeg recording anaesthesia predefined unconsciousness reached characterized observing slow oscillations eeg signals since different physiological organs different time constants absorb react finally release drug blood stream interaction contribution overall physiological dynamics anaesthetic study highly complex may lead clinicians misinterpret observed event example organs may store drug larger extent reacts drug slowly others immediate effect reduction fast oscillations eeg waves may significant type phenomena may confuse clinicians enhance drug dosage attend eeg activity indicative anaesthetic state typical case might lifethreatening patient scenario physiological model based simulation study necessary device control strategy possible variation patient mathematical model nominal case since dosage drug type eeg observation based control may endanger patient process anaesthesia realistic model control strategy formulation thus implemented automated anaesthesia clinical surgery using brain activity eeg monitoring sensory feedback comparator generate tracking error results control action going actuator pump drug patient body using prior knowledge reference quantitative measure consciousness therefore task controller design present scenario summarised minimising tracking error brain response given hill equation modified hill equation frequency domain bispectral index bis eeg delicately manipulating drug input patient realistic mathematical model fentanyl drug developed known model mapleson model composed set algebraic equations derived biochemistry model organ human body like lungs peripheral shunt kidney gut spleen liver viscera muscle fat sample brain considered separate compartment also assumed organs gets equal arterial blood flow mahfouf translated static algebraic equation based 
mapleson model dynamic model considers temporal variation drug concentration inlet outlet physiological organ also carried model reduction large dynamical system generalized predictive control gpc design paper use original unreduced higher order dynamic model order design efficient control scheme shown das fractional order controllers effective handling higher order dynamics due inherent nature integer order controllers formulates scope present study control strategies anaesthetic dosage formulated much simpler models using pid controllers fuzzy controllers earlier research wada developed detailed physiological system level pharmacokinetic model without control scheme three state nonlinear compartmental model clinical pharmacology described different control studies including adaptive control neural network control nonnegative dynamical systems disturbance rejection control results clinical trials anaesthesia control scheme based model noise eeg measurements reported haddad patients focus present study formulate optimal control strategy dynamical model reported mahfouf considering interaction amongst physiological organs rest paper organised follow section describes overall system model different physiological organs interaction drug optimal controller design task described section iii system simulation section paper ends conclusion section followed references dynamic model fentanyl interaction physiological organs overall system description automated anaesthesia controller generate control action giving command actuator mechanical pump pump drug fig dynamical model human organs mahfouf developed individual system models applying system identification techniques original mapleson model consideration body weight cardiac output represent clinical scenario order study generalising capability model outside nominal range interpolation based model identification reported realistic variation either cardiac output simultaneously generalized interpolated scheme patient model developed body weight body weight fentanyl injected period sec corresponding interpolated transfer function models various human organs described fat lungs described earlier first static model fentanyl interaction pharmacokinetics pharmacodynamics point view known mapleson model mahfouf extended concept dynamic models representing organs dynamics ordinary differential equations siso continuous time transfer function models model mainly captures drug flow one organ drug concentration specific organ drug added intravenous injection gets infused arterial blood perfused organs fraction total arterial blood flow brain reaction drug manifested form eeg evaluated using hillequation index like bis comparison target anaesthetic level generate error minimised fopid kidneys liver muscle fig overall fentanyl model proposed control strategy brain nasal peripheral shunt modelled gain without time constants models explain dynamic relationship outgoing drug concentration blood flow tissue incoming arterial pool drug amount whereas model drug concentration issue drug concentration outgoing flow brain represented drug effect brain calculated hill equation using concentration drug effect steepness slope factor effect iii optimal control scheme drug infusion fractional order pid controller choice control objective clinical practices anaesthesia computer controlled drug infusion generally adopted meet target concentration infusion tci rather manually controlling infusion rate techniques rely open loop control assumes population model good 
representative patient may valid many cases sensible scheme continuously monitor brain response eeg use feedback mechanism minimise tracking error efficient controller structure control signal essentially modifies intravenous drug infusion rate violently manipulated reach faster anaesthetic effect apart minimizing tracking error also important task minimize variation derivative drug infusion rate prevent sudden shock automated injection pump chance infusing large amount drug small time increased control effort objective function optimal controller design formulated weighted sum integral time multiplied squared error itse integral squared deviation control output isdco controller structure tuning using pso optimiser fractional order controller design various time domain performance criteria explored das results shows squared error term puts penalties tracking error time multiplication term makes overall response faster reduces chance loop oscillations later stages evident tracking criterion definitely increase required control effort whose variation temporal derivative thus added minimising criteria control objective met integer fractional order pid controller structure controller gains orders individual particle pbest global swarm gbest velocity positions manipulated successive iterations using particles converges global best solution known inertia factor cognitive learning rate social learning rate uniformly distributed random variables within interval parameter known inertia factor swarm linearly varied present study unconstrained version pso employed bound controller parameters due implementation issues ora operators explored four classes fopid structures tested tracking performance control effort controller structure fopid fopid fopid fopid simulation results discussion tuned global optimisation algorithm pid controller gains optimized considering orders unity operator continuously rationalised within optimization process using order oustaloup recursive approximation ora chosen frequency band rationalised gains given controller parameters optimized using pso algorithm widely used global optimiser swarm starts particles velocity position time step try move towards global best latest value best found solution fig convergence characterteristics different controller structures table optimum values controller parameters controller jmin controller parameters pid pso based optimal parameter selection carried controller structure goal minimizing objective function considering step input target min reported table located minima min table show structure gives optimal result fopid variants pid controller pso convergence characteristic corresponding case low derivative gain implies type controller suitable particular system performance nominal patient model optimal parameters controller table nominal anaesthetic drug delivery system described fig simulated tracking performance required control efforts compared fig fig respectively evident although pid controller tracking much faster needs amount drug infusion within short period time mins whereas structure meets anaesthetic target pushing much less amount drug patient body longer period time mins testing vulnerability control loops gain variation widely used robustness measure described das gain nominal brain model rest physiological organs kidney liver muscle fat nasal receive arterial blood flow mapleson model described fig directly affect output hence much less influence respective parameters perturbed fig reports simulations best found cases 
pid fopid controller respectively gain perturbation brain model fig tracking performance control effort pid controller perturbation brain model fig tracking performance different controller structures fig tracking performance control effort fopid controller perturbation brain model fig control effort different controller structures robustness control scheme perturbed condition previous subsection simulations reported nominal model whereas highly likely model differs significantly different patients different conditions patient since brain model directly affects output system shown fig brain response fentanyl drug directly feedback controller report simulation studies brain response drug gain perturbation scenario evident fig decrease nominal brain model control effort rapidly increases signifying drug injected patient body evident rise time pid controlled system much faster mins amount significantly high units contrary fopid controller fig provides relatively slower tracking performance mins significant amount saving drug injection level bounded within units even perturbed condition clinical practices meeting specified unconsciousness level sole criterion may create sudden shock different human physiological systems may harmful patient also order attain level anaesthetic effect within shorter period drug concentration brain needs raised rapidly economical constraint physiological along faster mechanical pumping needed automated whereas smoother bounded control action less amount drug infusion patient attain level anaesthesthetic fentanyl drug equipped efficient control strategy using proposed optimal fopid controller conclusion paper devises new fractional order control strategy automatic fentanyl drug dosage control anaesthesia clinical practices proposed fopid control scheme requires less amount dug injected pid controller meet anaesthetic target smoother control action provided fopid controller outperforms classical pid controller sense restricting chance feeling sudden shock brain response due rapid increase amount anaesthetic drug concentration arterial blood flow within short time interval future work may directed towards validation control scheme applied mapleson dynamic model real clinical data references khoo physiological control systems analysis simulation estimation ieee press series biomedical engineering baura system theory practical applications biomedical systems ieee press series biomedical engineering northorp signals systems analysis biomedical engineering crc press bailey haddad drug dosing control clinical pharmacology ieee control systems magazine vol apr portenoy oral transmucosal fentanyl citrate otfc treatment breakthrough pain cancer patients controlled dose titration study pain vol anand sippell green randomised trial fentanyl anaesthesia preterm babies undergoing surgery effects stress response lancet vol smith eegs anesthesia anesthesia analgesia vol schwilden stoeckel effective therapeutic infusions produced feedback control methohexital administration total intrvanous anesthesia fentanyl anesthesiology vol davis mapleson physiological model distribution injected agents special reference pethidine british journal anaesthesia vol mapleson models uptake inhaled anaesthetics data quantifying british journal anaesthesia vol mahfouf linkens xue new generic approach model reduction complex physiologically based drug models control engineering practice vol das saha das gupta selection tuning methodology fopid controllers control higher order processes isa 
transactions vol july das gupta das generalized frequency domain robust tuning family fractional order controllers handle higher order process dynamics advanced material research vol mems nano smart systems hara bogen noordergraaf use computer controlling delivery anesthesia anesthesiology vol sakai matsuki white giesecke use eegbiseptral delivery system administering propofol acta anaesthesia scandinavica vol jaklitsch westenskow controller muscle relaxation anesthesia ieee transactions biomedical engineering vol shieh linkens peacock hierarchical fuzzy logic control depth anaesthesia ieee transactions systems man applications reviews vol mahfouf nunes linkens peacock modelling multivariable control anaesthesia using paradigms part closed loop control simultaneous administration propofol remifentanil artificial intelligence medicine vol wada stanski ebling graphical simulator physiological pharmacokinetic models computer methods programs biomedicine vol apr haddad hayakawa bailey adaptive control nonlinear compartmental dynamical systems applications clinical pharmacology systems control letters vol haddad bailey hayakawa hovakimyan neural network adaptive output feedback control intensive care unit sedation intraoperative anesthesia ieee transactions neural networks vol jul haddad bailey control intensive care unit sedation best practice research clinical anaesthesiology vol mar haddad hayakawa bailey adaptive control compartmental dynamical systems applications general anesthesia international journal adaptive control signal processing vol apr volyanskyy haddad bailey adaptive disturbance rejection control compartmental systems application intraoperative anesthesia influence hemorrhage hemodilution effects international journal adaptive control signal processing vol haddad volyanskyy bailey neuroadaptive output feedback control automated anesthesia noisy eeg measurements ieee transactions control systems technology vol mar das pan das gupta chaos synchronization via optimal fractional order controller bacterial foraging algorithm nonlinear dynamics vol das pan halder das gupta lqr based improved discrete pid controller design via optimum selection weighting matrices using fractional order integral performance index applied mathematical modelling vol mar pan das intelligent fractional order systems control introduction series studies computational intelligence vol rules selecting parameters oustaloup recursive approximation simulation linear feedback systems containing controller communications nonlinear science numerical simulation vol apr
| 3 |
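The anaesthesia row above refers to a fractional-order PID (FOPID) control law, a weighted ITSE + ISDCO cost minimized by PSO, and the Hill equation mapping drug concentration to effect, but the stripped text loses all displayed formulas. The block below is a hedged reconstruction using the standard textbook forms; the weights w1, w2 and the reading of "ISDCO" as a penalty on the derivative of the infusion rate are interpretations of the stripped description, not verbatim from the paper.

```latex
% FOPID (PI^{\lambda}D^{\mu}) law acting on the tracking error e(t); \lambda=\mu=1 recovers PID:
C(s) \;=\; K_p \;+\; \frac{K_i}{s^{\lambda}} \;+\; K_d\, s^{\mu}, \qquad \lambda,\mu > 0 .

% Weighted objective minimized by PSO: integral time-multiplied squared error (ITSE) plus an
% integral squared deviation of the controller output (ISDCO), here taken as a penalty on the
% rate of change of the drug-infusion signal u(t) to avoid sudden shocks:
J \;=\; w_1 \int_0^{T} t\, e^{2}(t)\, dt \;+\; w_2 \int_0^{T} \Big(\tfrac{du(t)}{dt}\Big)^{2} dt .

% Hill equation for the pharmacodynamic (brain) response to the effect-site concentration C_e,
% with maximal effect E_max, half-effect concentration EC_50 and steepness (slope) factor \gamma:
E \;=\; E_{\max}\,\frac{C_e^{\gamma}}{EC_{50}^{\gamma} + C_e^{\gamma}} .
```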
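The same row describes the particle swarm optimiser used to tune the controller gains: velocities and positions updated with a linearly varied inertia factor, cognitive and social learning rates, and uniformly distributed random factors. The sketch below is a minimal, runnable version of that loop applied to the integer-order PID special case on a toy first-order plant; the plant, the bounds, the weights and all numbers are illustrative assumptions, and the fractional operators (which the paper rationalizes with an Oustaloup approximation) are omitted to keep the example self-contained.

```python
import numpy as np

def cost_pid(gains, dt=0.01, T=20.0, ref=1.0, w1=1.0, w2=0.1):
    """w1*ITSE + w2*ISDCO for a PID (lambda = mu = 1 special case of the FOPID)
    on a toy plant dy/dt = -y + u; this stands in for the physiological model."""
    Kp, Ki, Kd = gains
    y = integ = prev_u = 0.0
    prev_e = ref - y
    J = 0.0
    for k in range(int(T / dt)):
        t = k * dt
        e = ref - y
        integ += e * dt
        u = Kp * e + Ki * integ + Kd * (e - prev_e) / dt
        J += (w1 * t * e**2 + w2 * ((u - prev_u) / dt) ** 2) * dt
        y += (-y + u) * dt                        # explicit Euler step of the toy plant
        if not np.isfinite(y) or abs(y) > 1e6:    # penalize destabilizing gain candidates
            return 1e12
        prev_e, prev_u = e, u
    return J

def pso(cost, dim=3, n_particles=20, iters=60, lb=0.0, ub=10.0, seed=1):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_J = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pbest_J)].copy()
    for it in range(iters):
        w = 0.9 - 0.5 * it / iters                # linearly varied inertia factor
        c1 = c2 = 2.0                             # cognitive / social learning rates
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lb, ub)                # keep gains in the assumed box
        J = np.array([cost(p) for p in x])
        better = J < pbest_J
        pbest[better], pbest_J[better] = x[better], J[better]
        gbest = pbest[np.argmin(pbest_J)].copy()
    return gbest, pbest_J.min()

gains, J = pso(cost_pid)
print("PSO-tuned (Kp, Ki, Kd):", np.round(gains, 3), " J =", round(J, 3))
```

For the true FOPID, one would additionally search over the orders (lambda, mu) and replace the integral/derivative terms by an Oustaloup-type rational approximation of the fractional operators, as the row indicates; the swarm update itself is unchanged.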
systems parameters holonomicity systems feb christine berkesch stephen griffeth ezra miller abstract main result elementary proof holonomicity systems requirements behavior singularities originally due adolphson regular singular case gelfand gelfand method yields direct novo proof systems form holonomic families parameter spaces shown matusevich miller walther introduction system counterpart toric ideal solutions systems functions fixed infinitesimal homogeneity affine toric variety solution space system behaves well part system holonomic particular implies vector space germs analytic solutions nonsingular point finite dimension note provides elementary proof holonomicity arbitrary systems relying statement module weyl algebra variables holonomic precisely characteristic variety dimension see theorem along standard facts transversality subvarieties krull dimension particular proof requires assumption singularities system equivalently associated toric ideal need standard graded holonomicity proved regular singular case gelfand gelfand later adolphson regardless behavior singularities system adolphson proof relies careful algebraic analysis coordinate rings collection varieties whose union characteristic variety system another proof holonomicity system schulze walther yields general result weight vector large family possibilities variety union conormal varieties hence dimension holonomicity follows induces order filtration weyl algebra method uses explicit combinatorial interpretation initial ideals toric ideals requires series technical lemmas holonomicity systems forms part statement proof matusevich miller walther systems determine date january mathematics subject classification primary secondary support nsf grant acknowledges financial support fondecyt proyecto regular christine berkesch stephen griffeth ezra miller holonomic families parameter spaces new proof statement serves model suitable generalization hypergeometric systems reductive groups sense kapranov main step theorem proof easy geometric argument showing euler operators corresponding rows integer matrix form part system parameters product algebraically closed field toric variety determined observation leads quickly section conclusion characteristic variety associated system dimension hence system holonomic since algebraic part proof holds entries considered independent variables commute variables desired stronger consequence immediate ahypergeometric system forms holonomic family parameter space theorem systems parameters via transversality fix field let sets coordinates let denote column vector entries given rectangular matrix columns write vector bilinear forms given multiplying times lemma let knx coordinates let subvariety matrix entries variety var transverse smooth point whose nonzero proof suffices prove statement passing algebraic closure assume algebraically closed let smooth point lies var coordinates nonzero tangent space contains tangent space var kernel matrix respectively matrix results multiplying column corresponding coordinate respectively since coordinates projects surjectively onto last coordinates indeed given taking yields knx thus tangent spaces sum ambient space intersection transverse next result applies lemma affine toric variety fixed integer matrix defines action algebraic torus tan orbit orb point image algebraic map sends closure orb affine toric variety var cut toric ideal avi polynomial ring induces via deg semigroup ring chapters systems parameters holonomicity systems face real cone generated 
columns write let vector nonzero entry precisely nonzero column variety decomposes finite disjoint union orb orbits orb orbit dimension dim orb rank submatrix consisting columns lying dim rank theorem ring krull dimension particular rank forms part system parameters proof let subspace consisting vectors coordinate let dimension since dimension rank number independent generators rank dimension question least hence suffices prove orb var dimension let denote subsets corresponding variable sets respectively projection intersection onto subspace image contained orb var therefore suffices show dimension latter intersection lemma intersection transverse dimension orb codimension var completes proof hypergeometric holonomicity section matrix integer matrix full rank let chx denote weyl algebra complex numbers corresponds ring differential operators system parameter left avi toric ideal associated aij euler operators associated order filtration filters order differential operators symbol operator image grf writing means grf commutative polynomial ring characteristic variety left variety associated graded ideal grf ann annihilator nonzero holonomic characteristic variety dimension equivalent requiring dimension see theorem christine berkesch stephen griffeth ezra miller rank holonomic always finite dimension vector space number equals dimension vector space germs analytic solutions nonsingular point theorem viewing system varying parameter rank upper semicontinuous function theorem follows viewing holonomic family definition parametrized definition means holonomic also satisfies coherence condition namely replacing variables module finitely generated definition holonomic family allows sheaves arbitrary complex base schemes generality needed derivation holonomic family property holonomicity system less theorem phrased generality homology toric modules brief deduction isolates steps necessary systems brevity stems special status affine semigroup rings among toric modules definition theorem module forms holonomic family coordinates detail parametric system satisfies fiber holonomic module finitely generated proof since surjects onto grf enough show ring dimension standard equivalently rowspan rational numbers contains row length result follows theorem standard let matrix obtained adding row across top adding leftmost column denotes new variable corresponding leftmost column particular hin since row reduced case completing part part ring surjects onto grf suffices part show becomes finitely generated upon inverting nonzero polynomials since ideal hin generators involving variables suffices show finite dimension desired result statement proved part scheme dimension finite degree cnx systems parameters holonomicity systems references alan adolphson hypergeometric functions rings generated monomials duke math borel grivel kaup haefliger malgrange ehlers algebraic dmodules perspectives mathematics academic press boston ofer gabber integrability characteristic variety amer math gelfand gelfand generalized hypergeometric equations russian dokl akad nauk sssr english translation soviet math dokl gelfand graev zelevinsky holonomic systems equations series hypergeometric type dokl akad nauk sssr gelfand kapranov zelevinsky generalized euler integrals ahypergeometric functions adv math gelfand zelevinsky kapranov hypergeometric functions toric varieties funktsional anal prilozhen correction ibid gelfand kapranov zelevinsky discriminants resultants multidimensional determinants mathematics theory applications 
boston boston mikhail kapranov hypergeometric functions reductive groups integrable systems algebraic geometry world sci publishing river edge laura felicia matusevich ezra miller uli walther homological methods hypergeometric families amer math soc ezra miller bernd sturmfels combinatorial commutative algebra graduate texts mathematics new york mutsumi saito bernd sturmfels nobuki takayama deformations hypergeometric differential equations berlin mathias schulze uli walther irregularity hypergeometric systems via slopes along coordinate subspaces duke math department mathematics duke university box durham address cberkesc instituto universidad talca camino lircay talca chile address sgriffeth department mathematics duke university box durham address ezra
| 0 |
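The row above concerns A-hypergeometric (GKZ) systems and their holonomicity, but the extraction drops every displayed formula. For reference, the standard form of the system H_A(β) attached to a d×n integer matrix A = (a_ij) and a parameter β ∈ C^d is reconstructed below; this follows the usual GKZ convention and is what the "toric ideal", "Euler operators" and "characteristic variety" in the stripped text refer to, though the paper's exact layout cannot be recovered.

```latex
% Toric (box) operators, one for each integer vector u = u_+ - u_- in \ker_{\mathbb Z} A;
% together they generate the toric ideal I_A \subseteq \mathbb{C}[\partial_1,\dots,\partial_n]:
\Box_u \;=\; \partial^{\,u_+} - \partial^{\,u_-}
        \;=\; \prod_{u_j>0}\partial_j^{\,u_j} \;-\; \prod_{u_j<0}\partial_j^{-u_j}.

% Euler operators, one for each row i = 1,\dots,d of A:
E_i - \beta_i \;=\; \sum_{j=1}^{n} a_{ij}\, x_j \partial_j \;-\; \beta_i .

% The A-hypergeometric system is the left module over the Weyl algebra D:
H_A(\beta) \;=\; D \big/ D\cdot\big( I_A,\ E_1-\beta_1,\ \dots,\ E_d-\beta_d \big),

% and it is holonomic exactly when its characteristic variety in T^*\mathbb{C}^n has dimension n.
```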
synthesis formation control eigenstructure assignment based approach aug takatoshi motoyama kai cai propose approach formation control heterogeneous systems based method eigenstructure assignment given problem achieving scalable formations plane approach globally computes state feedback control assigns desired characterize relation resulting communication topology design special sparse topologies synthesized control may implemented locally individual agents moreover present hierarchical synthesis procedure significantly improves computational efficiency finally extend proposed approach achieve rigid formation circular motion illustrate results simulation examples ntroduction cooperative control systems active research area systems control community among many problems formation control received much attention owing wide applications satellite formation flying search rescue terrain exploration foraging main problem studied stabilization rigid formation goal steer agents achieve formation specified size freedoms translation rotation several control strategies proposed affine feedback laws nonlinear control anglebased algorithms achieving scalable formation unspecified size also studied scalable formation may allow group adapt unknown environment obstacles targets addition presented methods controlling formations motion different methods formation control common feature design namely specifically communication topology given priori defines neighbors agent based neighborhood information local control strategies designed individual agents properties designed local strategies finally analyzed systemic global level correctness proved certain graphical conditions communication topology design indeed mainstream approach cooperative control systems places emphasis distributed control paper propose distinct approach formation control based known method called eigenstructure assignment different authors dept electrical information engineering osaka city university japan emails motoyama approach need communication topology imposed priori fact agents typically assumed independent uncoupled design done local level indeed given formation control problem characterized specific eigenvalues eigenvectors precisely defined section approach constructs global level feedback matrix exists renders system possess desired thereby achieving desired formations moreover synthesized feedback matrix entries zero nonzero defines communication topology accordingly computed feedback control may implemented individual agents thus approach features compute globally implement locally communication topology result control synthesis rather given priori characterize relation resulting topology eigenstructure chosen synthesis show appropriately choosing desired eigenvalues corresponding eigenvectors special topologies star cyclic line designed computed feedback control may implemented locally sparse topologies although method requires centralized computation control gain matrices show straightforward extension approach hierarchical synthesis procedure significantly reduces computation time empirical evidence provided show efficiency proposed hierarchical synthesis procedure particular computation feedback control group agents needs merely fraction second likely suffice many practical purposes main advantage approach systematic sense treats heterogeneous agent dynamics different cooperative control specifications characterizable desired eigenstructure synthesis procedure show scalable formation rigid formation cooperative circular 
motion addressed using method additionally show method amenable deal general cases agents initial connections arbitrary first proposed eigenstructure assignment based approach applied solve consensus problem conference precursor paper extended approach solve scalable rigid formations proposed hierarchical synthesis procedure significantly shorten computation time paper differs following aspects precise relation eigenstructure topology characterized section iii method imposing topological constraints eigenstructure assignment presented section iii general cases initial topology arbitrary exist agents addressed section problem achieving cooperative circular motion solved proofs provided note also proposed eigenstructure assignment method applied consensus problem approach first communication topology imposed among agents local control strategies designed based eigenstructure assignment respecting topology finally correctness proposed strategies verified global level contrast approach topology imposed priori topology result control synthesis moreover characterize relation topology eigenstructure design special topologies selecting special eigenstructures addition problems addressed paper distinct namely formation circular motion plane involve complex eigenvalues eigenvectors rest paper organized follows section review basics eigenstructure assignment formulate formation control problem section iii solve problem eigenstructure assignment discuss relations topologies section study general cases initial topology arbitrary exist agents section present hierarchical synthesis procedure reduce computation time section extend method achieve rigid formation circular motion simulation examples given section vii conclusions stated section viii first review basics eigenstructure assignment consider linear system modeled state vector input vector suppose modify state feedback may chosen assign set eigenvalues controllable unless single input however uniquely determined set eigenvalues indeed state feedback one additional freedom assign certain sets eigenvectors simultaneously assigning eigenvalues eigenvectors referred eigenstructure assignment linearly independent columns thus columns form basis ker ker denotes kernel also use denote image lemma consider system suppose controllable kerb let set distinct complex numbers set linearly independent vectors unique every lemma provides necessary sufficient condition eigenstructure assignment condition holds thus exists assigning distinct complex eigenvalues corresponding eigenvectors may constructed following procedure compute basis ker stack basis vectors form partition properly get find condition kerb lemma ensures independent columns thus may uniquely determined iii compute preliminaries problem formulation preliminaries eigenstructure assignment let shown controllable exists note entries may include complex numbers general set distinct complex numbers wherever denotes complex conjugate entries real numbers procedure iii computing complexity inasmuch calculations involved solving systems linear equations matrix inverse multiplication note eigenstructure assignment result may extended case repeated eigenvalues generalized eigenvectors details refer appendix problem formulation consider heterogeneous system agent modeled ode state variable control variable constant parameters thus agent point mass moving complex plane possibly stable semistable unstable dynamics requirement ensure thus agent note represented agents independent uncoupled topology imposed stage 
form system independent agents diag diag diag denotes diagonal matrix specified diagonal entries consider modifying state feedback thus system straightforward calculation shows diagonal entries fii entries fij since entries fij view structure define corresponding directed graph follows node set node standing agent state edge set edge entry fij since fij implies uses state update say case agent communicates state agent neighbor graph therefore called communication network among agents whose topology decided entries thus communication topology imposed priori emerges result applying state feedback control define formation control problem multiagent system problem consider system specify vector design state feedback control every initial condition constant problem specified vector represents desired formation configuration plane formation configuration mean geometric information formation remains scaling rotational effects discarded indeed writing constant polar coordinate form final formation configuration scaled rotated constant unknown priori general depends initial condition note also problem includes consensus problem special case solve problem note following fact proposition consider system state feedback simple eigenvalue corresponding eigenvector eigenvalues negative real parts every initial condition proof solution system since simple eigenvalue corresponding eigenvector eigenvalues negative real parts follows standard linear systems analysis respect eigenvalue therefore view proposition specified eigenvalues corresponding eigenvectors may assigned state feedback problem solved end resort eigenstructure assignment iii esults section solve problem formation control problem systems method eigenstructure assignment following first main result theorem consider system let desired formation configuration always exists state feedback control solves problem lim proof proposition system achieves formation configuration matrix following eigenstructure eigenvalues satisfy corresponding eigenvectors linearly independent must verify eigenstructure assignable state feedback system note except fixed freedom choose simplicity let distinct thus lemma applied diag diag thus easily checked pair controllable kerb show exists specified suffices verify condition lemma first find basis ker ker derive thus hence condition lemma holds next let find basis ker derive condition control gain example example assign second largest eigenvalue originally change results new feedback matrix fig example lemma holds therefore conclude always exists state feedback system achieves formation configuration proof considered distinct eigenvalues hence control gain matrix may computed computed turn gives rise agents communication graph following illustrative example example consider system single integrators square formation let desired eigenvalues corresponding eigenvectors one computes control gain matrix determines corresponding communication graph see fig observe contains complex entries may viewed control gains real imaginary axes respectively scaling rotating gains complex plane also note spanning tree node root computed feedback control implemented four agents individually consensus let desired eigenvalues corresponding eigenvectors one computes control gain matrix corresponding graph see fig note case real strongly connected unlike usual consensus algorithm graph laplacian matrix entries positive thus eigenstructure assignment based approach may generate larger class consensus algorithms negative weights remark approach 
convergence speed desired formation configuration assignable convergence speed dominated eigenvalue second largest real part closedloop system approach freely assignable smaller faster convergence formation occurs cost higher zero entries locations thus topology achieves faster convergence speed seen example feedback matrix entries determine topology dependent choice eigenvalues well eigenvectors namely different sets eigenvalues eigenvectors result different communication topologies next result characterizes precise relation topologies theorem consider system desired formation configuration let eigenstructure denote rows communication graph system vector orthogonal subspace spanned vectors proof derive choose satisfy equation diag thus matrix diag vni vnj vni vnj vni vnj therefore communication graph vector orthogonal following vectors namely orthogonal subspace spanned vectors desired eigenvalues eigenvectors chosen theorem provides necessary sufficient condition check interconnection topology among agents without actually computing feedback matrix hand problem choosing appropriate eigenstructure match given topology difficult inasmuch many free variables determined eigenvalues eigenvectors shall investigate general problem eigenstructure design imposing particular topologies future work next subsection nevertheless show choosing certain appropriate eigenstructures results certain special sparse topologies topologies synthesized control may implemented distributed fashion special topologies show derive following three types special topologies choosing appropriate eigenstructures star topology directed graph star topology single root node say node thus nodes receive information root node terms total number edges star topology one sparsest topologies least number edges contain spanning tree consider following eigenstructure eigenvalues distinct eigenvectors independent derive therefore corresponding graph star topology node root cyclic topology directed graph cyclic topology consider following eigenstructure eigenvalues eigenvectors independent proposition consider system eigenstructure used synthesis feedback control problem solved resulting cyclic topology proof following lines proof proposition using eigenstructure derive proposition consider system eigenstructure used synthesis feedback control problem solved resulting graph star topology proof first follows proof theorem eigenstructure assigned closedloop matrix proposition problem solved proceed analogously proof theorem derive substituting eigenvalues eigenvectors well inspection conclude corresponding graph cyclic topology line topology directed graph directed line topology single root node say node line topology also one sparsest topologies containing spanning tree consider following eigenstructure eigenvalues eigenvectors independent proposition consider system eigenstructure used synthesis feedback control problem solved resulting line topology note repeated eigenvalues corresponding generalized eigenvectors result lemma computing control gain matrix applied case instead resort generalized method provide proof proposition appendix set new feedback matrix topology constrained eigenstructure assignment namely originally computed except ith row replaced computed new feedback matrix satisfies imposed topological constraint eigenstructure generally different therefore must verify new still satisfy achieve formation control verification need always successful case turn successful guaranteed achieve desired formation feedback matrix 
satisfying imposed topological constraint illustrate method following example example consider system single integrators desired formation simply consensus choose following eigenvalues eigenvectors end section presenting alternative approach imposing topological constraints eigenstructure assignment suppose computed feedback matrix achieve desired formation matrix eigenvalues eigenvectors satisfy assume topological constraint imposed agent receive information agent reasons cost physical impossibility unfortunately computed fij thus goal derive new feedback matrix suitably modifying new matrix generally different eigenvalues eigenvectors hence must check new eigenvalues eigenvectors still satisfy approach proceeds follows inspired constrained feedback method writing diag eigenvalues eigenvectors feedback matrix achieve let denote ith row also ith row equation rewritten terms kronecker product row stacking follows compute consensus constrain zero focus equation deleting fij well jth column obtain reduced equation suppose topological constraint agent receive information agent must set zero first derive equation follows matrix jth column deleted vector jth element deleted view entries unknowns contains equations unknowns using pseudoinverse denoted derive derivation may readily extended deal one topological constraint deleting column obtain reduced equation solve equation unknowns compute finally set new feedback matrix arbitrary connections first consider case agents arbitrary initial interconnection keeping assumption individually stabilizable consider following system diag matrix arbitrary real matrix modeling arbitrary initial communication topology among agents turns despite general matrix conclusion theorem holds theorem consider system let desired formation configuration always exists state feedback control achieves formation control lim proof proof proceeds similarly theorem first diagonal matrix regardless pair controllable observe second row entries kerb left verify distinct eigenvalues different original moreover eigenstructure new eigenvectors satisfy condition lemma let since loop matrix eigenvalues eigenvectors find basis ker derive hence new still satisfy therefore consensus achieved despite imposed topological constraint fact since faster convergence new eneral ulti ystems far considered system agents uncoupled stabilizable matrices diagonal diagonal entries nonzero shown theorem state feedback control based eigenstructure assignment always exists drive agents desired formation generally however agents may initially interconnected owing physical coupling existence communication channels agents might capable stabilizing though receive information others thus interest inquire based eigenstructure assignment approach conclusions draw formation control general cases thus condition lemma holds therefore always exists state feedback system achieves formation configuration proof theorem derived since diagonal diagonal matrices commute held general theorem found instead deal arbitrary without depending commutativity matrices theorem asserts long agents individually stabilizable formation control achievable eigenstructure assignment regardless agents initially interconnected final topology hand general determined initial connections plus additional ones resulted chosen discussed section iii may also possible however initial connections decoupled corresponding entries synthesized feedback matrix illustrated following example consider example change zero matrix following agents initially 
interconnected assigning eigenstructure example obtain feedback matrix matrix identity matrix theorem consider system let desired formation configuration also let desired eigenvalues eigenvectors satisfying pair controllable imb iii satisfies exists state feedback control achieves formation control lim proof first observe kerb since columns linearly independent let since controllable condition exist holds setting derive following matrix equation feedback matrix well matrix example thus despite initial coupling final topology turns example particular final topology agents uncoupled initial couplings canceled corresponding entries feedback matrix existence agents continuing consider arbitrary initial topology general assume agents stabilize corresponding diagonal entries zero equivalently agents control inputs case achieving desired formation possible agents may take advantage information received others via connections specified problem global formation stabilization locally unstabilizable agents rarely studied literature aim provide answer using eigenstructure assignment based approach without loss generality assume first agents stabilizable thus system consider subsection since imb condition equation solution determined finally since condition iii condition lemma satisfied therefore desired eigenvalues eigenvectors satisfying may assigned state feedback control formation control achieved theorem provides sufficient conditions ensure solvability formation control problem systems agents following illustrate result working concrete example represents directed line topology one agent stabilizable simply vector example consider system nonzero namely represents directed line topology agent root means agent stabilizable thus singleinput system controlling root directed line first verified controllable condition theorem satisfied ensure condition imb suffices choose desired eigenvalue every eigenvalue distinct nonzero diagonal entries time eigenvalues must satisfy condition hold equation solution let solve obtain explicit solution hence ensure condition iii theorem must choose desired eigenvector satisfied particular thus means formation vector must characterizes set achievable formation configurations system consideration conclude controlling one agent indeed root agent directed line topology possible achieve arbitrary formation configurations determined nonzeros entries matrices specific manner given ierarchical igenstructure ssignment previous sections shown control gain matrix always computed long every agent stabilizable formation problem solved computing see eigenstructure assignment procedure section complexity number agents consequently computation cost becomes expensive number agents increases address issue centralized computation propose section hierarchical synthesis procedure shall show control gain matrix computed hierarchical procedure solves problem moreover significantly improves computational efficiency empirical evidence provided section vii clarity presentation let return consider system problem desired formation configuration partition agents pairwise disjoint groups let group agents may different configuration write accordance partition possibly reordering cnk cnk thus group dynamics later use also write resp first component resp diag diag vector local formation configuration group formation configuration set first component agent group assume configurations nonzero present hierarchical synthesis procedure group dynamics compute simple eigenvalue corresponding eigenvector 
eigenvalues negative real parts moreover topology defined unique root node star line method given section treat group leaders higherlevel group dynamics compute simple eigenvalue corresponding eigenvector eigenvalues negative real parts iii set control gain matrix low high low high partitioned according low block high computational complexity step max step let max complexity entire hierarchical synthesis procedure proper group partition hierarchical procedure significantly reduce computation time demonstrated empirical study section vii note step procedure requiring topology defined unique root single leader simplicity presentation extended case multiple leaders step treat leaders hand number leaders kept small highlevel control synthesis step done efficiently correctness hierarchical synthesis procedure asserted following theorem consider system let desired formation configuration state feedback control synthesized hierarchical synthesis procedure solves problem lim proof let yknk gknk cnk thus first element removed step hierarchical synthesis procedure since unique root node write follows eigenstructure eigenvalues negative real parts reorder get permutation matrix similarly transforms control gain matrix step iii follows eigenstructure assigned step matrix simple eigenvalue corresponding eigenvector eigenvalues negative real parts hence proposition lim since resp reordering resp conclusion follows proof complete igid ormation ircular otion section show method eigenstructure assignment may easily extended address problems rigid formation circular motion rigid formation first extend method study problem achieving rigid formation one translational rotational freedom fixed size problem consider system specify design control every initial condition problem goal system achieve rigid formation translational freedom rotational freedom fixed size present synthesis procedure compute two eigenvalues corresponding eigenvectors eigenvalues negative real parts moreover topology defined exactly roots say nodes topology may achieved assigning appropriate eigenstructures eigenvalues distinct eigenvectors independent let first two components set iii set control idea synthesis procedure first use eigenstructure assignment achieve desired formation configuration two leaders control size formation stabilizing distance two leaders prescribed latter inspired result following proposition consider system let control synthesized synthesis procedure solves problem initial conditions repeated eigenvalues eigenvectors eigenstructure assignment result lemma computation control gain matrix remain case distinct eigenvalues topology one exist nodes every node reached directed path removing arbitrary node lim moreover choosing eigenstructure following similarly proof proposition show resulting topology defined nodes two roots topology design step follows theorem illustrative example achieving rigid formations provided section vii circular motion apply eigenstructure assignment approach solve cooperative circular motion problem agents circle around center keeping desired formation configuration cooperative task may find useful applications target tracking encircling problem consider system specify design state feedback control every initial condition ebjt problem goal agents circle around center rate keeping formation configuration scaled result following proposition consider system let always exists state feedback control solves problem proof similar argument proof theorem show always exists following eigenstructure 
eigenvalues distinct eigenvectors independent proposition lim ebjt position proof first similar argument proof theorem show desired two eigenvalues eigenvectors eigenvalues negative real parts may always assigned system result proposition position fig scalable regular pentagon formation initial positions steadystate positions system made achieve circular motion keeping rigid formation specified size circular motion may applied task target encircling illustrated example next section vii simulations illustrate eigenstructure assignment based approach several simulation examples examples consider system heterogeneous agents diag diag thus first agents unstable latter stable agents stabilizable first achieve scalable regular pentagon formation assign following eigenstructure eigenvalues eigenvectors compute control gain matrix problem solved key point achieving circular motion assign one one pure imaginary eigenvalue associated formation vector circular motion counterclockwise clockwise one may easily speed slow circular motion specifying value also note similar synthesis procedure rigid formation previous subsection agent agent agent agent agent simulating system initial condition result displayed fig observe regular pentagon formed topology determined contains spanning tree next achieve rigid pentagon formation follow method presented section first assign following position position target position position fig rigid regular pentagon formation size initial positions positions fig target encircling circular motion initial positions steadystate positions moreover choose following eigenstructure eigenstructure eigenvalues eigenvalues eigenvectors compute control gain matrix thus topology determined nodes two roots different sizes obtain control simulating system initial condition result displayed fig pentagons specified sizes formed consider task target encircling solve circular motion introduced section suppose static target say constantly zero goal make agents circle around treat target part system hence augmented diag diag eigenvectors thus desired formation regular pentagon target center pentagon moreover eigenvectors resulting topology contain spanning tree target root corresponding formation vector eigenvalue hence agents perform circular motion rate since center move center root makes first agents encircle target compute control gain matrix assign eigenstructure indeed corresponding topology spanning tree whose root simulating system initial condition result displayed fig observe target stays put initial position agents circle around finally present empirical study computation time synthesizing feedback matrix particular compare centralized synthesis hierarchical synthesis section result listed table different numbers hierarchical synthesis partition agents way number computation done matlab laptop intel core cpu memory table omparison computation time unit seconds agent centralized method hierarchical method sec groups number agents group balanced make small agents partitioned groups agents agents partitioned groups agents plus groups observe hierarchical synthesis significantly efficient centralized one efficiency increases number agents increases particular agents seconds needed hierarchical approach might well sufficient many practical purposes viii oncluding emarks proposed eigenstructure assignment based approach synthesize state feedback control solving formation problems relation eigenstructures used control synthesis resulting topologies among agents characterized special 
topologies designed choosing appropriate eigenstructures general cases initial coupling arbitrary exist agents studied hierarchical synthesis procedure presented improves computational efficiency approach extended achieve rigid formation circular motion view proposed approach multiagent formation control complimentary existing mainstream approach rather opposed indeed approach successful produce scalable control strategies effective possibly topologies nonlinear agent dynamics robustness issues like communication failures cases difficult dealt approach hand design generally challenging requiring significant insight problem hand possibly many design process contrast topdown design straightforward automated algorithms hence suggest following control researcher engineer faces distributed control design problem achieving new cooperative tasks one start linear version problem try approach derive solution ideas insights gained solution one may try design possibly nonlinear cases future work aim apply eigenstructure assignment based approach solve complex cooperative control problems systems particular immediate goals achieve formations three dimensions obstacle avoidance abilities well deal agents heterogeneous possibly dynamics ppendix provide proof proposition first briefly review eigenstructure assignment method dealing repeated eigenvalues generalized eigenvectors lemma consider system suppose controllable kerb let set positive integers satisfying set complex numbers vkdk set linearly independent vectors feedback matrix every vij vij wij lemma provides necessary sufficient condition assigning repeated eigenvalues eigenvector generalized eigenvectors vidi corresponding jordan block matrix condition holds thus exists may constructed following procedure compute following matrices satisfying needs computed following vector chain vidi pidi find vectors pidi generate new vector chain follows widi pidi iii compute satisfying vij wij solution exists alter one vectors pij step ready prove proposition proof proposition consider system assign following eigenstructure eigenvalues eigenvectors first step need compute obtain step hence verified together satisfies first equation second step derive step chain find hence obtain chain together verified satisfy hence follows lemma eigenstructure assigned matrix proposition problem solved finally compute feedback matrix let since obtain therefore matrix corresponding graph line topology eferences jadbabaie lin morse coordination groups mobile autonomous agents sing nearest neighbor rules ieee trans autom control vol fax murray consensus cooperation networked systems proc ieee vol ren beard distributed consensus cooperative control theory applications springer verlag bullo distributed control robotic networks princeton university press mesbahi egerstedt graph theoretic methods multiagent networks prenceton university press anderson fidan hendrickx rigid graph control architectures autonomous formations ieee control syst vol fax murray information flow cooperative control vehicle formations ieee trans autom control vol global robust stabilizatioin relative sensing networks automatica vol krick broucke francis stabilisation infinitesimally rigid formations networks int control vol cao morse anderson maintaining directed triangular formation mobile autonomous agents commun inform vol basiri bishop jensfelt distributed control triangular formations constraints syst control vol coogan arcak scaling size formation using relative position feedback automatica vol lin 
wang han distributed formation control systems using complex laplacian ieee trans autom control vol bai arcak wen adaptive design reference velocity recovery motion coordination syst control vol ding yan lin collective motions formations pursuit strategies directed acyclic graphs automatica vol moore flexibility offered state feedback multivariable systems beyond closed loop eigenvalue assignment ieee trans autom control vol klein moore eigenvector assignment state feedback ieee trans autom control vol andry shapiro chung eigenstructure assignment linear systems ieee trans aerospace electronic systems vol liu patton eigenstructure assignment control system design wiley cai motoyama eigenstructure assignment synthesis consensus algorithms proc japan joint autom control kobe japan motoyama cai synthesis formation control eigenstructure assignment based approach proc american control seattle iwasaki eigenstructure assignment applicaiton consensus linear heterogeneous agents proc conf decision control osaka japan wonham pole assignment controllable linear systems ieee trans autom control vol golub van loan matrix computations johns hopkins university press kim sugie cooperative control task based cyclic pursuit strategy automatica vol lan yan lin distributed control cooperative target enclosing based reachability invariance analysis syst control vol
| 3 |
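The synthesis procedure described in the row above — choose eigenvalues λ_i and eigenvectors v_i, find companion vectors w_i with (A − λ_i I)v_i + B w_i = 0, then recover a gain K from K v_i = w_i so that (A + BK)v_i = λ_i v_i — can be illustrated numerically. The Python sketch below is a minimal reconstruction of that classical eigenstructure-assignment recipe (in the spirit of the Moore and Andry–Shapiro–Chung references cited in the row), not the authors' code: the matrices A and B, the eigenvalue list, and the least-squares projection used to shape eigenvectors are illustrative assumptions, and complex eigenvalues (as needed for the circular-motion case) would have to be assigned in conjugate pairs to keep K real.

```python
import numpy as np
from scipy.linalg import null_space

def eigenstructure_assignment(A, B, eigvals, desired_vecs=None):
    # For each desired eigenvalue lam, the achievable eigenvectors v are exactly
    # those with [A - lam*I, B] @ [v; w] = 0 for some w; stacking the chosen
    # pairs and solving K V = W then gives (A + B K) v_i = lam_i v_i.
    n, m = B.shape
    V_cols, W_cols = [], []
    for i, lam in enumerate(eigvals):
        S = null_space(np.hstack([A - lam * np.eye(n), B]))   # basis of pairs [v; w]
        if desired_vecs is not None:
            # project the desired (formation) eigenvector onto the achievable subspace
            c, *_ = np.linalg.lstsq(S[:n, :], desired_vecs[:, i], rcond=None)
            z = S @ c
        else:
            z = S[:, 0]
        V_cols.append(z[:n])
        W_cols.append(z[n:])
    V, W = np.column_stack(V_cols), np.column_stack(W_cols)
    K = W @ np.linalg.inv(V)    # requires the chosen v_i to be linearly independent
    return K, V

# toy single-input example with placeholder values (not the pentagon scenario)
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
K, V = eigenstructure_assignment(A, B, [-1.0, -2.0])
print(np.round(np.linalg.eigvals(A + B @ K), 6))   # expect [-1, -2]
```

With a single input the eigenvectors are fixed up to scale, so the shaping step only matters for multi-input systems such as the multi-agent stack considered in the row; there, passing the stacked formation vectors as `desired_vecs` mirrors the "assign formation vector as eigenvector" step of the synthesis.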
sep graphs hyperbolic groups limit set intersection theorem pranab sardar abstract define notion limit set intersection property collection subgroups hyperbolic group namely hyperbolic group collection subgroups say satisfies limit set intersection property given hyperbolic group admitting decomposition finite graph hyperbolic groups structure embedded condition show set conjugates vertex edge groups satisfy limit set intersection property introduction limit set intersection theorems first appear work susskind swarup context geometrically finite kleinian groups later anderson anda andb undertook detailed study general kleinian groups context gromov hyperbolic groups true quasiconvex subgroups see lemma recently yang looked case relatively quasiconvex subgroups relatively hyperbolic groups see however theorem false general subgroups hyperbolic groups characterizations known pair subgroups hyperbolic group guarantee motivates look subgroups quasiconvex subgroups satisfy limit set intersection property starting point following celebrated theorem bestvina feighn theorem suppose graph hyperbolic groups embedded condition hallways flare condition fundamental group say graph groups hyperbolic graphs groups briefly recalled section many examples hyperbolic groups admitting decomposition graphs groups vertex edge groups quasiconvex nevertheless terminologies theorem theorem set conjugates vertex edge groups satisfy limit set intersection property note special case theorem already known results ilya kapovich author showed given graph hyperbolic groups embedded condition fundamental group turnns hyperbolic theorem date march mathematics subject classification primary key words phrases hyperbolic groups limit sets theory pranab sardar vertex groups quasiconvex subgroups hence conjugates vertex groups satisfy limit set intersection property case acknowledgments work started author post doctoral term university california davis author would like thank michael kapovich many helpful discussions regard author would also like thank mahan helpful discussions ilya kapovich useful email correspondence research supported university california davis partially supported dst inspire faculty award dst boundary gromov hyperbolic spaces limit sets subspaces assume reader familiar basics gromov hyperbolic metric spaces coarse language shall however recall basic definitions results explicitly used sections follow details one referred notation convention section shall assume hyperbolic metric spaces proper geodesic metric spaces use mean depending context hausdorff distance two subsets metric space denoted subset metric space denote assume groups finitely generated graph shall denote vertex edge sets respectively definition suppose group generated finite set let path joining two vertices let consecutive vertices let shall say word labels path also given free group image natural map called element represented definition see let hyperbolic metric space base point gromov boundary equivalence classes geodesic rays two geodesic rays said equivalent equivalence class geodesic ray denoted unbounded sequence points say converges boundary point following holds let geodesic joining subsequence contains subsequence uniformly converging compact sets geodesic ray case say limit write limit set subset set denote set following lemma basic exercise hyperbolic geometry mention without proof basically uses thin triangle property hyperbolic metric spaces see chapter exercise lemma suppose two sequences hyperbolic metric space converging 
points bounded graphs hyperbolic groups limit set intersection theorem lemma natural topology boundary proper hyperbolic metric space respect becomes compact space embdedding proper hyperbolic metric spaces induces topological embedding homeomorphism refer reader proposition theorem chapter proof lemma definition map two metric spaces said proper embedding implies family proper embeddings metric spaces indexing set said uniformly proper dyi implies dxi proper embeddings proper hyperbolic metric spaces say map exists gives rise continuous map means given sequence points converging sequence converges point resulting map continuous note terminology slightly different mitra following lemma immediate lemma suppose hyperbolic metric spaces proper embedding map exists mention following lemma brief remarks proofs since states standard facts hyperbolic geometry lemma suppose proper metric space two sequences let two geodesics joining respectively subsequences natural numbers sequences geodesics converge uniformly compact sets two geodesics joining respectively moreover constant depending sequences points pnk qnk pnk qnk pnk qnk conclusion remains valid replace words joined respectively constant depending subsequence natural numbers sequences points pnk qnk pnk qnk pnk qnk proof see lemma lemma theorem stability chapter precisely proving may choose geodesic segments connecting endpoints respectively apply geodesics extract subsequences respectively converging uniformly compact sets find two sequences points pranab sardar pnk qnk satisfying finally stability pnk qnk pnk pnk qnk qnk uniformly small prove definition suppose gromov hyperbolic group let collection subgroups say limit set intersection property state two elementary results limit sets future use lemma suppose hyperbolic group subset roof follows lemma one notes acts naturally cayley graph isometries thus homeomorphisms lemma graphs groups presume reader familiar theory however briefly recall concepts shall need details one referred section serre book trees although always work nonoriented metric graphs like cayley graphs need oriented graphs possibly multiple edges adjacent vertices loops describe graphs groups hence following definition quoted definition graph pair together two maps edge refer origin terminus edge edge opposite orientation write refer set vertices set edges shall denote edge without orientation definition graph groups consists following data finite graph defined edge group respectively together two injective homomorphisms following conditions hold shall refer maps canonical maps graph groups shall refer groups vertex groups edge groups respectively topological motivations graph groups following definition fundamental group graph groups one referred definition fundamental group graph groups suppose graph groups finite connected oriented graph let graphs hyperbolic groups limit set intersection theorem maximal tree fundamental group defined terms generators relators follows generators elements disjoint union generating sets vertex groups set oriented edges relators four types coming vertex groups edge oriented edge bass serre tree graph groups suppose graph groups let maximal tree definition let fundamental group graph groups tree say tree vertex set edge set gee edge relations given ggee gegt ggee ggo note tree metric spaces graph groups given graph groups maximal tree one form natural way graph say fundamental group acts isometries properly cocompactly admits simplicial lipschitz map construction described follows assume 
finite connected graph vertex groups edge groups finitely generated fix finite generating set one vertex groups similarly edge groups fix finite generating set assume let generating set shall include nonoriented edges define disjoint union following graphs introducing extra edges follows vertex spaces ggv let denote subgraph vertex set coset ggv two vertices connected edge iff shall refer subspaces vertex spaces edge spaces similarly edge ggee let denote subgraph vertex set gegee two vertices gex gey connected edge iff shall refer subspaces edge spaces extra edges connect edge spaces vertex spaces follows edge ggee connecting vertices ggo gegt gee join gex gegee gex gegt ggo edges length define setting gex gex gex natural simplicial map precisely first barycentric subdivision map coarse analog tree metric spaces introduced see also abuse terminology shall refer also tree metric spaces graphs recall notations definitions collect basic properties note intrinsic path metric denoted similarly pranab sardar use intrinsic path metric follows intrinsic metrics metric spaces isometric cayley graphs respectively therefore vertex edge groups gromov hyperbolic vertex edge spaces uniformly hyperbolic metric spaces lifts geodesics suppose let denote geodesic joining section lift set theoretic section also embedding general interested defining sections vertices hallways flare condition say satisfies hallways flare condition numbers given geodesic two lifts max graphs groups embedded conditions suppose graph groups vertex edge group finitely generated say satisfies embedded condition inclusion maps edge groups vertex groups embeddings respect choice finite generating sets vertex edge groups clear graph groups embedded condition maps uniform embeddings lemma naturally defined proper cocompact action map proof note obtained disjoint union cosets vertex edge groups group natural action disjoint union also easy check action adjacent vertices adjacent vertices thus simplicial clearly natural map show action proper enough show vertex stabilizers uniformly finite however point ggv fixed element fixes ggv however stabilizers ggv simply ggv action ggv ggv fixed point free hence fixed point free cocompact follows fact cofinite fix vertex vertex look corresponding vertex space let denote let denote orbit map lemma orbit map since proper cocompact lemma lemma constant vertex space ggv ggv ggv follows ggv roof proving lemma let geodesic joining identity element path joining ggv hence one choose maximum lengths following corollary immediate consequence two lemmas graphs hyperbolic groups limit set intersection theorem corollary vertex spaces edge spaces uniformly properly embedded notation shall use denote canonical inclusion vertex edge spaces let ggv follows corollary induces coarsely ggv namely send ggv point corollary shall denote ggv main theorem rest paper shall assume hyperbolic group admits graph groups decomposition embedded condition vertex edge groups hyperbolic let tree graph groups aim show family subgroups satisfies limit set intersection property theorem suppose hyperbolic group admits decomposition graph hyperbolic groups embedded condition suppose correspoding tree idea proof pass tree space using orbit map defined previous section use techniques following theorem important ingredient proof theorem inclusion maps admit maps recall connected edge natural maps know maps uniform embeddings assume embeddings induce embeddings lemma therefore get partially defined maps domain let denote definition 
definition domain say flow flowed suppose consecutive vertices geodesic say point flowed case called flow xwn since maps injective domains flow unique exists lemma suppose flow let geodesic vertex space xwi pranab sardar roof enough check adjacent vertices suppose edge connecting lemma follows stability hyperbolic space xwi fact every point distance xwi corollary maps points point lemma let suppose joined edge suppose flowed let geodesic ray set bounded roof bounded limit set flowed lemma contradiction proves lemma briefly recall ladder construction mitra crucial proof main theorem shall need proof theorem mitra ladder fix let finite geodesic segment shall define set union vertex space geodesics subtree containing construction inductive inductively one constructs centred corresponding finitely many edges incident set terminal points edges start diameter least case edge connecting say choose two points say maximum choose define geodesic joining suppose vertex adjacent belongs diameter least edge connecting diameter least define one chooses two points maximum let define geodesic joining theorem mitra constants depending defining parameters tree metric spaces following holds geodesic segment corresponding ladder subset prove theorem mitra defines coarse lipschitz retraction map recall proof works one referred however shall subsequently assume appropriate choices made context ladders uniformly quasiconvex subsets coarsely lipschitz retraction ladders suppose geodesic let geodesic know coarsely well defined graphs hyperbolic groups limit set intersection theorem nearest point projection see proposition chapter define connect geodesic since tree unique geodesic let end point geodesic let edge geodesic incident going mitra proved case projection uniformly small follows careful choice see lemma choose point projection define theorem map coarsely lipschitz retraction words retraction constants using theorems mitra shall prove converse corollary last ingredient proof theorem proposition suppose points map point maps respectively flowed roof using lemma assume point flowed along similarly flowed direction let geodesic rays respectively let first edges points along direction respectively fev xev xew bunded sets lemma theorem let respectively ladders uniformly subsets theorem hence uniform ambient ladders joining respectively choose one one let call respectively since limit point lemma uniform constant subsequence natural numbers points xnk ynk xnk ynk xnk ynk let vertices geodesic adjacent respectively let avk awk respectively remove edge space xev remaining space two one containing containing call respectively note since diameter avk uniformly bounded nonempty portion contained travels uniformly bounded implies portion joining avk uniformly small avk hence infinitely many xnk ynk since dealing tree spaces xnk ynk implies points fev xev xnk thus flowed lemma contradiction proves proposition proof theorem suppose gvi implies gwi gvi also gvi gvi lemma pranab sardar hence need show using lemma therefore enough show clearly thus need show given element map maps lemma lemma coset vertex group mapped uniformly hausdorff close coset hence induces uniform ggv ggv avoiding confusion let denote subset follows mapped element maps hence proposition flowed say lemma image map hence replace assume flows lemma geodesic rays xwi pulling back geodesics get uniform rays say let join geodesic suppose word labeling geodesic since finitely many possibilities words constant subsequence wnk let pnk qnk let group element 
represented wnk xhk since connects two elements hence thus finally since pnk pnk completes proof following corollary pointed mahan use notations main theorem corollary gwi two quasiconvex subgroups roof assume gvi gwi gvi let gvi may construct new finite graph starting adding two vertices connected edge let call graph define new graph groups keeping definition setting gui gei defining inclusion map gvi produces new graph groups embedded condition fundamental group isomorphic suppose tree new graph groups apply theorem finish proof example give example intersection limit sets equal limit set intersection suppose hyperbolic group graphs hyperbolic groups limit set intersection theorem infinite normal subgroup torsion let image element infinite order let whence however infinite normal subgroup thus end question question hyperbolic group admits decomposition graph hyperbolic groups embedded condition vertex group describe pointed ilya kapovich first interesting case question considered hyperbolic strictly ascending hnn extension finitely generated nonabelian free group injective surjective endomorphism one would also like describe case references anda anderson limit set intersection theorems finitely generated kleinian groups math res lett vol limit set intersection theorems kleinian groups conjecture andb susskind comput methods funct theory october vol issue bestvina feighn combination theorem negatively curved groups differential vol bridson haefliger metric spaces nonpositive curvature grundlehren der mathematischen wissenchaften vol gitik mitra rips sageev widths subgroups trans ams gromov hyperbolic groups essays group theory gersten msri springer verlag hatcher algebraic topology cambridge university press kapovich combination theorem quasiconvexity int algebra comput issue mitra maps trees hyperbolic metric spaces differential geom mahan mitra height splittings hyperbolic groups proc indian acad sciences serre trees isbn susskind swarup limit sets geometrically finite hyperbolic groups amer vol scott wall topological methods group theory homological group theory wall london math soc lecture notes series vol cambridge univ press yang limit sets relatively hyperbolic groups geom dedicata vol issue indian institute science education research mohali
| 4 |
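Because the flattened text of the row above drops all mathematical notation, a short LaTeX restatement of its central definition and main theorem may help; it is a sketch with notation supplied by the editor (Λ(H) denotes the limit set of the subgroup H in the Gromov boundary), and reading the garbled "embedded condition" as the qi (quasi-isometrically) embedded condition of the Bestvina–Feighn setup is an inference from context rather than something visible in the extracted text.

```latex
% Editor's reconstruction of the statements; assumes standard amsthm environments.
\begin{definition}
Let $G$ be a hyperbolic group and $\{H_i\}_{i\in I}$ a collection of subgroups of $G$.
The collection satisfies the \emph{limit set intersection property} if
\[
  \Lambda(H_i)\cap\Lambda(H_j)\;=\;\Lambda(H_i\cap H_j)\qquad\text{for all } i,j\in I.
\]
\end{definition}

\begin{theorem}
Suppose a hyperbolic group $G$ admits a decomposition as a finite graph of
hyperbolic groups satisfying the qi-embedded condition. Then the set of all
conjugates of the vertex and edge groups of the decomposition satisfies the
limit set intersection property.
\end{theorem}
```

As the row itself notes through its closing example (an infinite normal subgroup paired with a suitable cyclic subgroup), the property can fail for arbitrary pairs of subgroups, which is why the theorem restricts attention to conjugates of vertex and edge groups.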
statistical characterization localization performance wireless networks christopher lone graduate student member ieee oct harpreet dhillon member ieee buehrer fellow ieee abstract localization performance wireless networks traditionally benchmarked using lower bound crlb given fixed geometry anchor nodes target however endowing target anchor locations distributions paper recasts traditional scalar benchmark random variable goal work derive analytical expression distribution random crlb context positioning derive distribution work first analyzes crlb affected order statistics angles consecutive participating anchors internodal angles analysis reveals intimate connection second largest internodal angle crlb leads accurate approximation crlb using approximation expression distribution crlb conditioned number participating anchors obtained next conditioning eliminated derive analytical expression marginal crlb distribution since marginal distribution accounts target anchor positions across numbers participating anchors therefore statistically characterizes localization error throughout entire wireless network paper concludes comprehensive analysis new paradigm index terms lower bound localization order statistics poisson point process stochastic geometry time arrival toa mutual information wireless networks authors wireless bradley department electrical computer engineering virginia tech blacksburg usa email olone hdhillon buehrer paper presented part ieee icc workshop advances network localization navigation anln paris france ntroduction global positioning system gps decades standard mechanism position location anywhere world however deployment locations recent emerging wireless networks begun put strain effectiveness gps localization solution example populations increase precipitating expansion urban environments cell phone use urban canyons well indoors continually increasing rise gpsconstrained environments highlight need fall back existing network infrastructure localization purposes additionally emergence wireless sensor networks wsns increased emphasis energy efficiency possibility equipping potential target node gps chip quickly becomes impractical furthermore deployment networks environments necessitates reliance terrestrial network localization solution thus localization within network performed network absence gps begun garner attention benchmarking localization performance wireless networks traditionally done using lower bound provides lower bound position error unbiased estimator common practice analyze crlb fixed scenarios anchor nodes target strategy produces value crlb specific scenario analyzed idea provide insight fundamental limits localization performance rather limited take account possible setups anchor nodes target positions within network order account possible setups useful appeal field stochastic geometry whereas past stochastic geometry applied towards study connectivity capacity outage probability fundamental limits wireless networks however apply towards study localization performance wireless networks modeling anchor node target placements point processes opens possibility characterizing crlb setups anchor nodes target positions thus crlb longer fixed value rather random variable conditioned number participating anchors randomness induced inherent randomness anchor positions upon marginalizing number participating anchors resulting marginal distribution crlb characterize localization performance throughout entire wireless network related work quest distribution 
localization performance comprises two main steps first step involves finding distribution crlb conditioned number participating anchor nodes second step involves finding probability given number anchors participate localization procedure regards first step several attempts literature obtain conditional distribution excellent first attempt found series papers approximations conditional distribution presented rss toa aoa based localization respectively approximations obtained asymptotic arguments driving number participating anchor nodes infinity approximate distributions accurate larger numbers participating anchors less ideal lower numbers however desirable conditional distribution accurate lower numbers participating anchors since dominate case terrestrial networks cellular conditional distribution crlb also explored authors able derive true expression conditional distribution clever crlb using complex exponentials distribution used derive analyze localization outage probability scenarios fixed number randomly placed anchor nodes expression represents true conditional distribution complexity puts disadvantage simpler approximations discussed section second step involves finding participation probability given number anchor nodes explored work authors modeled cellular network homogeneous poisson point process ppp consequently allowed derive bounds probability mobile device hear least base stations participation localization employing dominant interferer analysis able derive accurate expression probability easily extended give probability hearing given number anchor nodes participation localization procedure term hearability hearable used describe anchor nodes able participate localization procedure received sinr target threshold contributions paper proposes novel statistical characterization wireless network ability perform localization using stochastic geometry model target anchor node placements throughout network using crlb localization performance benchmark paper presents analytical derivation crlb distribution offers many insights localization performance within wireless networks previously attainable lengthy network simulations thus distribution offers means comparing networks terms localization performance enabling calculation localization statistics avg localization error unlocks insight changing network parameters sinr thresholds processing gain frequency reuse affect localization performance throughout network provides network designers analytical tool determining whether network meets example fcc standards pursuit distribution paper makes four key contributions first work presents analysis crlb affected order statistics internodal angles analysis reveals intimate connection second largest internodal angle crlb leads accurate approximation crlb second approximation used obtain distribution crlb conditioned number hearable anchors although distribution crlb approximation simplicity accuracy offer clear advantages true distribution presented approximate distribution presented advantages discussed section third work takes major step combining conditional distribution crlb distribution number participating anchors eliminates conditioning given number anchors allowing analytical expression marginal crlb distribution obtained since marginal distribution simultaneously accounts possible target anchor node positions across numbers participating anchor nodes therefore statistically characterizes localization error throughout entire wireless network thus signals departure existing 
literature additionally since two component distributions parameterized various network parameters resulting marginal distribution using square root crlb performance benchmark however state crlb unnecessarily clutter discussion described section crlb also parameterized network parameters consequently final contribution involves comprehensive analysis new crlb paradigm examine varying network parameters affects distribution crlb thereby revealing network parameters affect localization performance throughout network roblem etup section details assumptions propose determining network layout well localization procedure additionally describe important notation definitions used throughout paper conclude assumptions impact network setup network setup localization assumptions assumption assume ubiquitous wireless network anchor nodes distributed according homogeneous ppp assume potential targets distributed likewise anchor target point processes assumed independent remark assumption modeling wireless networks common literature assumption assume positioning technique used within twodimensional network remark although tdoa also commonly implemented toa represents viable approach additionally offers lower bound tdoa performance furthermore assumption eliminates need clock synchronization target anchor nodes assumption range measurements independent exhibit normally distributed range error remark reader familiar wireless positioning recognize classic los assumption however familiar localization terrestrial networks realize nlos measurements common thus move forward los assumption order make progress new paradigm see section adapt model accommodate nlos measurements selecting ranging errors consistent nlos propagation table ummary otation symbol description probability distribution function pdf differential entropy unit step function indicator function true otherwise trace matrix number participating anchors localization performance benchmark anchor node angle random variable internodal angle random variable order statistic sequence rvs path loss exponent sinr threshold frequency reuse factor symbol description cumulative distribution function cdf mutual information normal distribution mean variance probability event floor function transpose matrix max anchors tasked perform localization fixed localization error meters realization realization density ppp anchor locations processing gain average network load network traffic common range error variance assumption range error variance common among measurements participating anchor nodes considered known quantity remark assumption often made literature localization although realistic every scenario allows gain initial insight problem relaxed future work notation notation used throughout paper found table localizability define terms localizable unlocalizable introduced definition say target localizable detects localization signals sufficient number anchor nodes position determined without ambiguity remark assumption implies also define unlocalizable negation definition purposes setup subsequent derivations initially consider scenarios target localizable avoid unnecessary complication later section follow account scenarios target unlocalizable modify results accordingly impact assumptions assumptions place describe impact network setup assumption since anchors potential targets distributed independent homogeneous ppps stationary without loss generality may perform analysis typical target placed origin due fact independence stationarity assumptions imply 
matter target placed network distribution anchors relative target appears next assume number hearable anchors fixed value begin numbering anchors terms increasing distance origin target position depicted fig particular realization homogeneous ppp four hearable anchor nodes fig also depicts corresponding angles measured counterclockwise labeled accordingly assumption implies angles hearable anchors random variables come uniform distribution definition target placed origin term anchor node angle defined angle corresponding hearable anchor node measured counterclockwise note unif later sections see distances anchor nodes target important determining many anchors able participate given localization procedure however assumption particular anchor nodes identified participating localization procedure endowed common range error variance see following section assumption along assumptions lead crlb fixed dependent angles participating anchor nodes internodal angles thus since crlb expression dependent internodal angles distances participating anchor nodes target need considered hence may view participating anchors placed circle origin depicted fig ppp realization fig fig nitial abeling cheme dots represent fig quivalent etup realization particular realization anchors placed according fig participating anchor nodes homogeneous ppp origin represents location ternodal angles trace considered whereas target anchors participating localization procedure distances target realizations rvs labeled increasing order distance given note anchor origin corresponding anchor node angles labeled node angle order stats renumber participating anchors accordingly note realization terms increasing angle starting next formally define term internodal angle since anchor node angles unif may examine corresponding order statistics definition thus order statistics participating anchor node angles effectively renumber nodes terms increasing angle starting counterclockwise also depicted fig definition participating anchor nodes considered according anchor node angle order statistics internodal angle defined angle two consecutive participating anchor nodes remark since internodal angles functions rvs rvs thus may also consider order statistics order statistics internodal angles depicted particular ppp realization fig summary fig depicts example typical setup realization given assumptions taken consideration iii erivation etwork ide crlb istribution section first formally define localization performance benchmark square root crlb using definition assuming random placement anchors well random describe work generalizes localization performance results currently literature follows present steps necessary derive marginal distribution localization performance benchmark localization performance benchmark consider traditional localization scenario number participating anchor nodes positions well target position fixed represent set coordinates anchors coordinates target denoted next assumptions range measurements target participating anchors given measured distance divided measured distance true distance remark note assumption common among range measurements toa set toa considered thus moving forward may consider range measurements always regardless whether toa used continuing assumption enables likelihood function easily written product denoting vector range measurements likelihood function exp likelihood function obtain following fisher information matrix fim cos cos sin cos sin sin cos sin remark note target placed origin angles 
particular realization anchor node angles definition taking inverse fim crlb unbiased estimator target position given crlb defined definition define localization performance benchmark square root crlb remark benchmark often referred position error bound peb literature conclude notice expression obtained cos sin function anchor node angles departure traditional localization setup previous section assumed traditional setup number participating anchor nodes positions well target position fixed section however assume random briefly describe random placement anchors impacts localization performance benchmark randomness signals departure existing literature begin describing random placement anchors affects localization performance benchmark accomplished invoking assumption examining impact expression given recall section assumption implies anchor node angles thus realizations replaced random variables definition since function random variables becomes random variable may seek distribution work past sought distribution always remained one implicit assumption fixed therefore results presented thus far literature applied localization setups fixed number anchor nodes hence applicable address issue consider random variable whose distribution statistically quantifies number anchor nodes participating localization procedure new interpretation consequently allow decouple thereby enabling marginal distribution account possible positioning scenarios within network addition contributions outlined section taking advantage new interpretation subsequently obtain marginal distribution main contribution setting work apart existing literature approximation crlb deriving conditional distribution given use section acquire approximation expression consequently allow accurate tractable expression conditional distribution obtained approximation preliminaries goals facilitate search accurate approximation recall terms random variables thus rewritten using internodal angles definition given definition define terms underneath denominator random variable thus would like find approximation comprises two key traits allows straightforward transformation random variables number terms change would ideally involve single term simultaneously sacrifice accuracy approximation preserve much information possible setup anchors implying approximation dominate contribute total value initial approach intuition trying find approximation satisfies traits consider following possibilities first consider approximating sine squared arbitrary internodal angle sum consecutive internodal angles starting angle arbitrary possible approximations may seem like reasonable candidates satisfying first trait unfortunately fall short satisfying second trait see illustrative examine fig different anchor nodes looking unordered internodal angle every realization little knowledge gained total setup anchors example one realization arbitrary internodal angle examined might large would therefore give strong indication rest anchors placed however another realization internodal angle might small thus giving little information placement remaining anchors hence general arbitrary internodal angles provide accurate approximations due inconsistency describing anchor node setup consequently leads sine squared terms inability capture total value across realizations anchors given quantitative approach using mutual information taking advantage intuition gained clear would like approximation utilizes angles tend consistently dominate given setup therefore makes sense 
examine larger internodal angles thus use internodal angle order statistics follows naturally since ideally desire approximation examine possibility using largest second largest third largest internodal angles note larger internodal angles intuitively contain information setup anchors consequently since greatly restrict placement remaining anchors since order statistic approximations might seem viable examine sums larger internodal angle order statistics following two reasons would lead complex expression conditional distribution desired offer gain accuracy using sine squared single internodal angle order statistic evidenced simulation mutual information bits number participating anchor nodes fig ustifying pproximations hrough utual nformation mutual informations calculated numerically computing necessary distributions generated using monte carlo simulation million anchor node realizations bin width distributions chosen matlab spline option used interpolate integrands supports given furthermore adopt convention based continuity arguments qualitative notion information thus turn towards quantitative notion order justify use one approximations towards end utilize concept mutual information reason behind choice correlation example mutual information captures linear nonlinear dependencies two random variables since zero two random variables independent hence examine mutual information random variables may quantify approximation carries information thus condition equaling integer calculate differential entropies given fdw supports respectively mutual information approximations given fig versus fig evident mutual information approximation highest across numbers participating anchors shown investigating high mutual information explore reasoning behind result examine effect total value begin rewriting follows proposition random variable definition equivalently expressed proof see appendix separating terms internodal angle order statistics total sum proposition makes clearer approximations may affect total value reveal effects particular present following lemma along corollaries lemma cdf second largest order statistic internodal angles conditioned given min support note since would exist proof refer reader conference version paper appendix corollary given finite expected value second largest order statistic internodal angles conditioned given proof see appendix corollary given finite variance conditioned given var proof note var derivation analogous proof corollary next plot corollary one two standard deviations given fig versus see second largest internodal angle centered concentrated around suggesting concentrated maximum one implies majority anchor node placements dominant term expression prop thus term tend contribute given term could contribute total value especially true small values focus also low intuitively dominant angle places greatest constraints remaining angles determined restricts thus gives greatest sense total setup anchors note considering order statistics constraints placed remaining angles pronounced furthermore examining different realizations anchors fig example along prop one see small large large value follows thus consistency dominant term prop along intuitive correlation offer supporting evidence higher low summary mutual information proved utility revealing perhaps best approximation desirable lower values since possesses two desirable traits approximation discussed beginning section henceforth use approximation completing approximation complete approximation 
consequently remains ensure range possible values support ensure ultimate approximation produce range values true order accomplish approximate scaled version thus search value constant yields desired support since dmax follows support lemma order support equal simply need set dmax value dmax presented following lemma abscissa radians increasing solid true dashed theorem number participating anchor nodes meters fig econd argest nternodal ngle fig accuracy heorem true conditional cdf tatistic figure gives sense concentration given generated using monte carlo simulation distribution around given cor var cor million random setup realizations internodal der angles note lemma let finite given maximum value dmax proof see appendix thus completes approximation lastly substituting approximation expression finally yields approximation localization performance benchmark approximated sin stated lemma conditional crlb distribution theorem localization performance benchmark given approximation cdf conditioned min min support proof see appendix remark although theorem conditional distribution approximation provides two clear advantages true conditional distribution presented approximate conditional distribution presented first theorem offers simple algebraic expression involving finite sums opposed rather complex expression involving improper integral products scaled bessel functions second theorem remarkably accurate lower numbers participating anchor nodes see fig comes contrast approximate distribution presented derived asymptotically therefore accurate higher numbers participating anchors selective accuracy theorem desirable since device likely hear lower numbers participating anchors especially terrestrial wireless distribution number participating anchors next step needed achieve goal find distribution number participating anchors section build upon localizability results order obtain distribution towards end present relevant theorems work modify use finally conclude discussion applicability results overview localizability work recall section goal derive expression probability mobile hear least base stations participation localization procedure cellular network derive expression authors assumed base stations placed according homogeneous ppp examined sirs base station signals received typical user placed specifically examined sir base station denoted sir since used directly determine since sir depends locations interfering base stations base stations placement according ppp implies sir becomes random variable consequently distribution also becomes function ppp density additionally authors incorporate wireless networks mean wireless network setup mobile devices fixed access points separate channels sir signal interference ratio noise ignored since assumes network network loading parameter sir means given base station considered active interfering base station signal probability furthermore sir also function pathloss distances base stations target sir statistically characterized authors able determine noting sir sir threshold detection processing gain mobile assumed also average effect small scale fading threshold thus since sir must also depend network parameters described denote dependency continuing localizability results mention one last caveat regarding ppp density shadowing present easily incorporated ppp network model small displacements base station locations results new ppp density accounts effect shadowing new density given assumed random variable representing effect shadowing signal base 
station origin thus using incorporate shadowing model presented section localizability results section present main theorem enable obtain lemma theorem probability mobile device hear least base stations participation localization procedure given random variable denoting number active participating note equality holds rare corner cases however probability cases occurring vanishingly small thus little impact accuracy subsequent localizability results assumed behavior implies distributed normally expressed base stations interfering note binomial additionally trivial case define note linear terms remark theorem derived following assumptions dominant interferer networks refer reader section details regarding assumptions consequent derivation theorem distribution localizability results finally present distribution number participating anchors theorem pdf given support probabilities given lemma distribution frequency reuse using theorem may obtain another expression incorporates frequency reuse parameter parameter models ability base stations transmit separate frequency bands thereby limiting interference basis easily incorporated model considering independent ppps whose densities original ppp divided thus number participating base stations band total number participating base stations given thus find frequency reuse simply need account combinations participating base stations sum equals given following corollary modification theorem corollary pdf given frequency reuse factor multiplicands given thm support remark corollary reduces theorem corollary may evaluated numerically use recursive function applicability results obtained pdf conclude brief discussion regarding applicability begin taking note support whereas section proceeded assumption target localizable support allows consider cases target unlocalizable thus addressed following section may use cases determine percentage network target unlocalizable lastly note localizability results lemma presented context cellular networks results actually applicable wireless network using downlink measurements long distribution parameters altered accordingly implies distribution corollary also applicability since derived using lemma thus since use corollary along modified theorem derive marginal distribution final distribution also applicable wireless network employing toa localization strategy marginal crlb distribution section modify theorem combine corollary obtain marginal distribution first state one last network assumption often used practice assumption given localization procedure finite number anchor nodes ever tasked transmit localization signals remark anchors tasked mean signals necessarily heard assumption considering scenarios target unlocalizable modify theorem follows new modified conditional distribution previous conditional distribution given theorem predetermined localization error value used account unlocalizable scenarios described detail remark theorem accounted scenarios target localizable modified form however accounts unlocalizable scenarios scenarios modified conditional distribution yields step function valid cdf corresponds deterministic value localization error thus account cases target unlocalizable assigning arbitrary localization error value chosen represent cases ambiguity target position remark possible mobile may hear anchors tasked perform localization procedure case participating anchors likely highest received sir target connectivity information known priori therefore scenario localization performance based anchors 
tasked clearly reflected modified conditional distribution using modified theorem along corollary may obtain distribution localization error entire wireless network theorem marginal cdf localization performance benchmark given theorem given corollary proof multiplying modified conditional distribution given marginal distribution corollary gives joint distribution setting equal particular realization summing realizations gives marginal cdf desired remark first recall conditional distribution approximation given theorem accurate lower values next note declines rapidly increases intuitive result since probability hearing many anchor nodes small wireless networks maximum nodes tasked perform localization procedure assumption thus two facts validate use approximation since cases approximation less ideal paper choose value applies thus large enough allows clear distinction localizable unlocalizable portions network quick examination cdf note however one could account scenarios separately example cellular network one may want choose cell radius since user equipment typically knows cell located large either multiplied considered invalid realistic network assumption consequence theorem also retain accuracy remark conclude noting distribution accounts localization error setups anchor nodes numbers participating anchors placements target anywhere network hence distribution completely characterizes localization performance throughout entire wireless network represents main contribution work umerical nalysis section examine accuracy theorem investigate changing network parameters affects localization performance throughout network description simulation setup discuss parameters fixed across simulations include description simulations conducted fixed parameter choices effect model assumptions simulations consider case cellular network thus place anchor nodes ppp density matches ubiquitous hexagonal grid intersite distances furthermore choose shadowing standard deviation defines density parameter next set pathloss exponent chosen represent pathloss similar seen typical cellular network note pathloss value indicative nlos range measurements inherently part localization cellular networks recall however assumption implied use los measurements thus attempt mimic nlos simulations selecting range error account reasonable delay spread nlos conditions described following section note subject future work seek refinement model incorporating nlos directly range measurements last parameter remains fixed across simulations parameter chosen large enough examination cdf reveal percentage network target unlocalizable towards end sufficient choose simulations choice value arbitrary left network designers would like treat unlocalizable cases increasing frequency reuse abscissa abscissa decreasing solid true dashed theorem solid true dashed theorem meters fig ffect requency euse result meters fig mpact ecreasing etwork oad exposes large impact frequency reuse plot demonstrates improvement localization ization performance throughout network parameters mance due decrease network loading parameters chosen follows plots appearance due discrete conducting simulations true marginal cdf generated simulation positioning scenarios scenario consisted average placement anchor nodes placed according homogeneous ppp target located origin next anchor nodes whose sirs surpassed detection threshold deemed participate localization procedure corresponding coordinates used calculate true value given definition anchor nodes signals threshold 
anchors highest sirs used calculate effect frequency reuse distribution localization error section explore frequency reuse impacts localization performance throughout network simulation well subsequent simulations compares true simulated distribution analytical model theorem parameters fixed levels stated fig parameter varied frequency reuse factor range error standard deviation chosen according detection threshold crlb range estimate see equ assuming channel bandwidth approximately added account reasonable delay spread nlos conditions fig notable impact frequency reuse localization performance localizability small increase frequency reuse portion network target localizable increases astonishing furthermore localization error also reduced although improvement drastic increase localizability additionally frequency reuse increases gains localizability stop gains localization error also declining well thus conclude increase frequency reuse strongly advisable desires increase localization performance within network result coincides seen practice viz lastly note excellent match true simulated distribution analytically derived distribution given theorem see accuracy theorem retained across results section examining effects network loading examine effect network loading localization performance throughout network accomplished varying percentage network actively transmitting interfering localization procedure parameter values fixed chosen manner frequency reuse case distributions plotted fig see decrease network load leads improvement localizability well improvement localization error however improvement pronounced frequency reuse case examining percentile example evident rate improvement localization error declines network load declines well thus since low network traffic usually never desirable network designer looking optimize localization performance may find solace fact gains performance begin decline network loading decreases also impact processing gain section examine effects changing processing gain since perhaps easiest parameter network designer change practice note choose previous simulations fig evident processing gain increases corresponding improvement localizability across network well improvement localization error consequence exists clear sacrificing processing time gains localization performance however appears improvements begin level processing gain promising processing gains higher abscissa abscissa increasing processing gain increasing solid true dashed theorem solid true dashed theorem meters fig ffect ncreasing meters rocessing fig mpact ncreasing ange rror figure highlights exists ing range error results predictable effect localization cessing time localization performance distribution performance throughout network yet effect parameter values localizability parameter values quickly become impractical examining percentile see increase processing gain lead improvement localization error throughout network thus increasing processing gain easily implementable solution achieving moderate gains localization performance range error impact localization performance within network last result attempt mimic effect increasing nlos bias injecting additional range error measurements examining fig note first value chosen frequency reuse simulation therefore subsequent choices represent mimicking impact increasing nlos bias fig see increasing range error effect localizability within network clear analytical model since appear parameter corollary additionally injecting range error 
measurements results predictable effect distribution localization performance also evidenced examining theorem appears scale parameter thus mimicking effects nlos measurements results scaling distribution implying predictable reduction localization performance onclusion paper presents novel parameterized distribution localization error applicable throughout entire wireless network invoking ppp network model well common assumptions toa localization enabled distribution simultaneously account possible positioning scenarios within network deriving result involved two main steps derivation approximate distribution localization performance benchmark conditioned number hearable anchor nodes yielded conditional distribution desirable accuracy tractability properties modification results attain distribution thus using two distributions arrived final marginal distribution also retained parameterizations two component distributions followed numerical analysis distribution localization performance analysis revealed distribution offer accurate baseline tool network designers used get sense localization performance within wireless network also providing insight network parameters change order meet localization requirements since marginal distribution distribution throughout network consequently provides benchmark describing localization performance network employing unbiased efficient localization algorithm amount insights distribution localization error reveals numerous results presented paper begun explore new paradigm hope work spawns additional research new concept future work incorporating nlos measurements accounting collaboration adding localization strategies refine model presented closing work presents initial attempt provide network designers tool analyzing localization performance throughout network freeing lengthly simulations offering accurate analytical solution ppendix roof roposition first helps visualize terms sum grid represent rows represent columns example case gives arguments terms represented grid clarity arrangement evident sum represents process summing row sequentially starting however choose sum terms diagonally starting lowest diagonal working way upward yields considering cases separately may rewrite sin sin sin next note sin sin last two equalities follow definition hence lastly complete proof noting first sum may equivalently expressed replacing internodal angles order statistics ppendix roof orollary using lemma assumption finite pdf conditioned min support lemma next note lower summation limit may rewritten upper summation limit simplified appending indicator function summand gives logically equivalent expression using expression pdf expectation derived follows follows absorbing indicator function integration limits since follows assumption finite derived integration parts ppendix roof emma straightforward application lowest geometric dilution precision gdop presented first since finite continuous real function defined compact subset thus maximum must exist call dmax next assumptions gdop presented written gdop follows lemma assumption assumptions asserts lowest gdop given gdopmin since gdopmin must occur maximum lemma follows ppendix roof heorem approximation defined theorem determine support conditioned know lemma support sin must sin see sin sin sin hence next find cdf conditioned consider following sin follows approximation fact may drop parameter condition since dependency explicit defined theorem use lemma gives theorem desired eferences lone buehrer towards 
characterization localization performance networks random geometries ieee international conference communications icc paris france may baronti pillai chook chessa got wireless sensor networks survey state art zigbee standards comput commun may vieira coelho silva mata survey wireless sensor network devices proc ieee conference emerging technologies factory vol patwari ash kyperountas hero moses correal locating nodes cooperative localization wireless sensor networks ieee signal processing magazine vol july alrajeh bashir shams localization techniques wireless sensor networks international journal distributed sensor networks vol kay fundamentals statistical signal processing upper saddle river vol chang sahai estimation bounds localization first annual ieee communications society conference sensor hoc communications networks ieee secon wang yip yao estrin lower bounds localization uncertainty sensor networks ieee international conference acoustics speech signal processing vol guvenc chong survey toa based wireless localization nlos mitigation techniques ieee communications surveys tutorials vol quarter savvides error characteristics multihop node localization sensor networks ispn proceedings international conference information processing sensor networks haenggi andrews baccelli dousse franceschetti stochastic geometry random graphs analysis design wireless networks ieee sel areas vol haenggi stochastic geometry wireless networks new york cambridge university press huang anderson performance limit sensor localization ieee conference decision control european control conference orlando huang performance limit toa localization international conference control automation robotics vision icarcv guangzhou huang performance limits sensor localization automatica vol zhou shen outage probability localization randomly deployed wireless networks ieee communications letters vol april schloemann dhillon buehrer towards tractable analysis localization fundamentals cellular networks ieee trans wireless vol march code federal regulations service andrews baccelli ganti tractable approach coverage rate cellular networks ieee trans vol dhillon ganti baccelli andrews modeling analysis downlink heterogeneous cellular networks ieee sel areas vol apr blaszczyszyn karray keeler using poisson processes model lattice cellular networks proc ieee infocom apr keeler ross xia wireless network signals appear poisson blaszczyszyn karray keeler wireless networks appear poissonian due strong shadowing ieee trans wireless vol zekavat buehrer handbook position location theory practice advances wiley jourdan dardari win position error bound uwb localization dense cluttered environments ieee international conference communications istanbul jourdan roy optimal sensor placement agent localization position location navigation symposium schloemann dhillon buehrer tractable metric evaluating base station geometries cellular network localization ieee wireless commun vol apr nounagnon using divergence analyze performance collaborative positioning dissertation bradley dept electrical computer virgina tech blacksburg cover thomas elements information theory new york chs liu geometry influence gdop toa aoa positioning systems second international conference networks security wireless communications trusted computing wuhan hubei dhillon andrews downlink rate distribution heterogeneous cellular networks generalized cell selection ieee wireless communications letters vol february bhandari dhillon buehrer impact proximate base station 
measurements on localizability in cellular systems, in Proc. IEEE SPAWC, Edinburgh, July (invited paper).
| 7 |
balanced allocation patience jan john william moses amanda eli abstract load balancing problem primary framework greedy algorithm greedy azar places ball probing random bins placing ball least loaded high probability maximum load greedy exponentially lower result balls placed uniformly randomly showed slightly asymmetric variant left provides significant improvement however improvement comes additional computational cost imposing structure bins present fully decentralized algorithm called firstdiff combines simplicity greedy improved balance left key idea firstdiff probe different bin size first observation located place ball although number probes could quite large balls show firstdiff requires probes average per ball standard settings thus number probes greater either greedy left importantly show firstdiff closely matches improved maximum load ensured left standard settings provide tight lower bound maximum load log log log terms additionally give experimental data firstdiff indeed good left better practice key words allocation load balancing firstdiff balanced allocation randomized algorithms task ams subject classifications introduction load balancing study distributing loads across multiple entities load minimized across entities problem arises naturally many settings including distribution requests across multiple servers networks requests need spread amongst participating nodes hashing much research focused practical implementations solutions problems work builds several classic algorithms theoretical model model balls placed sequentially bins ball probes load random bins order make choice give new algorithm firstdiff performs well best known algorithm left significantly easier implement allocation time ball number probes made different bins placement challenge balance allocation time versus maximum bin load example using one probe per ball placing ball uniformly earlier version paper appeared previous version expectation upper bounds average number probes lower bounds version lower bounds maximum load high probability upper bounds average number probes well cleaner proofs department computer science engineering indian institute technology madras chennai india augustine supported iit madras new faculty seed grant iit madras exploratory research project max planck center computer science impecs department computer science engineering indian institute technology madras chennai india department mathematics bowdoin college usa aredlich material based upon work supported national science foundation grant author residence institute computational experimental research mathematics providence spring semester department computer science brown university usa eli augustine moses redlich upfal random maximum load bin lnlnlnnn high total allocation time probes hand using probes per ball placing ball lightest bin greedy first studied azar decreases maximum load lnlnlndn allocation time words using choices improves maximum load exponentially linear allocation cost introduced slightly asymmetric algorithm left quite surprisingly guaranteed maximum load constant using allocation time probes greedy analysis maximum load greedy left extended case berenbrink however left utilizes additional processing bins initially sorted groups treated differently according group membership thus practical implementation especially distributed settings requires significant computational effort addition probes contribution present new algorithm firstdiff algorithm requires bins instead firstdiff uses feedback adjust 
number probes natural comparison classic greedy algorithm firstdiff uses number probes average greedy produces significantly smaller maximum load fact show maximum load small left furthermore comparable left heavily loaded heavily loaded cases firstdiff much lower computational overhead left simpler implementation makes firstdiff especially suitable practical applications amenable parallelization example requires central control underlying structure applications target maximum load aim minimize necessary number probes perspective algorithm improves greedy maximum load firstdiff comparable greedy uses exponentially fewer probes per ball theorem use firstdiff maximum number probes allowed per ball allocate balls bins average number probes required per ball expectation furthermore maximum load bin log log max high probability log smallest value log theorem use firstdiff maximum number probes allowed per ball allocate balls bins log taken lemma smallest value satisfies log log takes probes average place every ball expectation high probability furthermore absolute constant max load bin log log log log log log log technique proving average number probes bounded novel use phrase high probability short denote probability form suitable furthermore every log paper base unless otherwise mentioned thus concerned average number probes per ball throughout paper balanced allocation patience virtue best knowledge number probes required ball dependent configuration time ball placed naive approach computing expected value quickly becomes conditional instead show conditioning eliminated carefully overcounting number probes required configuration leading proof quite simple case significantly complex case however basic ideas remain upper bound maximum load proved using layered induction technique however firstdiff dynamic algorithm standard recursion used layered induction must altered use coupling complex analysis adjust standard layered induction context furthermore provide tight lower bound maximum load broad class algorithms use variable probing theorem let alg algorithm places balls bins sequentially one one satisfies following conditions probes used place ball ball probe made uniformly random one bins ball probe independent every probe maximum load bin placing balls using alg least high probability use theorem provide lower bound maximum load firstdiff tight log log log terms theorem maximum load bin placing balls bins using firstdiff maximum number probes allowed per ball least high probability related work several algorithms number probes performed ball adaptive nature emerged past work done czumaj stemann berenbrink czumaj stemann present interesting threshold algorithm first define process load value associated threshold ball placed number probes made find bin particular load exceeded associated threshold carefully selecting thresholds load values develop ball probes bins finds one whose load within predetermined threshold bounds maximum load average allocation time better algorithm computing required threshold value often depends knowledge typically unavailable practical applications total number balls ever placed furthermore proofs extend easily recently berenbrink develop new threshold algorithm adaptive similar threshold value used given ball placed depends number balls placed thus far analyze algorithm also extend analysis case show algorithms good bounds maximum load average allocation time comes requiring sort global knowledge placing balls case adaptive ball must know order global placement 
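For a concrete feel for the probe/max-load trade-off being discussed, the following minimal sketch simulates the classic Greedy[d] baseline (this is the textbook d-choice process, not FirstDiff itself); the comment lists the well-known asymptotic orders for m = n balls, which are the quantities the theorems above are compared against.

import random

def greedy_d(n, m, d, rng=random):
    # Place m balls into n bins; each ball probes d bins uniformly at random
    # (d = 1 is plain single-choice) and goes to the least loaded probed bin.
    load = [0] * n
    for _ in range(m):
        target = min((rng.randrange(n) for _ in range(d)), key=lambda i: load[i])
        load[target] += 1
    return max(load)

if __name__ == "__main__":
    n = 100_000
    for d in (1, 2, 4):
        print(d, greedy_d(n, n, d))
    # expected orders for m = n:  d = 1 gives roughly log n / log log n,
    # while any fixed d >= 2 gives roughly log log n / log d + O(1)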
balls case augustine moses redlich upfal threshold ball must know total number balls ever placed algorithm unique requires global knowledge able make decisions based probed bins load values alone definitions course paper use several terms probability theory define convenience consider two markov chains time state spaces respectively coupling markov chain time state space maintain original transition probabilities consider two vectors let permutations respectively say majorizes majorized given allocation algorithm places balls bins define load vector process balls placed follows ith index denotes load ith bin assume total order bins according ids note markov chain consider two allocation algorithms allocate balls let load vectors balls placed using respective algorithms respectively say majorizes majorized coupling majorizes berenbrink provide illustration ideas applied load balancing context also use theorem janson order achieve high probability concentration bounds geometric random variables first set terms theorem restate let geometric random variables parameters respectively define mini following lemma lemma theorem organization paper structure paper follows section define model formally present firstdiff algorithm analyze algorithm section give proof total number probes used firstdiff place balls high probability maximum log bin load still upper bounded log high probability provide analysis algorithm section namely number probes average per ball high probability maximum bin load log log upper bounded log log log probability close provide matching lower bound maximum bin load tight log log log term algorithms variable number probes firstdiff particular section section give experimental evidence firstdiff algorithm indeed results maximum load comparable left finally provide concluding remarks scope future work section balanced allocation patience virtue firstdiff algorithm idea behind algorithm use probes efficiently standard model effort wasted phases example early distribution bins size need search placing ball hand effort phases would lead significant improvement example balls distributed bins already size least thus harder avoid creating bin size firstdiff takes variation account probing finds difference making decision algorithm uses probes efficiently algorithms still balanced outcome ball probes bins extension uniformly random found two bins different loads bin zero load places ball least loaded probed bins zero loaded bin probed bins equally loaded ball placed without loss generality last probed bin pseudocode firstdiff note use hide constant value exact values different respectively algorithm firstdiff assume following algorithm executed ball repeat times probe new bin chosen uniformly random probed bin zero load place ball probed bin exit probed bin load different probed place ball least loaded bin breaking ties arbitrarily exit place ball last probed bin see manner ball placed using firstdiff classified follows first probe made bin load zero probes made bins load one probes made bins larger load followed probe bin lesser load one probes made bins lesser load followed probe bin larger load analysis firstdiff theorem use firstdiff maximum number probes allowed per ball allocate balls bins average number probes required per ball expectation furthermore maximum load log bin log high probability max log smallest value log proof first show upper bound average number probes per ball expectation subsequently show maximum load end placing balls desired augustine moses redlich upfal 
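The majorization relation used throughout the coupling arguments can be stated operationally: sort both load vectors in non-increasing order and compare prefix sums. The helper below is a small sketch of that check (assuming the two vectors contain the same total number of balls); it is only meant to make the definition concrete.

def majorizes(v, u):
    # True if v majorizes u: after sorting both in non-increasing order,
    # every prefix sum of v is at least the corresponding prefix sum of u.
    a, b = sorted(v, reverse=True), sorted(u, reverse=True)
    prefix_a = prefix_b = 0
    for x, y in zip(a, b):
        prefix_a += x
        prefix_b += y
        if prefix_a < prefix_b:
            return False
    return True

print(majorizes([3, 2, 1, 0], [2, 2, 1, 1]))   # True: the first vector is more unbalanced
print(majorizes([2, 2, 1, 1], [3, 2, 1, 0]))   # False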
proof number probes lemma number probes required place balls bins using firstdiff maximum number probes allowed per ball expectation high probability proof let maximum number probes allowed used firstdiff per ball show total number probes required place balls exceed log thus probes required place balls let balls indexed order placed analysis proceeds two phases value fixed subsequently first balls analyzed first phase remaining balls analyzed second consider ball indexed let random variable denoting number probes takes firstdiff place ball phase one couple firstdiff related process probes finds difference bin loads runs probes without treating empty bins special words firstdiff algorithm without lines one additional rule related process empty bin probed first process finishes probing ball placed first probed bin empty bin note valid coupling empty bin probed firstdiff process ball placed empty bin empty bin probed two processes exactly let number probes required related process place ball configuration bins load bins load notice configuration balls bins furthermore configuration placement firstdiff new process see simple sequence couplings first choose arbitrary configuration bins size configuration probed bins two different sizes discovered set probed intersects two distinct couple configuration bins size rest size configuration requires probes original configuration continues set probed intersects second note configuration bins size bins size requires even probes one restricting bins size decrease number empty bins finally note ball placement either firstdiff new process leads configuration time isomorphism firstdiff places ball empty bin process first derive expected value expected number probes used firstdiff upper bounded expected number probes bin appears expected number probes used firstdiff algorithm without line course overall expected number probes balanced allocation patience virtue first steps log log log find expected number probes phase one log log log solving get recall want high probability bound number probes required place ball phase one running firstdiff recall high probability suffices use lemma bound log log log log since phase two rather analyzing detail use fact number probes ball bounded number probes overall phase total number probes log log upper bound number probes place balls probes expectation desired proof maximum load lemma maximum load bin using firstdiff maximum number probes allowed per ball allocate balls bins log max log high probability log smallest value log augustine moses redlich upfal proof proof follows along lines standard layered induction argument make adaptations fit context number probes fixed let maximum number probes allowed used firstdiff per ball define fraction bins load least balls placed define number balls height least balls placed clear wish show max load logloglogk constants set logloglogk equivalently wish show constant order aid proof let define series numbers upper bounds let set remains find upper bounds remaining two terms equation derive recursive relationship acts upper bound fraction bins height least balls placed order ball placed land height least one conditions must occur probes made bins height least several probes made bins height least one made bin height least one probe made bin height least several probes made bins height least thus probability ball placed height least conditioning time balanced allocation patience virtue let fraction bins load least ball placed bin let min arg mint first probability bounded greater 
probability binomial random variable fix using chernoff bound say high probability long constant long log log value dips log log log log log log notice seen solving recurrence log logk log logk logn log log log log log log log log log log therefore set least log log max order keep value log values defined proceed bound given nvi upper bound inequality using following idea let indicator variable set following conditions met rth ball placed height least set otherwise probability upper bounded therefore probability number balls height least exceeds upper bounded binomial random variable given parameters recall chernoff bound sum independent poisson trials expectation set augustine moses redlich upfal log log since thus log log since finally need upper bound consider particular bin load least probability ball fall bin since function since upper bound probability balls fall given bin load least use union bound bins load least show probability fraction bins load least exceeds negligible first probability balls fall given bin load least taking union bound across possible bins following inequality balanced allocation patience virtue log log since putting together equations get thus log log max load log lemma lemma immediately arrive theorem analysis firstdiff theorem use firstdiff maximum number probes allowed per ball allocate balls bins log taken lemma smallest value satisfies log log takes probes average place every ball expectation high probability furthermore absolute constant max load bin log log log log log log log proof first show average number probes per ball expectation show maximum load bound holds proof number probes remark earlier version paper different proof subsection overall idea overcounting number probes remains specific argument justify overcounting changed cleaner specifically replaced lemmas argument follows header overcounting method concludes header expectation bound also note lemma earlier version longer required due way constructed argument main difficulty analyzing number probes comes fact number probes needed ball depends previous balls placed intuitively previous balls placed bin number balls number probes hand significant number bins different load levels ball placed probes one might hope prove system always displays variety augustine moses redlich upfal loads unfortunately system verified experimentally oscillates evenly loaded otherwise therefore take slightly nuanced approach takes account number probes cycles high high loads even low variety load lemma log taken lemma smallest value satisfies log log using firstdiff maximum number probes allowed per ball takes probes expectation high probability place balls bins proof let maximum number probes allowed per ball using firstdiff throughout proof assume maximum load bin log holds high probability owing lemma low probability event maximum load exceeds log contribute little overall number probes probability ball exceeds height log arbitrarily small inverse polynomial therefore ball contribute probes overall number probes even liberally account probes ball long let balls placed bins assume log order prove lemma proceed three stages first stage consider arbitrary sequence placing balls bins develop method allows overcount number probes required place ball step placement second stage proceed calculate expected number probes required place balls finally third stage show get high probability bound number probes required place ball overcounting method first couple process firstdiff similar process zero bin condition place 
balls used similar process zero bin probed first probes made either bin different load probed probes made ball placed first bin probed zero bin case first bin probed zero bin process acts exactly firstdiff clear process take probes firstdiff still making placements firstdiff thus upper bounds number probes obtained process apply firstdiff remainder proof analyze process describe method use overcount number probes consider arbitrary sequence placing balls bins describe method associate configuration arises placement canonical configuration define later proof canonical configuration requires probes place ball actual configuration ensure mapping actual configurations canonical configurations mapping thus counting number probes required place ball every possible canonical configuration overcount number probes required place every ball imagine coupling according probe sequences consider possible sequences probes timestep corresponds particular sequence bin labels direct bins probed note probe sequences equally likely fully determine balanced allocation patience virtue placement probe sequence let sequence configurations generated let sequence timestep note gives potential probes timestep entries often less often ball placed without using maximum number probes convenience drop superscript subsequently use refer vectors give definitions consider particular bin balls placed one top another exactly balls placed given ball say ball height level balls height said level say level contains balls balls level level containing balls said complete level containing least one ball less balls said incomplete given configuration consider highest complete level given configuration define plateau level least one ball level intuitively plateaus configuration highest complete level higher incomplete levels notice possible given configuration may one plateau incomplete levels notice given configuration two plateaus exist levels number balls implies exists plateau level number balls consider particular configuration sequence call number plateaus consider plateaus increasing order levels call call number balls level define canonical configuration configuration balls level greater balls level balls every level less associate configuration set canonical configurations plateau level include canonical configuration set note probe sequence specific sequence number probes utilized sequence less equal number probes utilized configurations therefore expected number probes used place ball configuration less equal expected number probes used place ball configurations describe way select one particular configuration associate configuration choose configuration mapping configurations selected canonical configurations mapping furthermore show every selected canonical configurations unique others thus set selected canonical configurations multiset thus counting number probes required place balls every possible canonical configuration overcount number probes required place balls sequence look canonical configurations associated configurations entire sequence let refer set canonical configurations ball placed let empty set set canonical configurations differs three configurations consider balls placed plateaus levels corresponding values ith ball placed level level exists balls present level level exists augustine moses redlich upfal level exist notice every scenario exactly one configuration added get denote configuration selected canonical configuration given sequence configurations clear selected canonical configuration uniquely 
chosen show following lemma lemma every selected canonical configuration sequence unique different selected canonical configurations words set selected canonical configurations multiset furthermore earlier established configuration one canonical configurations takes least many probes place ball latter former thus calculating number probes would take place balls possible canonical configurations overcount number probes required place balls bins lemma set selected canonical configurations sequence configurations multiset proof consider arbitrary sequence within arbitrary configuration reach configuration ball placed previously level extended level balls balls selected canonical configuration since balls added never deleted level extended number balls placing another ball never extend level balls level henceforth extended larger number balls balls thus given configuration never appear twice set selected canonical configurations expectation bound mentioned earlier assume maximum load bin log thus given sequence placements final configuration never balls level log thus calculating number probes taken place balls need consider number probes required place ball every canonical configuration log given canonical configuration let random variable denoting number probes required place ball using firstdiff without zero bin condition let random variable denoting total number probes required place ball possible canonical configurations thus log given configuration ball placed either first hits bins level several times bin level hits bins level several times bin level makes probes thus using geometric random variables see min first last canonical configurations given level let give away maximum number probes want calculate expected number probes middle canonical configurations therefore balanced allocation patience virtue log log log log log log log log log log high probability bound may apply lemma taken log log log log log since therefore high probability total number probes log log log log log log log log log log log since log thus balls placed bins upper bound expected total probes total probes high probability log therefore expectation high probability number probes per ball since augustine moses redlich upfal proof maximum load lemma use firstdiff maximum number probes allowed per ball allocate balls bins absolute constant max load bin log log log log log log log proof proof follows along lines theorem order prove lemma make use theorem gives initial loose bound gap maximum load average load arbitrary use lemma tighten gap use one final lemma show bound gap holds balls placed hold time prior first establish notation let maximum number probes permitted made per ball firstdiff placing balls let define load vector representing difference load bin average load without loss generality order individual values vector order load difference xnt xit load ith loaded bin minus convenience denote gap heaviest load average initial bound gap give upper bound gap maximum loaded bin average load placing arbitrary number balls words show negligible initial bound gap lemma arbitrary constant placing arbitrary balls bins firstdiff exist constants log thus exists constant gives log desired value order prove lemma need two additional facts first following basic observation lemma firstdiff majorized greedy proof let load vectors firstdiff greedy balls placed using respective algorithms respectively follow standard coupling argument refer section example couple firstdiff greedy letting bins probed greedy first bins probed 
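To make the role of the truncated geometric variables in the canonical-configuration bound more tangible, the snippet below works in a deliberately simplified two-level setting: a fraction p of the bins sit at one load and the rest at another, probing stops at the first bin whose load differs from the first probe, or after k probes. The closed form E[T] = 1 + sum_{r=1}^{k-1} (p^r + (1-p)^r) holds for this toy setting only and is not the paper's exact expression.

import random

def probes_two_level(p, k, trials=200_000, rng=random):
    # Monte-Carlo estimate of the number of probes on a two-level configuration.
    total = 0
    for _ in range(trials):
        first_type = rng.random() < p
        t = 1                                 # the first probe
        while t < k:
            t += 1                            # probe one more bin
            if (rng.random() < p) != first_type:
                break                         # its load differs: probing stops here
        total += t
    return total / trials

def probes_two_level_exact(p, k):
    return 1 + sum(p ** r + (1 - p) ** r for r in range(1, k))

for p in (0.5, 0.9, 0.99):
    print(p, round(probes_two_level(p, 16), 3), round(probes_two_level_exact(p, 16), 3))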
firstdiff know firstdiff makes least probes clear firstdiff always place ball bin load less equal bin chosen greedy ensures majorization preserved prior placement ball new load vectors continue preserve majorization see detailed example initially majorized since vectors using induction seen majorized time tth ball placed would continue majorized time therefore firstdiff majorized greedy fact following theorem greedy taken used similarly theorem theorem let load vector generated greedy every exist positive constants ready prove lemma balanced allocation patience virtue proof lemma combining lemma theorem tells load vector generated firstdiff clearly log eag observe eag log eag eag markov inequality theorem lemma theorem proved reducing gap lemma gives initial bound order log next step reduce desired gap value reduction use modified version lemma similar involved proof give modified lemma prove lemma every exists universal constant lowing holds log log logloglogk constants theorem proof proof consists many steps first observe lemma follows directly lemma sufficiently small use layered induction bound proportion bins size larger turn allows compute desired bound probability large gap occurring proof define define lemma smaller values minimum value recall log log minimum value log log define minimum value log define absolute constant max notice lemma implies lemma holds log log log consider right hand side lemma exp sufficient prove inequality augustine moses redlich upfal since conditions lemma may rewrite log let compute constant accordingly set log log log done rewriting initial probability inequality prove lemma assuming start rewriting probability terms log log log log log log log log log log log log prove theorem enough show logloglogk bins loads define fraction bins load least balls placed let set logloglogk set using new notation want show negligible thought showing probability fraction bins load least exceeding balls placed conditioned event negligible suppose series numbers upper bounds know successively expanding bounding term derivatives conditioning sides remains find appropriate values use layered induction approach show exceed corresponding high probability allows balanced allocation patience virtue upper bound components equation base case layered induction order use layered induction need base case let set statement theorem since statement theorem applying markov inequality using theorem therefore third term equation bounded recurrence relation layered induction define remaining values recursively note let defined number balls height least balls placed initially balls system threw another balls system remember average load bin balls placed condition ball height must one balls placed therefore number bins load balls placed upper bounded number balls height least order upper bound upper bound recall algorithm places ball bin load probes times sees bin load time probes times sees bin load time probes bin load probes times sees bin load least time load bin probed time probes bin load thus probability ball end height least let fraction bins load least ball placed bin augustine moses redlich upfal let min arg minf first preceding argument probability bounded probability binomial random variable greater fix using chernoff bound say high probability long constant long log words upper bound holds placement balls show according previous recurrence relation dips later propose modified recurrence relation sets value ensures maximum value obtained recurrence log log upper bound used later 
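The layered induction in this part of the proof controls, level by level, the fraction of bins whose load exceeds the average by at least i. The tiny helper below just computes those empirical fractions for a given load vector so the quantity being bounded can be inspected in experiments; it is bookkeeping only (shown here on a plain single-choice run), not part of the proof.

import random

def height_fractions(load, max_excess=8):
    # Fraction of bins whose load is at least (average + i), for i = 1..max_excess.
    n = len(load)
    avg = sum(load) / n
    return [sum(1 for x in load if x >= avg + i) / n for i in range(1, max_excess + 1)]

rng, n = random.Random(0), 10_000
load = [0] * n
for _ in range(4 * n):                 # heavily loaded single-choice placement, m = 4n
    load[rng.randrange(n)] += 1
print(height_fractions(load))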
argument value discussion log log log log log log solving recursion log log get log log log log log log log log log log log log log log log log log log log log log log log therefore since thus log desired need bound log let set max using values generated prove given nvi upper bound inequality using following idea let indicator variable set three following conditions met balanced allocation patience virtue rth ball placed height least iii set otherwise probability upper bounded since condition number balls height least come balls placed therefore probability number balls height least exceeds upper bounded binomial random variable given parameters according chernoff bound sum independent poisson trials expectation set log log since thus bound middle term equation log log since top layers layered induction finally need upper bound first term equation consider bin load least upper bound probability ball falls specific bin regardless probes made ball one must made specific bin thus formula similar original recursion factor augustine moses redlich upfal therefore probability ball fall bin log log since log since upper bound probability balls fall given bin load least use union bound bins height least show probability fraction bins load least exceeds negligible first probability balls fall given bin load least taking union bound across possible bins following inequality log log since putting together equations get thus balanced allocation patience virtue log log log finally log log log log log log hence lemma proved lemma know arbitrary time gap log high probability applying lemma log log log appropriately chosen constants get logloglogk log log log log applying lemma log log log log log appropriately chosen constants get logloglogk log log log log log time log show balls placed probability gap exceeds particular value increases true lemma lemma stochastically dominated thus every although setting different proof lemma applies well thus knowing gap large time log probability log log implies values gap exceeds desired value probability substituting logloglogk log log log log log modifying inequality talk max load balls thrown results lemma statement thus concludes proof lemma putting together lemma lemma get theorem lower bound maximum bin load provide lower bound maximum load bin using firstdiff well types algorithms use variable number probes class type algorithms defined class algorithms ball locations chosen uniformly independently random bins available first give general theorem type algorithm apply firstdiff theorem let alg algorithm places balls bins sequentially one one satisfies following conditions probes used place ball ball probe made uniformly random one bins ball probe independent every probe maximum load bin placing balls using alg least high probability augustine moses redlich upfal proof show greedy majorized alg greedy always performs better alg terms load balancing thus lower bound applies max load bin using greedy must also apply alg let load vectors greedy alg balls placed using respective algorithms respectively use induction number balls placed prove claim majorization initially ball placed default majorized assume majorized use standard coupling argument prove induction hypothesis placement tth ball let alg use probes couple greedy alg letting first bins probed greedy bins probed alg greedy always make least probes thus possibly makes probes lesser loaded bins probed alg since greedy places ball least loaded bin finds place ball bin load one chosen alg therefore majorized 
thus induction see majorized therefore greedy majorized alg known max load bin placement balls bins using greedy least high probability therefore lower bound also applies alg ready prove lower bound max load bin using firstdiff theorem maximum load bin placing balls bins using firstdiff maximum number probes allowed per ball least high probability proof see firstdiff uses probes satisfies requirements theorem thus substituting get desired bound table experimental results maximum load balls bins based experiments configuration note maximum number probes per ball firstdiff denoted chosen average number probes per ball fewer greedy left firstdiff greedy left firstdiff greedy left firstdiff experimental results experimentally compare performance firstdiff left greedy table similar experimental results perform algorithms different configurations bins balanced allocation patience virtue values let maximum number probes allowed used firstdiff per ball value choose corresponding value average number probes required ball firstdiff configuration run algorithm times note percentage times maximum loaded bin particular value interest note firstdiff despite using average less probes per ball appears perform better greedy firstdiff terms maximum load conclusions future work paper introduced novel algorithm called firstdiff load balancing problem algorithm combines benefits two prominent algorithms namely greedy left firstdiff generates maximum load comparable left fully decentralized greedy another perspective observe firstdiff log greedy result comparable maximum load number probes used firstdiff log exponentially smaller greedy words exhibit algorithm performs well optimal algorithm significantly less computational requirements believe work opened new family algorithms could prove quite useful variety contexts spanning theory practice number questions arise work theoretical perspective interested developing analysis number probes experimental results suggest number probes used place ith ball depends congruence class modulo applied perspective interested understanding firstdiff would play real world load balancing scenarios like cloud computing environment servers interconnections etc workload jobs applications users etc likely lot heterogeneous dynamic acknowledgements thankful anant nag useful discussions developing library helpful experiments also grateful thomas sauerwald helpful thoughts visited institute computational experimental research mathematics icerm brown university finally john augustine amanda redlich thankful icerm hosted part semester long program references augustine moses redlich upfal balanced allocation patience virtue proceedings annual symposium discrete algorithms society industrial applied mathematics azar broder karlin upfal balanced allocations siam journal computing berenbrink czumaj steger balanced allocations heavily loaded case siam journal computing berenbrink khodamoradi sauerwald stauffer nearly optimal load distribution proceedings acm symposium parallelism algorithms architectures acm czumaj stemann randomized allocation processes random structures algorithms shen randomized load balancing strategies churn resilience networks journal network computer applications janson tail bounds sums geometric exponential variables technical report augustine moses redlich upfal mitzenmacher upfal probability computing randomized algorithms probabilistic analysis cambridge university press nag problems balls bins model master thesis indian institute technology madras india peres 
talwar wieder process weighted proceedings annual symposium discrete algorithms society industrial applied mathematics raab steger balls bins simple tight analysis randomization approximation techniques computer science springer shen algorithms structured networks parallel distributed systems ieee transactions shen liu zha jiang chen panneerselvam achieving dynamic load balancing mobile agents small world networks computer networks talwar wieder balanced allocations simple proof heavily loaded case automata languages programming springer asymmetry helps load balancing journal acm jacm
| 8 |
nov udc phg phg phg phg phg phg phg phg zoltan plum phg sandia zoltan phg phg phg researches dynamic load balancing algorithms adaptivity parallel adaptive finite element computations liu hui majored computational mathematics directed zhang work related phg parallel hierarchical grid phg toolbox developing parallel adaptive finite element programs active development state key laboratory scientific engineering computing phg designed distributed memory parallel computers purpose support development parallel algorithms codes solving real world application problems using adaptive finite element methods phg supports importing meshes several mesh file formats phg provides solvers preconditioners well interfaces many external packages solving linear systems equations eigenvalue problems resulted adaptive finite element discretization work divided two parts first part consists design realization dynamic load balancing module phg including studies mesh partitioning data migration algorithms second part studies adaptive strategies finite element computations main results work follows tetrahedral meshes used phg reasonable assumptions proved existence hamiltonian paths arbitrary two vertices well existence hamiltonian cycles designed efficient algorithm linear complexity constructing hamiltonian paths resulting algorithm implemented phg used ordering elements coarsest mesh refinement tree mesh partitioning algorithm designed encoding decoding algorithms high dimensional hilbert order hilbert order good locality wide applications various fields computer science memory management database dynamic load balancing analysed existing algorithms computing hilbert order designed improved algorithms computing hilbert order arbitrary iii space dimensions also proposed alternate form hilbert space filling curve advantage preserving ordering different levels algorithms implemented phg used mesh partitioning implemented refinement tree space filling curve based mesh partitioning algorithms phg designed dynamic load balancing module phg refinement tree based partitioning algorithm originally proposed mitchell one implemented phg improved several aspects space filling curve based mesh partitioning function phg use either hilbert morton space filling curve also implemented submesh process mapping algorithm plum package phg use reduce amount data migration mesh redistribution numerical experiments show dynamic load balancing functions work well thousands processes one billion elements space filling curve based mesh partitioning module phg faster yields better results corresponding function well known mesh partitioning packages studied existing adaptive strategies literature proposed new strategy numerical experiments show new strategy achieves exponential convergence superior precision solutions computation time strategy compared part work also serves validate adaptivity module phg keywords phg adaptive finite element method parallel computing hamiltonian path hilbert order dynamic load balancing adaptive strategy phg phg phg phg phg phg courant dirichlet max stratedy equidistribution stratedy guaranteed error reduction strategy mns strategy bank rivara sewell rivara sewell kossaczky liu joe arnold rivara thompson jones plassmann plassmann savage pared phg parallel hierarchical grid rcb recursive coordinate bisection rib recursive inertial bisection sfc curve rgb recursive graph bisection rsb recursive spectral bisection algorithm parmetis zoltan jostle phg phg parallel hierarchical grid mpi phg phg phg phg phg 
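As a concrete reference point for the encoding algorithms surveyed above, here is the standard rotate-and-flip computation of a 2-D Hilbert index in Python. It is the textbook 2-D version, included only to fix ideas; PHG's own routines (phgHilbertOrder and the 3-D encodings developed in this work) are more involved and are not reproduced here.

def hilbert_index(order, x, y):
    # Map integer cell coordinates (x, y) in [0, 2**order) to the cell's
    # position along a 2-D Hilbert curve of the given refinement order.
    n = 1 << order
    d = 0
    s = n >> 1
    while s > 0:
        rx = 1 if (x & s) else 0
        ry = 1 if (y & s) else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                        # rotate/flip the quadrant so the
            if rx == 1:                    # sub-curve keeps a consistent orientation
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s >>= 1
    return d

# the order-1 curve visits (0,0), (0,1), (1,1), (1,0) in that sequence
print([hilbert_index(1, x, y) for (x, y) in [(0, 0), (0, 1), (1, 1), (1, 0)]])  # [0, 1, 2, 3]

Sorting mesh elements by such an index is what gives space-filling-curve orderings their locality, which is the property the partitioning modules described below exploit.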
alberta medit gambit phg phg phg phg zoltan parmetis phg phg phg phg phg phg phg phg phg phg pcg gmres phg petsc hypre superlu mumps spc laspack spooles phg solver phg phg parpack jdbsym lobpcg slepc primme phg opendx vtk phg phg phg phg phg phg phg phg phg phg phg phg mitchell morton zoltan plum phg sandia zoltan phg phg phg phg phg phg rtk gmsh hamiltonian path heber local cut vertex mitchell mitchell hamiltonian path hamiltonian cycle mitchell mitchell path cycle hamiltonian path partial hamiltonian path hamiltonian cycle path hamiltonian path vtn step step step breadth fist search phg parallel hierarchical grid dell poweredge intel cpu netgen seconds mesh mesh mesh mesh mesh george cantor netto netto giuseppe peano peano peano hilbert moore osgood lebesgue knopp peano morton velho salmon challacombe matias hwansoo han cfd morton morton morton peano hilbert order resolution depth hnm morton order order hilbert order hilbert code ordering morton morton order morton code morton enconding index decoding butz goldschlager breinholt schierz witten groper cole peano fisher liu schrack faloutsos butz lawder butz kamata chen max max xor rek rek rek pin mod pin cell hilbert gene list exchange command reverse command rem hnm gnrm grnm gnrm grnm floor max grnm grnm grnm grnm inm floor grnv grnv grnv grnv grnv grnv grnv grnv hnm hnm hnm gnrm grnm grnv grnv xnew ynew xnew ynew xnew ynew xnew ynew quadrant xnew ynew quadrant xnew ynew phg phg phg phg phg phg phg phg phg parent child child phg phg phg phg simplex phg phg phg parent parent phg phg rcb recursive coordinate bisection rib recursive inertial bisection sfc curve recursive graph bisection rsb recursive spectral bisection algorithm multilevel methods diffusive methods rcb rcb rcb jones plassmann urb unbalanced recursive bisection urb urb jones plassmann rcb urb rib rib sfc patra oden sfc morton sfc multilevel method rsb laplacian fiedler vector patoh patoh partitioning tools hypergraph chaco chaco laplacian scotch scotch phg scotch parkway parkway mpi metis hmetis parmetis metis serial graph partitioning matrix ordering metis parmetis parallel graph partitioning matrix ordering metis mpi hmetis hypergraph circuit partitioning zoltan zoltan sandia jostle jostle jostle phg phg phg submesh submesh subtree subtree phg phg lif max phg lif lif phg phg phg parmetis zoltan phg phg partition method william mitchell mitchell set mitchell log log prefix sum phg algorithm bisect compute subtree weights bisect subtree root end algorithm bisect algorithm bisect subtree node node leaf assign node smaller set elseif node one child bisect subtree child else node two children select set child child examine sum subtree weight accumulated weight selected set smaller two sums assign subtree rooted child selected set add subtree weight weight set bisect subtree child endif end algorithm bisect subtree phg step step step phg morton morton phgpartitionrtk boolean phgpartitionrtk grid int dof float wep phg phg phghamiltonpath int phghamiltonpath grid phghilbertorder int phghilbertorder grid phgmortonorder morton int phgmortonorder grid phggridinitorder int phggridinitorder grid boolean dist dist morton bounding box lenx leny lenz lenx leny lenz phg len max lenx leny lenz len len len morton phg boolean sfc hsfc int boolean sfc hsfc int hsfc sfc typedef struct sfc int sfc sfc index phg morton boolean sfc msfc int boolean sfc msfc int msfc maxlevel phgpartitionsfc boolean phgpartitionsfc grid int dof float mark wep phg zoltan bounding box float dots int lenx int comm 
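Once every element carries a 1-D space-filling-curve key (Hilbert or Morton) and a weight, the partitioning step itself reduces to cutting the sorted key sequence into intervals of roughly equal total weight. The sketch below shows that step in isolation; it is a simplified serial stand-in for the parallel 1-D partitioner used by phgPartitionSfc, and the function and variable names are mine, not PHG's.

def sfc_partition(keys, weights, nparts):
    # keys[i]   : 1-D space-filling-curve coordinate of element i
    # weights[i]: computational weight of element i
    # Returns a part id per element; each part is contiguous along the curve.
    order = sorted(range(len(keys)), key=lambda i: keys[i])
    total = float(sum(weights))
    part = [0] * len(keys)
    acc = 0.0
    for i in order:
        part[i] = min(nparts - 1, int(acc * nparts / total))
        acc += weights[i]
    return part

keys    = [0.91, 0.12, 0.55, 0.40, 0.77, 0.05, 0.63, 0.30]
weights = [1.0, 1.0, 2.0, 1.0, 1.0, 1.0, 2.0, 1.0]
print(sfc_partition(keys, weights, 3))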
float dots int lenx int comm lenx comm dots typedef struct float key float int dots key oliker biswas phg oliker biswas similarity matrix phg pold pold pold cost function totalv maxv totalv maxv oliker biswas totalv phg oliker biswas phg unassigned generate list entries descending order count count find next entry assigned map partition processor phg int phgpartitionremap grid int nprocs comm int grid int nprocs comm int double int perm int comm phg mark perm perm phgpartitionremap cut edge surface index interprocess connectivity maximum local surface index max global surface index average surface index maximum interprocess connectivity phg phg phg phg phg phg phg phg int phgbalancegrid grid float lif int dof float lif wpk null phg mpi phg mark false true surface index interprocess connectivity parmetis hsfc phg msfc phg morton rcb zoltan zoltan hsfc netgen parmetis rcb msfc rcb parmetis msfc rcb msfc phg parmetis phg msfc phg morton rcb zoltan intel ddr infiniband submeshes parmetis rcb msfc submeshes parmetis rcb msfc dirichlet helmholtz cos cos cos netgen hypre boomeramg kfh maximum strategy phg maximum surface index submeshes parmetis rcb msfc average surface index submeshes parmetis rcb msfc maximum surface index submeshes parmetis rcb msfc average surface index submeshes parmetis rcb msfc cylinder procs parmetis rtk msfc rcb partitioning time number elements rtk msfc phg hsfc msfc parmetis rcb parmetis rcb msfc rtk rtk parmetis parmetis msfc rcb rcb parmetis rtk rcb parmetis rtk phg cylinder procs parmetis rtk msfc rcb time dynamic load balancing time number elements cylinder procs parmetis rtk msfc rcb time solving linear system number dof cylinder procs parmetis rtk msfc rcb overall computational time number dof rcb msfc phg method total running time repartitionings rcb parmetis rtk msfc rcb phg parmetis dirichlet poisson netgen pcg preconditioned conjugate gradient method kfh maximum strategy rtk msfc parmetis rcb rcb parmetis rtk rtk parmetis msfc rcb msfc rcb rcb thin plate procs parmetis rtk msfc rcb partitioning time number elements thin plate procs parmetis rtk msfc rcb time dynamic load balancing time number elements phg thin plate procs parmetis rtk msfc rcb time solving linear system number dof thin plate procs parmetis rtk msfc rcb overall computational time number dof method total running time repartitionings msfc rtk rcb parmetis parmetis morton msfc exp sin cos pcg preconditioned conjugate gradient method maximum strategy phg method time tal time dlb time sol time stp msfc rtk rcb parmetis method time tal time dlb time sol time stp msfc rcb rtk parmetis tal dlb sol stp msfc rtk parmetis rcb parmetis phg phg vdx vdx vds vds banach lipschitz phg rheinholdt residual estimate dual weighted residual estimate hierarchical basis error estimate averaging methods equilibrated residual error estimate patch dirichlet neumann poisson morin mns maximum strategy max equidistribution strategy eqdist guaranteed error reduction strategy gers mns strategy mns osca data oscillation tolerance maximum strategy max max equidistribution strategy eqdist guaranteed error reduction strategy gers poisson mns strategy mns morin osc osc osca oscth max eqdist gers mns gers mns phg parallel hierarchical grid max equidist gers mns log phg max phg void phgmarkelements strategy dof float theta dof osc float zeta int strategy dof float gamma int float osc theta zeta gamma eqdist theta gers ivo minimum rule support hierarchic jacobi legendre lobatto use priori knowledge solution regularity ainsworth 
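The marking strategies listed above (maximum strategy, equidistribution, GERS, MNS) all reduce to selecting elements from a vector of local error indicators. The two small routines below sketch the maximum strategy and a Doerfler-type bulk criterion of the GERS flavour; they are illustrative re-implementations with made-up data, not the phgMarkElements interface itself.

def mark_maximum(eta, theta):
    # Maximum strategy: mark every element whose indicator reaches theta * max(eta).
    cutoff = theta * max(eta)
    return [i for i, e in enumerate(eta) if e >= cutoff]

def mark_bulk(eta, theta):
    # Doerfler-type bulk marking: take indicators in decreasing order until the
    # marked squared indicators cover a theta fraction of the squared total.
    order = sorted(range(len(eta)), key=lambda i: eta[i], reverse=True)
    goal = theta * sum(e * e for e in eta)
    marked, acc = [], 0.0
    for i in order:
        if acc >= goal:
            break
        marked.append(i)
        acc += eta[i] * eta[i]
    return marked

eta = [0.9, 0.1, 0.4, 0.05, 0.7, 0.2]
print(mark_maximum(eta, 0.5), mark_bulk(eta, 0.5))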
senior bernardi valenciano type parameter gui affia estimate regularity using smaller estimates houston schwab log log estimate regularity using larger estimates ainsworth senior equilibrium condition neumann kej reference solution demkowicz kuh predict error estimate assumption smoothness melenk heuveline melenk heuveline kuk melenk heuveline kekh kukh min melenk heuveline old old heuveline kek kek kekh kukh heuveline heuveline melenk phg melenk kfpk fpk max pcg preconditioned conjugate gradient method block jacobi kek hafem melenk melenk hafem energy error number dof elements dof energy error hafem cos cos cos hafem energy error number dof elements dof energy error hafem fichera hafem energy error number dof elements dof energy error hafem melenk phg phg cpu phg phg zhang parallel algorithm adaptive local refinement tetrahedral meshes using bisection numer math theory methods applications geist kohl papadopoulos pvm mpi comparison features calculateurs paralleles geist adam beguelin jack dongarra weicheng jiang robert manchek vaidy sunderam pvm parallel virtual machine users guide tutorial networked parallel computing mit press pvm parallel virtual machine http mpi message passing interface http mpi standard http extensions interface http implementation mpi http bill gallmeister posix programmers guide programming real world reilly media dick buttlar jacqueline farrell bradford nichols pthreads programming posix standard better multiprocessing reilly media barbara chapman gabriele jost ruud van der pas using openmp portable shared memory parallel programming mit press rhit chandra leonardo dagum dave kohr dror maydan jeff mcdonald ramesh menon parallel programming openmp morgan kaufmann publisher george karypis vipin kumar multilevel partioning algorithms http george karypis vipin kumar graph partitioning fillreducing matrix ordering http kirk schloegel george karypis vipin unified algorithm loadbalancing adaptive scientific kirk schloegel george karypis vipin kumar parallel multilevel algorithms graph george karypis kirk schloegel vipin kumar parmetis parallel graph partitioning sparse matrix ordering library version boman devine heaphy hendrickson leung riesen vaughan catalyurek bozdag mitchell teresco zoltan parallel partitioning load balancing services user guide sandia national laboratories tech boman devine heaphy hendrickson leung riesen vaughan catalyurek bozdag mitchell zoltan parallel partitioning load balancing services developer guide sandia national laboratories tech karen devine erik boman robert heaphy bruce hendrickson courtenay vaughan zoltan data management services parallel dynamic applications computing science engineering erik boman karen devine lee ann fisk robert heaphy bruce hendrickson vitus leung courtenay vaughan umit catalyurek doruk bozdag william mitchell zoltan home page http falgout henson yang kole lee painter tong vassilevski performance preconditioners http satish balay kris buschelman william gropp dinesh kaushik matthew knepley lois curfman mcinnes barry smith hong zhang petsc web page http satish balay kris buschelman victor eijkhout william gropp dinesh kaushik matthew knepley lois curfman mcinnes barry smith hong zhang petsc users manual revision argonne national laboratory satish balay william gropp lois curfman mcinnes barry smith efficient management parallelism object oriented numerical software libraries modern software tools scientific computing press bastian birken johannsen lang neuss wieners flexible software toolbox solving partial 
differential equations computing visualization science bastian birken johannsen lang reichenberger wieners wittum wrobel parallel solving problems partial differential equations using unstructured grids adaptive multigrid methods high performance computing science engineering springer amestoy duff excellent multifrontal parallel distributed symmetric unsymmetric solvers methods appl mech amestoy duff excellent koster fully asynchronous multifrontal solver using distributed dynamic scheduling siam journal matrix analysis applications xiaoye james demmel superlu dist scalable sparse direct solver unsymmetric linear systems acm trans mathematical software blackford choi cleary azevedo demmel dhillon dongarra hammarling henry petitet stanley walker whaley scalapack users guide society industrial applied mathematics kristi maschhoff danny sorensen portable implementation arpack distributed memory parallel architectures copper mountain conference iterative methods vicente hernandez jose roman vicente vidal slepc scalable flexible toolkit solution eigenvalue problems acm transactions mathematical software hernandez roman vidal slepc scalable library eigenvalue problem computations lecture notes computer science steve benson lois curfman mcinnes jorge todd munson jason sarich tao user manual revision mathematics computer science division argonne national laboratory http peter macneice kevin olson clark mobarry rosalinda fainchtein charles packer paramesh parallel adaptive mesh refinement community toolkit computer physics communications hagger automatic domain decomposition unstructured grids doug advances computational mathematics james ahrens berk geveci charles law paraview tool large data visualization visualization handbook elsevier http convergent adaptive algorithm poisson equation siam numer anal morin nochetto siebert convergence adaptive finite element methods siam review bank efficient implementation local mesh refinement algorithms adaptive computational methods partial differential equations siam bank pltmg software package solving elliptic partial differential equations users guide volume frontiers applied mathematics siam mark jones paul plassmann parallel algorithms adaptive mesh refinement siam journal scientific computing bank sherman weiser refinement algorithms data structures regular local mesh refinement scientific computing publishing company amsterdan rivara mesh refinement processes based generalized bisection simplices siam journal numerical analysis rivara inostroza using bisection techniques automatic refinement delaunay triangulations international journal numerical methods engineering venere cost analysis triangle bisection refinement algorithms triangulations engineering computers sewell finite element program automatic mesh grading advances computer methods partial differential equations iii new brunswick rupak biswas roger strawn tetrahedral hexahedral mesh adaptation cfd problems applied numerical mathematics yoshitaka wada hiroshi okuda effective adaptation tenique hexahedral mesh concurrency computation practice experience adaptive finite element strategy three dimensional time dependent equations comput appl kossaczky recursive approach local mesh refinement two three dimensions comput appl arnold mukherjee pouly locally adapted tetrahedral meshes using bisection siam sci liu joe quality local refinement tetrahedral meshes based bisection siam sci rivara pizarro chrisochoides parallel refinement tetrahedral meshes using terminaledge bisection algorithm 
proceedings international meshing roundtable williamsbourg usa barry jones plassmann parallel adaptive mesh refinement techniques plasticity problems advances engineering software philippe david thompson parallel mesh refinement without communication proceedings international meshing roundtable williamsbourg usa september savage parallel refinement unstructured meshes proceedings iasted international conference parallel distributed computing systems mit boston usa phg parallel hierachical grid http williams performance dynamic load balancing algorithms unstructured mesh calculations concurrency practice experience keyser roose grid partitioning inertial recursive bisection report dept computer science belgium campbell devine flaherty gervasio teresco dynamic octree load balancing using curves technical report farhat simple efficient automatic fem domain decomposer computer structures fiduccia mattheyses heuristic improving network partitions acm ieee design automation conference proceedings walshaw cross jostle parallel multilevel software overview magoules editor mesh partitioning techniques domain decomposition techniques schmidt siebert alberta adaptive hierarchical finite element toolbox http pascal frey medit interactive mesh visualization software http ansys fluent http linbo zhang tao cui hui liu symmetric quadrature rules triangles tetrahedra opendx open visualization data explorer http vtk visualization toolkit http leonid oliker rupak biswas plum parallel load balancing adaptive unstructured meshes journal parallel distributed computing dia http cifuentes kalbag performance study tetrahedral hexahedral elements finite element structure analysis finite elements analysis design steven benzley ernest perry karl merkley brett clark greg sjaardama comparison hexagonal tetrahedral finite element meahes elastic analysis mark mark shephard mesh generation modified octree technique international journal numerical methods engineering mark shephard marcel georges mesh generation finite octree technique international journal numerical methods engineering lawson software surface interpolation mathematical software iii david watson computing delaunay tesselation application voronoi polytopes computer journal geuzaine remacle gmsh finite element mesh generator facilities international journal numerical methods engineering netgen advancing front generator based abstract rules computing visualization science tetgen quality tetrahedral mesh generator delaunay triangulator http mitchell hamiltonian paths grids journal research nist mitchell partition parallel solution partial differential equations journal research nist heber biswas gao walks adaptive unstructured meshes nas techinical report nasa ames research center hans sagan curves new york velho gomes miranda digital halftoning curves computer graphics morton computer oriented geodetic database new technique file sequencing technical report ottawa canada guohua jin john using curves computation reordering proceedings los alamos computer science institute sixth annual symposium alpert kahng partitioning via spacefilling curves dynamic programming proceedings annual conference design automation conference abel mark comparative analysis orderings international geographical information systems bartholdi iii goldsman algorithms hilbert spacefilling curve software practice experience berchtold keim seaching spaces index structures improving performance multimedia databases acm computing surveys butz space filling curves mathematical programming 
information control challacombe general parallel matrix multiply linear scaling scf theory computer physics communications cox zwaenepoel improving irregular benchmarks data reordering proceedings conference supercomputing dallas jin fowler increasing temporal locality skewing recursive blocking proceedings conference supercomputing denver nov matias shamir video scrambling technique based space filling curve advances hwansoo han tseng improving locality adaptive irregular scientific codes int workshop languages compilers parallel computing whalley kennedy improving memory hierarchy performance irregular applications proceedings acm international conference supercomputing rhodes greece june whalley kennedy improving memory hierarchy performance irregular applications using data computation reorderings international journal parallel programming platzman bartholdi iii spacefilling curves planar travelling salesman problem acm salmon parallel methods simulation proceedings siam conference parallel processing scientific computing zhang webber space diffusion improved parallel halftoning technique using spacefilling curves proceedings annual conference computer graphics interactive techniques curve http david salomon data compression complete reference jochen alber rolf niedermeier hilbert indexings theory computing systems springer breinholt schierz algorithm generating hilbert curve recursion acm trans mathematical software butz altrnative algorithm hilbert curve ieee transactions computers goldschlager short algorithms curves experience witten wyvill generation use curves practice experience cole note space filling curve experience fisher new algorithm generation hilbert curves software practice griffiths algorithms generating curves computeraided design liu schrack encoding decoding hilbert order experience liu schrack algorithm encoding decoding hilbert order ieee transactions image processing faloutsos roseman fractals secondary key retrieval oceedings acm symposium principles database systems philadelphia pensylvania usa chen wang shi new algorithm encoding decoding hilbert order experience lawder calculation mappings one values using hilbert curve technical report kamata eason bandou new algorithm hilbert scanning ieee trans image processing feng algorithm analyzing hilbert curve vol springer robert doran gray code journal universal computer science liu dynamic load balancing adaptive unstructured meshes high performance computing communications ieee international conference schmidt siebert alberta adaptive hierarchical finite element toolbox http bruce hendrickson karen devine dynamic load balancing computational mechanics computer methods applied mechanics engineering jimack overview dynamic parallel adaptive computational mechanics codes parallel distributed processing computational mechanics publications vol algorithms graph partitioning survey blake load balancing unstructured mesh applications progress computer research devine boman heaphy hendrickson teresco faik flaherty gervasio new challenges dynamic load balancing appl numer math campbell performance octree load balancer parallel adaptive finite element computation master thesis computer science rensselaer polytechnic institute troy umit catalyurek cevdet aykanat decomposing irregularly sparse matrices parallel multiplication proceedings third international workshop parallel algorithms irregularly structured problems bui jones heuristic reducing fill sparse matrix factorization proc siam conf parallel processing scientific 
computing siam hendrickson leland multilevel algorithm partitioning graphs proc supercomputing acm karypis kumar fast high quality multilevel scheme partitioning irregular graphs tech corr university minnesota dept computer science minneapolis june chang kurc sussman catalyurek saltz workload partitioning strategy parallel data aggregation proc siam conf parallel processing scientific computing siam philadelphia cybenko dynamic load balancing distributed memory multiprocessors journal parallel distributed computing karen devine erik boman robert heapby bruce hendrickson courtenay vaughan zoltan data management service parallel dynamic applications computing science engineering karen devine joseph flaherty parallel adaptive techniques conservation laws applied numerical mathematics karen devine bruce hendrickson tinkertoy parallel programming case study zoltan international journal computational science engineering berger bokhari partitioning strategy nonuniform problems multiprocessors ieee trans computers jones plassmann computational results parallel unstructured mesh computations computing systems engineering simon partitioning unstructured problems parallel processing proc conference parallel methods large scale structural analysis physics applications pergammon press van driessche roose dynamic load balancing spectral bisection algorithm constrained graph partitioning problem high performance computing networking lecture notes computer science springer pothen simon liou partitioning sparse matrices eigenvectors graphs siam matrix oden patra feng domain decomposition adaptive finite element methods proc seventh intl conf domain decomposition methods state college pennsylvania october patra oden problem decomposition adaptive finite element methods computing systems leonid oliker rupak biswas efficient load balancing data remapping adaptive grid calculations acm symposium parallel algorithms architectures rupak biswas leonid oliker experiments repartitioning load balancing adaptive meshes james teresco lida ungar comparison zoltan dynamic load balancers adaptive computation department computer science williams college technical report trifunovic knottenbelt parkway parallel multilevel hypergraph partitioning tool proc international symposium computer information sciences aykanat approach decomposition denver november bruce hendrickson robert leland chaco user guide version sandia tech report chevalier pellegrini tool efficient parallel graph ordering parallel computing pellegrini roman scotch software package static mapping dual recursive bipartitioning process architecture graphs proceedings hpcn brussels belgium lncs springer george karypis vipin kumar multilevel hypergraph partitioning design automation conference zhiming chen feng jia adaptive finite element method reliable efficient error control linear parabolic problems math zhelezina adaptive finite element method numerical simulation electric magnetic acoustic fields thesis univ erlangen martin singular problems msc thesis department mathematical sciences university texas paso may svatava analysis optimation class hierarchic finite element methods msc thesis department mathematical sciences university texas paso may babuska rheinboldt posteriori error estimates finite element method international journal numerical methods engineering babuska rheinboldt adaptive approaches reliability estimates finite element analysis computer methods applied mechanics engineering noor babuska quality assessment control finite element 
simulations finite elements design zhienkiewicz achievements unsolved problems finite element method international journal numerical methods engineering oden demkowicz advances adaptive improvements survey adaptive finite element methods computational mechanics accuracy estimates adaptive refinements finite element computations asme pages ainsworth oden posteriori error estimation finite element analysis computer methods applied mechanics engineering chen dai efficiency adaptive finite element methods elliptic problems discontinuous coefficients siam journal scientific computing segeth finite element methods chapman press randolph bank hierarchical bases finite element method acta numerica adjerid aiffa flaherty hierarchical finite element bases triangular tetrahedral elements computer methods applied mechanics engineering volume spencer sherwin george karniadakis new triangular tetrahedral basis finite element methods international journal numerical methods engineering mitchell mcclain survey strategies elliptic partial differential equations accepted annals european academy sciences ainsworth senior element procedures geometric meshes adaptivity constrained approximation grid generation adaptive algorithms houston schwab element methods hyperbolic problems mathematics finite elements applications mafelap elsevier melenk wohlmuth error estimation advances computational mathematics eibner melenk adaptive strategy based testing analyticity compute mech heuveline rannacher adaptivity element method numer math ainsworth senior adaptive refinement strategy element computation applied numerical mathematics suri optimal convergence rate finite element method siam journal numerical analysis andersson guo melenk finite element method solving problems singular solutions journal computational applied mathematics guo version finite element method part basic approximation results comput mech guo version finite element method part general results applications comput mech gui versions finite element method dimension part iii adaptive version numer math houston note design finite element methods elliptic partial differential equations computer methods applied mechanics engineering bernardi owens error indicator mortar element solutions stokes problem ima num valenciano owens adaptive spectral element method stokes flow appl numer math adjerid aiffa flaherty computational methods singularly perturbed systems malley eds singular perturbation concepts differential equations ams providence oden patra parallel adaptive strategy finite elements comput methods appl mech engrg oden patra feng adaptive strategy adaptive multilevel hierarchical computational strategies mavriplis adaptive mesh strategies spectral element method comput methods appl mech engrg demkowicz rachowicz devloo fully automatic sci comp rachowicz demkowicz oden toward universal adaptive finite element strategy part design meshes comput methods appl mech demkowicz elliptic problems comput meth appl mech hanging nodes automatic adaptivity math comput simulation houseton note design finite element methods elliptic partial differential equations comput methods appl mech jones plassmann adaptive refinement unstructured meshes finite elements analysis design zumbusch simultanous adaption multilevel finite elements zuse institute berlin jack dongarra ian foster geoffrey fox wiliam gropp ken kennedy linda torczon andy white sourcebook parallel computing morgan kaufmann publishers inc san francisco usa thomas cormen charles leiserson ronald rivest 
clifford stein introduction algorithm second edition mit press wolfgang bangerth rolf rannacher adaptive finite element methods differential equations verlag basel switzerland adaptive finite element methods lecture notes eriksson estep hansbo johnson introduction adaptive methods differential equations acta numerica nochetto adaptive finite element methods elliptic pde cna summer school bernardi adaptive finite element methods elliptic equations coefficients numerische mathematik phg hui liu dynamic load balancing adaptive unstructured meshes ieee international conference high performance computing communications linbo zhang tao cui hui liu symmetric quadrature rules triangles tetrahedra phg hui liu linbo zhang existence construction hamiltonian paths cycles conforming tetrahedral meshes international journal computer mathematics accepted
| 5 |
effective differential lisi gabriela pablo jul departamento ciencias exactas ciclo universidad buenos aires ciudad universitaria buenos aires argentina departamento imas facultad ciencias exactas naturales universidad buenos aires ciudad universitaria buenos aires argentina addresses lisi jeronimo psolerno july abstract paper focuses effectivity aspects theorem differential fields let ordinary differential field characteristic hui field differential rational functions generated single indeterminate let given non constant rational functions hui generating differential subfield hui differential theorem proved ritt states exists hvi prove total order degree generator bounded minj ord respectively maxj ord maxj deg byproduct techniques enable compute generator dealing polynomial ideal polynomial ring finitely many variables introduction presented famous result currently known theorem extension fields field rational functions one variable suitable see modern proof castelnuovo solved problem rational function fields two variables algebraically closed ground field three variables problem solved negatively ritt addressed differential version result let ordinary differential field characteristic indeterminate fhui partially supported following argentinian grants anpcyt pict ubacyt ubacyt smallest field containing derivatives differential field fhui element fhvi element called generator extension fact ritt considered case differential field meromorphic functions open set complex plane finitely generated extension later kolchin gave new proof theorem differential field characteristic without hypothesis finiteness contrary classical setting differential problem fails case two variables see possible weak generalization theorem dimension greater one conjecture control theory states every system linearizable dynamic feedback linearizable endogenous feedback algebraic terms subextension differentially flat extension differentially flat section present paper deals quantitative aspects differential theorem computation generator finite differential field extension precisely see propositions theorem let ordinary differential field characteristic differentially transcendental relatively prime differential polynomials order least one derivative occurs total degree bounded every generator written quotient two relatively prime differential polynomials order bounded min ord total degree bounded min approach combines elements ritt kolchin proofs mainly introduction differential polynomial ideal related graph rational map estimations concerning order differentiation index differential ideals developed estimations allow reduce problem computing generator basis computation polynomial ring finitely many variables see remark algorithmic version ritt proof differential theorem given authors propose deterministic algorithm relies computation ascending chains means zero decomposition algorithm however quantitative questions order degree generator addressed effectiveness considerations classical differential version refer interested reader paper organized follows section introduce notations definitions previous results differential algebra mainly concerning order differentiation index needed rest paper section present straightforward optimal upper bound order generator discuss ingredients appear classical proofs theorem ritt kolchin use arguments section means estimates differentiation index order associated dae system reduce computation generator elimination problem effective classical algebraic geometry byproduct obtain 
upper bounds degree generator finally section show two simple examples illustrating constructions preliminaries section introduce notation used throughout paper recall definitions results differential algebra basic definitions notation differential field field set derivations paper differential fields ordinary differential fields say equipped one derivation instance usual derivation reason simply write differential field instead ordinary differential field let differential field characteristic ring differential polynomials indeterminates denoted simply defined commutative polynomial ring infinitely many indeterminates extending derivation letting stands ith derivative cus tomarily first derivatives also denoted write every fraction field differential field denoted fhzi derivation obtained extending derivation quotients usual way order respect ord max appears order ord max ord notion order extends naturally fhzi taking maximum orders numerator denominator reduced representation rational fraction given differential polynomials write denote smallest differential ideal containing smallest ideal containing polynomials derivatives arbitrary order minimum radical differential ideal containing denoted every write differential field extension consists two differential fields restriction given subset denotes minimal differential subfield containing element said differentially transcendental family derivatives algebraically independent otherwise said differentially algebraic differential transcendence basis minimal subset differential field extension differentially algebraic differential transcendence bases differential field extension cardinality see sec theorem called differential transcendence degree differential polynomials ideals manifolds recall definitions properties concerning differential polynomials solutions let class order variables defined greatest appears class order separant denoted initial denoted coefficient highest power given said higher rank either ord ord ord ord degree greater degree finally said higher rank higher class class higher rank use elementary facts theory characteristic sets definitions basic properties rankings characteristic sets refer reader let necessarily finite system differential polynomials manifold set zeros possible differential extensions every radical differential ideal unique representation finite irredundant intersection prime differential ideals called essential prime divisors see differential polynomial positive class algebraically irreducible one essential prime divisor contain manifold prime differential ideal called general solution see function differentiation index let prime differential ideal differential dimension denoted diffdim differential transcendence degree extension frac frac denotes fraction field differential function respect function defined algebraic transcendence degree frac function equals linear function diffdim ord ord invariant called order sec theorem minimum equality holds regularity let finite set differential polynomials contained order bounded integer throughout paper assume words systems consider actually differential purely algebraic definition set every jacobian matrix polynomials respect variables full row rank fraction field fundamental invariant associated ordinary differential algebraic equation systems differentiation index several definitions notion see references given every case represents measure implicitness given system use following definition introduced section context differential polynomial systems respect 
fixed prime differential ideal definition index system polynomials order min every contraction prime ideal denotes localized ring prime ideal algebraic ideal generated roughly speaking differentiation index system minimum number derivatives polynomials needed write relations given differential ideal order differential theorem chapter viii see also classical theorem transcendental field extensions generalized differential algebra framework theorem differential theorem let ordinary differential field characteristic let differentially transcendental let differential field fhui element fhvi goal following let given differential polynomials relatively prime polynomials every denote subfield fhui want compute generator pair differential polynomials fhp also interested study priori upper bounds orders degrees polynomials optimal estimate order polynomials obtained elementary computations see section however problem estimating degrees seems delicate question requires careful analysis subsequent sections paper bound order start proving upper bound order generator proposition previous assumptions notation element fhvi satisfies ord min ord proof let fhvi ord let assumption let new differential indeterminate since fhvi exists fht let ord ord addition since order follows scendental differentially furthermore since ord conclude therefore ord proposition follows note proposition shows possible generators order fact two arbitrary generators related homographic map coefficients see instance ritt approach discuss ingredients appear classical proofs theorem see also consider approach following let new differential indeterminate field fhui particular consider differential prime ideal differential polynomials vanishing lemma manifold general solution irreducible differential polynomial precisely differential polynomial lowest rank proof let differential polynomial lowest rank note algebraically irreducible since prime denote order separant consider differential ideal mod shown ideal prime moreover mod particular order ord multiple see furthermore essential prime divisor representation intersection essential prime divisors prime contain therefore manifold general solution order prove lemma suffices show let taking account prime follows see inclusion consider differential polynomial minimality rank least rank reducing modulo obtain relation type mod initial differential polynomial whose rank lower rank since lie differential ideal follows minimality implies particular therefore since otherwise would differential polynomial rank lower rank recall follows multiplying polynomial given lemma suitable denominator obtain differential polynomial factor following result proved proposition two coefficients regarded polynomial polynomial multiple factor note definition ratio two coefficients coincides ratio corresponding coefficients alternative characterization generator previous assumptions consider map differential algebras defined let kernel morphism isomorphism implies prime differential ideal moreover fraction field isomorphic fhui addition previous isomorphism gives inclusion inclusion induced map fraction fields leads original extension fhui let new differential indeterminate fhui ideal introduced differential polynomial multiplying adequate element obtain differential polynomial rank taking representative respect coefficients get rank differential polynomial conversely given differential polynomial every coefficient polynomial lies differential polynomial zero polynomial vanishes rank higher conclude 
differential polynomial lowest rank among differential polynomials associated differential multiple factor minimal polynomial polynomial introduced lemma therefore proposition generator provided ratio obtained ratio pair coefficients lie moreover proposition let differential polynomial assume lowest rank let suitable integer consider two generic points let substituting differential polynomials obtained respectively generator proof proposition means two specializations variables vanish obtain polynomials form proposition follows since fhp reduction polynomial ring degree bound section obtain upper bounds order degree differential polynomial see proposition involved characterization generator bounds imply particular upper bound degrees numerator denominator generator see section bounding order minimal polynomial estimate order variables differential polynomial minimal rank prime differential ideal introduced section remark differential dimension equals since fraction field isomorphic fhui let max ord without loss generality may assume ord ord consider elimination order since transcendental variable appears derivative appear follows algebraically transcendental continuing way successive derivatives conclude differentially transcendental implies differential ideal contains differential polynomial involving variable thus characteristic set considered elimination order form furthermore differential polynomial minimal rank take following lemma may assume characteristic set irreducible theorem ord ord every particular ord ord order differential prime ideal computed exactly order introduce system differential polynomials provides alternative characterization ideal enables compute order every denote let lemma ideal unique minimal differential prime ideal contain product moreover proof definitions clear moreover since characteristic set order given conclude polynomial exists observe initial separant polynomial every proposition follows system introduced following property use sequel recall definition lemma system proof let maximum orders differential polynomials every let jacobian matrix polynomials respect variables every minor corresponding partial derivatives respect variables scalar multiple zero modulo apply results order compute order proposition order differential ideal equals max ord proof lemma states ideal essential prime divisor shown lemma system therefore taking account maximum orders polynomials theorem regularity hilbertkolchin function implies order obtained value function precisely since differential dimension equals ord trdegf order compute transcendence degree involved formula observe first clear variables algebraically independent ring order ideal coincides transcendence degree without loss generality may assume ord since variable appears algebraic similarly since appears follows extension algebraic proceeding way successive derivatives conclude algebraic since ord arguments previous paragraph extension algebraic therefore trdegl trdegl inequality conclude proposition differential polynomial lowest rank ord reduction algebraic polynomial ideals stated proposition generator closely related polynomial lowest rank proposition polynomial found algebraic ideal polynomial ring following result enable work finitely generated ideal given known generators key point estimation index system see definition lemma index equals particular proof every let jacobian submatrix polynomials respect variables index obtained minimum rank rank holds ranks computed fraction field see section since order 
polynomials variables zero derivative appears implies columns jacobian submatrices systems corresponding partial derivatives respect null hand order system may suppose variable appears polynomial thus matrices block lower triangular matrices form denotes zero column vector column vector rank moreover diagonal structure see rank rank follows index system equals notation denote affine variety defined zariski closure solution set polynomial system every algebraic ideal corresponding variety ideal prime since kernel map irreducible variety moreover reduced complete intersection dimension ideal variety enables ideal minimal polynomial lies proposition following equality ideals holds proof start showing first note since conversely polynomial since otherwise contradicting fact since prime ideal follows lemma implies finally since see proposition conclude therefore previous proposition applied order effectively compute polynomial minimal rank consequently generator extension see propositions working polynomial ring finitely many variables remark consider polynomial ideal compute basis ideal pure lexicographic order variables smaller variables polynomial smaller polynomial contains least one variable degree bounds order estimate degree minimal polynomial previous section therefore proposition also degree generator relate eliminating polynomial algebraic variety suitable linear projection let order proposition consider fields frac frac minimality rank equivalent fact algebraically independent minimal polynomial let transcendence basis denote algebraically independent algebraically dependent since holds let cardinality consider projection construction dimension equals zariski closure hypersurface let irreducible polynomial defining hypersurface recall irreducible variety lemma inequality deg deg note ordu ordu however may property minimal rank following simple example shows example let following previous construction take minimal rank since vanishes fact even though polynomial minimal rank polynomial looking following relation degrees sufficient obtain degree upper bound generator proposition previous assumptions notation deg deg particular deg deg proof construction minimal polynomial minimal polynomial without loss generality may assume polynomials coefficients content respectively since infer divides proposition follows recalling generator obtained quotient two specializations variables derivatives polynomial see proposition two arbitrary generators related homographic map coefficients see conclude degrees numerator denominator generator bounded degree variety exhibit purely syntactic degree bounds terms number given generators maximum order upper bound degrees numerators denominators first since variety irreducible component algebraic set defined polynomials total degrees bounded theorem see instance theorem implies deg analysis particular structure system leads different upper bound deg exponential taking account irreducible variety dimension degree number points intersection generic linear variety codimension deg every generic affine linear form variables aijk bik every equation implies generic points satisfy proceeding inductively follows easily generically rjk deg rjk every substituting formulae clearing denominators deduce degree equals number common solutions system defined polynomials aijk rjk generic coefficients aijk bik inequality upper bounds degrees polynomials rjk follows polynomial total degree bounded every therefore bound implies deg conclude proposition previous assumptions 
notation degrees numerator denominator generator bounded min examples let differential field characteristic differentially transcendental element example let case ideal prime contain therefore dimension equals transcendence basis look polynomial compute polynomial elimination procedure since degu finally specializing compute specialization points respectively hence obtain following generator previous example polynomial lies polynomial ideal differentiation equations needed order compute consequently generator obtained algebraic rational function given generators however always case following example shows example let following previous arguments consider ideal prime ideal variety ideal dimension transcendence basis including maximal subset minimal polynomial since degu two specializations polynomial lead conclude fhui generator form references alonso recio rational function decomposition algorithm polynomials symb comp clausen shokrollahi algebraic complexity theory grundlehren der mathematischen wissenschaften vol springer berlin nearly optimal algorithms decomposition multivariate rational functions extended theorem complexity alfonso jeronimo massaccesi index order implicit systems diffrential equations linear algebra applications alfonso jeronimo complexity resolvent representation prime differential ideals complexity alfonso jeronimo ollivier sedoglavic geometric index reduction method implicit systems differential algebraic equations symbolic computation issue fliess martin rouchon approach equivalence flatness nonlinear systems ieee trans automat control fliess variations sur notion quelques aspects smf journ soc math france paris gao theorem differential fields journal systems science complexity rubio sevilla multivariate rational function decomposition computer algebra london symbolic comput rubio sevilla unirational fields transcendence degree one functional decomposition proc int symposium symb alg comput issac acm press new york heintz definability fast quantifier elimination algebraically closed fields theoret comput sci kolchin differential algebra algebraic groups academic press newyork kolchin extensions differential fields ann math kolchin extensions differential fields iii bull amer math soc beweis eines satzes rationale curven math ann netto einen gordaschen staz math ann ollivier une dimension acad sci paris serie ritt differential equations algebraic standpoint amer math soc colloq vol xiv new york ritt differential algebra amer math soc colloq vol new york sadik bound order characteristic set elements ordinary prime differential ideal applications appl algebra engrg comm comput sederberg improperly parametrized rational curves computer aided geometric design van der waerden modern algebra vol
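The worked examples in the last section above lost all of their formulas to the text extraction, so the following is a small self-contained sketch of the purely algebraic analogue of the ratio-of-coefficients characterisation of the generator used in the paper: for a subfield Q(f1, f2) of Q(t), the minimal polynomial of t over the subfield can be obtained, up to a factor from Q(t), as the gcd of the polynomials f_j(x) - f_j(t), and a Lüroth generator is any non-constant ratio of two of its coefficients. The concrete inputs f1 = t^2, f2 = t^3 are hypothetical illustration data and SymPy is just one possible tool; in the differential setting of the paper one would, as indicated in the remark above, additionally adjoin the derivatives of the generators up to the order bound and then eliminate, e.g. via a Gröbner basis for a pure lexicographic order.

```python
from sympy import symbols, gcd, Poly, cancel

t, x = symbols('t x')

# Hypothetical generators of a subfield Q(f1, f2) of Q(t) (illustration only).
f1 = t**2
f2 = t**3

# f_j(x) - f_j(t), regarded as polynomials in x with coefficients in Q(t).
h1 = f1.subs(t, x) - f1
h2 = f2.subs(t, x) - f2

# Up to a factor from Q(t), their gcd is the minimal polynomial of t over
# Q(f1, f2), written in the variable x.
q = Poly(gcd(h1, h2), x)
coeffs = q.all_coeffs()

# A Lüroth generator is any non-constant ratio of two coefficients of q;
# here the ratio of the constant term and the leading coefficient works.
g = cancel(coeffs[-1] / coeffs[0])
print(q.as_expr(), g)  # the gcd has degree 1 in x and g = -t
```

Since -t generates the same field as t, the sketch recovers the classical fact that Q(t^2, t^3) = Q(t), although neither f1 nor f2 generates this field on its own.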
| 0 |
may remarks finitarily approximable groups nikolay nikolov jakob schneider andreas thom abstract concept group class finite groups common generalization concepts sofic weakly sofic linear sofic group glebsky raised question whether groups approximable finite solvable groups arbitrary invariant length function answer question showing finitely generated perfect group property generalizing counterexample howie related note prove group approximated finite groups quotient approximated finite special linear groups moreover discuss question connected lie groups embedded metric ultraproduct finite groups invariant length function prove precisely abelian ones providing negative answer question doucha referring problem zilber show identity component lie group whose topology generated invariant length function abstract quotient product finite groups abelian last two facts give alternative proof result turing finally solve conjecture pillay proving identity component compactification pseudofinite group must abelian well results article applications theorems generators commutators finite groups first author segal section also use results liebeck shalev bounded generation finite simple groups introduction eversince work gromov gottschalk surjunctivity conjecture class sofic groups attracted much interest various areas mathematics major applications notion arose work elek kaplansky direct finiteness conjecture determinant conjecture recently joint work third author klyachko generalizations conjecture howie conjecture despite considerable effort group found far view situation attempts made provide variations problem might approachable terminology holt nikolay nikolov jakob schneider andreas thom rees sofic groups precisely groups approximated finite symmetric groups normalized hamming length sense definition natural vary class finite groups also metrics allowed note terminology differs one used similar concepts studied strongest form approximation satisfied lef resp lea groups case well known finitely presented group approximable finite resp amenable groups discrete length function lef resp lea fails residually finite resp residually amenable examples sofic groups fail lea thus also fail lef given see also answering question gromov third author proved higman group approximated finite groups invariant length function howie presented group result glebsky turned approximable finite nilpotent groups arbitrary invariant length function present article provides four results type see sections however setting restrict classes finite groups impose restrictions length functions approximating groups invariant see definitions recently glebsky asked whether groups approximated finite solvable groups sense definition section answer question establishing finitely generated perfect group counterexample see theorem key result theorem segal generators commutators finite solvable groups section using results first author liebeck shalev prove group approximable finite groups homomorphism metric ultraproduct finite simple groups type psln conjugacy length function see theorem section discuss approximability lie groups finite groups easy see topological group approximable symmetric groups continuously embeddable metric ultraproduct symmetric groups arbitrary invariant length function see remark using much deeper analysis show connected lie group appoximable finite groups sense definition precisely abelian see theorem question doucha asked groups equipped invariant length function embed metric ultraproduct finite groups 
invariant length function result implies compact connected lie group example group thus simplest example topological group weakly sofic continuously embeddable metric ultraproduct finite groups invariant length function remarks finitarily approximable groups however remark every linear lie group abstract subgroup algebraic ultraproduct finite groups indexed see remark furthermore section answer question zilber exist compact simple lie group quotient algebraic ultraproduct finite groups indeed show lie group equipped invariant length function generating topology abstract quotient product finite groups abelian identity component see theorem hence compact simple lie group fails approximable finite groups sense zilber slight variation theorem also answers question pillay moreover point theorem theorem provide alternative proof main result turing finally using approach previous two results solve conjecture pillay bohr compactification pseudofinite group abelian see theorem results section follow theorem generators commutators finite groups first author segal preliminaries section recall basic concepts introduce notion abstract topological groups present examples metric groups length functions metric group mean group equipped metric however allow metric attain value infinity needed definition make sense metric group define corresponding length function norm inherits following three properties iii conversely function associate metric defined gives correspondence length functions metrics indicate length function metric correspond equipping decoration length function called invariant constant conjugacy classes happens precisely article length functions invariant metrics let introduce following types length functions finite group shall use article discrete length function simplest one defined corresponds discrete metric ddg conjugacy pseudo length function nikolay nikolov jakob schneider andreas thom defined conjugacy class proper length function trivial center projective rank length function defined pgln prime power min lift finally cayley length function respect subset defined min call family length functions sequence finite groups lipschitz continuous respect second family groups example since finite group conjugacy length function projective rank length function lipschitz continuous respect discrete length function also lipschitz continuous respect call families lipschitz equivalent example lipschitz equivalent class nonabelian finite simple groups follows see argument end section abstract groups define metric approximation abstract group class finite groups throughout article let class definition abstract group called function finite subset exist group invariant length function map iii note definition differs slightly definition one impose restrictions invariant length functions however equivalent definition indeed may even require definition without changing essence namely choosing small enough setting min defining replace sense impose restrictions length functions groups remarks finitarily approximable groups terms property discrete property strong property coincide moreover similar soficity local property expressed following remark remark abstract group every finitely generated subgroup property let present examples abstract groups subsequently denote alt resp fin class finite alternating groups resp class finite groups indeed abstract groups sense seen generalization sofic resp weakly sofic groups shown section example group sofic resp weakly sofic resp abstract group groups approximable 
certain classes finite simple groups lie type studied every certainly since take restriction identity discrete length function definition hence remark implies example every locally abstract group metric ultraproducts groups since another common equivalent characterization groups via metric ultraproducts invariant length function recall concept next details algebraic geometric structure ultraproducts see also definition let sequence finite groups invariant length function ultrafilter index set metric ultraproduct defined group modulo normal subgroup limu null sequences equipped invariant length function defined limu representative important length functions since otherwise would authors use slightly different definition restricting sequences uniformly bounded length however prefer definition since ultraproduct always quotient product finite groups another thing mention ultraproduct definition always topological group group operation taking inverses continuous respect topology induced holds invariant nikolay nikolov jakob schneider andreas thom lastly remark algebraic ultraproduct family finite groups isomorphic metric ultraproduct groups equipped discrete length function respect ultrafilter sense view every algebraic ultraproduct metric ultraproduct point announced characterization abstract groups via metric ultraproducts let call class trivial either promised characterization lemma class every abstract group isomorphic discrete subgroup metric ultraproduct invariant length function diam distance images two different elements one countable chosen natural order ultrafilter conversely subgroup metric ultraproduct invariant length function abstract group proof result identical corresponding proof sofic case well known hence omit topological groups view lemma natural generalize notion group topological groups using ultraproducts definition topological group called embeds continuously metric ultraproduct invariant length functions lemma indicates following class examples topological groups example every abstract group equipped discrete topology topological group conversely topological group abstract group forget topology present classes examples need auxiliary result following lemma gives sufficient condition metric group isomorphic ultraproduct finite metric groups proof trivial lemma let group invariant length function index set ultrafilter sequence finite groups invariant length function metric ultraproduct assume mappings isometric homomorphism lim lim isometric embedding ultraproduct defined remarks finitarily approximable groups embedding surjective every exists limu iii surjects onto subgroup elements finite length previous assertion holds let class finite products class subgroups finite products respectively investigate profinite groups topological groups standard example given following lemma lemma let profinite group isomorphic metric ultraproduct topological group proof want apply lemma equip invariant length function max let ultrafilter set let restriction define way every distance minimal definition hence easy verify condition lemma fulfilled define exists compactness limu limu limu ends proof previous example derive following result lemma group following equivalent topological group metrizable iii inverse limit countable inverse system maps surjective closed topological subgroup countable product proof implications iii trivial iii let countable system open neighborhoods find open normal subgroup subgroup group proposition proposition let collection subgroups since hausdorff holds 
moreover hence may assume closed finite intersections apply proposition obtain inverse limit respect natural maps standard construction inverse limit embeds countable product definition embeds countable product need show nikolay nikolov jakob schneider andreas thom countable product lemma proof complete remark previous lemma implies group embeds continuously metric ultraproduct invariant length function already embeds ultraproduct countably many groups able present following important example example topologically finitely generated group topological group proof indeed embeds continuously product conq tinuous finite quotients finite generation implies countably many proposition restrict map product subgroups still embedding latter embeds countable product hence lemma however also simple find examples profinite groups approximable finite groups example uncountable products finite groups metrizable hence approximable finite groups turn lie groups following example demonstrates connected abelian lie groups always approximated finite abelian groups sense definition henceforth let abd class finite abelian groups direct sum cyclic groups lemma every connected abelian lie group equipped euclidean length function isometrically isomorphic subgroup elements finite length ultraproduct abd length function hence abd proof wish apply iii lemma euclidean length function let ultrafilter set define let canonical isomorphism set map moreover equip unique length function turns isometry let map minimal define clearly tends zero hence condition lemma holds condition iii follows compactness remarks finitarily approximable groups closed balls finite radius argument end proof lemma proof complete see theorem section connected abelian lie groups connected lie groups groups subsequently let sol resp nil class finite solvable resp nilpotent groups section establish following theorem theorem finitely generated perfect group consequence finite group solvable indeed finite solvable group hand finite group contains perfect subgroup hence remark theorem initially howie proved group mimic proof finitely generated perfect group extend establishing groups even using techniques segal preparation proof theorem need auxiliary result recall topology group initial topology induced homomorphisms equipped discrete topology hence closure subset topology characterized follows element lies closure homomorphism adapting theorem one prove following theorem relating groups topology free group finite rank theorem let presentation group finite sequence holds topology converse holds closed respect finite products subgroups remark closed respect finite products subgroups previous theorem implies residually abstract groups since finitely generated group finite sequence obtain topology view theorem closed subgroups prove existence group suffices find normal subgroup free group finite rank element sequence surjective homomorphism nikolay nikolov jakob schneider andreas thom classes nil sol closed respect subgroups shall construct situation described subsequently let freely generated fix presentation perfect group element assumption perfect equivalent fact hence find modulo consider surjective homomorphism finite group later assumed nilpotent resp solvable writing translates modulo clearly generate modulo surjective need lemma state becomes necessary introduce notation group define commutator two elements write set subgroups write subgroup generated lemma proposition let groups suppose denotes subgroup proof theorem part apply previous lemma 
moreover choose integer lth term lower central series hence exist lij lir modulo assuming nilpotent last congruence shows nil knil fixed sequence entries set thus prove need following deeper result segal finite solvable groups theorem theorem assume finite solvable group moreover assume generated elements fixed sequence indices whose entries length depend gij remarks finitarily approximable groups proof theorem part assume solvable want apply theorem since surjective elements generate may set still define elements congruences conclude sequence lrr good choice thus bounded terms theorem gives similarly nilpotent case fixed sequence sol entries whose length ksol knil bounded terms thus note finite generation crucial indeed exist countably infinite locally groups perfect even characteristically simple example groups since finite nilpotent definition finitely generated known locally groups simple seems open problem exist simple groups groups let psl class simple groups type psln prime power recall fin class finite groups section prove following result theorem finitely generated group quotient particular every simple group prove theorem need preparation first recall classical lemma goursat lemma goursat lemma let subdirect product restricted projection maps surjective set ker ker image graph isomorphism need preceding lemma following auxiliary result recall profinite group called semisimple direct product finite simple groups moreover finite group almost simple unique minimal normal subgroup simple case aut nikolay nikolov jakob schneider andreas thom lemma let closed subdirect product profinite group almost simple contains closed normal semisimple subgroup solvable derived length three simple factor normal proof let projection maps proposition inverse limit groups finite together natural maps using goursat lemma one show induction finite exist finite simple groups aut aut situation projection either isomorphism exists finite simple group aut aut restriction socle natural projection onto socle clear inverse limit groups finite maps contains inverse limit socles groups together restricted maps routine check desired properties fact solvable derived length three implied schreier conjecture start proof theorem prove group endow groups psln conjugacy metric see equation definition group theorem perfect cyclic quotient clearly desired property let perb fect freely generated let profinite completion hhn iifb normal closure identifying image profinite completion follows theorem since sequence closure left taken equivalent saying map induces embedding profinite group set almost simple claim holds proof assume contrary perfectness assumption also set closed definition remarks finitarily approximable groups hence theorem applied closures taken since get hhy abelian preceding argument since otherwise would abelian contradiction proving claim claim implies homomorphism subdirect product almost simple apply lemma semisimple group provided quotients let otherwise image lemma solvable would trivial contradicting claim proper normal subgroup semisimple hence group simple factors theorem contained maximal normal subgroup former result isomorphic abstract group metric ultraproduct conjugacy length function since left note situation even normal lemma invariant aut claim setting proof otherwise first inclusion holds assumption whereas second follows commutator identity since last choice second last equality holds hence maps continuously finite let simple factor discrete group aut via conjugation action image map 
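The symbols in the statement of Goursat's lemma recalled above were lost in extraction; a standard formulation is reconstructed here for convenience (the paper's notation may differ slightly), and it is presumably this form that feeds the induction in the proof of the lemma stated next.

```latex
% Goursat's lemma (standard formulation; a reconstruction).
\begin{quote}
  Let $H \le G_1 \times G_2$ be a subdirect product, i.e.\ both projections
  $\pi_i \colon H \to G_i$ are surjective. Put $N_1 = \ker \pi_2$, viewed as a
  subgroup of $G_1$, and $N_2 = \ker \pi_1$, viewed as a subgroup of $G_2$.
  Then $N_i \trianglelefteq G_i$ and the image of $H$ in $G_1/N_1 \times G_2/N_2$
  is the graph of an isomorphism $G_1/N_1 \cong G_2/N_2$.
\end{quote}
```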
clearly contains inner automorphism since induced must elements generate dense subgroup induce inner automorphisms previous fact trivial center lemma implies contains normal subsets since theorem depending since independent simple factor implies contradiction previous claim deduce still hob momorphism since solvable quotient nikolay nikolov jakob schneider andreas thom homomorphism restricts homomorphic image metric ultraproduct since latter simple proposition left show metric ultraproduct sequence finite simple groups conjugacy length function respect ultrafilter embeds metric ultraproduct groups pslni equipped conjugacy length function since would property let briefly sketch argument firstly limit ranks groups bounded along ultrafilter rank alternating group defined sporadic groups also considered groups bounded rank resulting ultraproduct simple group lie type pseudofinite field alternating group respectively first case clearly embeds psln appropriately chosen however latter metric ultraproduct groups psln conjugacy length function sequence prime powers second case similar hence may assume ultraproduct involve finite simple groups families bounded rank assume contains alternating groups replace alternating group psln power prime namely natural embedding psln psln equipped projective rank length function induces cay cayley length function respect conjugacy class transposition ambient symmetric group latter lipschitz equivalent conjugacy class length function theorem hence assume groups classical chevalley steinberg groups follows lemmas end section theorem conjugacy length function projective rank function coming natural embedding psln pgln groups also lipschitz equivalent hence embed ultraproduct ultraproduct groups pslni equipped projective length function former lipschitz equivalence replaced conjui gacy length function ends proof approximability lie groups section utilize following theorem first author segal deduce two results concerning approximability lie groups finite groups one result compactifications pseudofinite groups remarks finitarily approximable groups theorem theorem let symmetric generating set finite group depends remark remarked open problem time writing decide whether finite product conjugacy classes free group always closed profinite topology rather straightforward consequence theorem case indeed theorem implies profinite closure product conjugacy classes contains entire commutator subgroup well known fact see theorem commutator width group infinite implication first observed segal independently discovered gismatullin actually shall use following immediate corollary theorem corollary let quotient product finite groups fixed constant recall fin denotes class finite groups first prove following theorem theorem connected lie group topological group abelian lemma already know connected abelian lie groups left prove connected lie group actually abelian consequence following auxiliary result lemma let continuous homomorphisms metric ultraproduct finite groups invariant length function holds let first prove theorem using lemma proof theorem assume connected lie group embedding metric ultraproduct finite groups invariant length function image exponential map lemma implies commute injective commute hence connectedness abelian ends proof nikolay nikolov jakob schneider andreas thom still left prove lemma proof lemma continuity choose large enough set apply corollary gives whence invariance triangle inequality since arbitrary proof complete note theorem provides answer 
question doucha whether groups invariant length function embed metric ultraproduct finite groups invariant length function since every compact lie group equipped invariant length function generates topology every group identity component example group previous theorem indeed theorem even provide topological types groups occur subgroups metric ultraproduct continue next result let state following two remarks remark theorem topology lie group matters indeed linear lie group abstract group remark since finitely generated subgroups residually finite malcev theorem hence remark thus linear lie group embeddable abstract group metric ultraproduct finite groups invariant length function indexed say partially ordered set pairs consisting finite subset positive rational number show even choose index set namely sln embedded algebraic ultraproduct sln ultrafilter set prime indeed ultraproduct isomorphic numbers sln pseudofinite field straightforward see contains field together algebraic closure however algebraically closed field characteristic zero cardinality result due shelah hence isomorphic note view algebraic ultraproduct metric ultraproduct induced topology sln discrete since lie groups admit finitely presented subgroups residually finite clear embeddings exist without assumption linearity remark one approximates symmetric groups one even embed real line metric ultraproduct groups invariant length function symmetric group shown invariant length functions satisfy every using identity simple deduce remarks finitarily approximable groups continuous homomorphism metric ultraproduct finite symmetric groups invariant length function trivial referring question zilber also question pillay whether compact simple lie group quotient algebraic ultraproduct finite groups present following second application corollary theorem lie group equipped metric generating topology abstract quotient product finite groups abelian identity component proof result almost identical proof theorem proof let lie group invariant length function image exponential map find applying corollary yields whence invariance triangle inequality shows commute hence generated image exponential map must abelian theorem implies compact simple lie group simplest example quotient product finite groups answering zilber question hence also answers question pillay moreover theorem remains valid replace product finite groups pseudofinite group group model theory finite groups also provides negative answer question pillay whether surjective homomorphism pseudofinite group compact simple lie group state last theorem section digress briefly pointing application theorems referring call compact group compatible invariant length function finite set group bijection define routine check invariant length function set minimal nikolay nikolov jakob schneider andreas thom situation apply lemma setting ultrafilter one checks easily may apply lemma hence turingapproximable group isomorphic metric ultraproduct finite groups invariant length function thus theorem well theorem imply lie group abelian identity component main result lemma latter condition also sufficient compact lie group let turn pseudofinite groups compactification abstract group mean compact group together homomorphism dense image pilay conjectured bohr compactification universal compactification pseudofinite group abelian identity component conjecture answer conjecture affirmative following result theorem let pseudofinite group identity component compactification abelian proof easy application 
corollary proof pseudofinite satisfies statement corollary image easy compactness argument shows property let aut irreducible unitary representations image weyl theorem embeds continuously embeds compact quotient corollary holds proof theorem follows abelian must abelian well embedding acknowledgements second third author want thank alessandro carderi interesting discussions content paper part phd project second author research supported erc consolidator grant finished first version article circulated among experts pointed independently slightly earlier lev glebsky found solution zilber problem along lines references goulnara arzhantseva liviu linear sofic groups algebras transactions american mathematical society yves cornulier sofic group away amenable groups mathematische annalen pierre deligne extensions centrales non finies groupes acad sci paris remarks finitarily approximable groups michal doucha metric topological groups metric approximation metric ultraproducts appear groups geometry dynamics elek endre sofic groups direct finiteness journal algebra hyperlinearity essentially free actions sofic property mathematische annalen tsachik gelander limits finite homogeneous metric spaces enseign math lev glebsky luis manuel rivera sofic groups profinite topology free groups journal algebra lev glebsky approximations groups characterizations sofic groups equations groups journal algebra edouard goursat sur les substitutions orthogonales les divisions espace annales scientifiques normale mikhael gromov endomorphisms symbolic algebraic varieties journal european mathematical society derek holt sarah rees closure results groups arxiv preprint james howie topology free group counterexample mathematische zeitschrift aditi kar nikolay nikolov sofic group sciences anton klyachko andreas thom new topological methods solve equations groups algebraic geometric topology martin liebeck aner shalev diameters finite simple groups sharp bounds applications annals mathematics dermot mclain characteristically simple group mathematical proceedings cambridge philosophical society nikolay nikolov dan segal generators commutators finite groups abstract quotients compact groups inventiones mathematicae abderezak ould houcine point alternatives pseudofinite groups journal group theory anand pillay remarks compactifications pseudofinite groups fundamenta mathematicae derek robinson finiteness conditions generalized soluble groups part new ergebnisse der mathematik und ihrer grenzgebiete band dan segal closed subgroups profinite groups proceedings london mathematical society words notes verbal width groups vol cambridge university press saharon shelah cardinality ultraproduct finite sets journal symbolic logic abel stolz andreas thom lattice normal subgroups ultraproducts compact simple groups proceedings london mathematical society andreas thom examples hyperlinear groups without factorization property groups geometry dynamics metric approximation higman group journal group theory nikolay nikolov jakob schneider andreas thom andreas thom john wilson metric ultraproducts finite simple groups comptes rendus mathematique geometric properties metric ultraproducts finite simple groups arxiv preprint alan turing finite approximations lie groups annals mathematics john wilson profinite groups vol clarendon press boris zilber perfect infinities finite approximation infinity truth ims lecture notes series nikolov university oxford oxford address schneider dresden dresden germany address thom dresden dresden germany 
| 4 |
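The row above repeatedly refers to invariant length functions and metric ultraproducts of finite groups, but the extracted text has lost its formulas. The LaTeX below restates the standard definitions that the discussion presupposes; it is a reconstruction of the usual conventions (normalised, conjugation-invariant length functions), not a verbatim restoration of the paper's own equations.

```latex
% Standard conventions assumed here (reconstruction, not verbatim from the paper).
% An invariant length function on a group $G$ is a map $\ell\colon G \to [0,1]$ with
\[
\ell(1)=0,\qquad \ell(g^{-1})=\ell(g),\qquad
\ell(gh)\le \ell(g)+\ell(h),\qquad \ell(hgh^{-1})=\ell(g).
\]
% Given groups $(G_i,\ell_i)$ and an ultrafilter $\mathcal{U}$ on the index set,
% the metric ultraproduct is the quotient of the direct product by the null sequences:
\[
\prod_{\mathcal{U}} (G_i,\ell_i) \;=\; \Bigl(\prod_i G_i\Bigr)\Big/ N_{\mathcal{U}},
\qquad
N_{\mathcal{U}} \;=\; \Bigl\{(g_i)_i \,:\, \lim_{i\to\mathcal{U}} \ell_i(g_i)=0\Bigr\}.
\]
% A group is approximable by a class of finite groups if it embeds into such a
% metric ultraproduct of groups taken from that class.
```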
temporal multimodal fusion video emotion classification wild valentin pateux jurie orange labs france orange labs france normandie unicaen ensicaen cnrs caen france sep abstract paper addresses question emotion classification task consists predicting emotion labels taken among set possible labels best describing emotions contained short video clips building standard framework lying describing videos audio visual features used supervised classifier infer labels paper investigates several novel directions first improved face descriptors based convolutional neural networks proposed second paper explores several fusion methods temporal multimodal including novel hierarchical method combining features scores addition carefully reviewed different stages pipeline designed cnn architecture adapted task important size training set small compared difficulty problem making generalization difficult model ranked emotion wild challenge accuracy keywords emotion recognition multimodal fusion recurrent neural network deep learning disg fear hap sad neu ral rise figure emotions afew dataset row represents emotion set faces sampled across representative video clip please note video clips contain faces introduction related work emotion recognition topic broad current interest useful many applications advertising psychological disorders understanding also topic importance research areas video summarization face normalization expression removal even emotion recognition could appear almost solved problem laboratory controlled conditions still many challenging issues case videos recorded wild paper focuses task emotion classification video clip assigned one one emotion based content classes usually six basic emotions anger disgust fear happiness sadness surprise addition neutral class emotion recognition emotion recognition wild challenge precisely paper presents methodology experiments well results obtained edition emotion recognition wild challenge emotion recognition received lot attention scientific literature one large part literature deals possible options defining representing emotions use discrete classes joy fear angriness etc also normandie unicaen ensicaen cnrs straightforward way interesting represent emotions degrees arousal valence proposed restricted case facial expressions action units also used focusing activation different parts face links made two representations discrete classes mapped arousal valence space deduced action units another important part literature focuses ways represent contents features subsequently used classifiers early papers make use features local binary patterns lbp gabor features discrete cosine transform representing images linear predictive coding coefficients lpc relative spectral transform linear perceptual prediction modulation spectrum modspec enhanced autocorrelation eac audio standard classifiers svn knn classification see details winner emotiw demonstrated relevance action units recognition emotions among first propose learn features instead using descriptors relying deep convolutional networks recently winner emotiw introduced feature efficient representation faces literature emotion recognition contents also addresses question fusion different modalities modality seen one signals allowing perceive emotion among recent methods fusing several modalities mention use convnets moddrop multiple kernel fusion used modalities context emotion recognition face images even context seems also tremendous importance instance general understanding scene even based simple 
features describing whole image may help discriminate two candidate classes recent methods emotion recognition supervised hence requires training data availability resources becoming critical several challenges collected useful data avec challenge focuses use several modalities track arousal valence videos recorded controlled conditions emotionet challenge proposes dataset one million images annotated action units partially discrete compound emotion classes finally emotion wild challenge deals classification short video clips seven discrete classes videos extracted movies shows recorded wild ability work data recorded realistic situations including occlusions poor illumination conditions presence several people even scene breaks indeed important aforementioned paper deals participation emotion wild emotiw challenge build pipeline audio features extracted opensmile toolkit two video features computed one descriptor model face emotion database introduced lstm network one features processed classifier output classifiers combined late fusion starting pipeline propose improve different ways main contributions approach first recent literature suggests late fusion might optimal way combine different modalities see paper investigates different directions including original hierarchical approach allowing combine scores late fusion features early fusion different levels seen way combine information optimal level description features scores representation addresses important issue fusion ensure preservation unimodal information able exploit information second investigate several ways better use temporal information visual descriptors among several contributions propose novel descriptor combining lstm third observed amount training data labeled short video clips rather small compared number parameters standard deep models considering complexity diversity emotions context supervised methods prone show paper effect reduced carefully choosing number parameters model favoring transfer learning whenever possible figure comparison bounding boxes given procedure color images provided afew dataset images random faces afew bounding boxes slightly larger rest paper organized follows presentation proposed model done section detailing different modalities fusion methods section presents experimental validation well results obtained validation test sets challenge presentation proposed approach figure presents overview approach inspired one modalities consider extracted audio video associated faces analyzed two different models cnn two main contributions temporal fusion novel descriptor overall method works follows one hand opensmile library used produce dimensional features used perceptron predict classes well compact descriptors vectors audio hand video classification based face analysis detecting faces normalizing appearance one set features computed model another set features obtained model cases features produced temporal fusion descriptions done across frames sequences producing per modality scores compact descriptors dimensional vectors modalities audio combined using score predictions compact representations explaining faces detected normalized rest section gives details modality processed fusion performed face detection alignment emotiw challenge provides face detections frame video however preferred use annotations detect faces motivation twofold first want able process given video emotiw adding external training data second necessary master face alignment process processing features fully connected 
opensmile features features frame frame temporal fusion features features overlap frames window consecutive frames temporal fusion overlap features temporal fusion softmax score size features vector features figure model includes audio without overlapping modalities modality gives feature vectors scores fused using different methods noted dimensions features vectors chosen balanced contributions modalities making trade number parameters performance images face datasets vgg models see section fer dataset used vgg model face detection use internal detector provided orange labs found observations face detector detects faces versus validation set lower false positive rate false positives versus validation set one use provide emotiw annotations detector applied frame per frame one several faces detected tracking based relative positions allows define several face tracks second time alignment based landmarks done choose longest face sequence video finally apply temporal smoothing positions faces filter jittering figure compares one normalized faces detection alignment one provided challenge data audio features classifier audio channel video fed opensmile toolkit emotiw competitors get description vector length commonly used approach audio learn classifier top opensmile features support vector machine seems dominant choice classification even approaches like random forests able control finely dimensionality description vector one used later fusion process learn perceptron relu activation opensmile description using batch normalisation dropout inference extract video description vector size hidden layer perceptron along softmax score representing faces convolutional neural network current popular method see representing faces context emotion recognition especially emotiw use configuration file challenge cnn vggface model images emotion images dataset fer dataset using model way balance relatively small size emotiw dataset afew dataset images fer dataset first processed detecting aligning faces following procedure explained section model fer dataset using training public test set training use data augmentation jittering scale flipping rotating faces aim make network robust small misalignment faces also apply strong dropout last layer vgg keeping nodes prevent achieve performance fer private test set slightly higher previously published results assess quality description given vgg model emotion recognition first benchmarked validation set sfew dataset containing frames videos afew used emotiw challenge achieved score without retraining model sfew result using committee deep models challenge baseline face sequences detected afew videos resized fed model extract length layer following pipeline fan also compute softmax score frame overlap frames window frames window frames figure case frames overlap see windows sharing half faces inference face sequences detected afew videos resized split video several windows consecutive without overlapping windows shown figure fed weighted extract second fully connected layer last softmax layer window videos method propose bears similarities multiple instance learning mil within framework mil video considered bag windows one single label straightforward way apply mil would train video bag windows add final layer choosing prediction one maximum score among scores batch loss would computed prediction weights defined play role selecting iteratively best scoring windows representing faces convolutional neural network convolutional neural networks shown give good 
performance context facial expressions recognition video fan model randomly chosen windows consecutive faces inference model applied central frame video one limitation approach test time guaranty best window capturing emotion middle video problem occurs training large part windows randomly selected training contain emotion contain correct emotion indeed videos annotated whole frames different labels sometimes expression address limitation following way first finetune model using windows video optimizing classification performance beginning convergence able learn meaningful windows others weigh window based scores precisely video window video epoch weight computed per modality temporal fusion vgg weighted representations applied frame videos turning videos temporal sequences visual descriptors name elements sequences descriptors whether description frame window classify sequences descriptors investigated several methods straightforward one score descriptor take maximum softmax scores final prediction similarly maximum means softmax across time also considered better take account temporal dependencies consecutive descriptors another option use long memory recurrent neural networks lstm unlike chose use variable length lstm allowing take descriptors inputs prevent also applied dropout lstm cells weight decay final fully connected layer random grid search applied one models temperature parameter decreasing epoch score window video normalize weight ensure video random grid search including temperature descent made validation set sequences fewer frames padded reach sufficient length used give one description vector output final fully connected layer lstm one softmax score video final architecture hidden units final fully connected layer hidden units maximal length input sequence frames final architecture hidden units final fully connected layer hidden units maximal length input sequence windows overlap frames windows overlap fully connected hidden representation modalities features fully connected multimodal fusion score figure fully connected model moddrop three modalities concatenated fully connected moddrop applied obtained hidden representation fed second regular fully connected layer output scores last least different modalities efficiently combined maximize overall performance explained section two main fusion strategies used score fusion late fusion consists predicting labels based predictions given modality features fusion early fusion consists taking input latent features vectors given modality learning classifier top last editions challenge papers focused score fusion using svm multiple kernel fusion weighted means differently several authors tried train audio image modalities together see combining early features using soft attention mechanism achieve performance propose approach combining rest section describes four different fusion methods experimented including novel method denominated score trees formal let number modalities weights matrix first layer divided weights block matrices modeling unimodal intermodal contributions ranging number modalities first fully connected equation written term add loss simply decreasing time setting high values first iterations leads zeroing block matrices lowering later reintroduces progressively coefficients observation approach provided better convergence considered problem baseline score fusion experimented several standard score fusion like majority voting means scores maximum scores linear svm fully connected neural network modality drop also 
experimented moddrop method neverova consists applying dropout modality learning crossmodality correlations keeping unimodal information reports results gesture recognition apply method audio features shown figure according much better simply feeding concatenation modalities features fully connected neural network letting network learn joint representation indeed fully connected model would unable preserve unimodal information learning important step make convergence possible moddrop first learn fusion without reason conditioned weight matrix first layer diagonal blocks equal zeros released constraint number iterations warranty preservation unimodal information explore alternative method turned better apply adapted weight decay blocks decreased contribution loss time score trees motivation combine information coming scores information coming features together building call score trees see figure illustration fully connected classification neural network applied separately features different modalities outputting vector size vector concatenated scores two modalities create vector size fully connected classification neural network fed outputs prediction vector size aim make predictions respect predictions coming modalities finally three new prediction vectors concatenated fed last fully connected classifier gives overall scores method generalized number modalities weighted mean weighted mean approach winners edition consists weighting score modality sum weights chosen cross validation validation set selecting ones giving best performance applied audio models training validation test angry disgust fear happy sad neutral surprise total table afew number video sequences per class fully connected fully connected fully connected fully connected fully connected fully connected bounding boxes timestamps kept extract candidates temporal windows ten seconds around time stamps asked human annotators select annotate relevant ones ensure quality annotations evaluated human beings validation set reached performance depending annotator compatible figure observed fully connected figure score tree architecture experimental validation results introducing afew dataset dataset challenge section presents experimental validation method first present experiments done modalities taken separately present experiments fusion finally introduce experiments done emotiw challenge performance obtained experiments single modalities modality evaluated separately validation set afew modalities performs better regarding unidirectional bidirectional lstm architectures evaluated one two layers several recent papers bidirectional lstm claimed efficient unidirectional one could seen way augment data nevertheless case observed bidirectional lstm prone training set therefore perform well validation set observation made increasing number layers best performing architecture cases unidirectional lstm vgg model without lstm first evaluated taking maximum scores sequences giving accuracy validation set different lstm architectures tested best performance one given table note improvement compared fan explained fact model uses whole sequences feeding model information data augmentation also helps acted facial emotion wild dataset acted facial emotion wild afew dataset used emotiw challenge version afew composed training videos validation videos test videos video labeled one emotion among angry disgust fear happy sad neutral surprise addition video cropped aligned faces extracted also provided another important specificity dataset class 
distribution subsets shown table difference make performance test set different one validation set classes challenging others experimented method modality alone performance given table observe implementation trained random windows evaluated central windows good fan trained central windows performed better either proposed lstm tested without overlapping windows evaluate weighted prediction window maximal softmax score among video first taken performs better without overlapping observe lower difference training validation accuracy could explained fact number external training data enlarge training set collected external data selecting video clips personal dvds movie collection checking overlap selected movies ones afew movies processed following pipeline faces first detected using face detector see section details method validation accuracy maximum scores unidirectional lstm one layer bidirectional lstm one layer unidirectional lstm two layers bidirectional lstm two layers fan table performance validation set afew fusion method validation test accuracy accuracy majority vote mean moddrop score tree weighted mean table performance different fusion methods validation test sets method validation accuracy central window random window weighted overlap weighted frames overlap weighted frames overlap lstm overlap lstm frames overlap fan table performance validation set afew submitted method presented paper emotiw challenge submitted runs performance given table difference runs follows submission moddrop fusion audio overlap submission addition another overlap improving performance test set well validation set submission fusion based score trees achieve better accuracy test set observing slight improvement validation set submission addition one two one frames overlap one without new models selected among best results random grid search according potential complementarity degree evaluated measuring dissimilarity confusion matrices fusion method moddrop submission weighted mean fusion preceding modalities giving gain test set losing one percent validation set highlighting generalization issues submission best submission sixth method models trained training validation sets improves accuracy improvement also observed former editions challenge surprisingly adding data bring significant improvement gain less one percent validation set could explained fact annotations correlated enough afew annotation windows lower overlap choice windows therefore easier second observation use lstm weighted leads highest scores end observed descriptor performs significantly better one audio audio modality gave performance lower method use perceptron classifier worse svm nevertheless allowed use audio features fusion participation emotiw challenge experiments fusion table summarizes different experiments made fusion simple baseline fusion strategy majority vote means scores perform well modality alone proposed methods moddrop score tree achieved promising results validation set good simple weighted mean test set explained largest number parameters used moddrop score tree fact parameters cross validated validation set best performance obtained validation set accuracy significantly higher performance baseline algorithm provided organizers based computing lbptop descriptor using svr giving accuracy validation set proposed method ranked competition observed year improvement top accuracy compared previous editions small improvement might explained fact methods saturating converging towards human performance assumed around 
however performance top human annotators whose accuracy higher means still room improvement submission test accuracy table performance submissions test set angry disgust fear happy neutral sad surprise angry disgust fear happy neutral sad surprise references saad ali mubarak shah human action recognition videos using kinematic features multiple instance learning ieee transactions pattern analysis machine intelligence moez baccouche franck mamalet christian wolf christophe garcia atilla baskurt sequential deep learning human action recognition international workshop human behavior understanding springer moez baccouche franck mamalet christian wolf christophe garcia atilla baskurt convolutional sparse sequence classification bmvc sarah adel bargal emad barsoum cristian canton ferrer cha zhang emotion recognition wild videos using images proceedings acm international conference multimodal interaction acm lisa feldman barrett batja mesquita maria gendron context emotion perception current directions psychological science lisa feldman barrett james russell structure current affect controversies emerging consensus current directions psychological science fabian ramprakash srinivasan qianli feng yan wang aleix martinez emotionet challenge recognition facial expressions emotion wild arxiv preprint linlin chao jianhua tao minghao yang zhengqi wen audio visual emotion recognition temporal alignment perception attention arxiv preprint junkai chen zenghai chen zheru chi hong emotion recognition wild feature fusion multiple kernel learning proceedings international conference multimodal interaction acm abhinav dhall roland goecke shreya ghosh jyoti joshi jesse hoey tom gedeon individual emotion recognition emotiw proceedings acm international conference multimodal interaction abhinav dhall roland goecke jyoti joshi jesse hoey tom gedeon emotiw video emotion recognition challenges proceedings acm international conference multimodal interaction acm abhinav dhall roland goecke simon lucey tom gedeon collecting large richly annotated databases movies ieee multimedia abhinav dhall ramana murthy roland goecke jyoti joshi tom gedeon video image based emotion recognition challenges wild emotiw proceedings acm international conference multimodal interaction acm yong wei wang liang wang hierarchical recurrent neural network skeleton based action recognition proceedings ieee conference computer vision pattern recognition paul ekman wallace friesen facial action coding system florian eyben martin schuller opensmile munich versatile fast audio feature extractor proceedings acm international conference multimedia acm lijie fan yunjie spatiotemporal networks video emotion recognition arxiv preprint yin fan xiangju dian yuanliu liu emotion recognition using hybrid networks proceedings acm international conference multimodal interaction acm felix gers schmidhuber fred cummins learning forget continual prediction lstm technical report ian goodfellow dumitru erhan pierre luc carrier aaron courville mehdi mirza ben hamner cukierski yichuan tang david thaler lee challenges representation learning report three machine learning contests international conference neural information processing springer alex graves santiago schmidhuber bidirectional lstm networks improved phoneme classification recognition artificial neural networks formal models markus martin schels sascha meudt palm friedhelm schwenker revisiting emotiw challenge wild really journal multimodal user interfaces heysem kaya furkan albert ali salah emotion 
recognition wild using deep transfer learning score fusion image vision computing pooya khorrami thomas paine thomas huang deep neural networks learn facial action units expression recognition proceedings ieee international conference computer vision workshops kim jihyeon roh dong lee hierarchical committee deep convolutional neural networks robust facial figure confusion matrix obtained seventh submission see disgust surprise classes never predicted model three dominant classes happy neutral angry well recognized neutral class largest number false positives underlines difficulty even humans draw margin presence absence emotion rows denote true classes columns predicted classes conclusions paper proposes multimodal approach video emotion classification combining vgg models image descriptors explores different temporal fusion architectures different multimodal fusion strategies also proposed experimentally compared validation test set afew emotiw challenge proposed method ranked accuracy competition winners one important observation competition discrepancy performance obtained test set one validation set good performance validation set warranty good performance test set reducing number parameters models could help limit overfitting using fusion models moreover gathering larger set data would also good way face problem finally another interesting path future work would add contextual information scene description voice recognition even movie type extra modality expression recognition journal multimodal user interfaces davis king machine learning toolkit journal machine learning research jul agata agnieszka landowska mariusz szwoch wioleta szwoch emotion recognition application software engineering international conference human system interaction hsi ieee jean kossaifi georgios tzimiropoulos sinisa todorovic maja pantic database valence arousal estimation image vision computing natalia neverova christian wolf graham taylor florian nebout moddrop adaptive gesture recognition ieee transactions pattern analysis machine intelligence omkar parkhi andrea vedaldi andrew zisserman deep face recognition bmvc vol robert plutchik henry kellerman theories emotion vol academic press karen simonyan andrew zisserman convolutional networks action recognition videos nips tran lubomir bourdev rob fergus lorenzo torresani manohar paluri learning spatiotemporal features convolutional networks proceedings ieee international conference computer vision michel valstar jonathan gratch schuller fabien ringeval dennis lalanne mercedes torres torres stefan scherer giota stratou roddy cowie maja pantic avec depression mood emotion recognition workshop challenge proceedings international workshop emotion challenge acm peter washington catalin voss nick haber serena tanaka jena daniels carl feinstein terry winograd dennis wall wearable social interaction aid children autism proceedings chi conference extended abstracts human factors computing systems acm xuehan xiong fernando torre supervised descent method applications face alignment proceedings ieee conference computer vision pattern recognition baohan yanwei jiang boyang leonid sigal heterogeneous knowledge transfer video emotion recognition attribution summarization ieee transactions affective computing anbang yao junchao shao ningning yurong chen capturing facial features latent relations emotion recognition wild proceedings acm international conference multimodal interaction acm xiangxin zhu deva ramanan face detection pose estimation landmark localization wild 
computer vision pattern recognition cvpr ieee conference ieee
| 1 |
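The row above builds per-modality classifiers (an audio perceptron on OpenSMILE features and two face-based CNN descriptors with temporal fusion, one of them a VGG model followed by an LSTM) and reports a simple weighted mean of the per-modality score vectors among its late-fusion strategies, using the seven AFEW emotion classes. The sketch below shows that weighted-mean score fusion in NumPy; the modality names, weight values and video count are illustrative assumptions rather than values taken from the paper.

```python
import numpy as np

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "neutral", "surprise"]

def weighted_mean_fusion(scores_per_modality, weights):
    """Late fusion: combine per-modality softmax scores by a convex combination.

    scores_per_modality: list of arrays of shape (n_videos, n_classes),
        one array per modality (e.g. audio MLP, a temporal CNN, VGG+LSTM).
    weights: one non-negative weight per modality; normalised to sum to 1.
    Returns the index of the predicted class for every video.
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    fused = sum(w * s for w, s in zip(weights, scores_per_modality))
    return fused.argmax(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_videos, n_classes = 5, len(EMOTIONS)
    # Illustrative stand-ins for the audio, temporal-CNN and VGG-LSTM score matrices.
    audio_scores = rng.dirichlet(np.ones(n_classes), size=n_videos)
    video_cnn_scores = rng.dirichlet(np.ones(n_classes), size=n_videos)
    vgg_lstm_scores = rng.dirichlet(np.ones(n_classes), size=n_videos)
    preds = weighted_mean_fusion(
        [audio_scores, video_cnn_scores, vgg_lstm_scores],
        weights=[0.2, 0.4, 0.4],  # assumed weights, in practice chosen on the validation set
    )
    print([EMOTIONS[k] for k in preds])
```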
identifiability generalised randles circuit models may alavi adam mahdi stephen payne david howey randles circuit including parallel resistor capacitor series another resistor generalised topology widely employed electrochemical energy storage systems batteries fuel cells supercapacitors also biomedical engineering example model interface electroencephalography baroreceptor dynamics paper studies identifiability generalised randles circuit models whether model parameters estimated uniquely data shown generalised randles circuit models structurally locally identifiable condition makes model structure globally identifiable discussed finally estimation accuracy evaluated extensive simulations index circuit identifiability system identification parameter estimation introduction randles proposed equivalent circuit kinetics rapid electrode reactions since model developed become basis study many electrochemical energy storage systems batteries fuel cells supercapacitors figure shows generalised randles model consisting ohmic resistor series number parallel resistors capacitors capacitor electrochemical applications ohmic resistor represents usually conduction charge carriers electrolyte metallic conductors resistors capacitors parallel pairs represent charge transfer resistance double layer capacitance respectively approximation diffusion process number parallel depends many pairs required frequency response generalised randles model fits device impedance spectra within frequency range interests instance number parallel pairs determined minimising error model measured voltages capacitor also known warburg term accounts diffusion process battery supercapacitor may represent state charge noted voltage alavi energy power group department engineering science university oxford brain stimulation engineering laboratory duke university durham usa email mahdi payne institute biomedical engineering department engineering science university oxford old road campus research building oxford united kingdom emails howey energy power group department engineering science university oxford parks road oxford united kingdom email source considered generalised randles model figure focused impedance model paper generalised randles circuit models also employed biomedical engineering electric circuit model interface electroencephalography includes two parallel pairs series two resistors two voltage sources special case circuit given figure noted considered parallel viscoelastic analog generalised randles model employed cardiovascular cerebral haemodynamics modeling describing viscoelastic properties aortic wall coupling nerve endings baroreceptor neurons carotid sinus aortic arch relating fluctuation arterial blood pressure cerebral blood flow velocity identification parameter estimation generalised randles model different numbers parallel important condition monitoring fault diagnosis control authors showed randles circuit used monitoring battery charge transfer overvoltage identification tests parameter estimation batteries suggested identification fitting impedance spectra frequency domain presented objective paper study identifiability generalised randles model shown figure whether model parameters estimated uniquely data typically identifiability problem divided two broad areas parameter estimation accuracy structural identifiability parameter estimation accuracy considers practical aspects problem come real data noise bias studying structural identifiability hand one assumes informative data available therefore 
fact concept unidentifiable parameters assigned infinite number values yet still lead identical inputoutput data thus structural identifiability necessary condition identifiability parameter estimation number analytical approaches structural identifiability proposed including laplace transform transfer function taylor series expansion similarity transformations differential algebra linear systems shown controllability observability properties closely related concept structural identifiability example shown singleoutput linear systems structurally identifiable observer canonical form fig generalised randles equivalent circuit model controllable see chapter however controllable observable systems still unidentifiable general case recently significant interest identifiability analysis battery models structural identifiability equivalent circuit model ecm including two capacitors discussed comparing number unknown parameters transfer function circuit structural identifiability general nonlinear ecm analysed based observability conditions shown cells serial connections observable demonstrating complete estimation model parameters lumped measurements series cells possible trough independent measurements however shown lumped models parallel connectivities observable provided none parallel cells identical identifiability battery electrochemical models discussed particular shown electrochemical model parameters identifiable given typical cycles shown shape cycles plays crucial role identifiability battery parameters also demonstrated system identification method employed monitoring battery film growth identifiability problem randles ecms studied authors consider models include two capacitors analysis based fisher information matrix fim fim provides information sensitivity measurement model parameters using likelihood functions bound estimation errors developed using theorem method proposed optimally shape battery cycles improve identifiability paper shows generalised randles model figure structurally globally identifiable structurally locally identifiable finite becomes globally identifiable assuming ordering generalised circuit finally identifiability model assessed extensive simulations model parameterisation problem statement parameterised models generalised randles circuit given figure derived define electric current system input terminal voltage system output voltages across internal pairs states model parameters belongs open subset parametrisation using kirchhoff laws model structure randles circuit parameterised written matrices depend parameter vector given time constants transfer function parametrisation denote model represents laplace operator using formula parameterised generalised randles circuit written problem statement determine conditions model structure equivalently form unknown parameter vector locally globally structurally identifiable iii main results following definition structural identifiability given follows definition let model structure parametrized belongs open subset consider equation almost model structure said globally identifiable unique solution locally identifiable finite number solutions unidentifiabile infinite number solutions remark instead definition one use coefficient map defined follows consider associate following coefficient map defined model structure globally identifiable coefficient map injective locally identifiable unidentifiable infinitely following lemma used proof identifiability generalised randles circuit model lemma let coefficient map 
associated pairwise different following statements hold proof part identifiability equation given unique solution proves part monic normalised greatest order denominator part identifiability equation given claim equation admits finite precisely number solutions prove claim note equality two rational functions satisfied provided since distinct roots uniquely characterise monic polynomial degree permutations roots equation solutions let fix permutation consider assignment since assumed pairwise distinct expressions thought functions variable linearly independent finally since side equation linear combination linearly independent functions immediately obtain concludes proof part introduce concept reparametrisation see use proof main theorem definition reparametrisation model structure coefficient map map denotes image map moreover reparameterisation identifiable map identifiable main result section following theorem describes identifiability generalised randles circuit theorem let mrc denotes model structure matrices parametrised number parallel elements connected series see figure following conditions hold model structure mrc globally identifiable model structure mrc locally identifiable ordering generalised circuit model structure mrc globally identifiable proof consider model structure mrc let denote corresponding given parameter vector given write rational function degree associate coefficient map part coefficient map written explicitly generation informative data set direct computation check equation admits unique solution thus coefficient map model structure globally identifiable part coefficient map written following composition map reparametrisation defined coefficient map associated lemma map map inverse map since composition map thus model structure mrc locally identifiable part finally identifiability equation condition admits unique solution concludes part remark procedure applicable discrete time model euler first order approximation simplest approximation identifiability analysis easier coefficients discrete time equals number parameters however might lead numerical instability higher order approximations applied challenge remains prove whether maps discrete time model parameters simulations consider randles circuit section accuracy estimation presence noisy data subject random initial guess estimations studied first informative data set used simulations way generated described details provided references data set informative input persistently exciting persistently exciting input adequately excites modes system linear systems order system determines order persistent excitation order persistent excitation equals number coefficients monic need identified see theorem monic randles circuit unknown coefficients identified therefore necessary order persistently exciting input frequency domain means spectrum excitation input least nonzero points simulation focuses excitation signals widely employed electrochemical impedance spectroscopy eis techniques procedure applied inputs signal given cos represents number sinusoids denote magnitude frequency radians per second phase radians respectively spectrum signal given delta function impulse frequency spectrum sinusoid signal contains two nonzero points therefore sinusoid signals enough generate informative data set randles circuit model magnitudes frequencies phases arbitrary real values specific applications might impose additional constraints instance eis techniques magnitude input signal may vary milliampere ampere depending size energy 
storage system sake simplicity magnitudes assumed equal frequencies could equally logarithmically spread frequency band values depend system dynamics equal schroeder phase choice suggested reduce crest factor remark signal may exactly zero mean sometimes current flow required case excitation current needs superimposed known constant offset current see examples system identification data typically remove types offsets noisy voltage current time time excitation input associated output fig input excitation signal associated output voltage response figure computed using model noisy case standard deviation removed signals results discussion following operating point arbitrarily selected excitation signal figure generated using model fmin fmax schroeder phase randomly chosen crest factor signal voltage response figure computed using model offset removed ensure signals given true values smallest time constant sampling time several times larger simulation sampling frequency chosen times greater inverse minimum time constant test duration typically several times larger maximum time constant times suggested however might vary different applications simulation test duration applied identified using matlab system identification toolbox circuit parameters calculated directly coefficients using formulas shown table discussed later order study consistency results test repeated times every run random initial guess parameters roots denominator must positive real numbers estimations lead complex negative poles considered outliers discarded analysis relative mean error defined follows true value mean estimations true value estimation accuracies noisy data noise standard deviation compared together signal noise ratio table shows mean standard deviation relative errors estimations figure shows histograms estimations noisy data regardless random initial condition simulations estimations converge less largest table figure shows estimations noisy data distributed around true values however seen table mean values estimations noisy cases remain relative mean errors estimations less parameters except largest also increase noisy data compared case largest standard deviation could pure integrator associated appears might require modifications data set instance methodology proposed removes integral term modifying input signal imodified calculating coefficients circuit show compute parameters randles circuit using circuit estimations following set equations estimations estimations parameters circuit subsequently obtained conclusions parameters different topologies simply calculated using approach table provides formulas four widely used randles models showed generalised randles circuit model locally identifiable model structure becomes globally identifiable ordering circuit assumed results confirmed extensive simulations finally explicit formulas coefficients widely used randles circuits presented fig histogram accepted estimations runs noisy data relationships coefficients circuit parameters given parameters defined circuit integrator identification method set pole denominator fixed identification software typically allows fix number poles zeros certain values using first equation given roots one fixed using condition select smallest root remaining root circuit time constants obtained follows obtained solving acknowledgements work funded university oxford epsrc impact acceleration account technology fund awards authors would like thank anonymous reviewers editor fruitful comments significantly improved paper would like 
also extend thanks ross drummond stephen duncan xinfan lin shi zhao feedback first version work references randles kinetics rapid electrode reactions discuss faraday vol rahn wang battery systems engineering john wiley sons alavi birkl howey battery electrochemical impedance models journal power sources vol barsoukov macdonald impedance spectroscopy theory experiment applications john wiley sons buller thele doncker karden simulation models supercapacitors liion batteries power electronic applications ieee transactions industry applications vol andre meiler steiner wimmer sauer characterization batteries electrochemical impedance spectroscopy experimental investigation journal power sources vol yurkovich guezennec yurkovich battery model automotive applications journal power sources vol birkl howey model parameter estimation batteries hybrid electric vehicles conference hevc baronti chow online adaptive parameter coestimation battery cells ieee transactions industrial electronics vol mihajlovic grundlehner vullers penders wearable wireless eeg solutions daily life applications missing ieee journal biomedical health informatics vol bugenhagen cowley beard baroreceptor dynamics relationship type hypertension physiological genomics vol mahdi sturdy ottesen olufsen modeling dynamics control system plos computational biology vol mader olufsen mahdi modeling cerebral blood velocity orthostatic stress annals biomedical engineering lee lee nam kim cho yun choi kim kim jun modeling real time estimation lumped equivalent circuit model lithium ion battery international power electronics motion control conference moubayed kouta dernayka outbib parameter battery model ieee photovolatic specialists conference jang yoo equivalent circuit evaluation method lithium polymer battery using bode plot numerical analysis ieee transactions energy conversion vol jiang parameter method battery equivalent circuit model sae technical paper pattipati sankavaram pattipati system estimation framework pivotal automotive battery management system characteristics ieee transactions systems man cybernetics part applications reviews vol rahmoun biechl modelling batteries using equivalent circuit diagrams przeglad elektrotechniczny vol cattin new optimization algorithm battery equivalent electrical circuit international conference modeling optimization simulation raue kreutz maiwald bachmann schilling timmer structural practical analysis partially observed dynamical models exploiting likelihood bioinformatics vol cobelli distefano parameter structural concepts ambiguities critical review american journal physiology vol bellman structural mathematical biosciences vol pohjanpalo system based power series expansion solution mathematical biosciences vol chappell godfrey vajda global parameters nonlinear systems inputs comparison methods mathematical biosciences vol vajda godfrey rabitz similarity transformation approach analysis nonlinear compartmental mathematical biosciences vol anstett bloch nonlinear systems local state isomorphism approach automatica vol meshkat sullivant reparametrizations linear compartment models journal symbolic computation vol mahdi meshkat sullivant structural viscoelastic mechanical systems plos one vol glover willems parametrizations linear dynamical systems canonical forms ieee transactions automatic control vol distefano relationships structural controllability observability properties ieee transactions automatic control vol van den hof structural linear compartmental systems ieee transactions 
automatic control vol ljung system identification theory user ptr prentice hall upper saddle river ljung glad global arbitrary model parametrizations automatica vol audoly bellu saccomani cobelli global nonlinear models biological systems ieee transactions biomedical engineering vol sitterly wang yin wang enhanced battery models battery management ieee transactions sustainable energy vol rausch streif pankiewitz findeisen nonlinear observability single cells battery packs ieee international conference control applications cca schmidt bitzer imre guzzella electrochemical modeling systematic parameterization battery cell journal power sources vol forman moura stein fathy genetic analysis model experimental cycling lifepo cell journal power sources vol moura chaturvedi adaptive partial equation observer battery estimation via electrochemical model journal dynamic systems measurement control vol amato forman ersal ali stein peng bernstein noninvasive diagnostics using inaccessible subsystems asme annual dynamic systems control conference joint jsme motion vibration conference sharma fathy fisher analysis battery model american control conference acc rothenberger anstrom brennan fathy maximizing parameter battery model using optimal periodic input shaping asme dynamic systems control conference peeters hanzon symbolic computation fisher information matrices parametrized systems automatica vol stoica system identification prenticehall norton introduction identification academic press zhu multivariable system identification process control elsevier howey mitcheson brandon online measurement battery impedance using motor controller excitation ieee transactions vehicular technology vol schroeder synthesis signals binary sequences low autocorrelation ieee transactions information theory vol ljung system identification toolbox use matlab mathworks anderson moore optimal filtering dover publications table estimation results noise free outlier estimates noise outlier estimates parameter true value mean mean table coefficients four widely used randles circuit models estimated estimated set identify pole estimated estimated set identify pole roots roots choose choose roots
| 3 |
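The row above argues that a persistently exciting input for the generalised Randles circuit can be built from a sum of sinusoids, with Schroeder phases chosen to keep the crest factor low, and that the resulting voltage response gives an informative data set for estimating the transfer-function coefficients. The sketch below generates such a multisine current and simulates a first-order Randles circuit (ohmic resistor plus one RC pair); the component values, sampling rate, frequency band and the particular Schroeder-phase formula shown (one common equal-amplitude form) are assumptions for illustration, not the paper's settings. Forward Euler is used for the state update since the text itself singles out the first-order Euler approximation as the simplest discretisation.

```python
import numpy as np

def schroeder_multisine(t, freqs_hz, amplitude=1.0):
    """Sum of K equal-amplitude sinusoids with Schroeder phases
    phi_k = -pi*k*(k-1)/K, a common choice for a low crest factor."""
    K = len(freqs_hz)
    phases = np.array([-np.pi * k * (k - 1) / K for k in range(1, K + 1)])
    return sum(amplitude * np.cos(2 * np.pi * f * t + p)
               for f, p in zip(freqs_hz, phases))

def simulate_randles_1rc(i, dt, R0, R1, C1):
    """Forward-Euler simulation of v = R0*i + v1 with C1*dv1/dt = i - v1/R1."""
    v1 = 0.0
    v = np.empty_like(i)
    for k, ik in enumerate(i):
        v[k] = R0 * ik + v1
        v1 += dt * (ik - v1 / R1) / C1
    return v

if __name__ == "__main__":
    # Illustrative values only; they are not taken from the paper.
    R0, R1, C1 = 0.05, 0.03, 200.0        # ohm, ohm, farad
    fs, T = 100.0, 60.0                   # sampling rate (Hz), test duration (s)
    t = np.arange(0.0, T, 1.0 / fs)
    freqs = np.logspace(-2, 0, 5)         # five log-spaced tones in [0.01, 1] Hz
    current = schroeder_multisine(t, freqs, amplitude=1.0)
    voltage = simulate_randles_1rc(current, 1.0 / fs, R0, R1, C1)
    print(voltage[:5])
```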
markov random fields iterated toric fibre products feb jan draisma florian oosterhof abstract prove iterated toric fibre products finite collection toric varieties defined binomials uniformly bounded degree implies markov random fields built finite collection finite graphs uniformly bounded markov degree introduction main results notion toric fibre product two toric varieties goes back relevance algebraic statistics since captures algebraically markov random field graph obtained glueing two graphs along common subgraph see also proved certain conditions one explicitly construct markov basis large markov random field bases components related results see however conditions always satisfied nevertheless conjecture hope raised building larger graphs glueing copies finite collection graphs along common subgraph might uniform upper bound markov degree models thus constructed independent many copies graph used special case conjecture proved paper theorem prove conjecture general along way link recent work representation stability indeed important point would like make apart proving said conjecture algebraic statistics natural source problems asymptotic algebra ideas representation stability apply main theorems reminiscent sam recent stabilisation theorems equations higher syzygies secant varieties veronese embeddings markov random fields let finite undirected simple graph node let random variable taking values finite set joint probability distribution said satisfy local markov properties imposed graph two variables conditionally independent given hand joint probability distribution said factorise according maximal clique configuration random variables labelled exists interaction parameter configuration random variables mcl set maximal cliques restriction jan draisma florian oosterhof two notions connected theorem says positive joint probability distribution factorises according satisfies markov properties see theorem set positive joint probability distributions satisfy markov properties therefore subset image following map ideal polynomials vanishing interest algebraic statistics since components monomials generated finitely many binomials differences two monomials standard coordinates finite generating set binomials used set markov chain testing whether given observations variables compatible assumption joint distribution factorises according graph zero locus often called graphical model suppose graphs node sets equals fixed set graphsinduced equal fixed graph moreover fix number states glue copies along common subgraph mean first taking disjoint copies identifying nodes labelled fixed across copies denote graph obtained glueing copies graph ahs theorem let graphs common subgraph number states associated node exists uniform bound multiplicities ideal ahs generated binomials degree proof shows one needs finitely many combinatorial types binomials independent generate result similar flavour independent set theorem graph fixed vary interestingly underlying categories responsible two stabilisation phenomena opposite see remark example proved ideal complete bipartite graph two states random variables generated degree graph obtained glueing copies along common subgraph consisting nodes without edges derive theorem general stabilisation result toric fibre products introduce next markov random fields iterated toric fibre products toric fibre products fix ground field let natural number let vector spaces define bilinear operation definition toric fibre product subsets equals set remark toric fibre 
product defined level ideals coordinate functions coordinate functions coordinate functions ring homomorphism coordinate rings dual sends compose homomorphism projection modulo ideal kernel composition precisely toric fibre product ideals introduced paper multigradings play crucial role computing toric fibre products ideals affect definition toric fibre products product associative commutative reordering tensor factors iterate construction form products like also lives product vector spaces variety lives taking toric fibre products general varieties rather hadamardstable ones choose coordinates definition hadamard product defined adq defined set called contains vector unit element moreover remark remark proposition ideal generated differences two monomials particular subtorus torus toric varieties abstract suppose corresponding hadamard also fix identifications multiplication equipping spaces natural coordinates corresponding hadamard multiplication two operations well defined satisfy consequently hadamardstable toric fibre product formulate second main result jan draisma florian oosterhof theorem let let set kdi let subset exists uniform bound exponents ideal generated polynomials degree remark straightforward generalisation theorem also holds closed given ideal coordinate ring says hadamard product lies ideal toric fibre product remark since generality would slightly obscure arguments decided present explicitly version also remark also theorem remains valid remove condition contain vector require closed see remark organisation paper remainder paper organised follows section introduce categories affine dually finop algebras point see section iterated toric fibre products together form rather fins fins product category copies fin indeed sit naturally cartesian product copies tensors prove section noetherian theorem noetherianity result similar flavour recent result see also proposition follows proof strategy finitely generated finop noetherian result played crucial role proof artinian conjecture however noetherianity result concerns certain finop rather modules complicated finally section first prove theorem derive theorem corollary acknowledgements authors partially supported draisma vici grant stabilisation algebra geometry netherlands organisation scientific research nwo current paper partly based second author master thesis eindhoven university technology see thank johannes rauh seth sullivant fruitful discussions prague stochastics august affine finop category fin objects finite sets morphisms maps opposite category denoted finop another category whose objects called somethings covariant functor fin finop contravariant functor fin associated denoted covariant case contravariant case generally replace fin category form category morphisms natural transformations paper always closely related fin fins product fin three instances fin finop crucial paper markov random fields iterated toric fibre products example fix functor finop homfin associates map composition building example functor polynomial ring variables labelled finop associates homomorphism third define affine space space factors labelled sends homfin linear morphism determined hadamard product follow natural convention empty hadamard product equals vector particular holds previous formula image ring space related follows basis consisting vectors standard basis coordinate ring generated dual basis homfin pullback homomorphism dual linear map indeed verified following computation used section general algebra shall mean associative 
commutative homomorphisms required preserve finop assigns finite set algebra map algebra homomorphism ideal finop subset finite set maps ideal finop quotient given finop finite sets index set element unique smallest ideal contains ideal constructed hom ideal generated finop called noetherian ideal generated finitely many elements various taken finite example finop noetherian indeed polynomial ring single variable homomorphism identity noetherianity follows noetherianity algebra finop noetherian instance consider monomials jan draisma florian oosterhof map pigeon hole principle two indices position variable least two indices equal since contains variable divide generates generated monomial finop hand piece homogeneous polynomials degree noetherian finop see proposition shall see following section certain interesting quotients noetherian tensors form noetherian let variety tensors form vectors claim defines must verify map map dual algebra homomorphism sends indeed example seen map sends well known infinite ideal equals ideal generated binomials constructed follows partition two parts let write element equals binomial ideal generated partitions functor ideal finop infinite follows computation arbitrary follows since binomials mapped binomials maps fin moreover finitely generated see also lemma lemma ideal finop finitely generated proof determinantal equation exist distinct equation comes equation via map identity maps pigeon hole principle happens similarly hence certainly generated main result section following theorem coordinate ring tensors noetherian finop proof follows general technique namely pass suitable category close fin allows basis argument however relevant proved new quite subtle use category also implicit section defined follows markov random fields iterated toric fibre products definition objects category finite sets equipped linear order morphisms surjective maps additional property function min strictly increasing also implies prove theorem set prove stronger statement fact get concrete grip following construction let denote abelian finop defined multiplication given addition map map sending elements matrices nonnegative integral entries constant column sum let kmn denote sending monoid kmn following proposition reformulation fact proposition finop isomorphic finop kmn true regarded osop proof finite set homomorphism kmn sends matrix positions zeroes elsewhere surjective kernel ideal moreover morphism fin define natural transformation choose monomial order implies every object define linear order follows smallest column equal satisfies chosen monomial order straightforward verification shows monomial order call elements monomials even though kmn polynomial ring moreover various orders interrelated follows lemma homos proof smallest column index differ differ column min equal respectively former larger latter furthermore smallest position differ hence hence min min hence fact addition individual also need following partial order union jan draisma florian oosterhof definition let objects say divides exist homos case write key combinatorial property relation defined following proposition relation sequence exist proof first associate monomial ideal polynomial ring place holder generated monomials crucial fact use monomial ideals sequence ideals exist words monomial ideals respect reverse inclusion prove proposition suppose contrary exists sequence sequence called bad basic properties bad sequence exists moreover among bad sequences additional property choose one cardinality 
minimal among bad sequences starting write last column one labelled largest element remainder dickson lemma exists subsequence increase weakly coordinatewise ordering restricting subsequence may moreover assume also consider new sequence since sequence also satisfies claim furthermore bad suppose instance set max exists homos extend element homos setting since smaller find contradiction badness original sequence hand suppose instance write max exists homos required holds since exists element column coordinatewise least large column extend element homos setting since maximal element destroy property markov random fields iterated toric fibre products function min increasing argument moreover property contradiction since found bad sequence satisfying strictly smaller underlying set position arrived contradiction next use basis argument proof theorem prove stronger statement kmn noetherian let ideal kmn object let denote set leading terms nonzero elements relative ordering proposition implies exists finite collection element divisible correspondingly exist elements leading monomial leading coefficient see generate suppose exists contained ideal generated let minimal leading term among elements ideal generated without loss generality leading coefficient construction exists homos lemma find leading monomial equals hence subtracting monomial times obtain element smaller leading monomial ideal generated contradiction need following generalisation theorem theorem noetherian proof algebra isomorphic mnr natural embedding mnr forming block matrix image consists block matrices constant partial column sums subalgebra noetherian algebra necessarily noetherian true current setting crucial point mnr priori means since summands constant partial column sums difference fact difference lies image observation proof case goes unaltered arbitrary remark similar arguments passing also used proofs main results section prove theorems toric fibre products prove theorem work product copies category fin one varieties whose iterated fibre products consideration let let set jan draisma florian oosterhof kdi consider fins assigns product morphism fins linear map determined hadamard product one let subset consisting tensors rank one thus fins let subset tuple fins variety subset lemma association defines fins subvariety proof definition clear elements tensors rank furthermore morphism fins linear map sends use also since theorem follows know coordinate ring noetherian fins theorem equal general theorem equal general theorem follows fins theorem proved follows coordinate ring subring spanned monomials corresponding matrices constant column sum using proposition fact finite product sets one finds natural fins division relation implies coordinate ring noetherian oss finally general general result follows proof theorem proves theorem full generality remark place used contain vector proof lemma happens empty require conclusion theorem still holds since one work directly category oss morphisms surjective remark replace closed subschemes rather subvarieties still fins subscheme since coordinate ring latter noetherian proof goes unaltered markov random fields iterated toric fibre products remark independent set theorem graph fixed state space sizes grow unboundedly independent set fixed case given maps finite sets thought state space smaller model state space larger model obtain natural map larger model smaller model hence graphical model naturally finop coordinate ring fint note reversal roles two categories compared lemma 
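An aside that may help with the combinatorial core of the bad-sequence argument above: the coordinatewise extraction step is an instance of Dickson's lemma, stated here in a standard form of our own (not quoted from the text):

\[ \forall\,(v_i)_{i\ge 1}\subseteq \mathbb{N}^k\ \ \exists\, i<j:\ v_i\le v_j \ \text{(coordinatewise)}. \]

Equivalently, \((\mathbb{N}^k,\le)\) is a well-partial-order, so every monomial ideal of a polynomial ring in finitely many variables has finitely many minimal generators; this is what lets the proof arrange the last columns of a putative bad sequence into a weakly increasing subsequence and derive a contradiction with minimality.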
markov random fields given finite undirected simple graph number states attached node graphical model parameterisation lemma finite graph graphical model proof parameterisation sends vector domain vector target space moreover two parameter vectors closure following relate graph glueing toric fibre products given finite graphs node sets induces graph moreover fix number states set interpret ambient space part probability table variables fixed states variables scaling conditional joint probabilities given joint state write restrictions respectively maximal clique define correspondingly decompose restrictions respectively graphical model xgi closure image parameterisation setting exactly setting previous sections bilinear map takeqiterated products type space right naturally isomorphic space probability tables joint distribution jan draisma florian oosterhof variables labelled vertices glued graph identification following proposition ahs proof suffices prove gluing two graphs note clique contained entirely either already let parameter vectors domains respectively mcl parameter defined computation proves conversely given parameter vector let restriction maximal cliques first third type set equal second type equal third type yields opposite inclusion proof theorem proposition ideal ideal iterated toric fibre product lemma varieties xgi hadamard closed hence theorem applies generated polynomials degree less independent also generated binomials degree references jan draisma jochen kuttler tensors defined bounded degree duke math jan draisma finiteness model chirality varieties adv persi diaconis bernd sturmfels algebraic algorithms sampling conditional distributions ann markov random fields iterated toric fibre products alexander thomas kahle seth sullivant multigraded commutative algebra graph decompositions algebr david eisenbud bernd sturmfels binomial ideals duke math hammersley clifford markov fields finite graphs lattices unpublished http christopher hillar seth sullivant finite bases infinite dimensional polynomial rings applications adv thomas kahle johannes rauh toric fiber products versus segre products abh math semin univ steffen lauritzen graphical models volume oxford statistical science series oxford univ oxford diane maclagan antichains monomial ideals finite proc math florian oosterhof stabilisation iterated toric fibre products master thesis eindhoven university technology http johannes rauh seth sullivant markov basis http johannes rauh seth sullivant lifting markov bases higher codimension toric fiber products symb comp steven sam ideals bounded rank symmetric tensors generated bounded degree invent steven sam syzygies bounded rank symmetric tensors generated bounded degree math takafumi shibuta bases contraction ideals algebr steven sam andrew snowden methods representations combinatorial categories math seth sullivant toric fiber products algebra jan draisma mathematical institute university bern sidlerstrasse bern switzerland eindhoven university technology netherlands address florian oosterhof department mathematics computer science eindhoven university technology box eindhoven netherlands address
| 0 |
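To make the clique parameterisation behind the graphical models discussed in the text above concrete, here is the standard factorisation formula in our own notation (an illustration consistent with the text, not a verbatim quote): for a graph G with maximal cliques mcl(G), state spaces [d_v] and interaction parameters \(\theta^{(C)}\),

\[ p_{(i_v)_{v\in V}} \;=\; \prod_{C\in \mathrm{mcl}(G)} \theta^{(C)}_{(i_v)_{v\in C}}, \qquad i_v\in[d_v], \]

the graphical model is the closure of the image of this monomial map and its vanishing ideal is generated by binomials; the main theorem above asserts a degree bound for such binomial generators that is uniform over all graphs obtained by gluing arbitrarily many copies of the G_i along a common subgraph H (H being our name for it here).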
logical methods computer science vol submitted published security policies membranes systems global computing daniele gorla matthew hennessy vladimiro sassone dip informatica univ roma sapienza address gorla dept informatics univ sussex address matthewh dept informatics univ sussex address abstract propose simple global computing framework whose main concern code migration systems structured sites site divided two parts computing body membrane regulates interactions computing body external environment precisely membranes filters control access associated site also rely notion trust sites develop basic theory express enforce security policies via membranes initially control actions incoming agents intend perform locally adapt basic theory encompass sophisticated policies number actions agent wants perform also order considered introduction computing increasingly characterised global scale applications ubiquity interactions mobile components among main features forthcoming global ubiquitous computing paradigm list distribution location awarness whereby code located specific sites acts appropriately local parameters circumstances mobility whereby code dispatched site site increase flexibility expressivity openness reflecting nature global networks embodying permeating hypothesis localised partial knowledge execution environment systems present enormous difficulties technical conceptual currently stage exciting future prospectives established engineering practice two concerns however appear clearly acm subject classification key words phrases process calculi mobile code security type systems work mostly carried first author dept informatics univ sussex marie curie fellowship authors would like acknowledge support global computing projects mikado myths logical methods computer science gorla hennessy sassone creative commons gorla hennessy sassone import security mobility control arising respectively openness massive code resource migrations focus present paper aim classifying mobile components according behaviour empowering sites control capabilities allow deny access agents whose behaviour conform site policy see every site system entity named structured two layers computing body programs run code possibly accessing local resources offered site membrane regulates interactions computing body external environment agent wishing enter site must verified membrane given chance execute preliminary check succeeds agent allowed execute otherwise rejected words membrane implements policy site wants enforce locally ruling requests access incoming agents easily expressed migration rule form relevant parts agent wishing migrate receiving site needs satisfied behaviour complies policy latter expressed membrane judgement represents inspecting incoming code verify upholds observe formulation represents runtime check incoming agents fundamental assumption openendedness kind checks undesirable might avoided order reduce impact systems performance make runtime semantics efficient possible adopt strategy allows efficient agent verification precisely adopt elementary notion trust point view set sites consistently partitioned good bad unknown sites situation like one rule assume willing accept trusted site digest behaviour modify primitive judgement refined migration rule notable difference verifies entire code trust signer certificate otherwise suffices match digest carried together effectively shifting work originator main concern paper put focus machinery membrane implement enforce different kinds policies first 
distill simplest calculus conceivably convey ideas still support study important remark abstracting agents local computations expressed several models concurrency example ccs calculus concerned instead agents migration site site main language mechanism rather local communication using language examine four notions policy show enforced using membranes start amusingly simple policy lists allowed actions move count action occurrences policies expressed deterministic finite automata note policies concerned behaviour single security policies membranes systems global computing agents take account coalitional behaviours whereby incoming agents apparently innocent join clusters resident agents apparently innocent perform cooperatively potentially harmful actions least overrule host site policy call resident policies intended applied joint composite behaviour agents contained site explore resident policies fourth final notion policy cases theory adapts smoothly need refine information stored membrane inspection mechanisms structure paper section define calculus used paper start straightforward policy prescribes actions agent perform running site section enhance theory control also many kind actions agent wants perform site order execution finally section extend theory control overall computation taking place site behaviour single agents paper concludes section comparison related work also given theoretical results proved appendix respect extended abstract paper contains examples together complete proofs simple calculus section describe simple calculus mobile agents may migrate sites site guarded membrane whose task ensure every agent accepted site conforms entry policy syntax syntax given figure assumes two pairwise disjoint sets basic agent actions act ranged localities loc ranged agents constructed using standard parallel composition replication operators process calculi one novel operator migration agent seeks migrate site order execute code moreover promises conform entry policy practical terms might consist certification incoming code conforms policy site decide whether accept framework certification policy describes local behaviour agent thus called digest system consists finite set sites running parallel site takes form site name code currently running membrane implements entry policy convenience assume site names unique systems thus given system identify membrane associated site named start simple kind policy progressively enhance gorla hennessy sassone basic actions act localities loc agents systems nil nil agent basic action migration composition replication empty system site composition figure simple calculus definition policies policy finite subset act loc two policies write enforces whenever intuitively agent conforms policy given site every action performs site contained migrate sites whose names example conforming policy info req home info req actions home location means actions performed set info req migration occur site home interpretation policies definition predicate enforces also intuitive code conforms policy enforces also automatically conforms purpose membranes enforce policies incoming agents words site wishing enforce policy tin membrane decide allow entry agent another site two possibilities first syntactically check code policy tin implementation would actually expect agent arrive proof fact proof would checked second would trust agent code conforms stated therefore check conforms entry policy tin assuming checking one policy another efficient code analysis would make 
entry formalities much easier deciding apply second possibility presupposes trust management framework systems topic much current research simplify matters simply assume site contains part membrane record level trust sites moreover assume three possible levels bad unknown good intuitively site behaves way properly calculate digests hand site tagged unknown behave non specified way thus sake security considered bad security policies membranes systems global computing figure reduction relation nil figure structural equivalence realistic scenario would possible refine unknown either good bad upon collection enough evidence consider reliable sake simplicity model framework definition membranes membrane pair partial function loc unknown good bad policy operational semantics defined policies membranes give operational semantics calculus formalises discussion manage agent migration given binary relation systems defined least relation satisfies rules figure rule says agent running parallel code site perform action note semantics record occurrence standard first allows reductions within parallel components second says reductions relative structural equivalence rules defining equivalence given figure interesting reduction rule last one governing migration agent got migrate site site provided predicate true enabling predicate formalises discussion role membrane requires turn notion code satisfying policy gorla hennessy sassone nil figure typechecking incoming agents notion define mtl good enforces mpl else mpl words target site trusts source site trusts professed policy faithful reflection behaviour incoming agent entry gained provided enforces entry policy mpl case mpl otherwise trusted entire incoming code checked ensure conforms entry policy expressed predicate mpl figure describe simple inference system checking agents conform policies infer judgements form rule simply says empty agent nil satisfies policies also straightforward satisfies policy allowed residual satisfies rule says check sufficient check separately similarly replicated agents interesting rule checks checks migration allowed policy also checks code spawned conforms associated professed policy sense agent allowed entry site assumes responsibility promises makes conformance policies safety outlined reduction semantics sites seek enforce policies either directly checking code incoming agents entry policies simply checking professed policy trusted agents extent strategy works depends surprisingly quality site trust management example let home site name following trust function mth alice bob secure good consider system home bob alice secure entry policy home mph info req secure secure mps give home since mth bob good agents migrating bob home trusted digests checked entry policy mph contains agent home security policies membranes systems global computing enforces mph entry policy home transgressed another example suppose alice trusted home contains agent home secure policy enforces entry policy secure mps enforces mph migration allowed alice home moreover incoming agent conforms policy demanded home second migration agent also successful secure trusts home mts home good therefore digest checked entry policy secure reduction home bob alice secure entry policy secure foiled problem example trust knowledge home faulty trusts sites properly ensure professed policies enforced let divide sites trustworthy otherwise bipartition could stored external record stating nodes trustworthy typechecked ones however economy prefer record information 
membranes demanding trust knowledge trustworthy sites proper reflection division easily defined assume following ordering trust levels unknown bad unknown good reflects intuitive idea sites classified unknown may perhaps information subsequently classified either good bad hand good bad refined sites classified either reclassified definition trustworthy sites coherent systems system site trustworthy mtk good coherent mtk mtl every trustworthy site thus trustworthy site believes site trusted mtk good indeed trustworthy represented mtl good similarly believes bad indeed bad uncertainty classifies unknown may either good bad course coherent systems expect sites classified trustworthy act trustworthy manner amounts saying code running must one time gained entry satisfying entry policy note using policies definition satisfies entry policy mpk continues satisfy policy running theorem property coherent systems call therefore checked syntactically figure give set rules deriving judgement two interesting rules firstly says whenever trustworthy subtlety means conforms policy also digests proffered agents also trusted second relevant rule typing unknown sites need check resident code agents emigrating sites trusted example example continued let system example suppose home trustworthy mth home good gorla hennessy sassone trustworthy trustworthy figure systems nil figure labelled transition system coherent necessary sites bob alice secure also trustworthy consequently example derive would necessary derive home mpb mpb entry policy bob requires judgement enforces mph since take mph possible one also check code running alice stops system wellformed establishing would also require judgement home secure mpa turn eventually requires enforces mps impossible take mps systems know entry policies respected one way demonstrating reduction strategy correctly enforces policies prove system preserved reduction legal computations take place within trustworthy sites first requirement straightforward formalize theorem subject reduction proof see appendix security policies membranes systems global computing formalise second requirement need notion computations agent mind first define labelled transition system agents details immediate actions agent perform residual actions rules judgements let range act loc given figure straightforward judgements extended ranges act loc standard manner exists finally let act denote set elements act loc occurring theorem safety let system every trustworthy site implies act enforces proof see appendix entry policies calculus previous section based simple notion entry policies namely finite sets actions location names agent conforms policy site executes actions migrating location however syntax semantics calculus completely parametric policies required collection policies binary relation enforces binary relation indicating code conforms policy collection policies endowed two relations define predicate thereby get reduction semantics calculus section investigate two variations notion entry policies discuss extent prove reduction strategy correctly implements multisets entry policies policies previous section express legal actions agents may perform site however many situations restrictive policies desirable clarify point consider following example example let mail serv site name mail server following entry policy mpms list send retr del reset quit server accepts client agents performing requests listing mail messages messages resetting mailbox quitting consider system mail serv spam mail 
serv send send according typechecking figure send mpms gorla hennessy sassone nil enforces figure typechecking policies multisets however agent spamming virus practical implementations rejected mail serv scenarios would suitable policies able fix number messages sent achieved setting changing policies sets agent actions multisets actions consequently predicate enforces multiset inclusion first let fix notation view multiset set equipped occurrence function associates natural number element set model permanent resources also allow occurrence function associate element infinite number occurrences multiset notationally stands element occurring infinitely many times multiset notation extended sets multisets let denote multiset example example continued coming back example would sufficient define mpms sendk reasonable constant way agent send messages session wants send messages disconnect mail serv leave reconnect immigrate later practice would prevent major spamming attacks time spent operations would radically slow spam propagation theory presented sections adapted case policies multisets actions judgment redefined figure operator stands multiset union key rules first two properly decrease type satisfied typechecking third one needed recursive agents general freely unfolded hence actions intend locally perform iterated arbitrarily many times instance agent send satisfies policy notice new policy satisfaction judgement prevents spamming virus example typechecking policy mail serv defined example analysis previous section also repeated appropriate notion system difficult formulate basic problem stems difference entry policies resident policies fact agents ever entered site respects entry policy gives guarantees whether joint effect code currently occupying site also satisfies instance security policies membranes systems global computing terms example mail serv ensures incoming agent send messages nevertheless two agents gained entry running concurrently mail serv legally send jointly messages therefore necessary formulate terms individual threads code currently executing site let say thread form note every agent written form thread judgment modified replacing rule figure thread trustworthy theorem subject reduction multiset policies proof similar theorem necessary changes outlined appendix statement safety must changed reflect focus individual threads rather agents moreover must keep account also multiple occurrences actions trace thus let act return multiset formed actions occurring theorem safety multiset policies let system every trustworthy site thread implies act enforces proof see appendix finite automata entry policies second limitation setting presented section policies sometimes need prescribe precise order executing legal actions common interactions precise protocol pattern message exchange must respected end define policies deterministic finite automata dfas short example let consider example usually mail servers requires preliminary authentication phase give access mail services express fact could implement entry policy mail serv mpms automaton associated regular expression list send retr del reset server accepts client requests upon authentication via mechanism moreover policy imposes session regularly committed requiring sequence actions terminated quit could needed save status transaction avoid inconsistencies give formal definitions needed adapt theory developed section start defining dfa language associated enforces predicate dfas way agent satisfy dfa usual dfa quintuple gorla 
hennessy sassone finite set states input alphabet reserved state called starting state set final states also called accepting states transition relation framework alphabet dfas considered finite subset act loc moreover sake simplicity shall always assume dfas paper minimal definition dfa acceptance enforcement let dfa acps contains leads state final state acp defined enforces holds true whenever acp acp notice expected efficient way extablish enforces given automata see proposition appendix formally describe language associated agent exploiting notion concurrent regular expressions cre short introduced model concurrent processes purposes following subset cre suffices denotes empty sequence characters ranges act loc denotes concatenation interleaving shuffle operator closure intuitively represents language represents given cre language associated written lang easily defined formal definition recalled appendix given process easily define cre associated formally cre nil cre cre cre cre cre cre cre definition dfa satisfaction agent satisfies dfa written lang cre acp holds every subagent form proposition prove dfa satisfaction decidable although extremely hard establish substantiate hypothesis verifying digests preferable inspecting full code point view computational complexity ready state soundness variation simply consists finding proper notion systems section entry policy express properties single threads instead coalitions threads hosted site thus modifiy rule figure thread lang cre acps trustworthy essentially requires languages associated threads suffixes words accepted theorem since may appear quite weak security policies membranes systems global computing worth remarking predicate consistency check way express agent state respect policy soundness theorems reported proved appendix theorem subject reduction automata policies theorem safety automata policies let system every trustworthy site thread lang cre implies exists acp conclude section two interesting properties enforceable using automata example two actions lock unlock constraint lock must always followed unlock let lock lock unlock thus desired policy written using regular expression formalism example secrecy let secret secret action require whenever agent performs secret migrate anymore policy enforces agents performed secret always remain let secret loc thus desired policy resident policies change intended interpretation policies previous section policy dictated proposed behaviour agent prior execution site point entry implied safety systems property see rules focus policies intended describe permitted coalitional behaviour agents execution site nevertheless resident policies still used determine whether new agent allowed access site question entry permitted addition incoming agent code currently executing site violate policy let consider example illustrate difference entry resident policies example let licence serv site name server makes available licences download install software product distribution policy based queue first agents landing site granted licence following ones denied policy server mps get licencek however policy interpreted entry policy applying theory section system grants licences incoming agent moreover situation continues indefinitely effectively handing licences incoming agents wish policies previous section resident policies outline two different schemes enforcing policies simplicity confine attention one kind policy multisets gorla hennessy sassone static membranes first scheme conservative sense many 
concepts developed section entry policies redeployed let reconsider rule figure membrane takes consideration incoming code digest deciding entry via predicate membrane enforce resident policy must also take account contribution code already running namely need mechanism joining policies incoming resident rule let assume set policies relation enforces partial order every pair elements least upper bound denoted multiset policies case simply multiset union addition need able calculate minimal policy process satisfies let denote pol multiset policies adjust rules figure essentially eliminating weakening perform calculation resulting rules given figure judgements form lemma every one implies exists policy enforces proof first statement proved structural induction second induction derivation definition define partial function pol letting pol unique policy exists extra concepts change rule figure take current resident code account sufficient change side condition latter defined mtl good pol enforces mpl else mpl digest needs checked compare pol result adding digest policy resident code resident policy mpl hand source site untrusted need analyse incoming code parallel resident code clear theory developed section readily adapted revised reduction semantics particular subject reduction safety theorems remain true spare reader details however also clear approach enforcing resident policies serious practical drawbacks implementation would need freeze retrieve current content site namely agent calculate minimal policy satisfied merged digest order check predicate enforces typecheck composed agent reactivate according result checking phase activate even language equipped passivation operator overall operation would still computationally intensive consequently suggest another approach security policies membranes systems global computing nil enforces figure type inference agents policies multisets dynamic membranes previous approach repeatedly calculate policy current resident code time new agent requests entry allow policy membrane decrease order reflect resources already allocated resident code particular moment time policy currently membrane records resources remain future agents may wish enter entry agent corresponding decrease membrane policy formally need change migration rule rule one checks incoming code digest membrane policy also updates membrane defined judgement mtl good let enforces pol otherwise mpl mpl first notice migration occurs membrane target site changes latter obtained former eliminating resources allocated mpl incoming code source site deemed good calculated via incoming digest otherwise direct analysis code required calculate pol revised schema reasonable implementation point view soundness difficult formalise prove computation proceeds permanent record kept system original resident policies individual sites therefore defined relative external record resident policies system initiated purpose use function mapping trustworthy sites policies sufficient record original polices sites interested behaviour elsewhere define notion systems relative written formal definition given table crucial rule trustworthy sites site relative original record mpl pol guarantees original resident policy namely gorla hennessy sassone trustworthy pol enforces trustworthy figure systems theorem subject reduction resident policies proof outlined appendix introduction external records original resident policies also enables give safety result theorem safety resident policies let system every trustworthy site 
implies act enforces proof see appendix conclusion related work presented framework describe distributed computations systems involving migrating agents activity agents good sites constrained membrane implements layer dedicated security site described membranes enforce several interesting kind policies basic theory presented simpler case refined tuned throughout paper increase expressiveness framework clearly kind behavioural specification agent considered policy example promising direction could considering logical frameworks exploiting model checking proof checkers calculus presented basic even simpler ccs synchronization occur clearly aim basic framework focus membranes conjecture suitably advancing theory presented ideas lifted complex calculi including synchronization value passing name restriction related work last decade several calculi distributed systems code mobility appeared literature particular structuring system flat hierarchical collection named sites introduced possibility dealing sophisticated concrete features example sites considered unity failure mobility access control present work seen contribution last research line presented scenario membranes evolve however membranes presented section describe left site hand dynamically evolving type site always constrains overall security policies membranes systems global computing behaviour agents site modified upon privileges computations borrowed notion trust sites agents coming trusted sites accepted without control relaxed choice examining digest agents coming trusted sites moreover fixed net trust believe communication added basic framework richer scenario partial knowledge site evolve computation recovered related paper authors develop generic type system smoothly instantiated enforce several properties dealing arity mismatch communications deadlock race control linearity work one kind type modify subtyping relation order yield several relevant notions safety main difference approach different kind types thus different type checking mechanisms variations propose would nice lift work general framework closer leave future work work also related policies described deterministic finite automata constrain access critical sections concurrent functional language type effect system provided guarantees adherence systems policy particular sequential behaviour thread guaranteed respect policy interleavings threads locks safe unlike paper code migration explicit distribution thus one centralised policy used membranes filters computing body site external environment also considered membranes computationally capable objects considered kind process evolve communicate outer inner part associated node order regulate life node differs conception membranes simple tools verification incoming agents conclude remark understanding membranes radically different concept policies indeed loc security automata control execution agents running site monitoring technique consists accepting incoming code unconditionally blocking runtime actions abiding site policy clearly order implement strategy execution action must filtered policy contrasts approach membranes containers regulate interactions sites environments computation taking place within site control membrane therefore rely monitoring acknowledgement authors wish acknowledge reviewers paper positive attitude fruitful comments joanna jedrzejowicz kindly answered questions regular languages interleaving iterated interleaving references amadio modelling mobility theoretical computer science gorla 
hennessy sassone boudol generic membrane model proc global computing volume lncs pages springer bouziane primitive recursive algorithm general petri net reachability problem proc focs pages ieee cardelli gordon mobile ambients theoretical computer science erlingsson schneider sasi enforcement security policies retrospective proc new security paradigms workshop pages acm ferrari moggi pugliese metaklaim type safe language global computing mathematical structures computer science fournet gonthier maranget calculus mobile agents proc concur volume lncs pages springer garg raghunath concurrent regular expressions replationship petri nets theoretical computer science gorla hennessy security policies membranes systems global computing proc fguc entcs elsevier gorla pugliese resource access mobility control dynamic privileges acquisition proc icalp volume lncs pages hennessy riely resource access control systems mobile agents information computation hopcroft ullman introduction automata theory languages computation addisonwesley igarashi kobayashi generic type system proceedings popl pages acm mayr algorithm general petri net reachability problem siam journal computing milner calculus communicating systems milner communicating mobile systems cambridge university press nguyen rathke typed static analysis concurrent resource access control draft peterson petri net theory modeling systems prentice hall riely hennessy trust partial typing open systems mobile agents journal automated reasoning schmitt stefani distributed process calculus proc popl pages acm appendix technical proofs outline proofs technical results paper section section proofs section lemma subsumption enforces proof induction derivation judgment security policies membranes systems global computing proof theorem subject reduction proof induction inference notice trustworthiness invariant reduction therefore coherence defined terms trustworthiness sites also preserved reduction outline proof inference deduced using rule typical example hypothesis implies thus need prove imply two possible situations trustworthy judgment mpl holds hypothesis judgment mpl implied indeed coherence hypothesis mtl mtk mtk good exactly required mpl otherwise know mpk rule implies judgment mpl obtained using lemma since defined enforces mpl see section thus using obtain desired mpl trustworthy case simple rule always allows derive case used similar although simpler case rule used requires simple inductive argument finally prove case rule used need know coherency systems preserved structual equivalence proof fact straightforward left reader proof theorem safety let site prove act enforces statement proved induction length base case trivial since act may assume let consider induction prove transition inferred using rule rule definition rule desired used argument similar cases follow straightforward manner induction thus apply induction number actions performed obtain act enforces sufficies conclude act act enforces proofs section proofs given appendix easily adapted setting entry policies multisets outline main changes first recall enforces multiset inclusion judgments must inferred using rules figure rule used lemma remains true revised setting gorla hennessy sassone proof theorem subject reduction straightforward adaptation corresponding proof previous section significant change case replication unfolded via rule hypothesis therefore definition rule enforces since enforces lemma induction easy prove sufficies obtain desired proof theorem safety rule know 
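Since the adaptation outlined above hinges on the multiset reading of the enforces relation (multiset inclusion, with replication only permitted on actions of unbounded multiplicity), a minimal Python sketch of that check may help; all names below are ours and purely illustrative, not part of the formal calculus:

from collections import Counter
from math import inf

# Sketch of "enforces" for multiset policies: a thread's local action
# count must not exceed the multiplicity granted by the entry policy.
def enforces(digest, policy):
    # digest, policy: mappings from actions to counts (inf = unbounded)
    return all(digest.get(a, 0) <= policy.get(a, 0) for a in digest)

# A replicated thread iterates its actions arbitrarily often, so it only
# conforms if every action it uses is unbounded in the policy.
def conforms_replicated(actions, policy):
    return all(policy.get(a, 0) == inf for a in actions)

# usage: a mail-server style policy granting at most k send actions per session
k = 5
policy = {"list": inf, "retr": inf, "del": inf, "reset": inf, "quit": inf, "send": k}
print(enforces(Counter(["send"] * 3 + ["quit"]), policy))  # True
print(enforces(Counter(["send"] * 7), policy))             # False: spamming rejected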
proceed induction base case trivial inductive case consider induction prove transition inferred using rule rule definition rule desired used case simpler cases follow straightforward manner induction coming back main claim use induction obtain act enforces thus act enforces proofs section start recalling formal definition language associated cre follows lang lang lang lang lang lang lang lang lang lang otherwise notice definition lang hides trick also thus expected also consider interleaving strings different length start accounting complexity predicate enforces satisfiability relation policies automata stated following proposition proposition enforces calculated polynomial time decidable security policies membranes systems global computing proof let let acp definition check whether equivalent check whether following steps carried following calculate automaton associated done resulting automaton states calculate automaton associated done creates automaton states checking emptyness done using search starts starting state graph underlying stops whenever final state reached final state reached empty done thus overall complexity proved cre represented labelled petri net language accepted petri net lang easily construct dfa accepting complement language accepted see item previous proof construct product dfa seen petri net petri net associated cre petri net accepts lang cre acp see emptyness language solved algorithm reachability problem corresponding petri net problem proved decidable solvable time prove subject reduction theorem setting types dfas aim need adapt lemma need simple result languages associated dfas processes lemma enforces proof transitivity subset inclusion lemma acps lang cre lang cre viceversa lang cre lang cre proof trivial proof theorem subject reduction relies rule proof induction inference give base cases inductive steps handled standard way consider cases trustworthy sites case sites easier follows write mean dfa obtained setting starting state act case definition rule holds threads definition lang cre acps lemma lang cre sufficies infer gorla hennessy sassone mig case identify two good case coherence know moreover definition holds enforces mpl lemma mpl sufficies conclude good case simpler defined mpl proof theorem safety proof quite easy indeed rule holds mpl definition implies every lang cre acpsi mpl since automaton mpl minimal reachable state starting state say finite string definition lemma holds acp mpl proves thesis proofs section show main things modify carry proofs given appendix obviously judgment must replaced everywhere similarly becomes proof theorem subject reduction proof induction inference inductive steps simple give base steps act hypothesis trustworthy case trivial otherwise know hypothesis pol enforces definition judgment hence function pol pol pol hence pol enforces required mig hypothesis consider case trustworthy thus know pol mpl enforces premise rule two possible situations holds defined enforces mpl mpl mtl good case sufficient preserve coherence fact moreover rule know enforces rule pol pol pol enforces pol pol enforces pol pol pol mpl enforces required mtl good case previous proof rephrased using pol instead digest proof theorem safety prove slightly general result easily implies claim desired let system trustworthy site pol mpl implies act enforces proof induction base case trivial inductive case consider start easy prove pol pol security policies membranes systems global computing transitivity multiset inclusion claim pol mpl thus node 
trustworthy induction therefore act enforces hence act act enforces required conclude original claim theorem obtained result proved noticing enforces work licensed creative commons license view copy license visit http send letter creative commons nathan abbott way stanford california usa
| 6 |
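The core of the refined migration check described in the text above (trust the digest of a good source, inspect the full code of any other source) can be sketched in a few lines of Python; the function and action names are ours and purely illustrative:

# Sketch of the migration predicate for the simplest, set-valued policies,
# where "enforces" and typechecking both reduce to set inclusion.
def migration_allowed(trust_of_source, digest, code_actions, entry_policy):
    if trust_of_source == "good":
        return digest <= entry_policy             # trusted: check the digest only
    return set(code_actions) <= entry_policy      # untrusted/unknown: inspect the code

# usage with a mail-server style entry policy
entry_policy = {"list", "send", "retr", "del", "reset", "quit"}
print(migration_allowed("good", {"list", "retr"}, [], entry_policy))               # True
print(migration_allowed("unknown", set(), ["send", "format_disk"], entry_policy))  # False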
error control surgical simulation feb huu phuoc satyendra hadrien university strasbourg cnrs icube strasbourg fance university luxembourg research unit engineering science luxembourg luxembourg inria nancy grand est france cardiff university school engineering queens buildings parade cardiff wales present first posteriori adaptive finite element approach realtime simulation demonstrate method needle insertion problem methods use corotational elasticity frictional interaction model problem solved using finite elements within refinement strategy relies upon finite element method combined posteriori error estimation driven local simulating soft tissue deformation results control local global error level mechanical fields displacement stresses simulation show convergence algorithm academic examples demonstrate practical usability percutaneous procedure involving needle insertion liver latter case compare force displacement curves obtained proposed adaptive algorithm obtained uniform refinement approach conclusions error control guarantees tolerable error level exceeded simulations local mesh refinement accelerates simulations significance work provides first step discriminate discretization error modeling error providing robust quantification discretization error simulations index element method error estimate adaptive refinement interaction ntroduction motivation simulations becoming increasingly common various applications geometric design medical simulation focus simulation interaction surgeon interventional radiologist deformable organs simulations useful help surgeons train rehearse complex operations guide intervention time reliable simulations could also central robotic surgery number factors concurrently involved defining accuracy surgical simulators mainly modeling error discretization error work area looking sources error compounded lumped overall error little work done discriminate modeling error interaction choice constitutive models discretization error use approximation methods like fem however impossible validate complete surgical simulation approach importantly understand sources error without evaluating discretization error modeling error https first ingredient mechanical simulation ability simulate deformation solid interest deformable solid mechanics problem usually solved finite element method fem methods used discretize equilibrium equations usually uneconomical prohibitively expensive use fixed mesh simulations indeed coarse meshes sufficient reproduce smooth behavior whereas nonsmooth behavior discontinuities engendered cuts material interfaces singularities boundary layers stress concentrations require finer mesh adequate approaches thus needed refine discretization areas yet existing numerical methods used surgical simulation use either fixed discretization finite element mesh meshfree point cloud reduced order method adapt mesh using heuristics knowledge approach currently able adapt finite element mesh based rational posteriori error estimates objective thus devise robust fast approach local remeshing surgical simulations ensure approach used clinical practice method robust enough deal realistically possible interaction surgical tools organ fast enough simulations approach also lead improved convergence economical mesh obtained time step final goal achieve optimal convergence economical mesh studied future work paper propose benchmark local mesh refinement coarsening approach based estimation discretization error incurred fem solution corotational model representing 
soft tissues general ideas presented used directly geometric design based deformable models proposed approach similitudes octree approaches error numerical simulations useful first review various sources error numerical simulations first error source arises mathematical model formulated given physical problem known modeling error second error arises upon discretization mathematical model example using fem meshfree methods finally numerical error incurred finite precision computers round errors paper focus second source error namely discretization error therefore assume model use descriptive reality solving right problem ask question whether solving problem right words correctly main difficulty answering question comes fact exact solution numerical solution could compared generally available different approaches exist address problem reviewed literature see simple methods available practice indicate error distribution categorized two classes first class indicators assumes exact solution stresses smooth enough least locally rely construction improved numerical solution raw numerical solution raw numerical solution compared two solutions significantly different certain threshold error level high mesh refined two solutions close together mesh kept unchanged coarsened idea proposed zienkiewicz zhu asymptotic convergence exact error studied second class indicators relies computation residual governing equations within computational cell typically finite element residualbased error indicators lead mesh refinement solution leads large residuals keep mesh constant coarsen element residuals relatively small compared given tolerance estimates first proposed babuska rheinboldt based local error estimators mesh refinement methodologies devised derive mesh adaptation see recent comprehensive presentation requires two key ingredients marking strategy decides elements refined refinement rule defines elements subdivided element marking use maximum strategy see section iii details strategies strategy percentage strategy see also used simulation percutaneous operations percutaneous procedures important part modern clinical interventions biopsy brachytherapy cryotherapy regional anesthesia success procedures depends good training careful planning optimize path target avoiding critical structures instances procedure also assisted robotic devices unfortunately natural tissue motion due breathing instance deformation due needle insertion generally lead incorrect inefficient planning address issues one must rely accurate simulation needle insertion problems computational speed also important since simulation core optimization algorithm needle path robotic control loop main works needle insertion see survey propose model interaction needle soft tissues using fem various methods proposed literature three main research directions followed soft tissue model flexible needle model needletissue interactions needle model usually issue terms modeling choice computational cost instance authors report computation times milliseconds fem needle model composed timoshenko beam elements soft tissue models usually based fem rely linear constitutive laws large body work covers modeling simulation soft tissue deformation even computation constraints overall interaction model needle tissue remains major challenge combines different physical phenomena puncturing cutting sliding friction poynting effect capture essential characteristics interactions existing methods usually rely experimental force data remeshing techniques order align 
nodes fem mesh needle path approach avoiding remeshing used simulate interactions however simulations account realistic anatomical details addition misra showed needle steering occurs using asymmetric needle tips modeled using microscopic observations interactions unfortunately assumption made region domain needle inserted simulations involving detailed meshes become slow real issue context presented errorcontrolled simulation needle insertion thus unsolved problem whose solution requires tackling number difficulties developing models review cutting simulation provided using models within discrete approaches like fem methods others accelerating simulation advanced hardware model order reduction validating interaction model combined discrete solution solving right problem verifying discrete solution controlling discretization error associated discrete model solving problem right paper propose focus last point aim model interactions using adaptive meshing strategy driven simple posteriori error estimation techniques similarly require mesh conform needle path mesh subdivision introduced means improve accuracy interactions mesh refinement method guided error estimate resulting interaction imposed boundary conditions elements mesh subdivided numerical error threshold reached subdivision process completely reversible refined elements set back initial topology refinement longer needed refinement approach rely usual octree structure see also thus allowing variety subdivision schemes well suited needle insertions detailed section iii using approach interactive computation times achieved detailed tissue motion near needle shaft tip computed opens new possibilities fast simulations flexible needle insertion soft tissues illustrate convergence study adaptive refinement scheme possible scenarios section odel discretization section describe model discretization approach used needle soft tissue interaction problem statement needle insertion three types constraints defined see fig coulomb friction law used describe see fig needle tip cut tissue stick cut slip finally needle shaft constraints defined along needle shaft needle shaft enforced follow insertion trajectory created advancing needle tip coulomb friction law applied constraints represent stick sliding contact tissue needle shaft stick sliding strong form model tissue needle dynamic deformable objects thus regarded dynamic elastic solids governing equations model formulated div grad grad fig three types constraints needle soft tissue surface puncture red needle tip constraint green needle shaft constraints blue local coordinate system defined constraint point frictional contacts within three types constraints first puncture constraint defined needle tip tissue surface constraint satisfies condition direction normal tissue surface needle penetrate tissue tangent direction coulomb friction law considered order take account needle tip tissue surface stick slip cauchy stress tensor body force vector mass density strain tensor internal variables denotes partial derivative respect time denotes outward unit normal vector denotes contact force needle tissue object domain boundary conditions shown fig denotes distance needle tip tissue surface direction denotes contact force direction let represent puncture strength tissue condition expresses contact force exists needle tip contact tissue surface contact force higher threshold puncture strength tissue denotes friction parameter second needle tip constraint defined tip needle soon penetrates tissue depending 
relationship contact forces normal direction along needle shaft tangent direction fig body subjected traction boundary part body force imposed displacement boundary part simplified illustration fem discretization spatial temporal discretization space discretization basic idea fem discretize domain finite elements nodes depicted figure based discretization concept see obtain discrete problem element feext element mass matrix element stiffness matrix damping matrix feext external force applied element internal force reads bte bte ebe solving position velocity updated needle tissue matrix fourthorder stiffness tensor denote current initial position element respectively however using results inaccuracy large rotations problems observed artificially inflated deformation elements overcome felippa decomposed deformation gradient element rigid deformation parts element nodal internal force becomes take account interaction contact constrained dynamic system solved needle tissue becomes rte stands element rotation matrix element local frame respect initial orientation updated time step using corotational formulation results visual artifacts global mass stiffness damping matrices system assembled element ones rewritten global system equation acceleration position velocity vectors respectively ext represents net force difference external internal forces applied object simulations diagonally lumped mass matrix employed stiffness matrix computed based formulation described allows large rotations needle well tissue higher accuracy computed strain field soft tissue domain discretized using hexahedral elements avoid complex issue generating exact hexahedral mesh domain use mesh conform boundary domain immersed boundary method needle hand modeled using beam elements case node needle degrees freedom translations rotations tissue model uses translational degrees freedom per node since fem formulation based discretization physical domain naturally introduces discretization error result control error source section iii present adaptive refinement scheme time discretization temporal discretization use implicit backward euler scheme described follows denotes time step inserting gives final discrete system kvt constraint enforcement interaction adv denotes lagrange multipliers representing interaction forces needle tissue provides direction constraints different types constraints needle tissue used solving interaction detailed section remark combining advanced clinically relevant interaction straightforward approach iii rror estimate adaptive refinement achieve faster accurate fem simulations different adaptive techniques proposed literature approaches common refinement procedure limited cubic elements recursively subdivided eight finer elements overcome limitation generic remeshing techniques proposed however complex implement may lead elements refinement algorithm designed independent type element tetrahedra hexahedra others produces high quality mesh thanks elements predefined template starting initial relatively coarse mesh required achieve simulation criterion based posteriori error estimate evaluated drive local refinement elements stress increases considered refinement elements stress decreases taken lower refinement coarsening level define approximate error element energy norm distance fem solution denoted improved solution denoted obtained smoothing procedure among elements increasing stress elements error exceeds predefined threshold subdivided refined similarly among elements decreasing stress elements 
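The implicit time integration just described can be sketched compactly. The snippet below performs one backward Euler velocity update for the linearized system M dv/dt = f_ext - f_int - D v with f_int = K x; it is a minimal dense-matrix sketch assuming a linear(ized) stiffness, whereas the simulation described above uses sparse matrices, the corotational stiffness and the needle-tissue constraint forces.

```python
import numpy as np

def backward_euler_step(M, K, D, x, v, f_ext, h):
    """One implicit (backward Euler) step for M dv/dt = f_ext - K x - D v,
    with x measured as a displacement from the rest configuration."""
    f = f_ext - K @ x - D @ v              # net force at the current state
    A = M + h * D + (h ** 2) * K           # linearized system matrix
    b = h * (f - h * (K @ v))              # right-hand side after linearization
    dv = np.linalg.solve(A, b)             # a sparse/iterative solve in practice
    v_new = v + dv
    x_new = x + h * v_new
    return x_new, v_new

# toy 2-dof usage
M = np.eye(2)
K = np.array([[2.0, -1.0], [-1.0, 2.0]])
D = 0.05 * K
x = np.zeros(2); v = np.zeros(2); f_ext = np.array([0.0, 1.0])
for _ in range(5):
    x, v = backward_euler_step(M, K, D, x, v, f_ext, h=0.01)
print(np.round(x, 5))
```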
error smaller threshold coarsened notice limited regularity mesh start reasonable heterogeneous mesh starting point prior refinement error estimate using superconvergent patch recovery spr procedure smoothed stress field recovered stresses computed element center idea technique based fact stress strain superconvergent points element center case linear hexahedral elements accurate higher order element nodes values employed recover nodal stress strain within least squares sense representation patch hexahedral elements shown fig component fem solution nodal recovered stresses computed defining polynomial interpolation within element patch certain threshold refined let maxe defined mark element refinement marking strategies strategy percentage strategy see also used however maximum strategy described cheapest among hence preferred use maximum strategy large value leads small number elements marked refinement small value leads large number elements marked refinement studies presented section set paj xyz let determine unknowns minimize sampling points element patch ajk minimization results finding ptk ptk available nodal recovered stress values obtained simply employing evaluated corresponding node adaptive refinement criterion satisfied within element element replaced several elements according predefined template template simply set nodes associated topology defined using isoparametric formulation template nodes added using natural coordinates position cartesian coordinates new node defined einstein summation convention applied nodes removed element hexahedral elements shape function computed barycentric coordinates template node respect node procedure summarized remove element refined add template nodes template elements using element shape functions update topology global mesh compute stiffness matrix new elements needed update mass damping matrices worth mentioning refinement new element fulfills refinement criterion refined using predefined template results multiresolution mesh see fig conversely coarsening criterion satisfied already refined elements coarsening procedure applied simply removing respective fine elements updating associated matrices superconvergent sampling point patch assembly point nodal value determined patch fig smoothed gradient obtained element patch natural coordinates element marking strategy obtaining error distribution across elements employ maximum strategy select elements must refined next level mesh strategy elements error see higher similar displacements recovered stresses also obtained using element shape functions physical coordinates fig adaptive subdivision process element subdivided topologically transformed reference shape using template expressed natural coordinates cartesian coordinates mesh computed using element shape functions process applied recursively completely reversible handling since elements refined using templates regardless neighboring elements incompatible nodes hanging nodes generated avoid discontinuities simulation nodes need handled special way nodes considered slave independent master degrees freedom dofs one possible options use lagrange multipliers approach increases total number dofs solve unknown lagrange multipliers addition usually leads systems approach follow method proposed considers reduced system without tjunctions solving new positions let denote transformation matrix reduced system full one matrix contains barycentric coordinates slaves respect masters contains normal dofs reduced system matrix computed full system 
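A minimal sketch of the recovery-based error indicator and the maximum marking strategy discussed above is given below. For brevity the nodal recovery is a plain average over the patch of elements around each node, which is a simplification of the least-squares SPR fit; the element connectivity, the scalar stress measure and the value of the marking parameter are illustrative.

```python
import numpy as np

def mark_elements(sigma_elem, elem_nodes, n_nodes, theta=0.5):
    """Mark elements for refinement with the 'maximum strategy'.

    sigma_elem : (n_elem,) stress sampled at element centres (superconvergent points)
    elem_nodes : list of node-index lists, one per element
    theta      : marking parameter in (0, 1); larger theta marks fewer elements
    """
    nodal_sum = np.zeros(n_nodes)
    nodal_cnt = np.zeros(n_nodes)
    for e, nodes in enumerate(elem_nodes):
        for n in nodes:
            nodal_sum[n] += sigma_elem[e]
            nodal_cnt[n] += 1
    sigma_nodal = nodal_sum / np.maximum(nodal_cnt, 1)   # recovered (smoothed) stress

    # element error indicator: distance between FEM stress and recovered field
    eta = np.array([abs(sigma_elem[e] - sigma_nodal[nodes].mean())
                    for e, nodes in enumerate(elem_nodes)])
    marked = np.nonzero(eta > theta * eta.max())[0]
    return marked, eta

# toy 1D mesh of 4 elements / 5 nodes with a stress concentration on the left
marked, eta = mark_elements(np.array([5.0, 2.0, 1.1, 1.0]),
                            [[0, 1], [1, 2], [2, 3], [3, 4]], 5)
print(marked, np.round(eta, 3))
```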
matrix nodal forces reduced space computed full space reduced system dvr solved find dvr difference velocity current previous time step dvr available difference velocity full space easily deduced dvf tdvr latter employed update new position velocity object heuristic example shown fig explicitly illustrate method handling especially full reduced systems defined resulting computation transformation matrix subdivision node full system see fig within heuristic illustration considering node one dof static condition applied displacement node expressed node full system reduced system fig illustration handling method schematic example assuming using constitutive law hyperelastic formulation system matrix needs updated time step consequently local updating topology limited impact computation main overhead comes handling somewhat compensated reduced dimensions linear system solved although reduced matrix denser initial one experience shown consider mesh elements subdivided approximately nodes resulting mesh obviously number depends strongly template mesh used refinement also fact elements subdivided locally within one several regions eedle tissue interaction algorithm model interaction needle tissue consider two different constraints penetration puncture sliding avoid remeshing modeling interaction use constraints based approached described however unlike solve constrained system differently entering tissue constraint created needle tip contact tissue surface penetration constraint represented mathematically stand position needle tissue respectively immediately entering tissue sliding constraint needle tissue created along needle trajectory friction considered nonlinear constraints tissue denoted subscript needle denoted subscript expressed global coordinate system using lagrange multipliers follows transformation matrix general case node three dofs built straightforwardly example system matrices soft tissue needle respectively account contraint directions needle tissue local coordinate system attached needle constraints needle tissue expressed two directions orthogonal needle shaft resulting sliding constraint expressions constraints global coordinate system built transforming local fig shows reduced system node considered displacement fields full reduced systems expressed taking account transformation matrix constraint expressions local coordinate system global one however formulating problem leads non positive definite global matrix makes system challenging solve alternative approach proposed solve interaction problem three steps predictive motion interaction constraints constraint solving corrective motion however alternative requires computation matrix inverse approach time consuming especially large systems unlike method solve constrained problem iteratively using augmented lagrangian method wjdv penalty weight matrix finite values advantage method exact solution interaction obtained compared penalty method see additional dofs needed compared classical lagrange multiplier method critical feature approach system matrix positive definite therefore iterative solvers conjugate gradient used efficiently worth stressing using augmented lagrangian method solving interaction combined handling tissue straightforward indeed mentioned sufficient solve reduced space easily computed reduced solution esults discussions demonstrate efficiency method present several numerical studies first present convergence stress error typical domain motivation test localized nature stress concentrated corner domain mimics 
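The augmented Lagrangian treatment of the needle-tissue constraints can be illustrated on a generic equality-constrained quadratic problem. The sketch below runs an Uzawa-style iteration; the matrices, penalty weight and constraint are toy stand-ins, and the point is only that the regularized system matrix remains symmetric positive definite, so an iterative solver such as conjugate gradient could replace the dense solve.

```python
import numpy as np

def augmented_lagrangian_solve(A, b, J, c, rho=10.0, iters=50):
    """Uzawa-style augmented Lagrangian iteration for
        minimize 0.5 x^T A x - b^T x   subject to   J x = c."""
    lam = np.zeros(J.shape[0])            # Lagrange multipliers (constraint forces)
    Areg = A + rho * (J.T @ J)            # stays symmetric positive definite
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = np.linalg.solve(Areg, b + J.T @ (rho * c - lam))
        lam = lam + rho * (J @ x - c)     # multiplier (force) update
    return x, lam

# toy usage: two dofs coupled by the constraint x0 - x1 = 0
A = np.diag([2.0, 3.0]); b = np.array([1.0, 0.0])
J = np.array([[1.0, -1.0]]); c = np.array([0.0])
x, lam = augmented_lagrangian_solve(A, b, J, c)
print(np.round(x, 4), np.round(lam, 4))   # x0 ~= x1 = 0.2, lam ~= 0.6
```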
localized scenario needle insertion demonstrate computational advantage adaptive refinement uniform refinement also present computational time problem point benefits local mesh refinement needle insertion simulation study needle insertion scenario friction show impact local refinement displacement field around needle shaft also interaction force profile finally complicated scenario simulated insertion needle liver undergoing breathing motion simulations needle soft tissue follow linear elastic constitutive law associated formulation convergence study show advantage adaptive refinement scheme compared uniform mesh refinement convergence study performed domain shown fig domain clamped right boundary simply supported vertical direction top boundary dimension set thickness domain young modulus poisson fig boundary conditions domain test define relative error fig show plots relative error versus number dofs uniform adaptive refinement see uniform refinement relative error converges slope corresponds theoretical slope singular problems comparison uniform refinement adaptive refinement converges higher slope clearly achieve certain expected error simulation adaptive refinement needs fewer dofs uniform refinement ratio tested material respectively domain subjected uniformly distributed traction force left surface boundary starting mesh hexahedral elements excluding corner elements two types refinements performed first one called uniform refinement consists subsequently subdividing every hexahedral element smaller elements second approach called adaptive refinement elements satisfy marking condition refined subdivided smaller elements uniform adaptive dofs fig convergence relative error comparison uniform adaptive refinements demonstrate performance adaptive local refinement terms computational time compared uniform full refinement mesh expected relative error studied domain problem result reported table local refinement decreases number dofs factor associated computational dofs full refinement local refinement time total time table computational time topological changes system matrix solve view observations strong argument support employment adaptive refinement scheme limiting discretization error simulations impact local mesh refinement displacement field present results simulation needle insertion homogeneous tissue model study consider young modulus needle tissue whereas poisson ratio taken friction coefficient needle tissue set displacement field due frictional interactions needle viewed plane tissue shown fig shown nonlinear variation displacement field vicinity needle mesh adaptively refined near needle shaft interaction captured good case full refinement see fig indeed closer position needle shaft higher obtained displacement field conversely coarse element used refined simulation behavior reproduced within element fig important point refinement using anisotropic template fig relevant since generates fewer dofs using isotropic template fig still catching nonlinear displacement field impact local mesh refinement interaction order gain insight nonlinear behavior interaction around needle shaft exhibit effect adaptive refinement computational time precision needle insertion simulation phantom tissue test carried see fig study consider young modulus mpa needle mpa tissue whereas poisson ratio taken tissue needle respectively linear elastic model based corotational formulation employed needle well tissue dimension tissue needle length radius respectively three meshing schemes employed coarse 
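Convergence rates of the kind compared above (relative error versus number of dofs for uniform and adaptive refinement) are commonly estimated as the slope of a log-log fit; a small illustrative sketch with synthetic numbers:

```python
import numpy as np

def convergence_rate(dofs, rel_err):
    """Slope of log(relative error) versus log(dofs)."""
    slope, _ = np.polyfit(np.log(dofs), np.log(rel_err), 1)
    return slope

dofs   = np.array([1e3, 4e3, 1.6e4, 6.4e4])
errors = np.array([1e-1, 5e-2, 2.5e-2, 1.25e-2])   # error halves when dofs x4
print(round(convergence_rate(dofs, errors), 3))     # -> -0.5
```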
mesh resolution nodes fine mesh resolution nodes adaptive mesh starting coarse mesh nodes adaptively refining mesh simulation within adaptive meshing scheme mesh refinement piloted error estimate described section iii investigate sensitivity interaction parameters frictional coefficient puncture strength resulting mesh adaptation thus computational output two scenarios studied first concerns varying puncture strength parameter keeping frictional coefficient tissue needle shaft second dedicated study influence frictional coefficient setting keeping puncture strength unchanged within two scenarios frictional coefficient tissue surface set fig shows plots integrated interaction force along needle shaft versus displacement needle tip first scenario second scenario depicted fig shows contact force needle tip tissue surface higher tissue puncture strength needle penetrates tissue right penetration event relaxation phase observed induces decreasing force needle bases thereafter observed needle moves forward interaction force increases due increasing frictional force along needle shaft directly proportional insertion distance contact force needle tip greater cutting strength soft tissue needle cuts tissue continues going ahead immediately cutting action relaxation phase observed anew behavior periodically observed needle insertion observations clearly shown fig obtained zooming fig typical behavior distinguished phases presented fig observed mesh refinement resulting global behavior system less stiff explained fact beneath mesh refinement greater displacement field obtained results needletissue interaction also observed section also shown using adaptive refinement scheme interaction behavior close using fine mesh see fig seen fig interesting observation number dofs adaptive refinement simulation significantly fewer using fine mesh obviously results important gain terms computational time indeed simulation using adaptive refinement mesh runs nearly fps compared fps using uniform fine mesh seen fig smaller frictional coefficient behavior adaptive refinement scheme differs simulation using fine mesh aligns nicely fact smaller friction force lead mesh refinement around needle shaft indeed seen fig refinement case frictional coefficient mostly due penetration force tissue surface fig variation tissue displacement resulting friction needle insertion measured along vertical line located needle tip mesh adaptive refinement using anisotropic template adaptive refinement using isotropic template full refinement graph shows benefits anisotropic refinement fig schematic representation needle insertion simulation phantom tissue phantom tissue clamped right surface application liver method proposed paper applied liver model undergoing breathing motion mimic typical case ablation tumor young modulus poisson ratio needle tissue section employed frictional coefficient set needle inserted pulled back puncture force tissue surface set induced error estimate needle insertion constraints applied liver lead refinements different regions initial mesh dofs needle advances liver combining motion liver due breathing effect mesh progressively refined accurately take account interaction needle liver maximum number dofs needle completely inserted liver needle steadily pulled back mesh progressively coarsened needle completely outside liver thereafter refinement process due movement liver breathing effect imposed boundary conditions number dofs stage applying adaptive procedure guaranteed discretization error fully controlled 
computational cost also kept small possible indeed without adaptive remeshing procedure applied initial mesh simulation runs fps discretization error whereas adaptive refinement performed runs fps decreasing discretization error note frame rates result computational resolutions needle tissue interactions also visualization cost order investigate benefits adaptive refinement scheme needle inserted retracted liver phantom tests uniform adaptive refinement schemes carried within uniform refinement case coarse mesh dofs fine mesh dofs used liver discretization whereas upon adaptive refinement scenario simulations start coarse mesh dofs adaptively refined two schemata marked element refined elements elements integrated interaction force along needle shaft plotted versus needle tip displacement needle inserted pulled back see fig observed needle outside tissue interaction force also detected needle completely retracted tissue clearly shown interaction depends strongly mesh used especially mesh coarse mesh influence reveals stronger effect insertion stage pullback one fully understood fact frictional coefficient needle shaft tissue important insertion steps pullback ones versus respectively using coarse mesh puncture force tissue surface well captured compared case fine mesh adaptive refined mesh employed see fig mesh refinement around needle shaft guided error estimate interaction converges solution fine mesh however maximum number dofs using adaptive refinement schemes respectively therefore observed section using adaptive refinement scheme results significantly fewer dofs compared employment uniform fine mesh lead conclusion even starting coarse mesh employing adaptive refinement scheme interaction simulated precisely compared coarse mesh significantly lower computational cost compared uniform fine mesh onclusion perspectives paper contributes structured approach answering important rarely tackled question accuracy mesh mesh adaptive force force mesh mesh adaptive displacement mesh mesh adaptive force displacement displacement fig comparison interaction forces along needle shaft within two cases without refinement different mesh resolutions adaptive refinement penetration strength varied keeping frictional coefficient needle shaft soft tissue mesh mesh adaptive force force mesh mesh adaptive displacement mesh mesh adaptive force displacement displacement fig comparison interaction forces along needle shaft within two cases without refinement different mesh resolutions adaptive refinement frictional coefficient needle shaft varied keeping penetration strength tissue surface mesh force force mesh mesh adaptive displacement displacement fig puncture cutting relaxation behaviors shown plot fig typical behavior shown phases phase needle puncturing tissue surface phase penetration event relaxation occurs phase interaction force increases due fact frictional force increases insertion distance needle tip cut tissue advance forward relaxation occurs agreement realtcut towards real time multiscale simulation cutting materials applications surgical simulation computer guided surgery inria thanks funding european project rasimas bordas grateful many helpful discussions pierre kerfriden karol miller christian duriez michel audette diyako ghaffari dofs mesh mesh adaptive displacement fig number dofs needle insertion simulation fig surgical simulation novelty paper drive local adaptive mesh refinement needle insertion robust posteriori estimate discretization error seen first step control error associated 
acceleration methods needle surgical simulations separate modeling solving right problem discretization error solving problem right verification discrete scheme guaranteed approach posteriori estimate asymptotically converges exact error use implicit approach also control error equilibrium equations assuming proper material model kinematics problem guarantee accuracy solution case explicit time stepping approaches validation approach considered focus one source error discretization whilst limitation believe quantifying discretization errors separately modeling errors necessary devise accurate surgical simulators better understand resulting simulation results natural direction research building recent work simulations devise approaches able learn data acquired simulation paradigm model would adapt real situation opposed driven continuous indirect comparison case work unknown exact solution turn approach would facilitate simulations considered currently investigating directions bayesian inference parameter identification model selection uncertainty quantification approaches acknowledgements first last author supported fellowship last author part university strasbourg institute advanced study bpc bordas satyendra tomar also thank partial funding time provided european research council starting independent research grant erc stg grant eferences nealen physically based deformable models computer graphics computer graphics forum vol wang linear subspace design shape deformation acm transactions graphics tog vol courtecuisse simulation contact cutting heterogeneous medical image analysis vol zienkiewicz finite element method basis fundamentals elsevier vol nguyen meshless methods review computer implementation aspects mathematics computers simulation vol adaptive nonlinear finite elements deformable body simulation using dynamic progressive meshes computer graphics forum vol stable deformations proceedings acm symposium computer animation ainsworth oden posteriori error estimation finite element analysis john wiley sons vol seiler robust interactive cutting based adaptive octree simulation mesh visual computer vol posteriori error estimation techniques finite element methods ser numerical mathematics scientific computation oxford university press oxford zienkiewicz zhu superconvergent patch recovery spr adaptive finite element refinement computer methods applied mechanics engineering vol carstensen bartels averaging technique yields reliable posteriori error control fem unstructured grids low order conforming nonconforming mixed fem math vol electronic bartels carstensen averaging technique yields reliable posteriori error control fem unstructured grids higher order fem math vol electronic rheinboldt error estimates adaptive finite element computations siam numer vol preoperative trajectory planning percutaneous procedures deformable environments computerized medical imaging graphics vol abolhassani needle insertion soft tissue survey medical engineering physics vol duriez interactive simulation flexible needle insertions based constraint models lecture notes computer science vol part misra mechanics flexible needles robotically steered soft tissue int rob vol simulation cuts deformable bodies eurographics state art reports dick hexahedral multigrid approach simulating cuts deformable objects ieee transactions visualization computer graphics vol liu quek chapter fundamentals finite element method finite element method second edition second edition liu eds oxford butterworthheinemann zienkiewicz 
taylor finite element method solid mechanics ser referex materiales butterworthheinemann fig refinement patterns colored stress level adaptive scenario using different frictional coefficients fig simulation needle insertion liver using dynamic mesh refinement scheme driven error estimate visual depiction simulation runs processor uniform mesh dofs uniform mesh dofs adaptive adaptive force displacement fig phantom interaction force needle insertion pullback interaction force varies due advancing friction tissue cutting strength globally increases positive values insertion stage displacement needle retracted interaction force changes direction varies due retrograding friction globally decreases gets zero needle completely pulled form loop profile curve resulting needle insertionretraction felippa haugen unified formulation corotational finite elements theory computer methods applied mechanics engineering vol pinelli immersed boundary method generalised finite volume finite difference solvers journal computational physics vol baraff witkin large steps cloth simulation proceedings siggraph kwak hexahedral mesh generation remeshing metal forming analyses journal materials processing technology vol koschier adaptive tetrahedral meshes brittle fracture simulation symposium computer animation koltun sifakis eds eurographics association burkhart adaptive subdivision tetrahedral meshes computer graphics forum vol paulus virtual cutting deformable objects based efficient topological operations visual computer vol sifakis hybrid simulation deformable solids proc symposium computer animation uzawa arrow iterative methods concave programming preference production capital cambridge university press finite element method penalty mathematics computation vol papadopoulos solberg recent advances contact mechanics lagrange multiplier method finite element solution frictionless contact problems mathematical computer modelling vol seiler enriching coarse interactive elastic objects deformations proceedings acm symposium computer animation simulation detailed surface deformations surgery training simulators ieee transactions visualization computer graphics vol oct seiler efficient transfer local deformations simulations workshop virtual reality interaction physical simulation eurographics association rappel bayesian inference stochastic identification elastoplastic material parameters introduction misconceptions additional insight corr vol online available http hauseux accelerating monte carlo estimation derivatives finite element models computer methods applied mechanics engineering online available http
Convolutional Neural Associative Memories: Massive Capacity with Noise Tolerance
Amin Karbasi, Computer Science Department, Swiss Federal Institute of Technology Zurich, Zurich, Switzerland
Amir Hesam Salavati, Computer and Communication Sciences Department, Ecole Polytechnique Federale de Lausanne, Lausanne, Switzerland
Amin Shokrollahi, Computer and Communication Sciences Department, Ecole Polytechnique Federale de Lausanne, Lausanne, Switzerland

Abstract. The task of a neural associative memory is to retrieve a set of previously memorized patterns from their noisy versions using a network of neurons. An ideal network should be able to learn a set of patterns as they arrive, retrieve the correct patterns from noisy queries, and maximize its pattern retrieval capacity while maintaining reliability in responding to queries. The majority of work on neural associative memories has focused on designing networks capable of memorizing any set of randomly chosen patterns, at the expense of limiting the retrieval capacity. In this paper we show that if we instead target memorizing only those patterns that have inherent redundancy, namely patterns that belong to a subspace, we can obtain all the aforementioned properties, in sharp contrast to previous work that could improve one or two of these aspects at the expense of the third. Specifically, we propose a framework based on a convolutional neural network, along with an iterative algorithm that learns the redundancy among the patterns. The resulting network has a retrieval capacity that is exponential in the size of the network, and its asymptotic error correction performance is linear in the size of the patterns. We then extend the approach to deal with patterns that lie only approximately in a subspace; this extension allows us to memorize datasets containing natural patterns such as images. Finally, we report experimental results on synthetic and real datasets that support our claims.

Introduction. The ability of neuronal networks to memorize a large set of patterns and reliably retrieve them in the presence of noise has attracted a large body of research over the past three decades into the design of artificial neural associative memories with similar capabilities. Ideally, a perfect neural associative memory should be able to learn the patterns it is given, have a large pattern retrieval capacity, and respond reliably to noisy queries. The problem solved by an associative memory is, in spirit, similar to the reliable information transmission problem faced in communication systems, where the goal is to efficiently decode a set of transmitted patterns over a noisy channel. Despite this similarity, and despite the common methods deployed in both fields (graphical models and iterative algorithms, to name a few), we have witnessed a huge gap in the efficiency achieved by the two. Specifically, by deploying modern coding techniques it has been shown that the number of patterns that can be reliably transmitted over a noisy channel can be made exponential in the length of the patterns; this is achieved in particular by imposing redundancy among the transmitted patterns. In contrast, the maximum number of patterns that can be reliably memorized by current neural networks scales only linearly with the size of the patterns. This is due to the common assumption that the neural network must be able to memorize any subset of patterns drawn randomly from the set of all possible vectors of a given length (see, for example, Hopfield; Venkatesh and Psaltis; Jankowski; Muezzinoglu). Recently, Kumar suggested a new formulation of the problem in which only a suitable set of patterns is considered for storing, namely those that enforce a set of constraints formed over a bipartite graph, as opposed to the complete graph considered in earlier work, where one layer feeds the patterns to the network and the other takes into account their inherent structure. The role of the bipartite graph is indeed similar to that of the Tanner graphs used in modern coding techniques (Tanner). Using this model, Kumar provided evidence that the resulting network can memorize an exponential number of patterns, at the expense of correcting only a single error in the recall phase. By introducing more structure, Salavati and Karbasi could improve the error correction performance to a constant number of errors. In this paper, using a model similar to the one considered by Kumar, we consider sets of patterns with weak minor components, that is, patterns that lie in a subspace. Making use of this inherent redundancy, we introduce the first convolutional neural associative network with provably exponential storage capacity, prove that the architecture can correct a linear fraction of errors, and develop an online learning algorithm with the ability to learn patterns as they arrive, a property
specifically useful size dataset massive patterns learned streaming manner extend results case patterns lie approximately subspace extension particular allows efficiently memorize datasets containing natural patterns evaluate performance proposed architecture learning algorithm numerical simulations provide rigorous analysis support claims storage capacity error correction performance method order optimum method significantly improve results except constants learning algorithm extension subspace learning method proposed oja kohonen additional property imposing learned vectors sparse sparsity essential phase remainder paper organized follows section provide overview related work area section introduce notation formally state problems focus work namely learning phase recall phase storage capacity present learning algorithm section error correction method section section devoted pattern retrieval capacity report experimental results synthetic natural datasets section finally proofs provided section related work famous hopfield network among first neural mechanisms capable learning set patterns recalling subsequently hopfield employing hebbian learning rule hebb hopfield considered neural network size binary state neurons shown mceliece capacity hopfield network bounded log due low capacity hopfield networks extension associative memories neural models also explored hope increasing pattern retrieval capacity particular jankowski investigated neural associative memory neuron assigned multivalued state set complex numbers shown muezzinoglu capacity networks increased cost prohibitive weight computation mechanism overcome drawback modified gradient descent learning rule mgdr devised lee recently order increase capacity robustness line work considered exploiting inherent structure patterns done either making use correlations among patterns memorizing patterns sort redundancy note differ previous work one important aspect possible set patterns considered learning common structures employing neural cliques gripon berrou among first demonstrate considerable improvements pattern retrieval capacity hopfield networks possible albeit still passing polynomial boundary capacity similar idea proposed venkatesh learning patterns boost capacity achieved dividing neural network smaller fully disjoint blocks using idea capacity creased size clusters nonetheless observed improvement comes price limited noise tolerance capabilities deploying higher order neural models contrast pairwise correlation considered hopfield networks peretto niez showed storage capacity improved degree correlation models state neurons depends state neighbors also correlations among however main drawback work lies prohibitive computational complexity learning phase recently kumar introduced new model based bipartite graphs capture higher order linear correlations without prohibitive computational complexity learning phase proposed model improved later kumar assumption bipartite graph fully known sparse expander proposed algorithm kumar increased pattern retrieval capacity addition restrictive assumptions performance recall phase still par paper introduce convolutional neural network capable memorizing exponential number structured patterns able correct linear fraction noisy neurons similar model considered kumar assume patterns lie low dimensional subspace note network size neuron hold finite number states capable memorizing exponential number patterns also correcting linear fraction noisy nodes network best hope addition importantly 
practice extend results set patterns approximately belong subspace worth mentioning learning set input patterns robustness noise focus neural associative memories instance vincent proposed interesting approach extract robust features autoencoders approach based artificially introducing noise learning phase let network learn mapping corrupted input correct version way shifted burden recall phase learning phase contrast consider another form redundancy enforce suitable pattern structure helps design faster algorithms derive necessary conditions help guarantee correct linear fraction noise without previously exposed although neural architecture technically considered deep belief network dbn shares similarities dbns typically used features means several consecutive stages pooling rectification etc multiple stages help network learn interesting complex features important class dbns convolutional dbns input layer also known receptive field divided multiple overlapping patches network extracts features patch jarrett since divide input patterns overlapping smaller clusters model similar convolutional dbns furthermore also learn multiple features case dual vectors patch feature extractions differ different patches indeed similar approach proposed contrast convolutional dbns focus work classification rather recognition exact patterns noisy versions moreover dbns find proper dictionary classification also need calculate features input pattern alone increases complexity whole system especially denoising part objective model however dictionary defined terms dual vectors consequently previously memorized patterns computationally easy recognize yield vector output feature extraction stage words output happen input pattern noisy another advantage model dbns much faster learning phase precisely using single layer overlapping clusters model information diffuses gradually network criteria achieved dbns constructing several stages socher problem formulation section set notation formally define learning phase recall phase storage capacity learning phase throughout paper pattern denoted vector length integer words set could thought short term firing rate neurons let denote states neurons neural network neuron updates state based states neighbors precisely neuron first computes weighted applies nonlinear activation function weight neural connection neurons denotes neighbors neuron several possible activation functions used literature including limited linear threshold logistic tangent hyperbolic functions denote dataset patterns dimensional matrix patterns stored rows goal work memorize patterns strong local correlation among entries specifically divide entries pattern overlapping lengths note due overlaps entry pattern member multiple shown figure denote enforce local correlations assume form subspace dimension done imposing linear constraints cluster linear constraints captured learning phase form dual vectors specifically find set vectors wmi constraint neurons pattern neurons figure bipartite graph figure see three subpatterns along corresponding clusters subpattern overlaps weights chosen ensure patterns lying subspace orthogonal set hwj denotes set represents inner product weight matrix constructed placing dual vectors next equation written equivalently cluster represents bipartite graph connectivity matrix next section develop iterative algorithm learn weight matrices encouraging sparsity within connectivity matrix one easily map local constraints imposed global constraint introducing global weight matrix 
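The assembly of the global constraint matrix from the per-cluster dual vectors amounts to zero-padding each local constraint into the full pattern length. A minimal sketch (cluster index sets and matrix names are illustrative):

```python
import numpy as np

def build_global_constraint_matrix(cluster_W, cluster_idx, n):
    """Assemble the global constraint matrix from per-cluster dual vectors.

    cluster_W   : list of (m_l, n_l) arrays, the dual vectors learned in cluster l
    cluster_idx : list of length-n_l index arrays, the pattern-neuron positions
                  covered by cluster l (clusters may overlap)
    n           : pattern length

    Rows are the cluster constraints padded with zeros, so W_global @ x == 0
    collects all local constraints W_l @ x[idx_l] == 0.
    """
    rows = []
    for W_l, idx in zip(cluster_W, cluster_idx):
        for w in W_l:
            row = np.zeros(n)
            row[idx] = w
            rows.append(row)
    return np.vstack(rows)

# toy usage: two overlapping clusters over a length-5 pattern
W1 = np.array([[1.0, -1.0, 0.0]])          # constraint on entries 0,1,2
W2 = np.array([[0.0, 2.0, -1.0]])          # constraint on entries 2,3,4
W = build_global_constraint_matrix([W1, W2], [np.arange(3), np.arange(2, 5)], 5)
print(W)
```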
size first rows matrix correspond constraints first cluster rows correspond constraints second cluster forth hence inserting zero entries proper positions construct global constraint matrix use local global connectivity matrices eliminate noise recall phase recall phase recall phase noisy version say already learned pattern given assume noise additive vector size denoted whose entries assume values independently corresponding probabilities words entry noise vector set probability values chosen simplify analysis approach easily extended noise models denote realization noise formula note therefore goal recall phase remove noise recover desired pattern task accomplished exploiting facts chosen set patterns satisfy set constraints opted sparse neural graphs learning phase based two properties develop first recall algorithm corrects linear fraction noisy entries capacity last issue look work retrieval capacity proposed method retrieval critical storage capacity defined maximum number patterns neural network able store without significant errors returned answers recall phase hence storage capacity usually measured terms network size well known retrieval capacity affected certain considerations neural network including range values states patterns inherent structure patterns topology neural networks work show careful combination patterns structure neural network topology leads exponential storage capacity size network learning algorithm section develop algorithm learning weight matrix given cluster assumptions lie subspace dimension hence adopt iterative algorithm proposed oja experiments considered larger integer values noise well noise model considered simplify notations analysis note since entries cap values respectively karhunen learn corresponding null space however order ensure success denoising algorithm proposed section require sparse end objective function shown penalty term encourage sparsity furthermore seeking orthogonal basis approach proposed instead wish find vectors orthogonal patterns hence optimization problem finding constraint vector formulated follows minw problem drawn training set indicates inner product positive constant penalty term favor sparse results paper consider tanh easy see large function tanh approximates shown figure therefore larger gets closer another popular choice widely used compressed sensing see example donoho tao pick note optimization problem without constraint trivial solution vector minimize objective function shown subject norm constraint use stochastic gradient descent follow similar approach calculating derivative objective function considering updates required randomly picked pattern obtain following iterative algorithm equations iteration number subpattern pattern drawn iteration small positive constant tanh sign figure approximation sign tanh increase value approximation becomes accurate gradient penalty term function encourages sparsity see consider entry namely note relatively small values larger values see figure thus proper choices equation suppresses small entries towards zero favors sparser results simplify iterative equations approximate function following threshold function shown figure otherwise small positive threshold following approach taken oja karhunen assume small enough equation expanded powers also note inner product small power expansion omit term applying approximations obtain iterative learning algorithm shown algorithm words projection onto figure sparsity penalty suppresses small values towards zero note gets larger support 
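A compact sketch of a stochastic update in the spirit of the derivation above is given below: a damped gradient step that reduces the projection of w onto the current subpattern, a soft threshold that suppresses small entries, and a renormalization. The step size, threshold and renormalization are illustrative choices rather than the exact constants of the paper's algorithm.

```python
import numpy as np

def soft_threshold(w, eps):
    """Zero out entries of w whose magnitude is below eps (the function eta above)."""
    out = w.copy()
    out[np.abs(out) < eps] = 0.0
    return out

def learn_dual_vector(X, alpha=0.5, eps=0.01, iters=2000, seed=0):
    """Learn one sparse vector w approximately orthogonal to all subpatterns
    x (rows of X) by stochastic updates; an illustrative sketch."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        x = X[rng.integers(len(X))]
        y = float(x @ w)
        w = w - alpha * y * x / (x @ x + 1e-12)   # damp the projection of w onto x
        w = soft_threshold(w, eps)                # sparsity-promoting shrinkage
        w /= max(np.linalg.norm(w), 1e-12)        # keep w away from the zero vector
    return w

# toy usage: subpatterns drawn from a 2-dimensional subspace of R^4
G = np.array([[1, 0, 1, 1], [0, 1, 1, -1]], dtype=float)
U = np.random.default_rng(1).integers(0, 4, size=(200, 2))
X = U @ G
w = learn_dual_vector(X)
print(np.round(w, 2), round(float(np.max(np.abs(X @ w))), 4))
```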
gets smaller given data vector projection weight vector updated order reduce projection convergence analysis main idea proving convergence learning algorithm consider learning cost function defined follows show gradually learn patterns data set cost function goes zero order establish result need specify learning rate follows assume first show weight vector never becomes zero lemma assume iterations mentioned earlier proofs given section lemma ensures reach iteration case next prove convergence algorithm minimum figure soft threshold function two different values theorem conditions lemma learning algorithm converges local minimum moreover orthogonal patterns data set note similar convergence result proven without introducing penalty term however recall algorithm crucially depends sparsity level learned consequence encouraged sparsity adding penalty term experimental results section show fact strategy works perfectly learning algorithm results sparse solutions order find constraints required learning phase need run algorithm least times practice perform process parallel speed learning phase also meaningful biological point view constraint neuron act independently others although running algorithm parallel may result redundant constraints experimental results show starting different random initial points algorithm converges linearly independent constraints almost surely recall phase learning phase finished weights neural graphs fixed thus recall phase assume connectivity matrix cluster denoted learned satisfies recall phase proposed model consists two parts part clusters try remove noise see shortly cluster succeeds correcting single error high probability individual error correction performance fairly limited part capitalizes overlap among clusters improve overall performance recall phase follows describe parts details recall algorithm part shown algorithm exploit fact connectivity matrix neural network cluster sparse orthogonal memorized patterns result noise added algorithm performs series forward backward iterations remove iteration pattern neurons decide locally whether update current state amount feedback received pattern neuron exceeds threshold neuron updates state remains intact order state results need define degree distribution poly nomial node perspective precisely let fraction pattern neurons degree cluster define degree distribution polynomial pattern neurons cluster principle encapsulates information need know regarding cluster namely degree distribution following theorem provides lower bound average probability correcting single erroneous pattern neuron cluster theorem sample neural graph randomly let algorithm correct least single error cluster probability least average degree pattern neurons number pattern neurons number constraint neurons cluster respectively order maintain current value neuron add pattern neurons figure shown figure sake clarity practice usually set sign small positive threshold gain intuition simplify expression theorem follows min dmin minimum degree pattern neurons cluster assumed dmin shows significance pattern neurons extreme case dmin obtain trivial bound fraction pattern neurons degree equal particular large obtain total number pattern neurons degree thus even single pattern neuron zero degree probability correcting single error drops significantly theorem provides lower bound probability correcting single error connectivity graph sampled according degree distribution polynomial following lemma shows mild conditions depends neighborhood relationship 
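Since a cluster is only required to handle a single erroneous pattern neuron, its job can be illustrated with a simplified, non-iterative stand-in for the forward/backward rule above: when exactly one neuron is corrupted, the constraint-neuron syndrome W x is proportional to the corresponding column of W, so matching the syndrome against the columns reveals both the error location and its value. The function below is that matched-filter style check, assuming (as in the lemma above) that no two pattern neurons have proportional constraint columns; it is not the paper's neural update rule.

```python
import numpy as np

def cluster_correct_single_error(W, x_noisy):
    """Correct at most one corrupted entry of x_noisy using the cluster
    constraints W (rows orthogonal to every memorized subpattern)."""
    h = W @ x_noisy                                   # constraint syndromes
    if np.allclose(h, 0):
        return x_noisy.copy(), True                   # no constraint is violated
    col_norms = np.linalg.norm(W, axis=0)
    corr = np.abs(W.T @ h) / (col_norms * np.linalg.norm(h) + 1e-12)
    i = int(np.argmax(corr))                          # most consistent error location
    if corr[i] < 0.999:                               # not explainable by one error
        return x_noisy.copy(), False
    delta = float(W[:, i] @ h) / float(W[:, i] @ W[:, i])
    x = x_noisy.astype(float).copy()
    x[i] -= delta                                     # remove the estimated noise
    return x, True

# toy cluster: 6 pattern neurons, 4 constraint neurons; memorized subpatterns
# have the form (a, a, a, b, b, b), so the rows of W below are orthogonal to them
W = np.array([[ 1,  1, -2,  1, -1,  0],
              [ 1, -2,  1,  0,  1, -1],
              [-2,  1,  1, -1,  0,  1],
              [ 1, -1,  0,  1,  1, -2]], dtype=float)
x_true = np.array([3, 3, 3, 1, 1, 1], dtype=float)
x_noisy = x_true.copy(); x_noisy[4] += 1.0            # one corrupted neuron
print(cluster_correct_single_error(W, x_noisy))        # recovers x_true
```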
among neurons algorithm correct single input error probability lemma two pattern neurons share exact neighborhood cluster algorithm corrects least single error denote average remaining paper let probability correcting one error averaged clusters lemma suggests nuder mild conditions fact many practical settings discussed later close thus pessimistically assume single error given cluster algorithm corrects probability declares failure one error recall algorithms mentioned earlier error correction ability algorithm fairly limited result clusters work independently correct external errors however clusters overlap combined performance potentially much better basically help resolving external errors cluster whose pattern neurons correct states provide truthful information neighboring clusters figure illustrates idea property exploited recall approach formally given algorithm words approach proceeds applying algorithm fashion cluster clusters either eliminate internal noise case keep new states help clusters revert back original states note scheduling scheme neurons change states towards correct values algorithm spirit similar famous decoding algorithm communication systems erasure channels called peeling algorithm luby make connection concrete first need define contracted version neural graph follows contracted graph compress constraint nodes cluster single super constraint node see figure super constraint node essentially acts check node capable detecting correcting single error among neighbors pattern neurons contrast declares failure two neighbors corrupted noise error corrected cluster number errors overlapping clusters may also reduce turn help eliminate errors introducing contracted graph similarity peeling decoder evident peeling decoder constraint called checksum node capable correcting single erasure among neighbors similarly one erasure among neighbors checksum node declares erasure however erasure eliminated checksum node helps constraint nodes namely connected erased node one less erasure among neighbors deal based similarity borrow methods modern coding theory obtain theoretical guarantees error rate proposed recall algorithm specifically use density evolution first developed luby generalized richardson urbanke accurately bound error correction performance resp denote fraction edges adjacent pattern resp let pattern degree constraint nodes degree resp call distribution super constraint degree distribution similar section convenient define degree distribution polynomials follows consider given cluster pattern neuron connected decision subgraph defined subgraph rooted branched super constraint nodes excluding decision subgraph tree depth meaning node appears say step cluster fails step cluster succeeds step cluster fails step cluster fails initial step step cluster succeeds step cluster succeeds step algorithm finishes successfully figure overlaps among clusters help neural network achieve better error correction performance assume cluster correct one input error words number input errors higher one cluster declares failure corresponding graph figure figure contraction graph tree assumption holds levels example decision subgraph shown figure finally say node unsatisfied connected noisy pattern node recall denotes average probability super constraint node correcting single error among neighbors theorem assume chosen randomly according degree distribution pair number vertices grows large algorithm succeed correcting errors high probability long worth make remarks theorem first condition 
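The density-evolution argument sketched above can be turned into a numerical threshold computation. The recursion used below, z <- p_e * lambda(1 - P_c * rho(1 - z)), has the standard form for peeling-style analyses of this kind; the exact expression in the paper may differ, and the degree distributions, P_c and the success tolerance in the toy call are illustrative.

```python
def de_succeeds(p_e, lam_coeff, rho_coeff, P_c, iters=500, tol=1e-7):
    """Iterate z <- p_e * lambda(1 - P_c * rho(1 - z)) from z = p_e and report
    whether the residual error probability is driven (essentially) to zero.
    lam_coeff[d] / rho_coeff[d] are the edge-perspective fractions of degree-d
    pattern / super constraint nodes, so lambda(x) = sum_d lam_coeff[d] x^(d-1)."""
    poly = lambda c, x: sum(c[d] * x ** (d - 1) for d in range(1, len(c)))
    z = p_e
    for _ in range(iters):
        z = p_e * poly(lam_coeff, 1.0 - P_c * poly(rho_coeff, 1.0 - z))
    return z < tol

def de_threshold(lam_coeff, rho_coeff, P_c):
    """Largest initial error probability still corrected, found by bisection."""
    lo, hi = 0.0, 1.0
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if de_succeeds(mid, lam_coeff, rho_coeff, P_c):
            lo = mid
        else:
            hi = mid
    return lo

# toy usage: every pattern neuron belongs to 3 clusters and every super
# constraint node has degree 6; P_c = 1 mirrors the pessimistic assumption that
# a cluster corrects exactly one internal error and fails otherwise
lam = [0.0, 0.0, 0.0, 1.0]                  # lambda(x) = x^2
rho = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0]   # rho(x) = x^5
print(round(de_threshold(lam, rho, P_c=1.0), 3))
```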
given theorem used calculate maximal fraction errors algorithm correct instance degree distribution pair threshold algorithm corrects errors high probability second predicted threshold theorem based pessimistic assumption cluster correct single error third constructed randomly according given degree distributions graph graph size grows decision subgraph becomes tree probability close hence shown see example richardson urbanke recall performance graphs concentrated around average case given theorem pattern retrieval capacity discussing pattern retrieval capacity note number patterns effect learning recall algorithm except obvious influence learning time precisely long figure decision subgraph depth third edge left figure sub patterns lie subspace learning algorithm yields matrix orthogonal patterns training set similarly recall phase algorithms need compute noise vector remember retrieval capacity defined maximum number patterns neural size able store hence order show pattern retrieval capacity method exponential need demonstrate exists training set patterns length arn theorem let matrix formed vectors length entries furthermore let brnc min exists set vectors size arn rank moreover algorithm learn set proof construction construction used synthetically generate patterns lie subspace experimental results section evaluate performance proposed algorithms synthetic natural datasets codes used paper available online http synthetic scenario systematic way generate patterns satisfying set linear constraints outlined proof theorem proof constructive provides easy way randomly sample patterns linear constraints simulations consider neural network pattern neuron connected approximately clusters number connections neither small ensure information propagation big adhere sparsity requirement learning phase algorithm performed parallel cluster order find connectivity matrix recall phase round pattern sampled uniformly random training set entries corrupted additive noise independently probability algorithm subsequently used denoise corrupted patterns average process many trials calculate error rate compare analytic bound derived theorem learning results left right panels figure illustrate degree distributions pattern constraint neurons respectively ensemble randomly generated datasets network size divided overlapping clusters size around pattern neuron connected clusters average horizontal axis shows normalized degree pattern constraint neurons vertical axis represents fraction neurons given normalized degree normalization done respect number pattern constrain neurons cluster parameters learning algorithm figure illustrates results network size divided clusters size average learning parameters pattern neuron connected clusters average note overall normalized degrees smaller compared case indicates sparser clusters average almost cases tried learning phase converges within two learning iterations going data set twice recall results figure illustrates performance recall algorithm horizontal vertical axes represent average fraction erroneous neurons final degree distribution degree distribution normalized degree pattern neuron degrees normalized degree constraint neuron degrees figure pattern constraint neuron degree distributions average constraints per cluster learning parameters tern error rate per respectively performance compared theoretical bound derived theorem well two constructions proposed kumar salavati karbasi parameters used simulation clusters approach proposed salavati karbasi network size 
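The capacity argument referenced above (and made precise in the theorem and proof that follow) is constructive: patterns are formed as integer combinations of the rows of a randomly chosen integer generator matrix, with the message alphabet sized so that all entries stay below the state limit Q. A minimal sampling sketch under these assumptions (matrix shapes, alphabet sizes and names are illustrative):

```python
import numpy as np

def make_subspace_dataset(n, k, Q, gmax=1, n_patterns=1000, seed=0):
    """Sample patterns of length n with integer entries in [0, Q) that all lie
    in a k-dimensional subspace; roughly upsilon**k such patterns exist, which
    is exponential in n when k grows linearly with n."""
    rng = np.random.default_rng(seed)
    G = rng.integers(0, gmax + 1, size=(k, n))      # nonnegative integer generator
    col_weight = int(G.sum(axis=0).max())           # largest column sum of G
    upsilon = Q // max(col_weight, 1)               # message alphabet chosen so
                                                    # that every entry stays < Q
    U = rng.integers(0, upsilon, size=(n_patterns, k))
    X = U @ G                                       # patterns in the row space of G
    assert int(X.max()) < Q
    return X, G, upsilon

X, G, ups = make_subspace_dataset(n=40, k=10, Q=64)
print(X.shape, int(X.max()), "message alphabet size:", ups)
```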
clusters first level one cluster second level identical simulations convolutional neural network proposed paper clearly outperforms prior art note theoretical estimates used figure calculated probability correcting single error cluster via lower bound theorem fixing corresponding curves figure show later estimate tighter cluster correct single error probability close figure shows final per network clusters comparing per network neurons clusters witness degraded performance first glance might seem surprising increased network size number clusters however key point determining performance algorithm number clusters rather size clusters cluster nodes degree distribution network around pattern neurons per cluster network around neurons degree distribution degree distribution normalized degree pattern neurons degrees normalized degree constraint neurons degrees figure pattern constraint neuron degree distributions average constraints per cluster learning parameters per cluster clearly increasing network size without increasing number clusters chance cluster experiencing one error increases remember cluster correct single error turn results inferior performance hence increasing network size helps number clusters increased correspondingly real datasets far tested proposed method synthetic datasets generated patterns way belong subspace many real datasets images natural sounds however patterns rarely form subspace rather due common structures come close forming one focus section show proposed method adapted scenarios specifically let denote dataset patterns length assume patterns vectorized form rows matrix eigenvalues correlation matrix indicate close patterns subspace note positive semidefinite matrix eigenvalues particular eigenvalue positive multiplicity patterns belong subspace similarly set eigenvalues final pattern error rate kumar salavati karbasi figure recall error rate along theoretical bounds different architectures network pattern neurons clusters compare performance method two constructions either notion cluster considered kumar overlaps clusters assumed salavati karbasi final pattern error rate figure recall error rate theoretical bounds different architectures network pattern neurons clusters respectively close zero patterns close subspace space figure illustrates eigenvalue distribution correlation matrix dataset images size sampled classes dataset krizhevsky hinton image quantized levels based notation evident figure almost half eigenvalues less suggesting patterns close subspace simulation scenario order adapt method new scenario patterns approximately belong subspace need slightly modify learning recall algorithms use datase running example however principles described easily applied datasets start first alter way patterns represented way makes easier learn algorithm specifically since images quantized levels represent pixel bits instead pattern neurons represent patterns dataset binary pattern neurons adopt modified description facilitates learning process apply algorithm learn patterns dataset obviously since patterns exactly form subspace expect eigenvalue eigenvalue index figure eigenvalues dataset images size uniformly sampled classes dataset krizhevsky hinton algorithm finish weight vector orthogonal patterns nevertheless applying learning algorithm weight vector whose projection patterns rather small following procedure obtain neural graphs clusters note approximately rather exactly orthogonal patterns main observation following interpret deviation subspace noise 
consequently apply recall method algorithm patterns find network actually learned response original patterns dataset words algorithm identifies projection original patterns subspace hence learned patterns orthogonal connectivity matrix idea shown figure original image left quantized version middle image learned proposed algorithm worth observing network learned focuses actual objects rather unnecessary details recall phase approach similar given set noisy patterns goal retrieve correct versions simulations assume noise added learned patterns see also consider situation noise added quantized patterns original image quantized image learned image figure original learned images learning results figure illustrates average cost defined section learning one constraint vector versus number iterations example learning parameters considered neural netowrk pattern neurons clusters size learning process terminates orthogonal weight vector found iterations done illustrates required number iterations algorithm weight vector orthogonal patterns dataset obtained see figure majority cases one pass dataset enough furthermore also uploaded short video clip learning algorithm action iteration iteration sample images dataset clip available following link http recall results figures show recall error rate increase noise level neural network used sec update thresholds recall algorithm set recall procedure algorithm performed times corresponding error rates calculated evaluating difference final state pattern neurons running learning cost number converged vectors figure average cost versus time learning weight vector network pattern neurons clusters size clusters set around constraints cluster required number iterations figure number iterations required algortihm learn vector orthogonal patterns dataset network pattern neurons clusters size around constraints cluster final pattern error rate figure recall error rate network pattern neurons clusters cluster size equal proposed recal method applied dataset images sampled database algorithm patterns dataset figure illustrates instances recalled images figure original images first column learned images second column noisy versions third column recalled images forth column figure also shows input output signal noise ratios snr example note examples snr increases apply recall algorithm example chose also uploaded short video clip recall algorithm action sample images dataset found http analysis section contains proofs theorems technical lemmas used paper proof lemma proceed induction end assume let symbol error rate input ber figure symbol error rates network pattern neurons clusters applied dataset images sampled database note thus order must given sufficient order achieve desired inequality proves lemma proof theorem let define correlation matrix lie within domain cluster follows also let define hence furthermore recall learning cost function alg let expectation choice pattern matrix corresponding cluster dataset thus noting obtain omitting second order terms obtain note since simplify follows thus order show algorithm converges need show turn implies noting must show left hand side right hand side inequality shows readily implies sufficiently large number iterations algorithm converges local minimum lemma know thus solution orthogonal patterns data set proof theorem case single error easily show noisy pattern neuron always updated towards correct direction algorithm simplicity let assume first pattern neuron cluster noisy one furthermore let noise vector denoting 
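The closeness of a dataset to a low-dimensional subspace, which is what the eigenvalue analysis above summarizes for the image patches, can be checked with a few lines; the tolerance and the synthetic data below are illustrative.

```python
import numpy as np

def subspace_closeness(X, tol=1e-2):
    """Fraction of (normalized) correlation-matrix eigenvalues below tol.
    A large fraction of near-zero eigenvalues indicates that the patterns in X
    (rows) lie close to a low-dimensional subspace, the property the learning
    algorithm exploits on natural images."""
    C = (X.T @ X) / len(X)                  # empirical correlation matrix
    eig = np.linalg.eigvalsh(C)
    eig = eig / max(eig.max(), 1e-12)       # normalize by the largest eigenvalue
    return float(np.mean(eig < tol))

# toy usage: 500 patterns close to a 5-dimensional subspace of R^50
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 5)) @ rng.standard_normal((5, 50))
X += 0.01 * rng.standard_normal(X.shape)    # small deviation from the subspace
print(subspace_closeness(X))                # close to 45/50 = 0.9
```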
ith column weight matrix sign sign hence algorithm obtain means noisy node gets updated towards correct direction therefore source error would correct pattern neuron getting updated mistakenly let denote probability correct pattern neuron gets updated happens equivalent hwi sign kwi however cases neighborhood different neighborhood among constraint nodes hwi sign kwi specifically let indicate set neighbors among constraint neurons cluster case entries zero therefore letting probability note inequality help obtain upper bound bound since one noisy neuron know average node connected constraint neurons implies probability sharing exactly neighborhood degree neuron taking average pattern neurons obtain following bound average probability correct pattern neuron mistakenly updated degree distribution polynomial therefore probability correcting one noisy input lower bounded proves theorem proof lemma without loss generality suppose first pattern neuron contaminated external error result sign sign sign sign ith column hence feedback transmitted constraint neurons sign result decision parameters pattern neuron algorithm hsign hsign note denominator simply kwi hsign assumption two pattern neurons share exact set neighbors therefore least entry say wik thus result first neuron noisy one update value towards correct state proof theorem proof spirit similar theorem richardson urbanke consider message transmitted edge given cluster node given noisy pattern neuron iteration algorithm message failure indicating super constraint node unable correct error super constraint node receives least one error message neighbors among pattern neurons event happens connected one noisy pattern neuron super constraint node receive error message neighbors unable correct single error given noisy neuron event happens probability let denote probability failure message average probability pattern neuron sends erroneous message neighboring cluster node degree super constraint neuron contracted graph similarly let denote average probability super constraint node sends message declaring violation least one constraint neurons ede consider message transmitted given pattern neuron degree given super constraint node iteration algorithm message indicate noisy pattern neuron pattern neuron noisy first place probability neighbors among super constraint nodes sent violation message iteration therefore probability node noisy hence average probability pattern neuron remains noisy iteration note denoising operation successful theree fore must look maximum proof theorem proof based construction build data set required properties memorized proposed neural network consider matrix rank chosen min let entries integers assume start constructing patterns data set follows pick random vector set pattern entries since entries entries however need design entries less let column entry equal therefore let minj choose turn ensures entries less furthermore selected way min result sure set dataset form subspace dimension dimensional space since vectors integer entries patterns forming implies storage capacity exponential number long conclusions final remarks paper proposed first neural network structure learns exponential number patterns size network corrects linear fraction errors main observation made natural patterns seem inherent redundancy proposed framework captured redundancies appear form linear close linear constraints experimental results also reveal learning algorithm seen feature extraction method tailored patterns constraints extending line 
thought sophisticated feature extraction approaches light recent developments deep belief networks jarrett coates vincent ngiam interesting future direction pursue references emmanuel terence tao signal recovery random projections universal encoding strategies ieee transactions information theory adam coates andrew importance encoding versus training sparse coding vector quantization lise getoor tobias scheffer editors international conference machine learning icml pages new york usa acm donoho compressed sensing ieee transactions information theory david donoho maleki montanari algorithms compressed sensing proceedings national academy sciences pnas vincent gripon claude berrou sparse neural networks large learning diversity ieee transactions neural networks donald hebb organization behavior neuropsychological theory wiley sons new york john hopfield neural networks physical systems emergent collective computational abilities proc natl acad sci stanislaw jankowski andrzej lozowski jacek zurada multistate neural associative memory ieee transactions neural networks learning systems kevin jarrett koray kavukcuoglu marc aurelio ranzato yann lecun best architecture object recognition ieee international conference computer vision iccv pages amin karbasi amir hesam salavati amin shokrollahi iterative learning denoising convolutional neural associative memories international conference machine learning icml pages alex krizhevsky geoffrey hinton learning multiple layers features tiny images master thesis department computer science university toronto raj kumar amir hesam salavati mohammad shokrollah exponential pattern retrieval capacity associative memory ieee information theory workshop itw pages raj kumar amir hesam salavati mohammad shokrollahi associative memory exponential pattern retrieval capacity iterative learning ieee transaction neural networks learning systems appear quoc jiquan ngiam zhenghao chen daniel jin hao chia pang wei koh andrew tiled convolutional neural networks advances neural information processing systems nips pages lee improvements hopfield associative memory using generalized projection rules ieee transactions neural networks michael luby michael mitzenmacher mohammad amin shokrollahi daniel spielman efficient erasure correcting codes ieee transactions information theory robert mceliece edward posner eugene rodemich santosh venkatesh capacity hopfield associative memory ieee transactions information theory mehmet kerem muezzinoglu cuneyt guzelis jacek zurada new design method multistate hopfield associative memory ieee transactions neural networks jiquan ngiam pang wei koh zhenghao chen sonia bhaskar andrew sparse filtering john richard zemel peter bartlett fernando pereira kilian weinberger editors advances neural information processing systems nips pages erkki oja juha karhunen stochastic approximation eigenvectors eigenvalues expectation random matrix math analysis applications erkki oja teuvo kohonen subspace learning algorithm formalism pattern recognition neural networks ieee international conference neural networks volume pages peretto niez long term memory storage capacity multiconnected neural networks biological cybernetics tom richardson ruediger urbanke modern coding theory cambridge university press new york usa amir hesam salavati coding theory neural associative memories exponential pattern retrieval capacity phd thesis ecole polytechnique federale lausanne epfl url http amir hesam salavati amin karbasi neural networks ieee international symposium 
information theory isit pages richard socher jeffrey pennington eric huang andrew christopher manning recursive autoencoders predicting sentiment distributions proceedings conference empirical methods natural language processing pages association computational linguistics tanner recursive approach low complexity codes ieee transactions information theory joel tropp stephen wright computational methods sparse solution linear inverse problems proceedings ieee santosh venkatesh connectivity versus capacity hebb rule theoretical advances neural computation learning pages springer santosh venkatesh demetri psaltis linear logarithmic capacities associative neural networks ieee transactions information theory pascal vincent hugo larochelle yoshua bengio manzagol extracting composing robust features denoising autoencoders international conference machine learning icml pages new york usa acm lei adam krzyzak erkki oja neural nets dual subspace pattern recognition method international journal neural systems algorithm iterative learning input dataset stopping point output choose pattern uniformly compute update follows end algorithm error correction input training set threshold iteration tmax output tmax forward iteration calculate weighted input sum wij neuron set sign backward iteration neuron computes wij update state pattern neuron according sign end algorithm sequential peeling algorithm input output unsatisfied unsatisfied apply algorithm cluster remained unsatisfied revert state pattern neurons connected initial state otherwise keep current states end end declare satisfied otherwise declare failure learned original snri image image snro original learned snri image image snro original learned snri image image snro original learned snri image image snro original learned snri image image snro figure examples learning recall phase images sampled snri snro denote input output snr respectively
| 9 |
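The row above describes an iterative two-phase recall step per cluster (a forward pass in which constraint neurons check their weighted input sums, and a backward pass in which pattern neurons flip toward the state that reduces violations), followed by a sequential peeling pass over clusters. The sketch below is a minimal, illustrative Python rendering of that within-cluster bit-flipping step; it is not the authors' code, and the weight matrix `W`, the threshold, and the degree normalization are assumptions chosen to mirror the description.

```python
import numpy as np

def cluster_error_correction(W, x, threshold=0.5, t_max=20):
    """Minimal sketch of the forward/backward recall iteration described above.

    W : (m, n) cluster weight matrix, rows learned to be ~orthogonal to the
        stored patterns of this cluster.
    x : (n,) current (possibly noisy) pattern-neuron states.
    Returns the updated states and whether all constraints ended up satisfied.
    """
    x = np.asarray(x, dtype=float).copy()
    for _ in range(t_max):
        # Forward iteration: each constraint neuron computes its weighted input
        # sum; a nonzero value signals a violated constraint.
        syndrome = W @ x
        violations = np.sign(np.where(np.abs(syndrome) > 1e-8, syndrome, 0.0))
        if not violations.any():
            return x, True        # cluster satisfied
        # Backward iteration: each pattern neuron aggregates the signed feedback
        # from its violated constraints, normalized by its degree.
        degree = np.count_nonzero(W, axis=0) + 1e-12
        feedback = (np.sign(W).T @ violations) / degree
        # Flip only the neurons whose normalized feedback exceeds the threshold,
        # moving them against the direction of the violation.
        update = np.sign(feedback) * (np.abs(feedback) >= threshold)
        if not update.any():
            return x, False       # stuck: no neuron is confident enough to move
        x -= update
    return x, not (np.abs(W @ x) > 1e-8).any()
```

Per the peeling procedure quoted in the row, one would apply this cluster by cluster and revert the pattern neurons of any cluster that remains unsatisfied before declaring success or failure.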
new approach network performance analysis sep ming ding data australia david bell labs ireland guoqiang mao university technology sydney data australia zihuai lin university sydney australia paper propose new approach network performance analysis based previous works deterministic network analysis using gaussian approximation first extend previous works ratio sir analysis makes analysis formal microscopic analysis tool second show two approaches upgrading analysis macroscopic analysis tool finally perform comparison proposed analysis existing macroscopic analysis based stochastic geometry results show analysis possesses special features shadow fading naturally considered dnaga analysis analysis handle user distributions type fading iii shape size cell coverage areas analysis made arbitrary treatment hotspot network scenarios thus analysis useful network performance analysis generation systems general cell deployment user distribution microscopic level macroscopic level ntroduction due potential large performance gains dense orthogonal deployments small cell networks scns within existing macrocell gained much momentum design generation systems envisaged workhorse capacity enhancement generation systems context new powerful network performance analysis tools needed better understand performance implications dense orthogonal scns bring network performance analysis tools broadly classified two groups macroscopic analysis microscopic analysis macroscopic analysis usually assumes user equipments ues base stations bss randomly deployed often following homogeneous poisson distribution invoke stochastic geometry theory essence macroscopic analysis investigates network performance high level coverage probability ratio sir distribution averaging possible deployments instead microscopic analysis allows detailed analysis ieee personal use permitted requires ieee permission please find final version ieee link http digital object identifier orthogonal deployment means small cells macrocells operating different frequency spectrum small cell scenario defined often conducted assuming ues randomly placed locations known generally speaking macroscopic analysis predicts network performance statistical sense microscopic analysis useful study optimization within microscopic analysis paying special attention uplink authors considered single interfering cell coverage area presented expressions interference considering path loss shadow fading authors conjectured interference hexagonal grid based cellular network may follow lognormal distribution verified via simulation went step analytically derived upper bound error approximating interference single cell gaussian distribution error measured distance real cumulative density function cdf approximate cdf shown small practical scns basis interference analysis investigated approximate distribution aggregate interference scenario power lognormal distribution practical networks also investigated network performance scns current networks using simulations paper objective extend previous works analyze sir performance create novel compelling approach network performance analysis unify macroscopic microscopic analyses within single framework overcoming drawbacks current tools end work composed following three steps extension interference analysis sir analysis makes analysis formal microscopic analysis tool upgrade developed microscopic analysis tool macroscopic analysis tool comparison proposed macroscopic analysis tool existing macroscopic analysis based stochastic geometry 
since macroscopic microscopic analyses unified framework based deterministic network analysis dna using gaussian approximation presented framework referred analysis hereafter result work contributions paper summarized follows based gaussian approximation theorem presented approximate distributions signal power sir interested derived tractable expressions using numerical integration giving rise analysis although analysis stands alone solid contribution family microscopic analysis two approaches upgrading analysis macroscopic analysis investigated first one approach directly averages performance given many analyses many random deployments obtain performance macroscopic analysis second one analytical approach constructs idealistic deterministic deployment conducts analysis deployment obtain performance macroscopic analysis interesting results comparison dnaga analysis stochastic geometry analysis presented results show analysis qualifies new network performance analysis tool special merits stochastic geometry shadow fading naturally considered analysis stochastic geometry usually distributions type multipath fading treated analysis stochastic geometry usually iii apart common assumption cell coverage areas voronoi cells made stochastic geometry shape size cell coverage areas analysis made arbitrary making suitable network performance analysis hotspot scns remainder paper structured follows section network scenario system model described section iii analysis presented followed upgrade macroscopic analysis tool section results validated compared stochastic geometry analysis via simulations section finally conclusions drawn section etwork cenario ystem odel paper consider transmissions assume small cell schedules one resource resource block reasonable assumption line networks long term evolution lte worldwide interoperability microwave access wimax note small cell bss serving contribute interference thereby bss ignored analysis regarding network scenario consider scn multiple small cells operating carrier frequency shown fig detail scn consists small cells managed network includes small cell interest denoted interfering small cells denoted focus particular denote active associated small cell moreover denote coverage area small cell associated ues randomly distributed note coverage areas adjacent small cells may overlap due arbitrary shapes sizes fig schematic model considered scn distance distance denoted dbm respectively since microscopic analysis tool consider deterministic deployment bss set known randomly distributed distribution function fzb hence dbm random variable whose distribution readily expressed analytical form due arbitrary shape size arbitrary form fzb regarding fzb two remarks following remark unlike existing works uniform distribution considered handle probability density function pdf general distribution denoted fzb fzb integral equals one remark even fzb constant say distribution uniform within small cell coverage area guarantee distribution uniform within entire scenario ues deployed outside hotspot areas may cause distribution within entire scenario instead stochastic geometry ues usually assumed uniformly distributed within entire scenario creating voronoi cells less general practical assumption fzb note sequel characterization distribution meant within next present modeling path loss shadow fading transmission power fading noise based definition dbm path loss modeled lbm dbm path loss reference distance dbm path loss exponent practice constants obtainable field tests note 
lbm due randomness dbm shadow fading denoted sbm usually assumed follow lognormal distribution based assumption sbm modeled independently identically distributed gaussian variance sbm transmission power dbm denoted subject power control mechanism fractional path loss compensation fpc scheme based fpc scheme modeled lbb sbb target received power dbm considered fpc factor lbb sbb discussed fading channel denoted hbm assume equipped one antenna important note consider general type multipath fading assuming effective channel gain associated hbm defined hbm follows distribution pdf example characterized exponential distribution gamma distribution case rayleigh fading nakagami fading respectively hence distribution hbm derived analytically finally ignore additive noise scns generally work region iii roposed nalysis proposed analysis consists three steps interference analysis signal power analysis sir analysis presented following interference analysis based definition rvs discussed section received interference power dbm written plugged step defined respectively apparently independent rvs besides first part defined since sbb gaussian rvs easy show also gaussian whose mean variance definition aggregate interference power interfering ues formulated previous work show distribution well approximated power lognormal distribution approximation summarized following distribution first analyze distribution shown considering small approximation error distance provided approximate gaussian whose mean variance respectively mean variance obtained using numerical integration involving fzb details omitted brevity distribution second analyze distribution shown considering small approximation error distance provided approximate another gaussian whose mean variance respectively mean variance omit details brevity note upper bound total approximation error two steps obtained summation individual approximation errors two steps expressions shown total approximation error small practical scns without requirement uniformity distribution type fading shape size cell coverage areas intuitively speaking results show larger variance gaussian better approximation due increasing dominance gaussian distribution third invoke main results indicate sum multiple independent lognormal rvs well approximated power lognormal accordingly case since approximated gaussian sum shown well approximated power lognormal expressed pdf cdf written shown top next page cdf standard normal distribution obtained computed procedure obtain omitted brevity interested readers referred appendix details result pdf cdf written shown top next page scalar factor originated variable change finally approximate distribution shown presented top next page note step approximation error depends approximate error introduced power lognormal approximation shown reasonably small good enough practical cases exp based definition rvs discussed section received signal power dbm written plugged step besides defined respectively first part defined easy show gaussian whose mean variance similar discussion subsection consider small approximation error distance shown approximate gaussian whose mean variance respectively mean variance result note unlike discussion subsection accurate approximate gaussian randomness gaussian distributed largely removed transmission power control mechanism rendering less dominant role gaussian distribution compared distribution words comparable even smaller variance making approximation error large according results therefore derive approximate 
distribution using different method presented theorem theorem approximate cdf derived number terms employed gausshermite numerical integration weights abscissas tabulated table proof since gaussian mean variance shown pdf written besides according definition rvs section assume cdf hence cdf approximated signal power analysis exp exp step obtained step computed using variable change moreover step derived using numerical integration exp number terms approximation weights abscissas tabulated table residual error order decays fast increases finally step obtained dropping proof thus completed comparing case rayleigh fading propose corollary compute approximate expression corollary case rayleigh fading approximate cdf computed exp exp proof discussed section condition rayleigh fading channel gain follows exponential distribution unitary mean proof completed deriving based variable change details omitted brevity case nakagami fading propose corollary compute approximate expression corollary case nakagami fading approximate cdf computed exp exp exp respectively gamma incomplete gamma functions respectively shape scale parameters gamma distribution associated channel gain nakagami fading proof discussed section condition agami fading channel gain follows gamma distribution parameters proof completed deriving based variable change details omitted brevity sir analysis approximate sir derive approximate distribution theorem theorem approximate cdf derived number terms employed gausshermite numerical integration weights abscissas tabulated table proof approximate cdf derived exp exp change moreover step derived using numerical integration finally step obtained dropping proof thus completed comparing case rayleigh fading propose corollary compute approximate expression corollary case rayleigh fading approximate cdf computed plugged obtain proof proof completed applying corollary theorem theorem details omitted brevity case nakagami fading propose corollary compute approximate expression corollary case nakagami fading approximate cdf computed plugged obtain proof proof completed applying corollary theorem theorem details omitted brevity acroscopic pgrade nalysis theorem crafted powerful microscopic analysis tool based proposed analysis deal wide range network assumptions system parameters section investigate two approaches upgrading analysis microscopic analysis tool macroscopic one putting dnaga league stochastic geometry approach microscopic macroscopic analyses closely related average performance many microscopic analyses conducted large number random deployments converges performance macroscopic analysis given examined realizations deterministic deployments follow deployment assumption used macroscopic analysis therefore directly average performance results obtained applying theorem large number random deployments obtain performance results macroscopic analysis step calculated using theorem step computed using variable analytical approach instead conducting theorem many deployments averaging results together obtain results macroscopic analysis construct idealistic deployment hexagonal lattice equivalent density perform single analysis deployment extract sir performance macroscopic analysis hexagonal lattice leads performance bss evenly distributed scenario thus strong interference due close proximity avoided imulation iscussion fig illustration scn deployment section conduct simulations validate proposed analysis using analytical approaches approach obtain results macroscopic analysis average results 
given theorem random deployments deployment random experiments conducted randomness positions deployment placement another random experiments conducted randomness shadow fading fading analytical approach one deployment hexagonal lattice examined set computation analysis ensure good accuracy results regard scenario parameters generation partnership project recommendations considered approach dummy macrocell sites deployed distance guide small cell deployment macrocell site shape hexagon equally divided macrocells macrocell contains randomly deployed small cells resulting small cells density around analytical approach small cells located hexagonal lattice cell density cases small cell coverage radius minimum distance minimum distance respectively moreover according dbm fig illustrates example random deployment according small cell bss represented xmarkers coverage areas dummy macrocells small cells marked dashed solid lines respectively ues randomly distributed mentioned small cell coverage areas important note although small cell coverage areas coverage areas small cells irregular shape due overlapping brevity following subsections omit detailed investigation interference analysis signal power analysis directly present sir results given analysis simulation validation analysis subsection validate accuracy analysis terms sir performance assuming two cases distribution fading case uniform distribution rayleigh fading case distribution nakagami fading case obtain sir results analysis using theorem corollary case invoke theorem corollary considering distribution assume fzb radial coordinate polar coordinate system origin placed position normalization constant make fzb resulting distribution ues likely locate close vicinity considering nakagami fading assume corresponds fading strong los component cases sir performance evaluated using simulation approach discussed subsection moreover upper bound sir also investigated using simulation analytical approach discussed subsection based deployment hexagonal lattice results shown fig note considered example distributed ues reflects reasonable network planning small cell bss deployed center clusters forms fzb considered analysis well sir simulation sir simulation cdf cdf sir simulation sir analysis sir simulation sir case stochastic geometry case case sir fig sir simulation seen fig sir results proposed analysis match simulation well particularly head portion approach maximum deviation cdfs obtained analysis simulation investigated cases around percentile analytical approach fitness becomes even better maximum deviation cases within percentile importantly cases sir performance given analytical approach shown within exact performance indicating usefulness characterizing network performance computation take case example numerical results plugged theorem hexagonal deployment finally note sir case outperforms case mainly ues tend stay closer serving bss case discussed leading larger signal power lower interference power comparison stochastic geometry section compare sir results dnaga analysis case stochastic geometry analysis fig average cell density assumption rayleigh note stochastic geometry analysis poses assumptions model sake tractability shadow fading rayleigh fading homogeneous poisson distribution ues bss entire scenario contrast analysis need assumptions works realistic model considering hotspot scn scenario shown fig discussed remark section sir fig sir stochastic geometry fig interesting aspects noteworthy first analysis analysis able give 
approximate results however approximation error analysis shown smaller second significant performance gap analysis stochastic geometry analysis analysis considers shadow fading top fading leads large variance sir analysis gives small sir variance analysis studies hotspot scn scenario recommended ues deployed closer serving bss voronoi cells considered third purpose fig reproduce results based voronoi cells analytically investigate practical network scenario shadow fading required ignored albeit impractical gamma approximation aggregate interference could invoked make approach analysis still valid besides analysis also handle case cell coverage areas constructed voronoi cells however would practical consider alternative association strategy uas connected smallest path loss plus shadow fading note uas blur boundaries voronoi cells longer always connected closest making analysis intricate realistic finally note integral computation needed compute results integration required theorem analysis however many deployments needed approach analysis one analytical approach onclusion proposed new approach network performance analysis unifies microscopic macroscopic analyses within single framework compared stochastic geometry analysis considers shadow fading general distribution type fading well shape size cell coverage areas thus analyze realistic networks useful network performance analysis systems general cell deployment distribution eferences small cell enhancements eutran physical layer aspects ding claussen jafari towards cellular systems understanding small cell deployments ieee commun surveys tutorials jun andrews baccelli ganti tractable approach coverage rate cellular networks ieee trans vol novlan dhillon andrews analytical modeling uplink cellular networks ieee trans wireless vol jun zhu wang yang distribution uplink interference ofdma networks power control ieee icc sydney australia jun tang chen cheng statistical model ofdma cellular networks uplink interference using lognormal distribution ieee wireless commun letters vol ding mao lin approximation uplink interference fdma small cell networks appear ieee globecom arxiv may ding mao lin microscopic analysis uplink interference fdma small cell networks submitted ieee trans wireless ding vasilakos chen dynamic tdd transmissions homogeneous small cell networks ieee icc sydney australia jun ding vasilakos chen analysis sinr performance dynamic tdd homogeneous small cell networks ieee globecom massey test goodness fit journal american statistical association vol abramowitz stegun handbook mathematical functions formulas graphs mathematical tables nineth dover physical layer procedures wimax forum wimax ieee air interface standard apr enhancements lte time division duplex tdd interference management traffic adaptation jun proakis digital communications third new york mcgrawhill liu almhana mcgorman approximating lognormal sum distributions power lognormal distributions ieee trans vehicular vol jul szyszkowicz yanikomeroglu fitting sum independent lognormals distribution ieee globecom
| 7 |
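The row above repeatedly reduces its approximate SIR/CDF expressions to expectations over a Gaussian variable and evaluates them with Gauss-Hermite numerical integration (the tabulated weights and abscissas referenced in its theorems). The snippet below is a self-contained illustration of that numerical device using NumPy's Gauss-Hermite rule; the function name and sanity-check example are mine, and the row's actual theorems would substitute the specific channel-gain CDFs (e.g., the Rayleigh or Nakagami corollaries) and the fitted Gaussian means and variances.

```python
import numpy as np

def gauss_hermite_expectation(f, mu, sigma, n_terms=12):
    """Approximate E[f(X)] for X ~ N(mu, sigma^2) with an n_terms-point
    Gauss-Hermite rule (abscissas/weights for the weight exp(-x^2))."""
    x, w = np.polynomial.hermite.hermgauss(n_terms)
    return float(np.sum(w * f(mu + np.sqrt(2.0) * sigma * x)) / np.sqrt(np.pi))

# Sanity check against the closed form E[exp(X)] = exp(mu + sigma^2 / 2):
approx = gauss_hermite_expectation(np.exp, mu=0.2, sigma=0.5, n_terms=12)
exact = np.exp(0.2 + 0.5 ** 2 / 2)   # ~1.3840; the quadrature matches closely
```

The residual error of the rule decays quickly as `n_terms` grows for smooth integrands, which is why a modest number of terms suffices for the lognormal-type quantities discussed in the row.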
classification geometry general perceptual manifolds sueyeon daniel haim feb program applied physics school engineering applied sciences harvard university cambridge usa center brain science harvard university cambridge usa school engineering applied science university pennysylvania philadelphia usa racah institute physics hebrew university jerusalem israel edmond lily safra center brain sciences hebrew university jerusalem israel perceptual manifolds arise neural population responds ensemble sensory signals associated different physical features orientation pose scale location intensity perceptual object object recognition discrimination require classifying manifolds manner insensitive variability within manifold neuronal systems give rise invariant object classification recognition fundamental problem brain theory well machine learning study ability readout network classify objects perceptual manifold representations develop statistical mechanical theory linear classification manifolds arbitrary geometry revealing remarkable relation mathematics conic decomposition show special anchor points manifolds used define novel geometrical measures radius dimension explain classification capacity manifolds various geometries general theory demonstrated number representative manifolds including ellipsoids prototypical strictly convex manifolds balls representing polytopes finite samples ring manifolds exhibiting continuous structures arise modulating continuous degree freedom effects label sparsity classification capacity general manifolds elucidated displaying universal scaling relation label sparsity manifold radius theoretical predictions corroborated numerical simulations using recently developed algorithms compute maximum margin solutions manifold dichotomies theory extensions provide powerful rich framework applying statistical mechanics linear classification data arising perceptual neuronal responses well artificial deep networks trained object recognition tasks pacs numbers introduction fundamental cognitive task performed animals humans invariant perception objects requiring nervous system discriminate different objects despite substantial variability objects physical features example vision mammalian brain able recognize objects despite variations orientation position pose lighting background impressive robustness physical changes limited vision examples include speech processing requires detection phonemes despite variability acoustic signals associated individual phonemes discrimination odors presence variability odor concentrations sensory systems organized hierarchies consisting multiple layers transforming sensory signals sequence distinct neural representations studies high level sensory systems inferotemporal cortex vision auditory cortex audition piriform cortex olfaction reveal even late sensory stages exhibit significant sensitivity neuronal responses physical variables suggests sensory hierarchies generate representations objects although entirely invariant changes physical features still readily decoded ant manner downstream system hypothesis formalized notion untangling perceptual manifolds viewpoint underlies number studies object recognition deep neural networks artificial intelligence conceptualize perceptual manifolds consider set neurons responding specific sensory signal associated object shown fig neural population response stimulus vector changes physical parameters input stimulus change object identity modulate neural state vector set state vectors corresponding 
responses possible stimuli associated object viewed manifold neural state space geometrical perspective object recognition equivalent task discriminating manifolds different objects presumably signals propagate one processing stage next sensory hierarchy geometry manifolds reformatted become untangled namely easily separated biologically plausible decoder paper model decoder simple single layer network perceptron ask geometrical properties perceptual manifolds influence ability separated linear classifier bility general finite dimensional manifolds summary key results follows figure perceptual manifolds neural state space firing rates neurons responding images dog shown various orientations scales response particular orientation scale characterized population response population responses images dog form continuous manifold representing complete set invariances neural activity space object images corresponding cat various poses represented manifolds vector space linear separability previously studied context classification points perceptron using combinatorics statistical mechanics gardner statistical mechanics theory extremely important provides accurate estimates perceptron capacity beyond function counting incorporating robustness measures robustness linear classifier quantified margin measures distance separating hyperplane closest point maximizing margin classifier critical objective machine learning providing support vector machines svm good generalization performance guarantees theories focus separating finite set points underlying geometrical structure applicable problem manifold classification deals separating infinite number points geometrically organized manifolds paper addresses important question quantify capacity perceptron dichotomies input patterns described manifolds earlier paper presented analysis classification manifolds extremely simple geometry namely balls however previous results limited applicability neural manifolds arising realistic physical variations objects exhibit much complicated geometries statistical mechanics deal classification manifolds complex geometry specific geometric properties determine separability manifolds paper develop theory linear begin introducing mathematical model general manifolds binary classification sec formalism allows generate generic bounds manifold separability capacity limits small manifold sizes classification isolated points large sizes classification entire affine subspaces bounds highlight fact large ambient dimension maximal number separable manifolds proportional even though consists infinite number points setting stage statistical mechanical evaluation maximal using replica theory derive mean field equations capacity linear separation finite dimensional manifolds sec iii statistical properties optimal separating weight vector solution given form self consistent kkt conditions involving manifold anchor point anchor point representative support vector manifold position anchor point manifold changes orientations manifolds varied ensuing statistics distribution anchor points play key role theory optimal separating plane intersects fraction manifolds supporting manifolds theory categorizes dimension span intersecting sets points edges faces full manifolds relation position anchor points manifolds convex hulls mean field theory motivates new definition manifold geometry based measure induced statistics anchor points particular define manifold anchor radius dimension respectively quantities relevant since capacity general 
manifolds well approximated capacity balls radii dimensions interestingly show limit small manifolds anchor point statistics dominated points boundary manifolds minimal overlap gaussian random vectors resultant gaussian radius dimension related gaussian convex bodies sec beyond understanding fundamental limits classification capacity geometric measures offer new quantitative tools assessing perceptual manifolds reformatted brain artificial systems apply general theory three examples representing distinct prototypical manifold classes one class consists manifolds strictly smooth convex hulls contain facets exemplified ellipsoids another class convex polytopes arise manifolds consists finite number data points exemplified ellipsoids finally ring manifolds represent intermediate class smooth nonconvex manifolds ring manifolds continuous nonlinear functions single intrinsic variable object orientation angle differences manifold types show clearly distinct patterns support dimensions however show share common trends size manifold increases capacity geometrical measures vary smoothly exhibiting smooth small radius dimension high capacity large radius dimension low capacity crossover occurs importantly many realistic cases size smaller crossover value manifold dimensionality substantially smaller computed naive second order statistics highlighting saliency significance measures anchor geometry figure model manifolds affine subspaces manifold embedded orthogonal translation vector affine space center manifold scale varied manifold shrinks point expands fill entire affine space finally treat important case classification manifolds imbalanced sparse labels commonly arise problems object recognition well known highly sparse labels classification capacity random points increases dramatically fraction minority labels analysis sparsely labeled manifolds highlights interplay manifold size sparsity particular shows sparsity enhances capacity gaussian manifold radius notably large regime parameters sparsely labeled manifolds approximately described universal capacity function equivalent sparsely labeled balls radii dimensions demonstrated numerical evaluations sec conversely capacity low close dimensionality manifold affine subspace even extremely small theory provides first time quantitative qualitative predictions perceptron classification realistic data structures however application real data may require extensions theory discussed section vii together theory makes important contribution development statistical mechanical theories neural information processing realistic conditions model manifolds manifolds affine subspaces model set perceptual manifolds corresponding perceptual object manifold consists compact subset affine subspace affine dimension point manifold parameterized set orthonormal bases dimensional linear subspace containing components represents coordinates manifold point within subspace constrained bold notation indicates set vectors whereas arrow notation set defines indicates vector shape manifolds encapsulates affine constraint simplicity first assume manifolds geometry coordinate set manifolds extensions consider heterogeneous geometries provided sec study separability manifolds two classes denoted binary labels linear hyperplane passing origin hyperplane described weight vector normalized kwk hyperplane correctly separates manifolds margin satisfies since linear separability convex problem separating manifolds equivalent separating convex hulls conv conv conv position affine 
subspace relative origin defined via translation vector closest origin orthogonal translation vector perpendicular affine displacement vectors points affine subspace equal projections fig assume simplicity normalized investigate separability properties manifolds helpful consider scaling manifold overall scale factor without changing shape define scalar scaling relative center manifold converges point hand manifold spans entire affine subspace manifold symmetric ellipsoid natural choice center later provide appropriate definition center point general asymmetric manifolds general translation vector need coincide shown fig center however also discuss later special case centered manifolds translation vector center coincide bounds linear separability manifolds dichotomies input points zero margin number dichotomies separated linear hyperplane origin given binomial coefficient cnk zero otherwise result holds input vectors obey mild condition vectors general position namely subsets input vectors size linearly independent large probability dichotomy linearly separable depends upon exhibits sharp transition critical ratio value aware comprehensive extension cover counting theorem general manifolds nevertheless provide lower upper bounds number linearly realizable dichotomies considering limit following general conditions first limit linear separability manifolds becomes equivalent separability centers leads requirement centers manifolds general position second consider conditions manifolds linearly separable manifolds span complete affine subspaces weight vector consistently assign label points affine subspace must orthogonal displacement vectors affine subspace hence realize dichotomy manifolds weight vector must lie null space dimension dtot rank union affine displacement vectors basis vectors general position dtot min affine subspaces separable required projections orthogonal translation vectors need also separable dtot dimensional null space general conditions number dichotomies affine subspaces linearly separated related number dichotomies finite set points via relationship conclude ability linearly separate affine subspaces exhibits transition always separable never large separable critical ratio see supplementary materials sec general manifolds finite size number dichotomies linearly separable lower bounded upper bounded introduce notation denote maximal load randomly labeled manifolds linearly separable margin high probability therefore considerations follows critical load zero margin bounded bounds highlight fact large limit maximal number separable manifolds proportional even though consists infinite number points sets stage statistical mechanical evaluation maximal number manifolds described following section iii statistical mechanical theory order make theoretical progress beyond bounds need make additional statistical assumptions manifold spaces labels specifically assume individual components drawn independently identical gaussian distributions zero mean variance binary labels randomly assigned manifold equal probabilities study thermodynamic limit finite load addition manifold geometries fied set particular affine dimension held fixed thermodynamic limit assumptions bounds extended linear separability general manifolds finite margin characterized reciprocal critical load ratio maximum load separation random points margin given gardner theory many interwith gaussian measure esting cases affine dimension large gap overly loose hence important derive estimate capacity manifolds finite 
sizes evaluate dependence capacity nature solution geometrical properties manifolds shown shown appendix prove general form inverse capacity exact thermodynamic limit average random dimensional vectors whose components normally distributed components vector represent signed fields induced solution vector basis vectors manifold gaussian vector represents part variability due quenched variability manifolds basis vectors labels explained detail inequality constraints written equivalently constraint point therefore fold minimal projection consider support function concave used write min constraint min easily mapped note definition conventional convex support function defined via max operation kkt conditions gain insight nature maximum margin solution useful consider kkt conditions convex optimization kkt tions characterize unique solution given mean field theory manifold separation capacity following gardner framework compute statistical average log volume space solutions case written kwk heaviside function enforce margin constraints along delta function sure kwk following focus properties maximum margin solution namely solution largest load fixed margin equivalently solution margin maximized given vector support function point convex suphull minimal overlap port function differentiable subgradient unique equivalent gradient support function arg min since support function positively homogeneous thus depends unit vector values differentiable subgradient unique defined uniquely particular subgradient obeys kkt conditions latter case conv may reside see capacity written terms scale factor either zero positive corresponding positive zero whether positive meaning case multiplying yields thus obeys self consistent equation function max equation follows yields kkt expression capacity see eqs kkt relations equations statistics mean field theory derives appropriate statistics equations fields single manifold see consider projecting solution vector onto affine subspace one manifolds say fine dimensional vector signed fields solution affine basis vectors manifold represents reduces contribution manifolds since subspaces randomly oriented contribution well described random gaussian vector finally self consistency requires fixed represents point minimal overlap residing margin hyperplane otherwise contribute max margin solution thus decomposition field induced specific manifold decomposed contribution induced specific manifold along contributions coming manifolds self consistent equations well relating gaussian statistics naturally follow requirement represents support vector mean field interpretation kkt relations kkt relations nice interpretation within framework mean field theory maximum margin solution vector always written linear combination set support vectors although infinite numbers input points manifold solution vector decomposed vectors one per manifold conv vector convex hull manifold large limit vectors uncorrelated hence squaring equation yields kwk coordinates affine subspace manifold see anchor points manifold supports vectors contributing solution play key role theory denote equivalently affine subspace components manifold anchor points particular configuration manifolds manifolds could replaced equivalent set anchor points yield maximum margin solution important stress however individual anchor point determined configuration associated manifold also random orientations manifolds fixed manifold location anchor point vary relative configurations manifolds variation captured mean field 
theory dependence anchor point random gaussian vector particular position anchor point convex hull manifold reflects nature relation manifold margin planes general fraction manifolds intersect margin hyperplanes manifolds support manifolds system nature support varies characterized dimension span intersecting set conv cone see fig shifted polar cone denoted defined convex set points given cone zero figure conic decomposition margin margin given random minimized found gaussian polar cone cone anchor point projection convex hull margin planes support manifolds call touching manifolds intersect margin hyperplane anchor point support dimension anchor point boundary extreme fully supporting manifolds completely reside margin hyperplane characterized case parallel translation vector hence points support vectors anchor point case overlap unique point interior conv obeys self consistent equation namely balances contribution manifolds zero orthogonal components case smooth convex hulls strongly convex manifold support configurations exist types manifolds also partially supporting manifolds whose convex hull intersection margin hyperplanes consist dimensional faces associated anchor points reside inside intersecting face instance implies lies edge whereas implies lies planar convex hull determining dimension support structure arises various explained conic decomposition kkt conditions also interpreted terms conic decomposition generalizes notion decomposition vectors onto linear subspaces null spaces via euclidean projection convex cone manifold defined cone illustrated fig simply conventional polar cone equation interpreted decomposition sum two euclidean nent vectors one component projection onto component located cone moreau decomposition theorem states two components perpendicular components need perpendicular obey position vector relation cones cone rise qualitatively different expressions contributions solution weight vector inverse capacity correspond different support dimensions mentioned particular supwhen lies inside port dimension hand manifold fully lies inside cone supporting numerical solution mean field equations solution mean field equations consists two stages first computed particular relevant contributions inverse capacity averaged gaussian distribution simple geometries ellipsoids first step may solved analytically however complicated geometries steps need performed numerically given first step involves determining solving quadratic programming problem qsip manifold may contain infinitely many points novel cutting plane method developed efficiently solve qsip problem see sec expectations computed sampling gaussian dimensions taking appropriate averages similar procedures mean field methods relevant quantities corresponding capacity quite concentrated converge quickly relatively samples following sections also show mean field theory compares computer simulations numerically solve maximum margin solution realizations manifolds given variety manifold geometries finding maximum margin solution challenging standard methods solving svm problems limited finite number input points recently developed efficient algorithm finding maximum margin solution manifold classification used method present work see sec manifold geometry longitudinal intrinsic coordinates section address capacity separate set manifolds related geometry particular shape within affine subspace since projections points manifold onto translation vector convenient parameterize affine basis vectors coordinates dimensional 
vector representation parameterization conof venient since constrains manifold variability coordinate first components longitudinal variable measuring distance manifold affine subspace origin write dimensional vectors lower case vectors denote vectors also refer intrinsic vector anchor point notation capacity written clear form vectors affine subspace obey capacity reduces gardner result since support function expressed min ttouch contributions capacity occur outside interior regime case thus support dimensions solution active satisfying equality condition outside interior regime fully supporting manifolds sufficiently negative anchor point obeys kkt equations resides interior conv fixed must negative enough tfs conv tfs arg max guaranteeing conv contribution regime capacity kkt relations written minimizes overlap resultant equation agrees previous section fix random vector consider qualitative change anchor point decreases interior manifolds sufficiently positive manifold interior margin plane corresponding support dimension although contributing inverse capacity solution vector useful associate anchor points manifolds defined closest point manifold margin plane arg since definition ensures continuity anchor point interior regime holds equivalently ttouch types supports using coordinates elucidate conditions different types support manifolds defined see finally values tfs ttouch manifolds partially supporting support dimension examples different supporting regimes illustrated figure effects size margin discuss effect changing manifold size imposed margin capacity geometry described change manifold size corresponds scaling every scalar small size manifold shrinks point center whose capacity reduces isolated points however case capacity may affected manifold structure even see section nevertheless underlying support structure simple small manifolds two support configurations manifold interior manifold becomes touching support dimension case small magnitude thus cases close gaussian vector probability configurations vanishes large size large size limit separating manifolds equivalent separating affine subspaces show appendix two rmain support structures ity manifolds fully supporting namely underlying affine subspaces parallel margin plane regime contributes inverse capacity amount regimes touching partially supporting angle affine subspace margin plane almost zero contribute amount inverse capacity combining two contributions obtain large sizes consistent large margin fixed implies larger increases probability supporting regimes increasing also shrinks magnitude according hence capacity becomes similar random points corresponding capacity given independent manifold geometry manifold centers theory manifold classification described sec iii require notion manifold center however understanding scaling manifold sizes parameter affects capacity center points manifolds scaled need defined many geometries center point symmetry ellipsoid general manifolds natural definition would center mass anchor points averaging gaussian measure adopt simpler definition center provided steiner point convex bodies expectation gaussian measure definition coincides center mass anchor points manifold size small furthermore natural define geometric properties interior touching fully supporting interior touching partially supporting fully supporting figure determining anchor points gaussian distributed vector onto convex hull manifold denoted show vector change decreases strictly convex manifold sufficiently negative 
vector obeys constraint hence configuration corresponds interior manifold support dimension intermediate values ttouch tfs violates constraints point boundary manifold maximizes projection manifold vector closest obeys finally larger values point interior manifold direction fully supporting square manifold interior touching regimes vertex square fully supporting regime anchor point interior collinear also partially supporting regime slightly tfs regime perpendicular one edges resides edge corresponding manifolds whose intersection margin planes edges manifolds terms centered manifolds manifolds shifted within affine subspace center orthogonal translation vector coincide means lengths defined relative distance centers origin intrinsic vectors give offset relative manifold center manifold anchor geometry capacity equation motivates defining geometrical measures manifolds call manifold anchor geometry manifold anchor geometry based statistics anchor points induced gaussian random vector relevant capacity statistics sufficient determining classification properties supporting structures associated maximum margin solution accordingly define manifold anchor radius dimension manifold anchor radius denoted defined mean squared length manifold anchor dimension given unit vector direction anchor dimension measures angular spread corresponding anchor point dimensions note manifold dimension obeys whenever ambiguity call manifold radius dimension respectively geometric descriptors offer rich description manifold properties relevant classification since depend general quantities averaged also reason manifold anchor geometry also depends upon imposed margin gaussian geometry seen small manifold sizes anchor points approximated conditions geometry simplifies shown fig gaussian vector point manifold first touches hyperplane normal translated infinity towards manifold aside set measure zero touching point unique point boundary conv procedure similar used define well known gaussian notation equals note small sizes touching point depend statistics determined shape conv relative center motivates defining simpler manifold gaussian geometry denoted subscript highlights dependence dimensional gaussian measure gaussian radius denoted measures mean square amplitude gaussian anchor point expectation gaussian gaussian dimension defined unit vector direction measures total variance manifold measures angular spread manifold statistics angle gaussian anchor point note figure gaussian anchor points mapping points showing relation point manifold touches hyperplane orthogonal manifolds shown circle ellipsoid polytope manifold values measure zero exactly perpendicular edge lie along edge otherwise coincides vertex polytope cases interior convex hulls otherwise restricted boundary important note even limit geometrical definitions equivalent conventional geometrical measures longest chord second order statistics induced uniform measure boundary conv special case ddimensional balls radius point boundary ball direction however general manifolds much smaller manifold affine dimension illustrated examples later recapitulate essential difference gaussian geometry full manifold anchor geometry gaussian case radius intrinsic property shape manifold affine subspace invariant changing distance origin thus scaling manifold global scale factor defined results scaling factor likewise dimensionality invariant global scaling manifold size contrast anchor geometry obey invariance larger manifolds reason anchor point depends longitudinal 
degrees freedom namely size manifold relative distance center hence need scale linearly also depend thus anchor geometry viewed describing general relationship signal center distance noise manifold variability classification capacity also note manifold anchor geometry automatically accounts rich support structure described section particular decreases statistics anchor points change concentrated boundary conv interior additionally manifolds strictly convex intermediate values anchor statistics become concentrated facets convex hull corresponding partially supported manifolds illustrate difference two geometries ellipsoid distribution theta manifold anchor gaussian gaussian gaussian gaussian figure distribution norm manifold anchor vectors ellipsoids distribution ball gaussian geometry peaked blue probability manifold anchor geometry red ellipsoids radii distribution gaussian geometry blue anchor geometry red corresponding distribution fig two simple examples ball ellipse cases consider distribution angle ball radius vectors parallel angle always zero manifold anchor geometry may lie inside ball fully supporting region thus distribution consists mixture delta function corresponding interior touching regions smoothly varying distribution corresponding fully supporting region fig also shows corresponding distributions two dimensional ellipsoid major minor radius gaussian geometry distribution finite support whereas manifold anchor geometry support also since need parallel distribution angle varies zero manifold anchor geometry concentrated near zero due contributions fully supporting regime section show gaussian geometry becomes relevant even larger manifolds labels highly imbalanced sparse classification geometry classification high dimensional manifolds general linear classification expressed depends high order statistics anchor vectors surprisingly analysis shows high dimensional manifolds classification capacity described terms statistics alone particularly relevant expect many applications affine dimension manifolds large specifically define manifolds manifolds manifold dimension large still finite thermodynamic limit practice find sufficient analysis elucidates interplay size dimension namely small needs high dimensional manifolds substantial classification capacity regime mean field equations simplify due self averaging terms involving sums components two quantity appears capacity approximated introduce manifold margin combined obtain capacity random points gain insight result note effective margin center mean distance point closest margin plane roughly mean denominator indicates margin needs scaled input norm appendix show also written namely classification capacity general high dimensional manifold well approximated balls dimension radius scaling regime implies obtain finite capacity regime effective margin needs order unity requires radius small scaling large scaling regime calculation capacity geometric properties particularly simple argued radius small components small hence gaussian statistics geometry suffice thus replace eqs respectively note scaling regime factor proportional next order correction overall capacity sincep small notably margin regime equal half gaussian mean width convex bodies support structure since manifold size small significant contributions arise interior touching supports beyond scaling regime small anchor geometry adequately described gaussian statistics case manifold margin large reduces used large margins assumed strictly convex high dimensional 
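A small numerical sketch of the high-dimensional approximation discussed here: the classical Gardner capacity of random points, whose inverse is the Gaussian integral of (t + kappa)^2 over t > -kappa, evaluated at the effective margin (kappa + R*sqrt(D)) / sqrt(1 + R^2). The form of alpha_0 is the standard point-capacity result; reading the effective-margin substitution exactly as written below is an interpretation of this passage, and the printed numbers are only illustrative.

import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def alpha0(kappa):
    """Classical Gardner capacity of random points at margin kappa:
    1/alpha0 = integral_{-kappa}^{inf} Dt (t + kappa)^2."""
    inv, _ = quad(lambda t: (t + kappa) ** 2 * norm.pdf(t), -kappa, np.inf)
    return 1.0 / inv

def alpha_manifold_highD(kappa, R, D):
    """High-dimensional manifolds treated as points with the effective margin
    (kappa + R*sqrt(D)) / sqrt(1 + R^2), as discussed in the text."""
    return alpha0((kappa + R * np.sqrt(D)) / np.sqrt(1.0 + R ** 2))

print(f"alpha0(0) = {alpha0(0.0):.3f}   # classical value 2")
for R in (0.1, 0.5, 2.0):
    print(f"R = {R:3.1f}, D = 50:  alpha_M ~ {alpha_manifold_highD(0.0, R, 50):.3f}")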
manifolds touching regime contributes significantly geometry hence capacity manifolds strictly convex partially supporting solutions also contribute capacity finally large fully supporting regimes contribute geometry case manifold anchor dimension approaches affine sion eqs reduce expected examples strictly convex manifolds ellipsoids family ellipsoids examples manifolds strictly convex strictly convex manifold smooth boundaries contain corners edges flats see appendix thus description anchor geometry support structures relatively simple reason anchor vectors correspond either interior touching fully supporting partial support possible ellipsoid geometry solved analytically nevertheless less symmetry sphere exhibits salient properties manifold geometry nontrivial dimensionality measure anchor points assume ellipsoids centered relative symmetry centers described sec pdcan parameterized set points components ellipsoid centers gaussian distributed zero mean variance orthonormal large limit radii represent principal radii ellipsoids relative center anchor points computed explicitly details appendix corresponding three regimes interior occurs ttouch ttouch resulting zero contribution inverse capacity touching regime holds ttouch tfs tfs finally fully supporting regime occurs tfs full expression capacity ellipsoids given appendix section focus interesting cases ellipsoids ellipsoids instructive apply general analysis high dimensional manifolds ellipsoids distinguish different size regimes assuming radii ellipsoid scaled global factor high dimensional regime due boundaries touching fully supporting transitions approximated ttouch tfs independent long large see tfs probability fully supporting vanishes discounting fully supporting regime anchor vectors given normalization factor see appendix capacity ellipsoids determined via manifold anchor radius anchor dimension scaling regime scaling regime radii small radius dimension equivalent gaussian geometry ellipso dimension anchor covariance matrix also compute covariance matrix anchor points matrix diagonal principal directions ellipsoid eigenvalues interesting compare well known measure effective dimension covariance matrix participation ratio given spectrum eigenvalues covariance matrix notation define generalized participation conventional ratio participation ratio uses whereas uses note invariant scaling ellipsoid global factor reflecting role fixed centers seen invariant scaling radii expected interesting compare gaussian geometric parameters statistics induced uniform measure surface ellipsoid case covariance matrix eigenvalues total variance equal contrast gaussian geometry eigenvalues covariance matrix proportional corresponding expression squared radius result induced measure surface ellipse anchor geometry even gaussian limit beyond scaling regime high dimensional ellipsoids become touching manifolds since ttouch tfs capacity small given eqs finally effective margin given scales global scaling factor although manifolds touching angle relative margin plane near zero figure bimodal ellipsoids ellipsoidal radii classification capacity function scaling factor blue lines full mean field theory capacity black dashed approximation capacity given equivalent ball circles simulation capacity averaged repetitions measured dichotomies per repetition manifold dimension function manifold radius relative fraction manifolds support dimension different values interior touching fully supporting small manifolds interior touching manifolds touching regime 
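The participation ratio mentioned here is simple to compute from any radius or eigenvalue spectrum; the short sketch below uses an illustrative bimodal spectrum (not the values of the figure) and shows the invariance to a global rescaling noted in the text.

import numpy as np

def participation_ratio(eigvals):
    """Effective dimension of a spectrum: PR = (sum_i l_i)^2 / sum_i l_i^2."""
    lam = np.asarray(eigvals, dtype=float)
    return lam.sum() ** 2 / np.sum(lam ** 2)

# Illustrative bimodal radius spectrum: a few large radii, many small ones.
radii = np.array([1.0] * 10 + [0.1] * 190)
lam = radii ** 2                                        # covariance eigenvalues
print("PR of the spectrum          :", round(participation_ratio(lam), 2))
print("PR after a global rescaling :", round(participation_ratio(4.0 * lam), 2))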
fraction fully qpsupporting manifolds predicted larger fully supporting transition tfs becomes order one probability fully supporting significant fig illustrate behavior high ellipsoids using ellipsoids bimodal distribution principal radii fig properties shown function overall scale fig shows numerical simulations capacity full mean field solution well spherical high dimensional approximation calculations good agreement showing accuracy mean field theory spherical approximation seen system scaling regime regime manifold dimension constant equals predicted dimension figure ellipsoids radii computed realistic image data svd spectrum taken readout layer googlenet class imagenet images radii scaled factor rri classification capacity function blue lines full mean field theory capacity black dashed approximation capacity ball theory ellipsoids circles simulation capacity averaged repetitions measured random dichotomies per repetition manifold dimension function manifold radius relative scaling factor function ticipation ratio manifold radius linear expected ratio close unity indicating scaling regime system dominated largest radii effective margin larger unity system becomes increasingly affected full affine dimensionality ellipsoid seen marked increase dimension well corresponding decrease rrm larger approaches fig shows distributions support dimension scaling regime interior touching regimes probability close fully supporting regime negligible increases beyond scaling regime interior probability decreases solution almost exclusively touching regime high values fully supporting solution gains substantial probability note capacity decreases approximately value substantial fraction solutions fully supporting case touching ellipsoids small angle margin plane assumed manifold affine subspace dimension finite limit large ambient dimension realistic data likely data manifolds technically full rank raising question whether mean field theory still valid cases investigate scenario computing capacity ellipsoids containing realistic distribution radii taken examples class images imagenet dataset analyzed svd spectrum representations images last layer deep convolutional network googlenet computed radii shown fig yield value order explore properties manifolds scaled radii overall factor analysis decay distribution radii gaussian dimension ellipsoid much smaller implying small manifolds effectively low dimensional geometry dominated small number radii increases becomes larger solution leaves scaling regime resulting rapid increase rapid falloff capacity shown fig finally approaching lower bound capacity expected agreement numerical simulations mean field estimates capacity illustrates relevance theory realistic data manifolds full rank convex polytopes ellipsoids family ellipsoids represent manifolds smooth strictly convex hand types manifolds whose convex hulls strictly convex section consider ddimensional ellipsoids prototypical convex polytopes formed convex hulls finite numbers points ellipsoid parameterized radii specified convex set manifold centered consists convex polytope finite number vertices vectors specify principal axes ellipsoids simplicity consider case balls radii equal concentrate cases balls case balls briefly described analytical expression capacity complex due presence contributions types supports address important aspects high dimensional solution balls new balls radius small scaling regime contributing solution touching solution increases solutions values occur support face convex 
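For completeness, one plausible way to obtain ellipsoidal radii from real representations, in the spirit of the GoogLeNet example: centre the feature vectors of a class and read the radii off the SVD spectrum. The 1/sqrt(n) normalisation below is an assumption (the exact convention is not specified here), and the synthetic features merely stand in for network activations of one class.

import numpy as np

def ellipsoid_radii_from_point_cloud(X):
    """Ellipsoidal radii of one class manifold from the SVD of its centred
    feature matrix X (n_samples x n_features).  Radii are taken proportional
    to the singular values, normalised here by sqrt(n_samples) so that they
    estimate the per-direction standard deviation of the class."""
    Xc = X - X.mean(axis=0, keepdims=True)              # centre the manifold
    s = np.linalg.svd(Xc, compute_uv=False)             # spread along principal axes
    return s / np.sqrt(X.shape[0])

rng = np.random.default_rng(0)
# Stand-in for deep-network features of a single class: a few strongly varying
# directions plus many weak ones.
X = np.concatenate([2.0 * rng.normal(size=(200, 5)),
                    0.2 * rng.normal(size=(200, 95))], axis=1)
radii = ellipsoid_radii_from_point_cloud(X)
print("largest radii :", np.round(radii[:5], 2))
print("smallest radii:", np.round(radii[-5:], 2))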
polytope dimension increases probability distribution solution shifts larger values finally large two regimes dominate fully supporting probability partially supporting probability illustrate behavior balls radius affine dimension fig shows linear classification capacity function manifold approaches capacity isolated points numerical simulations demonstrate despite different geometry capacity polytope similar ball radius dimension scaling regime see much smaller despite fact polytope equal small various faces eventually interior polytope contribute anchor geometry see log scaling regime terms support structures scaling regime manifolds either interior touching intermediate sizes support dimension peaked intermediate value finally large manifolds polytope manifolds nearly fully supporting balls balls balls fraction words vertex polytope corresponding component largest magnitude see fig components gaussian random variables large maximum component tmax concentrated around log hence log much smaller result consistent fact gaussian mean width ball scales log since points norm effective margin given log order unity scaling regime regime given simple relation log fraction sign otherwise fraction balls scaling regime scaling regime case write solution subgradient embedding dimension support dimension embedding dimension support dimension embedding dimension supporting dimension support dimension figure separability balls linear classification capacity balls function radius blue mft solution black dashed spherical approximation circle full numerical simulations inset illustration ball manifold radius relative actual radius manifold dimension function small limit approximately log large close showing solution orthogonal manifolds sizes large distribution support dimensions manifolds either interior touching support dimension peaked distribution manifolds close fully supporting smooth nonconvex manifolds ring manifolds many neuroscience experiments measure responses neuronal populations continuously varying stimulus one small number degrees freedom prototypical example response neurons visual cortical areas orientation direction movement object population neurons respond object identity well continuous physical variation result set smooth manifolds parameterized single variable denoted describing continuous curves since general neural responses linear curve spans one linear dimension smooth curve convex endowed complex convex hull thus interesting consider consequences theory separability smooth curves simplest example considered case corresponds periodic angular variable orientation image call resulting curve ring manifold model neuronal responses smooth periodic functions parameterized decomposing neuronal responsesp fourier modes object represents mean population response object different components correspond different fourier components parameters preferred orientation angles corresponding neurons assumed evenly distributed simplicity assume orientation tuning neurons shape symmetric around preferred angle statistical assumptions analysis assume different manifolds randomly positioned oriented respect others ring manifold model implies mean responses independent random gaussian vectors also preferred orientation angles uncorrelated definition vectors obey norp malization thus object ring manifold closed smooth curve residing surface sphere radius simplest case ring manifold equivalent circle two dimensions however larger manifold convex convex hull composed faces varying dimensions fig 
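The sqrt(2 log D) scaling quoted for l1 balls is easy to verify numerically, since the support function of an l1 ball of radius R in direction t is R times the largest-magnitude component of t; a few Monte-Carlo lines suffice (sample sizes are illustrative):

import numpy as np

rng = np.random.default_rng(0)
for D in (10, 100, 1000, 10000):
    t = rng.normal(size=(500, D))
    # The support function of an l1 ball of radius R in direction t is
    # R * max_i |t_i|: the touching point is the vertex whose coordinate has
    # the largest-magnitude Gaussian component.
    mean_max = np.abs(t).max(axis=1).mean()
    print(f"D = {D:5d}:  <max_i |t_i|> = {mean_max:5.2f}   sqrt(2 ln D) = {np.sqrt(2 * np.log(D)):5.2f}")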
investigate geometrical properties manifolds relevant classification function overall scale factor simplicity chosen striking feature small dimension scaling regime scaling roughly log logarithmic dependence similar ball polytopes increases increases dramatically log similarity ring manifold convex polytope also seen support dimension manifolds support faces dimension seen implying presence partially supporting solutions interestingly excluded indicating maximal face dimension convex hull face convex hull set points point resides subspace spanned pair real imaginary fourier harmonics ring manifolds closely related trigonometric moment curve whose convex hull geometrical properties extensively studied conclusion smoothness convex hulls becomes apparent distinct patterns support magnitude fourier component neural responses determined projecting onto basis cos sin cos sin uniform gular response support dimension support dimension support dimension figure linear classification ring folds uniform classification capacity function test samples details numerical simulations sec blue mean field theory black dashed spherical approximation black circles numerical simulations inset illustration ring manifold manifold dimension shows large limit showing orthogonality solution manifold radius relative scaling factor rrm function fact rrm becomes small implies manifolds fully supporting hyperplane showing small radius structure manifold dimension grows affine dimension log small scaling regime distribution support dimensions manifolds either interior touching support dimension peaked distribution truncated support dimensions fully supporting dimensions compare figs however see manifolds larger share common trends size manifold increases capacity geometry vary smoothly exhibiting smooth high capacity low radius dimension low capacity large radius dimension crossover occurs also examples demonstrate many cases size smaller crossover value manifold dimensionality substantially smaller expected naive second order statistics highlighting saliency significance anchor geometry manifolds sparse labels far assumed number manifolds positive labels approximately equal number manifolds negative labels section consider case two classes unbalanced number manifolds far less manifolds opposite scenario equivalent special case problem classification manifolds heterogenous statistics manifolds different geometries label statistics begin addressing capacity mixtures manifolds focus sparsely labeled manifolds mixtures manifold geometries theory manifold classification readily extended heterogeneous ensemble manifolds consisting distinct classes replica theory shape manifolds appear free energy term see appendix mixture statistics combined free energy given simply averaging individual free energy terms class recall free energy term determines capacity shape giving individual inverse critical load inverse capacity heterogeneous mixture average fractional proportions different manifold classes remarkably simple generic theoretical result enables analyzing diverse manifold classification problems consisting mixtures manifolds varying dimensions shapes sizes adequate classes differ geometry independent assigned labels general classes may differ label statistics sparse case studied geometry correlated labels instance positively labelled manifolds may consist one geometry negatively labelled manifolds may different geometry structural differences two classes affect capacity linear classification linear classifier take advantage 
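A minimal construction of the ring manifolds described here: 2 coordinates per Fourier mode, each mode contributing a cosine and a sine component, with amplitudes normalised so that every point of the curve has unit norm (a "uniform" spectrum when all amplitudes are equal). Parameter choices are illustrative.

import numpy as np

def ring_manifold(n_modes, amplitudes=None, n_theta=360):
    """Points s(theta) of a D = 2*n_modes dimensional ring manifold, with
    intrinsic coordinates a_n*cos(n*theta) and a_n*sin(n*theta) for each
    Fourier mode n, normalised so that ||s(theta)|| = 1 for every theta."""
    a = np.ones(n_modes) if amplitudes is None else np.asarray(amplitudes, float)
    a = a / np.linalg.norm(a)                           # sum_n a_n^2 = 1
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    cols = []
    for n in range(1, n_modes + 1):
        cols.append(a[n - 1] * np.cos(n * theta))
        cols.append(a[n - 1] * np.sin(n * theta))
    return theta, np.stack(cols, axis=1)                # shape (n_theta, 2*n_modes)

theta, S = ring_manifold(n_modes=5)                     # "uniform" Fourier spectrum
print("intrinsic dimension:", S.shape[1])
print("||s(theta)|| for a few angles:", np.round(np.linalg.norm(S, axis=1)[:3], 6))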
correlations adding bias previously assumed optimal separating hyperplane passes origin reasonable two classes statistically however statistical differences label assignments two classes replaced bias chosen maximize mixture capacity effect optimizing bias discussed detail next section sparse labels scenarios sec manifolds sparse labels define sparsity parameter fraction manifolds corresponds balanced labels theory classification finite set random points known sparse labels drastically increase capacity section investigate sparsity manifold labels improves manifold classification capacity separating hyperplane constrained origin distribution inputs symmetric around origin labeling immaterial capacity thus effect sparse labels closely tied bias thus consider inequality constraints form define capacity general manifolds label sparsity margin bias next observe bias acts positive contribution margin population negative contribution population thus dependence expressed classification capacity zero bias hence equivalent capacity manifolds note similar mixtures manifolds actual capacity sparse labels given optimizing expression respect maxb following consider simplicity effect sparsity zero margin importantly large effect manifold geometry sparsely labeled manifolds much larger labels labels capacity ranges sparse manifolds upper bound much larger indeed manifolds expected capacity increases upon decreasing similar uncorrelated points potential increase capacity sparse labels however strongly constrained manifold size since manifolds large solution orthogonal manifold directions thus geometry manifolds plays important role controlling effect sparse labels capacity aspects already seen case sparsely labeled balls appendix summarize main results general manifolds sparsity size complex interplay label sparsity manifold size analysis yields three qualitatively different regimes low gaussian radius manifolds small extent manifolds noticeable dimension high similar previous analysis high dimensional manifolds find sparse capacity equivalent capacity sparsely labeled random points effective margin given maxb gardner theory noted equation noticeable effect moderate sparsity negligible effect since bias large dominates margin moderate sizes case equivalence capacity points breaks remarkably find capacity general manifolds substantial size well approximated equivalent balls sparsity dimension radius equal gaussian dimension radius manifolds namely surprisingly unlike nonsparse approximation equivalence general manifold balls valid high dimensional manifolds sparse limit spherical approximation restricted large another interesting result relevant statistics given gaussian geometry even small reason small bias large case positively labeled manifolds large positive margin fully supporting giving contribution inverse capacity regardless detailed geometry hand negatively labeled manifolds large negative margin implying far separating plane interior small fraction touching support fully supporting configurations negligible probability hence overall geometry well approximated gaussian quantities scaling relationship sparsity size analysis capacity balls sparse labels shows retains simple form see appendix depends scaled sparsity reason scaling follows labels sparse dominant contribution inverse capacity comes minority class capacity large hand optimal value depends balance contributions classes scales linearly needs overcome local fields spheres thus combining yields general sparsely labeled manifolds scaled 
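The bias-optimised capacity for sparse labels can be evaluated directly in the point limit, which is the quantity the manifold result reduces to for vanishing size: the inverse capacity is a label-weighted mixture of the two classes' Gardner terms, with the bias entering the two effective margins with opposite signs, and the capacity is the maximum over the bias. The sign convention below is immaterial, the sketch ignores manifold extent, and the sampled f values are illustrative.

import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def inv_alpha0(kappa):
    """Inverse Gardner capacity of random points at margin kappa."""
    val, _ = quad(lambda t: (t - kappa) ** 2 * norm.pdf(t), -np.inf, kappa)
    return val

def alpha_sparse_points(f, kappa=0.0):
    """Capacity of random points with a fraction f of (say) positive labels,
    maximised over a bias b that adds to the margin of one class and subtracts
    from the margin of the other."""
    inv = lambda b: f * inv_alpha0(kappa + b) + (1.0 - f) * inv_alpha0(kappa - b)
    res = minimize_scalar(inv, bounds=(-20.0, 20.0), method="bounded")
    return 1.0 / res.fun, res.x

for f in (0.5, 0.1, 0.01, 0.001):
    a, b = alpha_sparse_points(f)
    print(f"f = {f:6.3f}:  alpha = {a:8.2f},  optimal |b| = {abs(b):.2f}")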
sparsity note defined scaled sparsity rather yield smoother small regime similarly define optimal scaled bias qualitatively function roughly proportional proportionality constant depends extreme limit log obtain sparse limit log note sufficiently small gain capacity due sparsity occurs even large manifolds long large regime finally sufficiently large increases order smaller capacity small value depends detailed geometry manifold particular capacity manifold proaches demonstrate remarkable predictions present fig capacity three classes sparsely labeled manifolds balls ellipsoids ring manifolds cases show results numerical simulations capacity full mean field solution spherical approximation across several orders magnitude sparsity size function scaled sparsity example good agreement three calculations range furthermore drop increasing similar cases except overall vertical shift due different similar effect dimension balls regime moderate radii results different fall universal curve depends predicted theory small capacity deviates scaling dominated alone similar sparsely labeled points curves deviate spherical approximation true capacity revealed simulations full mean field rapidly decreases saturates rather limit ring manifolds majority minority figure classification balls sparse labels capacity balls function red blue solid lines mean field theory dotted lines approximation interpolating eqs details sec classification general manifolds sparse labels capacity ellipsoids first components equal remaining components function varied circles numerical simulations lines mean field theory dotted lines spherical approximation capacity ring manifolds gaussian spectrum details fig shows capacity ring manifolds whose components gaussian exp spherical approximation finally note choice parameters section entering scaled sparsity significantly different simply average radius thus agreement theory illustrates important role gaussian geometry sparse case discussed high moderate sparse regimes large bias alters geometry two classes different ways illustrate important aspect show fig effect sparsity bias geometry ellipsoids studied fig show evolution majority minority classes increases note despite fact shape manifolds manifold anchor geometry depends class membership sparsity levels measures depend margin small minority class seen fig minority class manifolds close fully supporting due large positive margin also seen distributions support dimension shown hand majority class manifolds mostly interior regime increases geometrical statistics two classes become similar seen majority minority classes converge zero margin value large majority minority support dimension support dimension minor major ellipsoids balls minor major figure manifold configurations geometries classification ellipsoids sparse labels analyzed separately terms majority minority classes radii ellipsoids histogram support dimensions moderate sparsity blue minority red majority manifolds histogram support dimensions high sparsity blue minority red majority manifolds manifold dimension function varied blue minority red majority manifolds manifold radius relative scaling factor function blue minority red majority manifolds vii summary discussion summary developed statistical mechanical theory linear classification inputs organized perceptual manifolds points manifold share label notion perceptual manifolds critical variety contexts computational neuroscience modeling signal processing theory restricted manifolds smooth regular geometries applies 
compact subset affine subspace thus theory applicable manifolds arising variation neuronal responses continuously varying physical variable sampled set arising experimental measurements limited number stimuli theory describes capacity linear classifier separate dichotomy general manifolds given margin universal set mean field equations equations may solved analytically simple geometries complex geometries developed iterative algorithms solve equations algorithms efficient converge fast involve solving variables single manifold rather invoking simulations full system manifolds embedded theory provides first time quantitative qualitative predictions binary classification realistic data structures however application real data may require extensions theory presented important ones currently subject going work include correlations present work assumed directions affine subspaces different manifolds uncorrelated realistic situations expect see correlations manifold geometries mainly two types one correlations correlations harmful linear separability another correlated variability directions affine subspaces correlated centers positive correlations latter form beneficial separability extreme case manifolds share common affine subspace rank union subspaces dtot rather dtot solution weight vector need lie null space smaller subspace work needed extend present theory incorporate general correlations generalization performance studied separability manifolds known geometries many realistic problems information readily available samples reflecting natural variability input patterns provided samples used estimate underlying manifold model using manifold learning techniques train classifier based upon finite training set generalization error describes well classifier trained finite number samples would perform test points drawn manifolds would important extend theory calculate expected generalization error achieved maximum margin solution trained point cloud manifolds function size training set geometry underlying full manifolds unrealizable classification throughout present work assumed manifolds separable linear classifier realistic problems load may capacity linear separation alternatively neural noise may cause manifolds unbounded extent tails distribution overlapping separable zero error several ways handle issue supervised learning problems one possibility nonlinearly map unrealizable inputs higher dimensional feature space via network nonlinear kernel function classification performed zero error design multilayer networks could facilitated using manifold processing principles uncovered theory another possibility introduce optimization lem allowing small training error example using svm complementary slack variables procedures raise interesting theoretical challenges including understanding geometry manifolds change undergo nonlinear transformations well investigating statistical mechanics performance linear classifier manifolds slack variables concluding remarks statistical mechanical theory perceptron learning long provided basis understanding performance fundamental limitations single layer neural architectures kernel extensions however previous theory considered finite number random points underlying geometric structure could explain performance linear classifiers large possibly infinite number inputs organized distinct manifolds variability due changes physical parameters objects statistical mechanical theory presented work explain capacity limitations linear classification general manifolds 
used elucidate changes neural representations across hierarchical sensory systems believe application theory corollary extensions precipitate novel insights perceptual systems biological artificial efficiently code process sensory information would like thank uri cohen ryan adams leslie valiant david cox jim dicarlo doris tsao yoram burak helpful discussions work partially supported gatsby charitable foundation swartz foundation simons foundation scgb grant nih human frontier science program project lee also acknowledges support national science foundation army research laboratory office naval research air force office scientific research department transportation appendix replica theory manifold capacity section outline derivation mean field replica theory summarized eqs define capacity linear classification manifolds maximal load high probability solution exists given points manifolds assume components drawn independently gaussian distribution zero mean variance binary labels randomly assigned manifold equal probabilities consider therp modynamic limit finite note geometric margin defined tance solution hyperplane given kwk however distance depends scale input vectors correct scaling kxk margin thermodynamic limit since adopted normalization correct scaling margin evaluation solution volume following gardner replica framework first consider volume solution space define signed projections vector solution ith weight separability constraints written hence volume written variance yields exp exp thus integrating variables yields logdetq exp heaviside step function support function defined min volume defined depends quenched random variables well known order obtain typical behavior thermodynamic limit need average log carry using replica trick hlog refers average natural need evaluate using fourier representation delta functions obtain used fact manifolds contribute factor proceed making replica symmetric ansatz order parameter saddle point one obtains limit exp performing average gaussian distribution components zero mean logdetq log used notation dhi exp thus exponential term written exp using transformation obtain exp exp completing square exponential using exp limit obtain log exp log exp combining terms write last factor exp log log represents constraints volume due normalization order parameter combining contributions classification constraints contribute dyi exp written fields qti note qti represents quenched random component due randomness thermal component due variability within solution space order parameter calculated via capacity limit overlap solutions become unity volume shrinks zero convenient define study limit limit leading order hlog first term contribution second term comes average gaussian distribution dimensional vector log independent given replacing integrals saddle point yields min min first factors written exp gardner theory entropic term thermodynamic limit capacity log vanishes capacity general manifold margin given finally note mean squared annealed variability fields due entropy solutions vanishes capacity limit see thus equation represents quantity annealed variability times remains finite limit appendix strictly convex manifolds general evaluate capacity strictly convex manifolds starting expression general manifolds strictly convex manifold point line segment connecting two points belongs interior thus boundary manifold contain edges flats spanning dimension except entire manifold therefore exactly contributions inverse capacity obeys ttouch tfs integrand 
tributes tfs ifold fully embedded case integrand reduces summary capacity convex manifolds written ttouch tfs given eqs respectively arg balls case balls radius hence ttouch tfs thus reduces hat capacity balls ttouch obeying inequality ttouch yielding touching regime anchor point given substituting numerator equation yields chi probability density function reproducing results furthermore approximated capacity points margin increase details parameter determined ellipsoidal constraint ellipsoids anchor points support regimes ellipsoids support function computed explicitly follows vector non zero support function minimized vector occurs boundary ellipsoid obeys equality constraint rsii evaluate differentiate respect lagrange multiplier enforcing constraint yielding regime contribution capacity given touching regime holds ttouch tfs vanishes anchor point point boundary ellipsoid antiparallel substituting value yields tfs fully supporting regime tfs implying center well entire ellipsoid fully supporting max margin solution case anchor point antiparallel interior point contribution capacity appendix limit large manifolds dimensional vector ellipsoid prinwhere cipal radii denote pointwise product given vector vectors determined analytic solution used derive explicit expressions different regimes follows interior regime interior regime resulting zero contribution inverse capacity anchor point given following boundary point ellipse given regime holds large size limit linear separation manifolds reduces linear separation random affine subspaces separating subspaces must fully embedded margin plane otherwise would intersect violate classification constraints however way large size manifolds approach limit subtle analyze limit note large condition small implies affine basis vectors except center direction either exactly almost orthogonal solution weight vector since follows almost antiparallel gaussian vector hence see elucidate manifold support structure note first ttouch hence fractional volume interior regime negligible statistics dominated embedded regimes fact fully embedded transition given tembed see fractional volume fully embedded regime contribution inverse capacity fore remaining summed probability touching partially embedded regimes therefore regimes regime contributes factor combining two contributions obtain large sizes consistent appendix high dimensional manifolds high dimensional ball discuss general manifolds high dimension focus simple case high dimensional balls capacity given substituting centered around yields stated implies finite capacity scaling regime hand implies used asymptote large reducing large analysis also highlights support structure balls large long fraction balls lie fully margin plane negligible implied fact tsf overall fraction interior balls whereas fraction touch margin planes despite fact fully supporting balls touching balls almost parallel margin planes hence capacity reaches lower bound finally large manifold limit discussed appendix realized tsf system either touching fully supporting probabilities respectively general manifolds analyze limit large dimensional general manifolds utilize self averaging terms involving sums components long second term vanishes yields note term numerator significant smaller case denominator order term integrand negligible hence cases write also ttouch hence obtain capacity average wrt gaussian evaluating involves calculations self consistent statistics anchor points calculation simplified high dimension particular reduces form 
intuition beyond factor clear stemming fact distance point margin plane scales norm hence margin entering capacity factor norm hence approximately constant independent deriving approximations used selfaveraging summations involving intrinsic coordinates full dependence longitudinal gaussian remain thus fact substituting denoting averaging would yield complicated expressions reason replace average quantities following potential dependence anchor radius dimension via however inspecting note two scenarios one small order case manifold radius small contribution small neglected argument case geometry replaced gaussian geometry depend second scenario order case order contribution negligible sparsity given appendix capacity balls sparse labels begin writing capacity random points sparse labels maxb optimizing yields following equation optimal bias given analyze equations various size regimes assuming small balls small radius capacity points unless dimensionality high thus large first note capacity sparsely labeled points maxb limit first equation reduces exp yielding optimal log inverse capacity dominated first term log capacity balls radius sparse labels noted optimal bias diverges hence presence order noticeable moderate log small large analyze equations limit small large assume sufficiently small optimal bias large contribution minority class inverse pacity dominated dominant contribution majority class deriving last equation used second integrals substantial value case integral integral combining two results yields following simple expression inverse capacity scaled sparsity optimal scaled bias given note affect capacity scaled sparsity capacity proportional log realistic regime small capacity decreases roughly proportionality constant depends shown examples fig finally sufficiently large order smaller case second term dominates contributes yielding capacity saturates shown fig noted however large approximations hold anymore james dicarlo david cox untangling invariant object recognition trends cognitive sciences jennifer bizley yale cohen perception nature reviews neuroscience kevin bolding kevin franks complementary codes odor identity intensity olfactory cortex elife sebastian seung daniel lee manifold ways perception science james dicarlo davide zoccolan nicole rust brain solve visual object recognition neuron ben poole subhaneil lahiri maithreyi raghu jascha surya ganguli exponential expressivity deep neural networks transient chaos advances neural information processing systems pages marc aurelio ranzato jie huang boureau yann lecun unsupervised learning invariant feature hierarchies applications object recognition computer vision pattern recognition cvpr ieee conference pages ieee yoshua bengio learning deep architectures foundations trends machine learning ian goodfellow honglak lee quoc andrew saxe andrew measuring invariances deep networks advances neural information processing systems pages charles cadieu hong daniel yamins nicolas pinto diego ardila ethan solomon najib majaj james dicarlo deep neural networks rival representation primate cortex core visual object recognition plos comput biol thomas cover geometrical statistical properties systems linear inequalities applications pattern recognition ieee transactions electronic computers elizabeth gardner space interactions neural network models journal physics mathematical general gardner maximum storage capacity neural networks epl europhysics letters vladimir vapnik statistical learning theory wiley new york sueyeon chung daniel 
journal latex class files vol august deep learning histochemical scoring system breast cancer tissue microarray jan jingxin liu bolei chi zheng yuanhao gong jon garibaldi daniele soria andew green ian ellis wenbin zou guoping qiu methods stratifying different molecular classes breast cancer nottingham prognostic index plus uses breast cancer relevant biomarkers stain tumour tissues prepared tissue microarray tma determine molecular class tumour pathologists manually mark nuclei activity biomarkers microscope use assessment method assign histochemical score tma core however manually marking positively stained nuclei time consuming imprecise subjective process lead discrepancies paper present deep learning system directly predicts automatically innovative characteristics method inspired process pathologists count total number cells number tumour cells categorise cells based intensity positive stains system imitates pathologists decision process uses one fully convolutional network fcn extract nuclei region tumour second fcn extract tumour nuclei region convolutional neural network takes outputs first two fcns stain intensity description image input acts decision making mechanism directly output input tma image additional developing deep learning framework also present methods constructing positive stain intensity description image handling discrete scores numerical gaps whilst deep learning widely applied digital pathology image analysis best knowledge first system takes tma image input directly outputs clinical score present experimental results demonstrate predicted model high statistically significant correlation experienced pathologists scores discrepancy algorithm pathologits par pathologists although still long way clinical use work demonstrates possibility using deep learning techniques automatically directly predicting clinical scores digital pathology images index immunohistochemistry diaminobenzidine convolutional neural network breast cancer ntroduction breast cancer heterogeneous group tumours varied genotype phenotype features recent research gene expression profiling gep suggests divided distinct molecular tumour groups personalised management often utilizes robust commonplace technology immunohistochemistry ihc tumour molecular profiling diaminobenzidine dab based ihc techniques stain target antigens detected biomarkers brown colouration positive blue colouration negative hematoxylin see example images determine biological class tumour pathologists mark nuclei activity biomarkers microscope give score based liu zou qiu college information engineering shenzhen university china qiu also school computer science university nottingham zheng ningbo yongxin optics ltd zhejiang china gong computer vision laboratory eth zurich switzerland garibaldi school computer science university nottingham soria department computer science university westerminster green ellis faculty medicine health sciences university nottingham united kingdom qiu corresponding author qiu quantitative assessment method called modified histochemical scoring tissue samples stained different biomarkers combined together determine biological class case clinical decision making choose appropriate treatment number available treatment options according biological class tumour instance one methods stratifying different molecular classes nottingham prognosis index plus npi uses breast cancer relevant biomarkers stain tumour tissues prepared tissue microarray tma tissue samples stained biomarkers given histochemical score 
scores together determine biological class case therefore one important pieces information molecular tumour classification tumour region occupies tma section calculated based linear combination percentage strongly stained nuclei ssn percentage moderately stained nuclei percentage weakly stained nuclei according equation score wsn msn ssn final score numerical value ranges thus histochemical assessment tma journal latex class files vol august fig top example images extracted digital tma slides red circle contains one tma core stained brown colours indicate positive blue colours indicate negative bottom schematic illustration traditional manual procedure needs first count total number nuclei number strongly stained moderately stained weakly stained tumour nuclei respectively final calculated according based following information total number cells number tumour cells stain intensity distributions within tumour cells clinical practice diagnosis requires averaging two experienced pathologists assessments manually marking positively stained nuclei obviously time consuming process visual assessment tma subjective problem discrepancy issue repeatability nature method strongly stained moderately stained weakly stained definitions strong moderate weak precise subjective makes even difficult ensure well consistency increasing application clinicopathologic prognosis computer aided diagnosis cad systems proposed support pathologists decision making key parameters tissue image assessment include number tumour cells positive staining intensities within cells total number cells image classify positively stained pixels stain intensity methods colour deconvolution perform mathematical transformation rgb image widely used separate positive stains negative stains numerous approaches proposed cell nuclei detection segmentation literature histopathology image analysis perform various quantification steps still little attempt perform assessment image directly paper ask question possible develop cad model would directly give highlevel assessment digital pathological image like experienced pathologist would example give directly attempt answer question propose deep learning system directly predicting breast cancer tma images see fig instead pushing raw digital images neural network directly follow similar process pathologists use estimation first construct stain intensity nuclei image sini contains nuclei pixels corresponding stain intensity information stain intensity tumour image siti contains tumour nuclei pixels corresponding stain intensity information sini siti block irrelevant background pixels retain useful information calculating two relevant images fed convolutional neural network two input pipelines finally merged one pipeline give output best knowledge first work attempts develop deep learning based tma processing model directly outputs histochemical scores present experimental results demonstrate predicted model high statistically significant correlation experienced pathologists scores discrepancy algorithm pathologists par pathologists although still perhaps long way clinical use work nevertheless demonstrates possibility automatically scoring cancer tma based deep learning elated orks researchers proposed various analysis methods histopathological images pixellevel positive stain segmentation pham adapted yellow channel cmyk model believed strong correlation dab stain ruifrok presented brown image calculated based mathematical transformation rgb image yao employed hough forest mitotic cell 
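For reference, the histochemical score described above is the standard modified H-score weighting of the percentages of weakly, moderately and strongly stained tumour nuclei (weights 1, 2 and 3), giving values in the range 0 to 300; a trivial helper makes this explicit. The example percentages are illustrative.

def h_score(weak_pct, moderate_pct, strong_pct):
    """Histochemical score of one TMA core from the percentages (0-100) of
    tumour nuclei that are weakly, moderately and strongly DAB-stained;
    the resulting score lies in [0, 300]."""
    assert 0.0 <= weak_pct + moderate_pct + strong_pct <= 100.0
    return 1 * weak_pct + 2 * moderate_pct + 3 * strong_pct

# Example core: 20% weak, 30% moderate, 10% strong (remaining 40% unstained).
print(h_score(20, 30, 10))    # -> 110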
detection combination generalized hough transform random decision trees shu proposed utilizing morphological filtering seeded watershed overlapping nuclei segmentation cad systems also developed tubule detection breast cancer glandular structure segmentation etc development deep learning techniques various deep neural network based cad models journal latex class files vol august published deep convolutional networks deeper architectures used build complex models result powerful solutions used residual network human epithelial type cell segmentation classification aggnet novel aggregation layer proposed mitosis detection breast cancer histology images google brain presented multi scale cnn model aid breast cancer metastasis detection lymph nodes deep system proposed detection metastatic cancer whole slide images camelyon grand challenge shah presented first completely model integrated numerous biologically salient classifiers invasive breast cancer prognosis symmetric fully convolutional network proposed ronneberger microscopy image segmentation digital pathology relative new compared type medical imaging mri deep learning one powerful machine learning techniques emerged recent years seen widespread applications many areas yap investigated three deep learning models breast ultrasound lesion detection moeskops introduced single cnn model triplanar input patches segmenting three different types medical images brain mri breast mri cardiac cta combination image representation unsupervised candidate proposals proposed automatic lesion detection breast mri existing cad frameworks directly follow assessment criteria extracting quantitative information digital images masmoudi proposed automatic human epidermal growth factor receptor assessment method assemble algorithm colour pixel classification nuclei segmentation cell membrane modelling bar filter used membrane isolation colour decomposition trahearn established registration process ihc stained wsi scoring thresholds defined dab stain intensity groups tumour region nuclei detected two different detectors recently zhu proposed train aggregation model based deep convolutional network patient survival status prediction roblem ethod immunohistochemical assessment formulated model maps input images input space label space given input image label assigned according quantitative information positive staining intensity number tumour cells total number cells image traditional assessment methods least three unsolved issues pathologists cad systems firstly positive staining intensity needs categorized four classes unstained weak moderate strong however standard quantitative criterion classifying dab stain intensity thus two pathologists often classify staining intensity two fig examples challenging cases quantitative measurement biomarkers based visual assessment variety stain intensities unclear staining overlapping nucleus size differences different type nucleus different categories two different intensities category furthermore human visual system may pay attention strongly stained regions often surrounded variety staining intensities may also affect assessment results secondly instance counting important parameter assessment nevertheless human computer still deal difficulty counting overlapping cells well moreover variability appearance different types nucleus heterogeneous staining complex tissue architectures make individually segmenting challenging problem thirdly apparent size differences tumour nuclei normal nuclei affect quantitative judgement 
tumour nuclei assessment examples challenging cases illustrated fig tackle problem mentioned propose develop convolutional neural network cnn based cad framework biomarker assessment tma images instead using cnn feature extractor low level processing cell segmentation developed system directly predicts biomarker score innovative characteristic method inspired process pathologists count total number nuclei number tumour nuclei categorise tumour nucleus based intensity positive stains complete system illustrated fig one fully convolutional network fcn used extract nuclei region acts step counting nucleus capture foreground information another fcn used extract tumour nuclei region acts step counting tumour nucleus mimic process categorising tumour nuclei based positive stain intensities derive stain intensity image together outputs two fcns presented another deep learning network acts decision making mechanism directly output input tma image stain intensity description although various dab stain separation methods proposed work studied stain intensity description grouping since formal definitions boundaries stain intensity groups journal latex class files vol august original image dab channel image stain intensity discription image fig comparison different images generated process stain intensity description highlighted subimage contains strongly stained nuclei strong moderate weak previous works used manually defined thresholds classification segment positive stains stain group previous works set single threshold idab separate positively stained tissues however shown deeply stained positive nuclei dark light pixel values dab channel image since strongly stained pixels significantly broader hue spectrum furthermore illustrated dab channel value correspond different pixel colours also clear order separate positive stain brown colour negative stain blue colour dab channel thresholds set based luminance values paper use luminance adaptive lamt method developed authors classify positively stained pixels specifically transformed pixel idab divided equal intervals according luminance idab idab idab fig visualization pixel colours images along luminance axis colour deconvolution dab axis work propose directly use luminance values image describe staining intensity instead setting artificial intensity category boundaries original rgb image first transformed stain component image idab iother using colour deconvolution iod stain matrix composed staining colours equal stained images iod optical density converted image calculated according law iod spectral radiation intensity typical rgb camera dab channel image idab three colour deconvolution output channels used describes dab stain according chroma difference lower upper boundary respectively luminance interval luminance image original rgb image calculated according rec transformed pixels thresholded different values according luminance instead single threshold threshold assigned follows argmax cdab stain label separated positive stain negative stain need find way describe stain intensity already seen pixel values idab describe biomarker stain intensity propose use scheme described assign stain intensity values pixels ila idab positive idab negative ila stain intensity description image idea positive stain pixels ila luminance component original image journal latex class files vol august order preserve morphology positive nuclei negative stain pixels ila higher value strongly stained pixels darker blue colour lower value weakly stained pixels lighter 
blue colour order separate positive negative pixel values clearly add offset negatively stained pixels negative stain pixels high positive stain pixels low value positive negative pixels clearly separated ila therefore larger ila weaker stain smaller ila stronger stain ila equal positive stain pixel way obtained image gives continuous description stain intensity image instead setting artificial boundaries separate different degrees stain intensity continuous description stain intensity see note pixel values final image normalized range nuclei tumour maps discussed important information pathologists use come number nuclei number tumour nuclei tma image therefore need extract two pieces information use two separate fcns one segmenting nucleus segmenting tumour nucleus segment tumour region use manually labelled tumour tma images train fcn segmenting general nuclei detects tumour nuclei utilize transfer learning strategy train another fcn general nuclei detection training data obtained three different datasets immunofluorescence iif stained cell dataset warwick hematoxylin eosin stained colon cancer dataset tma images since three image sets stained different types biomarker transform colour image grayscale training training mixed image set could help reduce overfitting limited medical dataset boost performance robustness general nuclei detection network tumour nuclei detection network use symmetric shape network architecture skip connection high resolution features contracting path combined output upsampling path allows network learn high resolution contextual information loss function designed according dice coefficient lmask predicted pixel ground truth prediction framework overview prediction framework illustrated consists three stages nuclei segmentation tumour segmentation stain intensity description constructing stain intensity nuclei image sini stain intensity tumour image siti predicting final histochemical score region attention convolutional neural network ramcnn rationale architecture follows number nuclei number tumour nuclei stain intensity tumour nuclei useful information predicting therefore first extract information rather setting artificial boundaries categories stain intensity retain continuous description stain intensity information useful predicting presented deep cnn estimate input image contrast many work literature whole image thrown cnn regardless region useful purpose detail first stage described section illustrated input tma image processed tumour detection network output binary image mask marking tumour nuclei part tumour nuclei otherwise general nuclei detection network output another binary image mask marking tumour nontumour nuclei part nuclei otherwise colour deconvolution stain intensity labelling operation equation produce stain intensity description image ila second stage construct sini siti multiplying nuclei mask image tumour mask image stain intensity description image ila sin ila sit ila hence background pixels zero region interests roi retained sini siti necessary information preserved histochemical assessment removing background retaining roi enable convolutional layers focus foreground objects significantly reduce computational costs improve performance proposed deep regression model dual input channels architecture shown table two inputs correspond sini siti respectively input size parameters two individual branches updated independently extracting cell tumour features respectively without interfering two pipelines merged one two convolutional layers 
prediction loss function prediction defined fig illustration value lla corresponding stain intensity red dot lines thresholds stain intensity groups journal latex class files vol august fig overview proposed prediction framework input tma image first processed two fcns extract tumour cells cells tumour produce two mask images input image also processed colour deconvolution positive stain classification output stain intensity description image two mask images used filter irrelevant information stain intensity description image useful information fed deep convolutional neural network prediction input tma image number augmented image number fig top graph original dataset label histogram bottom augmented label histogram layer input maxpooling maxpooling maxpooling maxpooling input filter dimensions table architecture region attention multichannel convolutional neural network lscore sin sit ram fram sin sit estimated score generated ground truth xperiments esults dataset dataset used experiment contains tma images breast adenocarcinomas set image contains one whole tma core tissues cropped sample one patient stained three different nuclei activity biomarkers pgr original images captured high resolution optical magnification resized pixels dataset manually marked two experienced pathologists based common practice tma core pathologists give percentage nuclei different stain intensity levels calculate using final label hscore determined averaging two pathologists scores difference two pathologists smaller dataset available authors request training general nuclei detection network transform warwick colon adenocarcinoma images grayscale green channel extracted journal latex class files vol august fig examples intermediate images automatics prediction pipeline left right original rgb image luminance labelled stain intensity image nuclei mask image tumour mask image respectively cell dataset cell images iif stained gray value inversed data label augmentation typical medical imaging applications dataset sizes relatively small developing deep learning based solutions common practice augment training dataset training training images general nuclei detection network tumour detection network augmented randomly cropping input samples dataset rotation random angles randomly shifting image horizontally vertically within image height width performed augment training set shown top row distribution label original dataset unbalanced labels far samples others furthermore one biggest problems limited number samples values discrete discontinuous many gaps two data also values tma image score given pathologists quantitative therefore image score means value around values vicinity also suitable labelling image order solve ambiguity issue introduce distributed label augmentation dla inspired work traditional regression method given dataset pairs instance one single finite class label space label size paper label augmented one instance associated ber labels formally dataset described set labels augmented label number sampled repeatedly based probability density function following gaussian distribution exp mean equal stanps dard deviation thus original tma image consequentially image augmented training set ground truth labels assigned repeatedly sampling according augmented label histogram shown bottom row implementation details network architecture tumour nuclei detection general nuclei detection models input size filter size tumour detection net half narrower general cell detection net networks use rectified linear 
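A minimal sketch of the distributed label augmentation step described above: each image's single H-score is replaced by several labels drawn from a Gaussian centred on it, which fills the gaps in the discrete, unbalanced label histogram. The standard deviation, number of copies per image and clipping range below are illustrative assumptions; the paper's exact settings are not reproduced.

```python
import numpy as np

def distributed_label_augmentation(hscores, copies_per_image=20, sigma=5.0,
                                   lo=0.0, hi=300.0, rng=None):
    """For each image index with ground-truth H-score y, sample
    `copies_per_image` labels from N(y, sigma^2), clipped to the H-score
    range, and return (image_index, augmented_label) pairs."""
    rng = np.random.default_rng() if rng is None else rng
    augmented = []
    for idx, y in enumerate(hscores):
        samples = rng.normal(loc=y, scale=sigma, size=copies_per_image)
        samples = np.clip(samples, lo, hi)   # H-scores live in [0, 300]
        augmented.extend((idx, float(s)) for s in samples)
    return augmented
```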
unit relu activation function convolutional layer final cell tumour region maps predicted using sliding window leave cross validation strategy used ramcnn model training means round testing randomly sample tmas testing tmas training images explained previously training set augmented via rotation shift images firstly resized fed ramcnn set generate distribution ground truth label augmentation also add dropout predicted score predicted score journal latex class files vol august pathologist score nap predicted score pathologist score nnp predicted score predicted score pathologist score pathologist score pathologist score fig scatter plots predicted scores different models pathologists manual scores experimental results shows examples intermediate images automatic prediction pipeline seen luminance labelled stain intensity image marks sharp distinction positive negative stains shows maximum posteriori map classifier based luminance adaptive lamt method reliably separate positive dab stains variety images also shows stain intensity labelling strategy preserve morphology nuclei separate positive negative stains retaining continuous description positive stain intensity shows training curves dice coefficient general nuclei detection network tumour nuclei detection network respectively networks converged epochs respectively nuclei mask images see show deep convolutional network trained mixed datasets using transfer learning successfully detect nuclei dataset tumour segmentation network able identify tumour region normal tissues worth noting ground truth masks two detection networks different nucleus warwick colon cancer images labelled circular masks uniform size tumour region masks pixel level labelled therefore final predicted maps generated two networks nucleus different addition found mask dilation become evident increase dab stain intensity results discussions dice coefficient one possible reason strong homogeneous stain makes nuclei texture edge feature difficult extract dice coefficient layers two fully connected layers rates respectively regression network optimized adam initial learning rate training test epoch training test epoch fig training results general nuclei detection network tumour nuclei detection network evaluate performance proposed ramcnn two relevant images sini siti compare model two traditional single input pipeline cnns region attention cnn takes original rgb tma image shape input output prediction investigate effect multicolumn architecture combine sini siti two channel image input architectures single pipeline see also calculate using based nuclei area percentage nap nuclei number percentage nnp specifically luminance labelled stain intensity description image ila first calculated according journal latex class files vol august fig example tma images extracted different groups description section thresholds utilized categorizing pixels different stain intensity groups nap method predicted calculated according percentages area different stain intensity groups nnp employs nih imagej tool cell segmentation technique cell detection detected cells classified unstained weak moderate strong groups using thresholds calculation model nap nnp human mae value table performance comparison different regression models last line human difference hscores given two pathologists pathologists mae mae humans slightly higher humans humans higher humans humans machine humans fig illustrates scatter plots model predicted scores pathologists scores predicted scores nap lower ground truth lower 
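For reference, the threshold-based NAP baseline mentioned above reduces to binning nuclei-mask pixels into unstained/weak/moderate/strong groups and applying the standard H-score formula H = 1·%weak + 2·%moderate + 3·%strong (range 0 to 300). The sketch below assumes the intensity image has been oriented so that larger values mean stronger positive stain (the description image in the text is the opposite, so it would be inverted first), and the thresholds are free parameters rather than the paper's values.

```python
import numpy as np

def hscore_from_area_percentages(stain_intensity, nuclei_mask,
                                 t_weak, t_moderate, t_strong):
    """Threshold-based H-score in the spirit of the NAP baseline: bin the
    pixels inside the nuclei mask by stain intensity, then apply
    H = 1*%weak + 2*%moderate + 3*%strong (area percentages)."""
    vals = stain_intensity[nuclei_mask > 0]
    if vals.size == 0:
        return 0.0
    weak     = np.mean((vals >= t_weak) & (vals < t_moderate)) * 100.0
    moderate = np.mean((vals >= t_moderate) & (vals < t_strong)) * 100.0
    strong   = np.mean(vals >= t_strong) * 100.0
    return 1.0 * weak + 2.0 * moderate + 3.0 * strong
```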
end nnp predicted scores lower ground truth higher end predicted scores higher ground truth two methods affected several processing components including stain intensity thresholds nuclei segmentation accuracy proposed framework gives accurate prediction results compared traditional single pipeline cnn demonstrating imitating pathologists process keeping useful information effective approach paper mean absolute error mae standard deviation correlation coefficient predicted average two pathologists used evaluation metrics reference also calculate mae hscores given two pathologists original diagnosis data results shown seen nap based prediction gives highest mae large deviations cross validation followed nnp framework achieves lowest prediction error traditional cnn setting proposed sini siti input gives second lowest prediction error verifies effectiveness proposed approach filtering irrelevant pixels retain relevant information sini siti deep learning based methods outperform nap nnp large margin investigate statistical significance automatically predicted correlation predicted pathologists scores value also calculated correlation pathologists scores predicted value means strong evidence null hypothesis interesting observe difference predicted average two pathologists mae par difference two discussions paper introduced system predicted investigate reason scoring discrepancy proposed algorithm pathologists firstly compare prediction results different biomarkers shown proposed framework gives best accuracy three biomarker images performances slightly different different biomarkers expected different markers stain tissues differently although difference large whether useful train separate network different biomarkers something worth investigating future biomarker tma nap nnp pgr table comparing mae different methods three different biomarkers see algorithms perform differently across dataset divide tma images groups according pathologists scores example tma images group illustrated group count journal latex class files vol august nap nnp fig comparison performances different methods different groups number tmas absolute error smaller larger respectively results different methods shown seen low group traditional methods nap nnp give accurate predicted scores cnn based methods found low score tmas unstained weakly stained shown accurate predictions nap nnp indicate predefined threshold separating unstained weak see fig compatible pathologists criteria deep learning based methods set stain intensity thresholds explicitly performances across six groups relatively even accuracies nap nnp decrease rapidly increase shown stain intensity image complexity increase directly affect performance traditional methods result also indicates stain intensity thresholds moderate strong classes see fig less compatible pathologists criteria furthermore large coefficients moderate strong stain see would magnify errors area nuclei segmentation nap nnp respectively three deep learning based methods give worse results groups fewer images group indicates importance large training data size addition uneven distribution original dataset may also affect predicted accuracy analyse tmas individually investigate effect image quality proposed algorithm found tmas tissues clearly stained cellular structure clear without severe overlap see algorithm give accurate prediction hand poor image quality causes errors images easily fig examples accurately scored tmas proposed algorithm absolute errors generated smaller algorithms found 
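The evaluation metrics named above (mean absolute error with its standard deviation over images, and the correlation coefficient between the predictions and the average of the two pathologists' scores) are straightforward to compute. A small helper, assuming aligned per-image arrays; the function name is illustrative.

```python
import numpy as np

def evaluate_hscores(predicted, pathologist_a, pathologist_b):
    """Return MAE, its standard deviation, and the Pearson correlation of the
    predictions against the average of the two pathologists' H-scores."""
    predicted = np.asarray(predicted, dtype=float)
    reference = 0.5 * (np.asarray(pathologist_a, dtype=float)
                       + np.asarray(pathologist_b, dtype=float))
    abs_err = np.abs(predicted - reference)
    r = np.corrcoef(predicted, reference)[0, 1]
    return abs_err.mean(), abs_err.std(), r
```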
three significant characteristics shown tma core contains large regions happens commonly strongly stained tissues blur regions directly affect performance nuclei segmentation well nuclei tumour detection accuracy also hinder final regression network extracting topological morphological information tissue folds see occurs thin tissue slice folds happen easily slide preparation especially tma slides would cause slide scanning furthermore tissue fold lightly stained image similar appearance tumour region darkly stained image hence segmentation accuracy colour deconvolution would greatly affected regions heterogeneity overlapping shown also affect automatic scoring performance stain heterogeneity gives rise large discrepancy stain intensity single nucleus nuclei overlapping adds difficulty three difficulties directly affect predicted results proposed method found large tmas contain one characteristics found low image quality tmas dataset exclude tma images average mae therefore future works need overcome issues order achieve high prediction performance solve problem heterogeneity overlapping adding corresponding images training set promote robustness one potential quality assurance methods addition deep learning based scoring system journal latex class files vol august fig examples sources big scoring discrepancy algorithm pathologist focus tissue folds heterogeneity overlapping developed add nuclei number estimation function accurate assessment also necessary add function automated detection elimination regions assessment oncluding emarks paper developed deep learning framework automatic assessment breast cancer tmas experimental results show automatic assessment tma feasible predicted model high correlation hscores given experienced pathologists show discrepancies deep learning model pathologits par pathologists identified image focus tissue fold overlapping nuclei three major sources error also found major discrepancies pathologists machine predictions occurred images high value findings suggested future research directions improving accuracy eferences rakha soria green lemetre powe nolan garibaldi ball ellis nottingham prognostic index plus modern clinical decision making tool breast cancer british journal cancer vol perou jeffrey van rijn rees eisen ross pergamenschikov williams zhu lee distinctive gene expression patterns human mammary epithelial cells breast cancers proceedings national academy sciences vol nielsen parker leung voduc ebbert vickery davies snider stijleman reed comparison intrinsic subtyping immunohistochemistry clinical prognostic factors estrogen positive breast cancer clinical cancer research soria garibaldi ambrogi green powe rakha macmillan blamey ball lisboa methodology identify consensus classes clustering algorithms applied immunohistochemical data breast cancer patients computers biology medicine vol green powe rakha soria lemetre nolan barros macmillan garibaldi ball identification key clinical phenotypes breast cancer using reduced panel protein biomarkers british journal cancer vol mccarty miller cox konrath mccarty estrogen receptor analyses correlation biochemical immunohistochemical methods using monoclonal antireceptor archives pathology laboratory medicine vol goulding pinder cannon pearson nicholson snead bell elston robertson blamey new immunohistochemical antibody assessment estrogen receptor status routine tissue samples human pathology vol ruifrok quantification immunohistochemical staining color translation automated analytical quantitative 
cytology international academy cytology american society cytology vol ruifrok johnston quantification histochemical staining color deconvolution analytical quantitative cytology histology vol irshad veillard roux racoceanu methods nuclei detection segmentation classification digital histopathology review current status future potential ieee reviews biomedical engineering vol kothari phan stokes wang pathology imaging informatics quantitative analysis images journal american medical informatics association vol pham morrison schwock iakovlev tsao hedley quantitative image analysis immunohistochemical stains using cmyk color model diagnostic pathology vol yao gall leistner van gool interactive object detection computer vision pattern recognition cvpr ieee conference ieee shu qiu kaye ilyas segmenting overlapping cell nuclei digital histopathology images engineering medicine biology society embc annual international conference ieee ieee basavanhally ganesan feldman tomaszewski madabhushi incorporating domain knowledge tubule detection breast histopathology using ocallaghan neighborhoods spie medical imaging vol international society optics photonics qiu shu ilyas novel polar space random field model detection glandular structures ieee transactions medical imaging vol shen specimen image segmentation classification using deep fully convolutional network ieee transactions medical imaging albarqouni baur achilles belagiannis demirci navab aggnet deep learning crowds mitosis detection breast cancer histology images ieee transactions medical imaging vol liu gadepalli norouzi dahl kohlberger boyko venugopalan timofeev nelson corrado detecting cancer metastases gigapixel pathology images arxiv preprint journal latex class files vol august wang khosla gargeya irshad beck deep learning identifying metastatic breast cancer arxiv preprint shah rubadue suster wang deep learning assessment tumor proliferation breast cancer histological images arxiv preprint ronneberger fischer brox convolutional networks biomedical image segmentation international conference medical image computing intervention springer yap pons ganau zwiggelaar davison automated breast ultrasound lesions detection using convolutional neural networks ieee journal biomedical health informatics moeskops wolterink van der velden gilhuijs leiner viergever deep learning medical image segmentation multiple modalities international conference medical image computing computerassisted intervention springer amit hadad alpert tlusty gur hashoul hybrid mass detection breast mri combining unsupervised saliency analysis deep learning international conference medical image computing intervention springer masmoudi hewitt petrick myers gavrielides automated quantitative assessment immunohistochemical expression breast cancer ieee transactions medical imaging vol hall javidian chen ganesan foran assessment human epidermal growth factor receptor immunohistochemical assay imaged histologic sections using membrane isolation algorithm quantitative analysis positive controls bmc medical imaging vol trahearn tsang cree snead epstein rajpoot simultaneous automatic scoring hormone receptors tumor areas whole slide images breast cancer tissue slides cytometry part xinliang jiawen feiyun junzhou wsisa making survival prediction whole slide pathology images cvpr brey lalani johnston wong mcintire duke patrick automated selection dablabeled tissue immunohistochemical quantification journal histochemistry cytochemistry vol haub meckel model based survey colour 
deconvolution diagnostic brightfield microscopy error estimation spectral consideration scientific reports vol liu qiu shen luminance adaptive biomarker detection digital pathology images procedia computer science vol rec studio encoding parameters digital television standard aspect ratios hobson lovell percannella saggese vento wiliem staining pattern recognition cell specimen levels datasets algorithms results pattern recognition letters vol sirinukunwattana raza tsang snead cree rajpoot locality sensitive deep learning detection classification nuclei routine colon cancer histology images ieee transactions medical imaging vol chen heng dcan deep networks accurate gland segmentation proceedings ieee conference computer vision pattern recognition liu luo loy tang pixels equal semantic segmentation via deep layer cascade gao xing xie geng deep label distribution learning label ambiguity ieee transactions image processing zhou zhang huang learning artificial intelligence vol kingma adam method stochastic optimization arxiv preprint schneider rasband eliceiri nih image imagej years image analysis nature methods vol wasserstein lazar asa statement pvalues context process purpose kothari phan wang eliminating artifacts histopathological images improved prediction cancer grade journal pathology informatics vol
| 1 |
construction lattice codes polar lattices dec yanfei yan ling liu cong ling member ieee xiaofu member ieee abstract paper propose new class lattices constructed polar codes namely polar lattices achieve capacity log snr additive white awgn channel ratio snr construction follows multilevel approach forney construct polar code level component polar codes shown naturally nested thereby fulfilling requirement multilevel lattice construction prove polar lattices sense error probability infinite lattice decoding vanishes fixed ratio vnr greater furthermore using technique source polarization propose discrete gaussian shaping polar lattice satisfy power constraint proposed polar lattices permit multistage successive cancellation decoding construction shaping explicit overall complexity encoding decoding log fixed target error probability index terms lattices discrete gaussian shaping lattice codes multilevel construction polar codes ntroduction structured code achieving capacity additive white awgn channel dream goal coding theory polar codes proposed provably achieve capacity binary memoryless symmetric bms channels considerable efforts extend polar codes general discrete memoryless channels nonbinary polar codes asymmetric channels largely theoretical attempt construct polar codes awgn channel given based nonbinary polar codes technique channel however still open problem construct practical polar codes achieve capacity awgn channel paper propose polar lattices fulfil goal based combination binary polar codes lattice codes work presented part ieee inform theory workshop itw laussane switzerland september part ieee int symp inform theory isit istanbul turkey july work yanfei yan ling liu supported china scholarship council work xiaofu supported national science foundation china yanfei yan ling liu cong ling department electrical electronic engineering imperial college london london cling xiaofu nanjing university posts telecommunications nanjing china xfuwu december draft lattice codes counterpart linear codes euclidean space existence lattice codes achieving gaussian channel capacity established using random coding argument rich structures lattice codes represent significant advantage multiterminal communications security distributed source coding see overview well known design lattice code consists two essentially separate problems awgn coding shaping awgn coding addressed notion lattices informally means fundamental volume lattice slightly greater noise sphere error probability infinite lattice decoding could made arbitrarily small recently several new lattice constructions good performance introduced hand shaping takes care finite power constraint gaussian channel shaping techniques include voronoi shaping lattice gaussian shaping despite significant progresses explicit construction lattice codes achieving capacity gaussian channel still open since paper submitted become aware work shows lda lattices achieve capacity ratio snr contributions paper settle open problem employing powerful tool polarization lattice construction novel technical contribution paper construction polar lattices proof follow multilevel construction forney trott chung level build polar code achieve capacity salient feature proposed method naturally leads set nested polar codes required multilevel construction compares favorably existing multilevel constructions extra efforts needed nest component codes gaussian shaping technique polar lattices awgn channel based source polarization able achieve capacity log snr 
multistage successive cancellation decoding given snr worth mentioning proposed shaping scheme practical implementation lattice gaussian shaping also improvement sense successfully remove restriction snr theorem source channel polarization employed construction resulting integrated approach sense error correction shaping performed one single polar code level worth pointing aspect may also independent interest lattices many applications network information theory aforementioned coding lattice gaussian shaping generating gaussian distribution lattice useful cryptography well theoretical practical aspects polar lattices addressed paper prove theoretical goodness polar lattices also give practical rules designing lattices december draft relation prior works paper built basis prior attempt build lattices polar codes significantly extends employing gaussian shaping aware contemporary independent work modulation follows multilevel coding approach known forney multilevel construction closely related multilevel coding main conceptual difference lattice coding coded modulation lattices infinite linear euclidean space linear structure lattices much desired many emerging applications network information theory purpose coordination paper may viewed explicit construction lattice gaussian coding scheme proposed shown gaussian shaping lattice approach different standard voronoi shaping involves lattice proposed gaussian shaping require lattice sparse superposition code also achieves gaussian channel capacity polynomial complexity however decoding complexity considerably higher polar lattice moreover requires random dictionary shared encoder decoder incurs substantial storage complexity comparison construction polar lattices explicit polar codes complexity quasilinear vanishing error probability log fixed error probability respectively following multilevel approach also possible obtain code modifying work however best knowledge reported literature resultant code would possess many useful structures lattice code organization notation rest paper organized follows section presents background lattice codes section iii construct polar latices based forney approach prove section propose gaussian shaping polar lattice achieve capacity section gives design examples simulation results random variables rvs denoted capital letters let denote probability distribution taking values set let denote entropy multilevel coding denote level realization denoted also use notation shorthand vector realization rvs similarly denote realization level level set denotes complement represents cardinality integer denotes set integers following notation denote independent uses channel channel combining splitting get combined channel subchannel denotes indicator function throughout paper use binary logarithm denoted log information measured bits december draft background attice oding definitions lattice discrete subgroup described assume generator matrix full rank vector quantizer associated arg ties resolved arbitrarily define modulo lattice operation mod voronoi region defined specifies decoding region voronoi cell one example fundamental region lattice measurable set fundamental region lattice volume fundamental region equal voronoi region given generally operation defined unique element obviously usual operation corresponds case theta series see defined paper mostly concerned block error probability lattice decoding probability independent identically distributed gaussian noise vector zero mean variance per dimension falls outside 
voronoi region lattice define vnr introduce notion lattices good awgn channel without power constraint definition lattices sequence lattices increasing dimension fixed lim fixed vnr greater goes worth mentioning insist exponentially vanishing error probabilities unlike poltyrev original treatment good lattices coding awgn channel polynomial decay error probability often good enough flatness factor lattice gaussian distribution gaussian distribution mean variance defined december draft convenience let given lattice define function note probability density restricted fundamental region actually probability density function pdf gaussian noise gaussian noise operation small effect aliasing becomes insignificant gaussian density approaches gaussian distribution large approaches uniform distribution phenomenon characterized flatness factor defined lattice max interpreted maximum variation uniform distribution flatness factor calculated using theta series define discrete gaussian distribution centered following discrete distribution taking values convenience write fig illustrates discrete gaussian distribution seen resembles continuous gaussian distribution defined lattice fact discrete continuous gaussian distributions share similar properties flatness factor small discrete gaussian distribution also sampled shifted lattice note relation namely shifted version following duality relation holds fourier transform gaussian distribution discrete gaussian distribution dual lattice fact relation used derive flatness factor flatness factor negligible discrete gaussian distribution lattice preserves capacity awgn channel theorem mutual information discrete gaussian distribution consider awgn channel input constellation discrete gaussian distribution arbitrary variance noise let average signal power snr let december draft fig discrete gaussian distribution discrete gaussian constellation results mutual information log snr per channel use statement theorem hold even lattice coset discrete gaussian distribution referred good constellation awgn channel negligible proved channel capacity achieved gaussian shaping lattice mmse lattice decoding aim use codebook lattice proper shift encoder maps information bits points obey lattice gaussian distribution since lattice points equally probable priori lattice gaussian coding apply map decoding proved map decoding equivalent mmse lattice decoding asymptotically equal mmse coefficient denotes minimum decoder shifted lattice construction sublattice induces partition denoted equivalence classes modulo order partition denoted equal number cosets call binary partition let lattice partition chain partition convention code selects sequence representatives cosets consequently partition binary partition codes binary codes december draft construction requires set nested linear binary codes suppose block length number information bits choose basis span lattice admits form addition carried fundamental volume lattice obtained construction given denotes sum rate component codes convenience often concerned lattice partition chain paper following example construction lattices constructed codes codes class linear block codes length codeword length information block minimum hamming distance conventionally codes denoted following relation member family lattices dimensional complex lattice dimensional real lattice example code formula lattice iii onstruction olar attices reviewed preceding section achieving channel capacity involves lattice forney gave single multilevel constructions 
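The flatness factor and the discrete Gaussian distribution defined above are easy to evaluate numerically for the one-dimensional lattices used in the design examples. The sketch below uses the theta-series expression for the flatness factor of bZ (as the text recalls, the flatness factor is computed from the theta series) and samples the discrete Gaussian over Z by normalising the Gaussian mass on a truncated support; the truncation limits are illustrative choices.

```python
import numpy as np

def flatness_factor_bZ(sigma, b=1.0, kmax=50):
    """Flatness factor of L = bZ via the theta series:
    eps = V(L) / sqrt(2*pi*sigma^2) * Theta_L(1/(2*pi*sigma^2)) - 1,
    with Theta_L(tau) = sum_k exp(-pi * tau * (b*k)^2), truncated at |k|<=kmax."""
    tau = 1.0 / (2.0 * np.pi * sigma ** 2)
    k = np.arange(-kmax, kmax + 1)
    theta = np.sum(np.exp(-np.pi * tau * (b * k) ** 2))
    return b / np.sqrt(2.0 * np.pi * sigma ** 2) * theta - 1.0

def sample_discrete_gaussian_Z(sigma, c=0.0, size=1, kmax=50, rng=None):
    """Draw from the discrete Gaussian D_{Z,sigma,c} by normalising the
    Gaussian mass over the truncated support; adequate when sigma << kmax."""
    rng = np.random.default_rng() if rng is None else rng
    support = np.arange(np.floor(c) - kmax, np.floor(c) + kmax + 1)
    w = np.exp(-(support - c) ** 2 / (2.0 * sigma ** 2))
    return rng.choice(support, size=size, p=w / w.sum())
```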
lattices follow multilevel approach construct polar lattices bear mind order achieve capacity awgn channel noise variance concerned noise variance lattice fact recall variance equivalent noise mmse rescaling methodology also justified equivalence lemma next section see lemma forney construction revisited gaussian channel gaussian channel input operator receiver front end capacity channel noise variance log case lattice partition also known construction lattice dimension paper refer one cases generally construction see also chap give example lattices benchmark particularly connection codes polar codes advantage polar codes codes translate advantage polar lattices lattices december draft differential entropy noise log given lattice partition channel channel whose input restricted discrete lattice points translate capacity channel given log lattice partition chain key idea use good component code achieve capacity level construction construction total decoding error probability multistage decoding bounded achieve vanishing error probability make need choose lattice codes channels whose error probabilities also tend zero since logarithmic vnr satisfies log log log log log define log differential entropy gaussian noise note represents capacity channel due data processing theorem difference entropy gaussian noise gaussian noise total capacity loss component codes log since obtain upper log shown negligible compared two terms december draft since log represents poltyrev capacity right hand side gives upper bound gap poltyrev capacity bound equal decibels conversion binary logarithm logarithm approach poltyrev capacity would like log thus need negligible appendix prove following lemma lemma capacity channel bounded log log thus following design criteria top lattice negligible flatness factor bottom lattice small error probability component code code channel conditions essentially forney except impose slightly stronger condition top lattice top lattice satisfies reason require negligible achieve capacity gaussian channel become clear next section asymptotically error probability polar code length decreases may desire error probability polar lattice let decrease exponentially next lemma shows first two criteria satisfied growing log see appendix proof lemma consider partition chain number levels log sufficient achieve lemma mostly theoretical interest practical designs target error probability fixed small number levels suffice polar lattices shown channel symmetric optimum input distribution uniform since use binary partition input binary associate representative coset quotient group fact channel bms channel allows polar code achieve capacity let denote output awgn channel given let denote coset chosen assuming uniform input distribution conditional pdf channel input output mod given exp conditional pdf written somewhat different namely conditional pdf offset nevertheless two forms equivalent let offset december draft regularity symmetry capacity separability channel hold offset input fact offset due previous input bits would removed multistage decoder level means code level designed according reason fix prove channel degradation following lemma reason use form consistency case input sect one always let definition degradation consider two channels said stochastically degraded respect exists distribution proof following given appendix lemma consider binary lattice partition chain scale factor orthogonal matrix channel degraded respect channel recall basics polar codes let bms channel input alphabet 
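The per-level capacities that drive the rate allocation can be estimated by Monte Carlo directly from the aliased-Gaussian transition density of the mod-lattice channel described above. A sketch for one binary level bZ/2bZ (level l of the Z/2Z/4Z/... chain corresponds to b = 2^(l-1)); the function name, sample count and truncation of the aliased density are illustrative. Summing the estimates over the levels of a chain reproduces the per-level capacity curves used in the design-example section.

```python
import numpy as np

def mi_one_level(sigma, b=1.0, n_samples=200_000, kmax=20, rng=None):
    """Monte-Carlo estimate (bits/dimension) of the capacity of the bZ/2bZ
    channel: input a uniform on {0, b}, output y = (a + n) mod 2b, n ~ N(0, s^2).
    The aliased Gaussian density is truncated at |k| <= kmax wrap-arounds."""
    rng = np.random.default_rng() if rng is None else rng
    k = np.arange(-kmax, kmax + 1)

    def aliased(t):
        # density of the aliased noise: sum_k N(t - 2bk; 0, sigma^2)
        t = np.asarray(t, dtype=float)[..., None]
        return np.sum(np.exp(-(t - 2.0 * b * k) ** 2 / (2.0 * sigma ** 2)),
                      axis=-1) / np.sqrt(2.0 * np.pi * sigma ** 2)

    a = b * rng.integers(0, 2, size=n_samples)           # uniform coset label
    y = np.mod(a + rng.normal(0.0, sigma, n_samples), 2.0 * b)
    p_y_given_a = aliased(y - a)
    p_y = 0.5 * (aliased(y) + aliased(y - b))
    return np.mean(np.log2(p_y_given_a / p_y))
```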
output alphabet polar codes block codes length input bits let capacity given rate information bits indexed set rows generator matrix denotes kronecker product gives channel channel seen bit given proved grows approaches either channel completely noisy channel set completely noisy resp subchannels called frozen set resp information set one sets sends information bits within rule decoding defined otherwise definition bhattacharyya parameter symmetric channel given bms channel transition probability bhattacharyya parameter defined let denote block error probability binary polar code shown lim lim version lemma suggested anonymous reviewer december draft means fraction good channels therefore constructing polar codes equivalent choosing good indices however complexity exact computation bms channel continuous output alphabet appears exponential block length quantization method proposed transforms bms channel continuous output alphabet finite output alphabet also proposed approximation method construct polar codes efficiently bms channel combine two methods together order construct polar codes channel see details shown sufficient number quantization levels approximation error negligible computational complexity still logn component polar codes channels stack construction build polar lattice following lemma shows component codes nested guarantee multilevel construction creates lattice consider two rules determine component codes theoretical practical purposes respectively one capacity rule select channel indices according threshold mutual information rule namely error probability level select channel indices according threshold bhattacharyya parameter advantage rule based bhattacharyya parameter leads upper bound error probability reason use rule practical design well known two rules converge block length goes infinity nesting relation consequence lemma lemma either capacity rule rule component polar codes built multilevel construction nested proof firstly consider rule lemma bms channel degraded version subchannel also degraded respect let threshold codewords generated submatrix whose rows indexed information set information sets two channels respectively given due fact construct polar codes submatrix therefore lemma channel level always degraded respect channel level consequently consider capacity rule nesting relation still holds select channel indices according threshold mutual information lemma bms channel degraded version december draft awgn goodness threshold bhattacharyya parameter block error probability polar code decoding made arbitrarily small increasing block length also capacity loss diminishes therefore following theorem theorem suppose negligible construct polar lattice binary lattice partition chain nested polar codes block length log error probability multistage decoding bounded logarithmic vnr bounded achieve poltyrev capacity fixed sense log remark worth pointing theorem requires mild conditions condition easily satisfied properly scaling top lattice practice target error probability fixed small constant namely scale log thus essential condition finite however capacity loss negligible investigate performance polar lattices following analysis polar codes given proved polar codes need polynomial block length respect gap capacity known scaling exponent lower bound gap constant depends upper bound gap constant depends block error probability given later scaling factor improved thus gap poltyrev capacity polar lattices log corresponding block error probability rpb constant depends 
assuming equal error probabilities component polar codes since fixed gap poltyrev capacity polar lattices also scales polynomially dimension comparison optimal bound lattices given log log opt finite dimensions precise exponential error bound lattices constructed random linear codes given thus given scaling exponent optimum random lattices smaller polar lattices result consistent fact polar codes require larger block length random codes achieve rate error probability december draft olarization aussian haping achieve capacity gaussian channel apply gaussian shaping polar lattice however appears difficult directly section apply gaussian shaping top lattice instead friendly implementation motivated theorem implies one may construct lattice code good constellation precisely one may choose top lattice whose mutual information negligible gap channel capacity bounded theorem construct multilevel code achieve capacity show strategy equivalent implementing gaussian shaping polar lattice purpose employ recently introduced polar codes asymmetric channels asymmetric channels multilevel lattice coding theorem choose good constellation flatness factor negligible let binary partition chain labelled bits induces distribution whose limit corresponds example shown fig case shaping constellation points actually sufficient since total probability points rather close probability bit labelling fig lattice gaussian distribution associated labelling probability given coset indexed bits example chain rule mutual information december draft obtain channels given denote coset indexed according channel transition pdf channel given exp exp exp exp exp recall mmse coefficient distribution unless general asymmetric input means negligible finite power number levels need large following lemma determines large order achieve channel capacity proof found appendix lemma log log mutual information bottom level moreover using first levels incurs capacity loss remark condition log log theoretical practice small constant different capacity negligible see example next section polar codes asymmetric channels since component channels asymmetric need polar codes asymmetric channels achieve capacity fortunately polar codes binary memoryless asymmetric bma channels introduced recently definition bhattacharyya parameter bma channel let bma channel input output let denote input distribution channel transition probability respectively bhattacharyya parameter channel defined note definition definition uniform following lemma shows adding observable output increase lemma conditioning reduces bhattacharyya parameter let december draft proof xxq follows inequality let input output vector independent uses simplicity denote distribution pxy following property polarized random variables well known theorem polarization random variables lim lim lim lim lim lim bhattacharyya parameter asymmetric models originally defined distributed source coding duality channel coding source coding also used construct polar codes bma channels actually bhattacharyya parameter single source without side information bhattacharyya parameter bma channel related symmetrized channel aim use symmetrization technique creates symmetrized channel bma channel following lemma implicit make explicit lemma symmetrization let channel input output built asymmetric channel shown fig suppose input uniformly distributed holds symmetrized channel note definition symmetrized channel slightly different conventional symmetric channel since condition input distribution imposed december 
draft fig relationship asymmetric channel symmetrized channel proof equalities follow dependent independent following theorem connects bhattacharyya parameter bma channel symmetrized channel denote combining channels uses respectively theorem connection bhattacharyya parameters let uniform input output vectors respectively let bhattacharyya parameter subchannel equal subchannel position construct polar codes bma channel define frozen set information set symmetric polar codes follows frozen set information set theorem bhattacharyya parameters symmetrized channel asymmetric channel however channel capacity capacity obtain real capacity input distribution needs adjusted polar lossless source coding indices small removed information set symmetrized channel proportion part name remaining set information set asymmetric channel bits uniformly distributed made independent information bits name set frozen set order generate desired input distribution remaining bits determined december draft fig polarization symmetric asymmetric channels bits call shaping set process depicted fig formally define three sets follows frozen set information set shaping set find sets one use theorem calculate known technique symmetric polar codes note computed similar way one constructs symmetrized channel actually binary symmetric channel cross probability construction equivalent implementing shaping polar code symmetrized channel besides construction decoding also converted symmetric polar code means decoding result equals thus decoding polar code treated decoding polar code given clearly decoding complexity asymmetric channel also log summarize observation following lemma lemma decoding asymmetric channel let realization previous estimates likelihood ratio given denotes transition probability subchannel bits chosen according also calculated using treating independent variable remove however order compatible polar lattices modify scheme bits uniformly distributed bits still december draft chosen according expectation decoding error probability still vanishes following theorem extension result theorem give proof appendix completeness theorem consider polar code following encoding decoding strategies bma channel encoding sending codeword index set divided three parts frozen set information set shaping set defined encoder places uniformly distributed information bits fills uniform random sequence shared encoder decoder bits generated mapping family randomized mappings yields following distribution probability probability decoding decoder receives estimates according rule encoding decoding message rate arbitrarily close expectation decoding error probability randomized mappings satisfies consequently exists deterministic mapping practice share mapping encoder decoder let access source randomness using seed pseudorandom number generators multilevel polar codes next task construct polar codes achieve mutual information levels construction preceding subsection readily applicable construction first level demonstrate construction levels take channel second level example also bma channel input output side information channel transition probability shown construct polar code second level propose following procedure step construct polar code bms channel input vector output vector uniformly distributed step regarded output distribution becomes marginal distribution consider polarized random variables according theorem december draft fig first step polarization construction second level polarization gives three sets shown fig 
similarly prove three sets defined follows frozen set information set shaping set step treat side information encoder given choices restricted since generally correlated fig removing bits almost deterministic given obtain information set distribution input becomes conditional distribution process shown fig precisely indices divided three portions follows give formal statement procedure following lemma lemma first step polarization obtain three sets let denote set indices whose bhattacharyya parameters satisfy proportion asymptotically given limn december draft fig second step polarization construction second level removing obtain true information set formally three sets obtained follows frozen set information set shaping set proof firstly show proportion set goes define slightly different set suppose constructing asymmetric polar code channel difficult find limn theorem furthermore lemma immediately therefore difference definitions lies denoting unpolarized set lim lim result limn limn secondly show lemma get difference definitions lies observe union would remove condition accordingly also found proportion goes summarize main results following theorem december draft theorem coding theorem multilevel polar codes consider polar code following encoding decoding strategies channel second level channel transition probability shown encoding sending codeword index set divided three parts frozen set information set shaping set encoder first places uniformly distributed information bits frozen set filled uniform random sequence shared encoder decoder bits generated mapping form family randomized mappings yields following distribution probability probability decoding decoder receives rule estimates based previously recovered according note probability calculated efficiently treating already decoded decoder level outputs asymmetric channel encoding decoding message rate arbitrarily close expectation decoding error probability randomized mappings satisfies consequently exists deterministic mapping proof theorem given appendix obviously theorem generalized construction polar code channel level result construct difference side information changes polar code achieves rate arbitrarily close vanishing error probability omit proof sake brevity achieving channel capacity far constructed polar codes achieve capacity induced asymmetric channels levels since sum capacity component channels nearly equals mutual information since choose good constellation log snr constructed lattice code achieve capacity gaussian channel summarize construction following theorem theorem choose good constellation negligible flatness factor negligible theorem construct multilevel polar code log log snr message rate approaches log snr error probability multistage decoding bounded december draft remark simple generate transmitted codeword proposed scheme let transmitted codeword drawn proof lemma know probability choosing point outside interval negligible sufficiently large implies exists one point interval probability close therefore one may simply transmit mod modulo operation applied range next show multilevel polar coding scheme equivalent gaussian shaping coset polar lattice translate fact polar lattice exactly constructed corresponding symmetrized channels recall channel bma channel input distribution clear lemma transition probability symmetrized channel exp exp note difference asymmetric channel symmetrized channel priori probability comparing channel see symmetrized channel equivalent channel since common terms front sum 
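The encoders in the coding theorems above split the indices into frozen, information and shaping positions, the last filled by randomised rounding from the conditional law of the bit given the preceding bits. The sketch below shows that control flow together with the polar transform (bit-reversal permutation omitted); the conditional probability is passed in as a callable stand-in, since the successive-cancellation recursion that produces it in the actual construction is not reproduced here, and the toy usage values are purely illustrative.

```python
import numpy as np

def polar_transform(u):
    """x = u F^{(x)n} over GF(2), F = [[1,0],[1,1]]; bit-reversal not applied."""
    x = np.array(u, dtype=int)
    step = 1
    while step < x.size:
        for i in range(0, x.size, 2 * step):
            x[i:i + step] ^= x[i + step:i + 2 * step]
        step *= 2
    return x

def encode_with_shaping(message_bits, frozen_set, info_set, shaping_set,
                        shaping_prob, shared_frozen_bits, rng=None):
    """Message bits on the information set, shared uniform bits on the frozen
    set, randomised-rounding bits (u_i = 1 w.p. shaping_prob(u[:i], i)) on the
    shaping set, followed by the polar transform."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(frozen_set) + len(info_set) + len(shaping_set)
    u = np.zeros(n, dtype=int)
    msg, frz = iter(message_bits), iter(shared_frozen_bits)
    info_set, frozen_set = set(info_set), set(frozen_set)
    for i in range(n):
        if i in info_set:
            u[i] = next(msg)
        elif i in frozen_set:
            u[i] = next(frz)
        else:                         # shaping position
            u[i] = int(rng.random() < shaping_prob(u[:i], i))
    return polar_transform(u)

# toy usage with a uniform stand-in for the conditional law (length-8 code):
# x = encode_with_shaping([1, 0, 1], {3, 5, 6}, {1, 2, 7}, {0, 4},
#                         lambda prefix, i: 0.5, [1, 0, 1])
```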
completely cancelled calculation likelihood summarize foregoing analysis following lemma lemma equivalence lemma consider multilevel lattice code constructed constellation gaussian channel noise variance symmetrized channel derived asymmetric channel equivalent channel noise variance thus resultant polar codes symmetrized channels nested polar lattice noise variance also multistage decoding performed signal lemma since frozen sets polar codes filled random bits rather zeros actually obtain coset polar lattice shift accounts effects random frozen bits finally since start would obtain without coding since construction obtain discrete gaussian distribution remark analysis shows proposed scheme explicit construction lattice gaussian coding introduced applies gaussian shaping lattice coset note condition even sum hence likelihood ratio one takes mod uses december draft fig channel capacity partition chain curve partition channel translate curve discrete bms approximation based method quantization levels negligible theorem condition imposed construction polar lattice section iii theorem always possible scale top lattice become negligible theorem thus theorem holds snr meaning removed condition snr required theorem moreover good constellation form shift used practice constellation taking values proposed construction holds verbatim remark lemma power discrete gaussian distribution never greater remark shaping method proposed section gaussian shaping performed bottom lattice however requires negligible hold general esign xamples section give design examples polar lattices based one partition chain without power constraint design follows rule multistage decoding applied since complexity decoding log overall decoding complexity log design examples without power constraint consider lattice partition construct multilevel lattice one needs determine number levels lattice partitions actual rates according target error probability reason condition snr stringent condition imposed flatness factor namely negligible december draft fig polar lattice two levels given noise variance guidelines given section effective levels achieve target error probability actual rate close either therefore one determine number effective levels help capacity curves fig example given noise variance indicated straight line fig one may choose partition two levels component codes indeed suggested multilevel construction multistage decoding shown fig level set code generators chosen matrix standard deviation noise give example length target error probability since bottom level lattice decoder target error probability middle level fig channel capacity middle level top level capacity goal find two polar codes approaching respective capacities block error probabilities channels found first polar code code second polar thus sum rate component polar codes implying capacity loss meanwhile factor therefore rate losses level logarithmic vnr given log fig shows simulation results example seen estimate close actual gap simulation indicates performance component codes important multilevel lattice gap poltyrev capacity largely due capacity losses component codes recall channel first level degraded respect one second level according lemma two polar codes construction turn nested thanks density evolution upper bound block error probability polar december draft fig block error probabilities polar lattices length multistage decoding comparison polar lattices lattices also presented lattices constructed codes partition level changing rule base 
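Two bookkeeping quantities from this part are easy to script: the MMSE rescaling recalled in the equivalence discussion, and the logarithmic VNR of a two-level Construction-D lattice on the Z/2Z/4Z chain given the component-code sum rate. The helpers below are design bookkeeping under those assumptions (per-dimension bottom-lattice volume 4 for the Z/2Z/4Z chain), not the paper's reported figures.

```python
import numpy as np

def mmse_rescaling(sigma_s_sq, sigma_sq):
    """MMSE coefficient alpha = s_s^2/(s_s^2 + s^2) and the equivalent noise
    variance s_s^2 * s^2 / (s_s^2 + s^2) after rescaling the channel output."""
    alpha = sigma_s_sq / (sigma_s_sq + sigma_sq)
    sigma_eq_sq = sigma_s_sq * sigma_sq / (sigma_s_sq + sigma_sq)
    return alpha, sigma_eq_sq

def vnr_db(sum_rate_bits, sigma, bottom_volume=4.0):
    """Logarithmic VNR (dB) of a two-level Construction-D lattice on Z/2Z/4Z:
    per-dimension volume V(L)^(1/n) = bottom_volume * 2^(-sum_rate_bits),
    compared against the Poltyrev limit 2*pi*e*sigma^2."""
    v_per_dim = bottom_volume * 2.0 ** (-sum_rate_bits)
    return 10.0 * np.log10(v_per_dim ** 2 / (2.0 * np.pi * np.e * sigma ** 2))
```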
hamming weight capacity rule channel polarization seen performance polar lattices significantly improved ldpc polar bound ser fig ser lattices dimension around code finite length calculated numerically according plot upper bound block error probability polar lattice fig quite tight performance comparison competing lattices approaching poltyrev capacity dimension around shown fig terms symbol error rate ser simulation curves lattices ser defined average error probability coordinates lattice codeword commonly used literature curve ldpc lattice plotted normalized block error probability december draft obtained corresponding papers note theoretical minimum gap poltyrev capacity dimension among four types lattices compared ldpc lattice weakest performance three similar performance dimension difference within contrast polar lattice lda lattice analytic results ldlc available therefore less understood theory lda lattice slightly better performance polar lattice expense higher decoding complexity log ldpc codes employed assuming would rquire complexity log design examples power constraint satisfy power constraint use discrete lattice distribution shaping mutual information level different snrs shown fig see partition five levels enough achieve awgn channel capacity snr ranging note number levels required design lattice awgn capacity mutual information snr fig channel capacity level function snr december draft shaping set information set frozen set level index set proportion fig proportions shaping set information set frozen set level snr lower bound rate awgn capacity bound bound bound bound bound bound snr fig lower bounds rates achieved polar lattices block error probability block lengths level estimate lower bound code rate block error probability done calculating upper bound block error probability polar code using bhattacharyya parameter target error probability assignments bits information shaping frozen sets different levels shown fig snr fact nearly uniform need shaping first two levels levels actually correspond december draft lattice third level good bits information bits contrast fifth level mostly shaping since message rate already small adding another level clearly would contribute overall rate lattice code finally lower bounds rates achieved polar lattices various block lengths shown fig note gap channel capacity diminishes increases onclusions paper constructed polar lattices approach capacity gaussian channel construction based combination channel polarization source polarization without shaping constructed polar lattices gaussian shaping polar lattice deals power constraint technically involved overall scheme explicit efficient featuring complexity acknowledgments authors would like thank associate editor anonymous reviewers helpful comments ppendix roof emma proof definition flatness factor thus differential entropy gaussian noise bounded log log log log log therefore bounded log second inequality follows fact log loge log ppendix roof emma proof purpose assume azn bzn scaling parameters estimated note partition chains always possible bottom lattice take form bzn one may simply extend partition chain lead upper bound december draft firstly note flatness factor made arbitrarily small scaling top lattice see recall lemma scaling factor let dual lattice corollary exp exp exp exp exp sufficiently small let hence fixed secondly union bound error probability bottom lattice apply chernoff bound want leads fixed binary lattice partition thus conclude log log log ppendix roof emma proof 
lattice partition chain scale channel channel multiplying output channel since scaling factor orthogonal matrix gaussian noise dimension still independent noise variance per dimension increased scaling therefore channel stochastically equivalent channel larger noise variance design examples channel gaussian noise variance equivalent channel gaussian noise variance channel noise variance per dimension equivalent channel noise variance per dimension task prove channel noise variance degraded respect channel noise variance see construct intermediate channel input operation receiver front end depicted fig noise variance channel given per december draft dimension property mod mod mod find concatenated channel consisting channel noise variance followed intermediate channel stochastically equivalent channel noise variance proof completed according definition mod mod mod mod intermediate channel fig relationship channels regarding degradation denotes channel input denote independent additive gaussian noises variances respectively clearly two channels noise variances described channel respectively property modulo operation channel equivalent channel concatenated channel consisting channel intermediate channel ppendix roof emma proof convenience consider partition chain proof extended case sandwiching partition reduces case level selected coset written clearly subset let denote two lattice points smallest norm set without loss generality assume observe gaussian distribution variance find positive integer making probability exp actually need large instance probability larger assume constant december draft interval simultaneously two points outside exp exp exp exp exp exp represents integers means probability choosing goes zero large therefore point interval lies outside without loss generality assume two cosets corresponding respectively exp exp exp exp exp exp exp represents integers since obtain exp exp exp exp assume exp get plog log denotes binary entropy function relationship finally log log exp exp two positive constants therefore log log december draft ppendix roof heorem proof let denote set pairs decoding error occurs bit block decoding error event given according encoding scheme codeword appears probability expectation decoding error probability random mapping expressed define probability distribution variational distance bounded kqu kqu kqu december draft equality follows equation inequality relative entropy inequality holds pinsker inequality kqu kqu ppendix roof heorem proof let denote set triples decoding error occurs bit block decoding error event given according encoding scheme codeword appears probability expectation decoding error probability random mapping expressed define probability distribution variational distance bounded december draft inequation follows equation first summation following fashion proof theorem prove according result coding scheme level already since write clearly one one mapping immediately therefore second summation kqu kqu rest part proof follows fashion proof theorem finally eferences channel polarization method constructing codes symmetric memoryless channels ieee trans inform theory vol july telatar polarization arbitrary discrete memoryless channels proc ieee inform theory workshop itw taormina italy sahebi pradhan multilevel channel polarization arbitrary discrete memoryless channels ieee trans inform theory vol park barg polar codes channels ieee trans inform theory vol mori tanaka polar codes using codes algebraic geometry codes proc ieee inform 
theory workshop itw dublin ireland honda yamamoto polar coding without alphabet extension asymmetric models ieee trans inform theory vol mondelli hassani urbanke achieve capacity asymmetric channels corr vol online available http abbe telatar polar codes multiple access channel ieee trans inform theory vol december draft abbe barron polar coding schemes awgn channel proc ieee int symp inform theory isit russia july erez zamir achieving log awgn channel lattice encoding decoding ieee trans inform theory vol ling belfiore achieiving awgn channel capacity lattice gaussian coding ieee trans inform theory vol ling luzzi belfiore semantically secure lattice codes gaussian wiretap channel ieee trans inform theory vol nazer gastpar harnessing interference structured codes ieee trans inform theory vol zamir shamai erez nested codes structured multiterminal binning ieee trans inform theory vol june zamir lattice coding signals networks cambridge cambridge university press poltyrev coding without restictions awgn channel ieee trans inform theory vol mar sadeghi banihashemi panario lattices construction decoding analysis ieee trans inform theory vol pietro boutros brunel integer lattices based construction proc ieee inform theory workshop itw lausanne switzerland pietro boutros new results construction lattices based sparse matrices proc ieee int symp inform theory isit istanbul turkey july sommer feder shalvi lattice codes ieee trans inform theory vol apr forney wei multidimensional introduction figures merit generalized cross constellations ieee sel areas vol aug kschischang pasupathy optimal nonuniform signaling gaussian channels ieee trans inform theory vol may pietro boutros lda lattices without dithering achieve capacity gaussian channel corr vol online available http forney trott chung coset codes multilevel coset codes ieee trans inform theory vol may micciancio regev reductions based gaussian measures proc ann symp found computer science yan ling construction lattices polar codes proc ieee inform theory workshop itw lausanne switzerland yan ling polar lattices meets forney proc ieee int symp inform theory isit istanbul turkey seidl schenk stierstorfer huber multilevel modulation ieee trans vol wachsmann fischer huber multilevel codes theoretical concepts practical design rules ieee trans inform theory vol july joseph barron least squares superposition codes moderate dictionary size reliable rates capacity ieee trans inform theory vol may fast sparse superposition codes near exponential error probability ieee trans inform theory vol conway sloane sphere packings lattices groups second edition new york kositwattanarerk oggier construction related constructions lattices linear codes int workshop coding cryptography wcc forney coset introduction geometrical classification ieee trans inform theory vol december draft telatar rate channel polarization ieee int symp inform theory isit seoul korea july cover thomas elements information theory new york wiley korada polar codes channel source coding dissertation ecole polytechnique lausanne tal vardy construct polar codes ieee trans inform theory vol pedarsani hassani tal telatar construction polar codes proc ieee int symp inform theory isit russia july hassani alishahi urbanke scaling polar codes ieee trans inform theory vol guruswami xia polar codes speed polarization polynomial gap capacity ieee annual symp foundations computer science focs goldin burshtein improved bounds finite length scaling polar codes ieee trans inform theory vol ingber zamir 
feder finite dimensional infinite constellations ieee trans inform theory vol mar source polarization proc ieee int symp inform theory isit austin usa july mori tanaka performance polar codes construction using density evolution ieee comm vol july december draft
| 7 |
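The polar-lattice design discussion above estimates the achievable rate at each level by upper-bounding the block error probability of the component polar code with a sum of Bhattacharyya parameters. The sketch below illustrates only that selection rule, for the simplest case of a binary erasure channel where the polarization recursion Z -> {2Z - Z^2, Z^2} is exact; the erasure probability, block lengths and target error probability are illustrative assumptions, not values from the paper, whose component channels are induced by the Gaussian lattice partition and would require density evolution or Monte Carlo estimates instead.

```python
# Minimal sketch: information-set selection for a polar code via Bhattacharyya
# parameters, assuming a BEC(eps) so the recursion is exact (illustration only).

def bhattacharyya_bec(eps, n):
    """Bhattacharyya parameters of the 2**n synthesized channels of a BEC(eps)."""
    z = [eps]
    for _ in range(n):
        # 'minus' (degraded) channels first, 'plus' (upgraded) channels second;
        # only the multiset of values matters for rate selection below.
        z = [2 * zi - zi * zi for zi in z] + [zi * zi for zi in z]
    return z

def achievable_rate(eps, n, target_block_error):
    """Greedily add the most reliable synthesized channels while the union
    bound  P_e <= sum of selected Bhattacharyya parameters  stays below target."""
    total, k = 0.0, 0
    for zi in sorted(bhattacharyya_bec(eps, n)):
        if total + zi > target_block_error:
            break
        total += zi
        k += 1
    return k / (1 << n)

if __name__ == "__main__":
    for n in (8, 10, 12):   # block lengths 256, 1024, 4096
        r = achievable_rate(eps=0.5, n=n, target_block_error=1e-3)
        print(f"N = {1 << n:5d}   rate >= {r:.3f}   (BEC capacity = 0.5)")
```

As in the multilevel construction described above, the per-level rate grows with block length because the gap between the selected rate and the level capacity diminishes as polarization sharpens.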
approximate correlation clustering using queries nir anup ragesh dec technion haifa department computer science engineering indian institute technology abstract ashtiani nips introduced framework clustering ssac learner allowed make samecluster queries specifically model query oracle answers queries form given two vertices belong optimal cluster many clustering contexts kind oracle queries feasible ashtiani showed usefulness query framework giving polynomial time algorithm clustering problem input dataset satisfies separation condition ailon extended work approximation setting giving efficient algorithm small dataset within ssac framework work extend line study correlation clustering problem correlation clustering graph clustering problem pairwise similarity dissimilarity information given every pair vertices objective partition vertices clusters minimise disagreement maximises agreement pairwise information given input problems popularly known mindisagree maxagree problems mindisagree maxagree versions problems number optimal clusters exist polynomial time approximation schemes ptas mindisagree maxagree approximation guarantee small running time polynomial input parameters exponential get significant running time improvement within ssac framework cost making small number queries obtain algorithm small running time polynomial input parameters also also give upper lower bounds number queries lower bound based exponential time hypothesis eth note existence efficient algorithm mindisagree ssac setting exhibits power queries since polynomial time algorithm polynomial even possible classical nir ailon acknowledges generous support isf grant number anup bhattacharya acknowledges support tcs fellowship iit delhi ragesh jaiswal acknowledges support grant email address nailon email addresses anupb rjaiswal setting due conditional lower bounds conditional lower bound particularly interesting establishes lower bound number cluster queries ssac framework also establishes conditional lower bound running time algorithm mindisagree introduction correlation clustering graph clustering problem given similarity dissimilarity information pairs vertices input graph vertices edges labeled similar positive dissimilar negative clustering objective partition vertices clusters edges labeled positive remain within clusters negative edges across clusters however information may inconsistent objective example may exist vertices edges labeled positive whereas edge labeled negative case possible come clustering vertices would agree edge labels objective correlation clustering come clustering minimises disagreement maximises agreement edge labels given input minimisation version problem known mindisagree minimises sum number negative edges present inside clusters number positive edges going across clusters similarly maximisation version known maxagree objective maximise sum number positive edges present inside clusters number negative edges going across clusters unlike clustering correlation clustering restriction number clusters formed optimal clustering number optimal clusters given problems known mindisagree maxagree respectively bansal gave constant approximation algorithm mindisagree ptas maxagree subsequently charikar improved approximation guarantee mindisagree showed mindisagree results correlation clustering complete graphs known general graphs least hard minimum problem since mindisagree additional assumptions introduced better results example studied mindisagree input noisy comes model given part input giotis 
guruswami gave ptas mindisagree recently works case flavour polynomial time algorithms problems designed stability assumptions ashtiani considered one stability assumption called introduced active learning ssac framework within framework gave probabilistic polynomial time algorithm datasets satisfy property specifically ssac framework involves query oracle answers queries form given two vertices belong optimal cluster query oracle responds answer answers assumed consistent fixed optimal solution framework studied query complexity polynomial time algorithms datasets satisfying property ailon extended work study query complexity bounds ssac framework small without stability assumption dataset gave almost matching upper lower bounds number queries problem ssac framework work study mindisagree ssac framework optimal clustering clusters give upper lower bounds number queries correlation clustering also give upper bounds maxagree algorithm based ptas giotis guruswami mindisagree algorithm giotis guruswami involves random sampling subset vertices considers possible ways partitioning clusters every clusters rest vertices greedily every vertex assigned cluster maximizes agreement edge labels main result following theorem giotis guruswami every ptas mindisagree running time log since giotis guruswami considered possible ways partitioning subset clusters running time exponential dependence make simple observation within ssac framework overcome exponential dependence making queries oracle basic idea randomly sample subset vertices partition optimally clusters making queries oracle note making queries one partition optimally clusters subset partitioned optimal clustering key step needed analysis giotis guruswami follow algorithm analysis mindisagree main result mindisagree ssac framework obtain similar results maxagree theorem main result upper bound let randomized uses ssac framework mindisagree log log log log queries runs time outputs solution high probability complement upper bound result providing lower bound number queries ssac framework efficient algorithm mindisagree lower bound result conditioned exponential time hypothesis eth hypothesis lower bound result implies number queries depended number optimal clusters main result respect query lower bound given follows theorem main result lower bound given exponential time hypothesis eth holds exists constant approximation algorithm mindisagree ssac framework runs polynomial time makes polyklog queries exponential time hypothesis following statement regarding hardness problem exponential time hypothesis eth exist algorithm decide whether formula clauses satisfiable running time note query lower bound result simple corollary following theorem prove theorem exponential time hypothesis eth holds exists constant algorithm mindisagree requires poly log time lower bound statement may independent interest already known mindisagree result addition understanding hardness correlation clustering problem given query upper bound result making simple observations algorithms giotis guruswami lower bound results may regarded primary contribution work first give lower bound results next section upper bound results section however start discussing results brief discussion related works related works numerous works clustering problems semisupervised settings balcan blum proposed interactive framework clustering use queries framework given abritrary clustering query oracle specifies cluster split clusters merged awasthi developed local clustering algorithm uses queries 
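The key observation behind the SSAC framework introduced above is that a sampled subset can be partitioned exactly as in the fixed optimal clustering using same-cluster queries only, at most one query per already-discovered cluster for each new vertex. A minimal sketch of that step is given below; `same_cluster(u, v)` is a hypothetical stand-in for the oracle and is assumed consistent with one optimal clustering, as in the model.

```python
def partition_with_queries(sample, same_cluster):
    """Partition `sample` consistently with the optimal clustering using
    same-cluster queries only: at most (#clusters found so far) queries per
    vertex, i.e. O(k * |sample|) queries in total when there are k clusters."""
    clusters = []          # each cluster is a list of vertices
    representatives = []   # one representative vertex per discovered cluster
    queries = 0
    for v in sample:
        placed = False
        for idx, rep in enumerate(representatives):
            queries += 1
            if same_cluster(v, rep):     # oracle answer, assumed consistent
                clusters[idx].append(v)
                placed = True
                break
        if not placed:                   # v opens a new cluster
            clusters.append([v])
            representatives.append(v)
    return clusters, queries

if __name__ == "__main__":
    # toy usage with an oracle built from a known ground-truth labelling
    truth = {0: "a", 1: "a", 2: "b", 3: "c", 4: "b"}
    oracle = lambda u, v: truth[u] == truth[v]
    print(partition_with_queries(list(truth), oracle))
```

This replaces the exhaustive enumeration of all k-partitions of the sample, which is the source of the exponential dependence on k in the running time without queries.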
one versus queries clustering studied voevodski oracle query returns distances points authors provided clustering close optimal clustering queries instances satisfying stability property fomin gave conditional lower bound cluster editing problem also stated decision version correlation clustering problem editing problem given graph budget integer objective decide whether transformed union clusters disjoint cliques using edge additions deletions assuming eth showed exists algorithm decides time whether transformed union cliques using adjustments edge additions deletions clear whether exact reduction modified approximation preserving reduction obtain results similar mazumdar saha studied correlation clustering problem similar setting edge similarity dissimilarity information assumed coming two distributions given input studied cluster recovery problem ssac framework gave upper lower bounds query complexity lower bound results information theoretic nature however interested approximate solutions correlation clustering problem query lower bounds section obtain lower bound number queries fptas within ssac framework needs make problem mindisagree derive conditional lower bound minimum number queries exponential time hypothesis eth assumption conditional lower bound results based eth found prove following main theorem section theorem exponential time hypothesis eth holds exists constant algorithm mindisagree requires poly log time theorem gives proof theorem proof proof theorem let assume exists makes polyklog queries considering possible answers queries picking best solution one solve problem poly log time contradicts theorem remaining section give proof theorem first state eth hypothesis lower bound results derived assuming hypothesis hypothesis exponential time hypothesis eth exist algorithm decides whether formula clauses satisfiable running time since would like obtain lower bounds approximation domain need gap version eth hypothesis following version pcp theorem would useful obtaining gap version eth theorem dinur pcp theorem constants exists reduction takes formula clauses input produces one formula mpoly log clauses satisfiable satisfiable every clause formula exactly literals unsatisfiable val variable appears clauses val maximum fraction clauses satisfiable assignment hypothesis follows eth theorem useful analysis hypothesis exists constants following holds exist algorithm given formula clauses variable appearing clauses distinguishes whether satisfiable val runs time better poly log lemma given trivially follows dinur pcp theorem lemma hypothesis holds hypothesis give reduction gap version problem gap version problem problem instance consists set clauses containing exactly literals clause said satisfied assignment iff least one two literals clause true nae stands equal instance define maximum fraction clauses satisfied equal sense assignment note different val equal maximum fraction clauses satisfied usual sense first reduce lemma let polynomial time reduction given instance clauses variable appearing clauses produces instance clauses val val variable appears clauses proof construct following manner every variable introduce two variables use iff every reduction every clause literals introduce following four nae clauses index say variable positive form hand example clause following four clauses note property lemma holds due construction property argue satisfying assignment assignment variables per rule iff satisfying assignment nae sense every literal makes clause true two corresponding 
copies satisfies four clauses nae sense property prove contrapositive suppose assignment variables satisfies least fraction clauses argue assignment variables per rule iff satisfies least fraction clauses first note every set clauses created single clause either satisfied four satisfied whatever assignment variable let number clauses satisfied let number clauses satisfied implies note clauses satisfied corresponding clause satisfied respect assignment per rule iff since least one pairs opposite values fraction clauses satisfied least lemma let polynomial time reduction given instance clauses variable appearing clauses produces instance clauses variable appears max clauses proof every clause construct following four clauses let call introducing new variables property trivially holds construction every satisfying assignment way set clause variables every four clauses corresponding clause satisfied property holds show property using contraposition consider assignment satisfies least fraction clauses let denote number per assignment clauses satisfied implies implies note four clauses satisfied corresponding clause satisfied assignment variables implies assignment makes least fraction clauses true come following hypothesis holds given hypothesis holds crucial analysis hypothesis exists constants following holds exist algorithm given formula clauses variable appearing clauses distinguishes whether runs time better poly log lemma given follows easily lemmas lemma hypothesis holds hypothesis give reduction gap version gap version monotone negative variables note nae equal property setting variables necessarily satisfy formula lemma let polynomial time reduction given instance clauses variable appearing clauses produces instance monotone clauses variable appears clauses proof construct following manner substitute positive literals variable negative literals new variables also every variable add following clauses tji uji vij tji uji vij tji uji vij new variables note way satisfy clauses let denote total number clauses also construction variable appears clauses proves property property follows fact satisfying assignment way extend assignment variables clauses satisfied new variables set make new clauses satisfied argue property using contraposition suppose assignment variables makes least fraction clauses satisfied first note also assignment makes least fraction clauses satisfied following clauses satisfied tji uji vij tji uji vij however flip one number clauses satisfied might lose clauses since variable appears clauses let number clauses corresponding original clauses satisfied assignment gives since completes proof lemma come following hypothesis holds given hypothesis holds hypothesis exists constants following holds exist algorithm given monotone formula clauses variable appearing clauses distinguishes whether runs time better poly log lemma follows easily lemma mentioned lemma hypothesis holds hypothesis provide reduction gap version monotone gap version bounded degree hypergraph lemma let exists polynomial time reduction given monotone instance clauses every variable appearing clauses outputs instance hypergraph vertices hyperedges bounded degree satisfiable clauses satisfiable would edges bichromatic proof reduction constructs hypergraph follows set vertices correspond set variables positive literals monotone instance set edges correspond set clauses literals therefore every hyperedge size resulting hypergraph since every variable appears clauses hypergraph bounded degree exists satisfying 
assignment every edge bichromatic hypergraph would fraction clauses satisfiable assignment edges bichromatic come following hypothesis holds given hypothesis holds hypothesis exists constants following holds exist algorithm given hypergraph vertices every vertex degree distinguishes whether bichromatic edges bichromatic runs time better poly log lemma follows easily lemma lemma hypothesis holds hypothesis give reduction hypergraph constant bounded degree correlation clustering instance complete graph use reduction given purposes lemma let reduction given hypergraph vertices vertex appears hyperedges outputs instance correlation clustering problem graph vertices edges edges labeled positive edges complete graph vertices labeled negative following holds cost optimal correlation clustering hyperedges optimal cost correlation clustering least constant come following hypothesis holds given hypothesis holds hypothesis exists constants following holds exist approximation algorithm mindisagree problem runs time better poly poly log lemma follows easily lemma given lemma hypothesis holds hypothesis finally proof theorem follows chaining together lemmas algorithms maxagree mindisagree ssac framework section give algorithms maxagree mindisagree problems within ssac framework maxagree section discuss query algorithm gives maxagree problem algorithm discuss closely related algorithm maxagree giotis guruswami see algorithm maxag fact except changes section look extremely similar section given help mention idea algorithm point changes made within ssac framework obtain desired result algorithm giotis guruswami proceeds iterations given dataset partitioned equal parts ith iteration points assigned one clusters order cluster ith iteration algorithm samples set data points possible checks agreement point clusters suppose particular clustering ski agreement vertices maximised vertices clustered placing cluster maximises agreement respect ski trying possible expensive operation algorithm since running time becomes queries help instead trying possible make use queries find single appropriate ith iteration clustering matches hybrid clustering giotis guruswami running time ith iteration improves moreover number queries made iteration thus making total number queries theorem given details proof theorem given since trivially follows giotis guruswami see theorem theorem query algorithm querymaxag behaves follows input labelling edges complete graph vertices probability least algorithm querymaxag outputs clustering graph number agreements induced least optimal number agreements induced running time algorithm log moreover number queries made querymaxag log using simple observation see proof theorem get query algorithm gives guarantee ssac framework mindisagree section provide algorithm mindisagree small giotis guruswami provided algorithm mindisagree work extend algorithm make work ssac framework aid queries thereby improve running time algorithm considerably query algorithm closely based algorithm giotis guruswami fact except small crucial change algorithms begin discussing main ideas result giotis guruswami lemma theorem every algorithm mindisagree running time log algorithm giotis guruswami builds following ideas first discussion previous section know fptas within ssac framework maxagree therefore unless optimal value mindisagree small small complement solution maxagree would give valid solution mindisagree since small implies optimal value maxagree large means random vertex graph lot edges incident agree optimal 
clustering suppose given random subset vertices optimally clustered let assume sufficiently large since edges agreement optimal clustering would able assign vertices respective clusters greedily arbitrary assign number edges agree maximized giotis guruswami observed clustering vertices manner would work high probability vertices belong large clusters vertices small clusters may able decide assignments high probability carry greedy assignment vertices clusters filter clusters sufficiently large recursively run procedure union small clusters randomly sampled subset vertices giotis guruswami try possible ways partitioning clusters order partition optimally clusters ensures least one partitions matches optimal partition however exhaustive way partitioning imposes huge burden running time algorithm fact algorithm runs log time using access query oracle obtain significant reduction running time algorithm query oracle pairs vertices since query answers assumed consistent unique optimal solution optimal clustering vertices accomplished using queries kpartitioning sample consistent optimal follow remaining steps approximation analysis follows query algorithm completeness give modified algorithm figure let oracle take two vertices input return yes belong cluster optimal clustering otherwise main theorem giving approximation guarantee algorithm stated earlier proof follows proof similar theorem theorem giotis guruswami theorem let input labelling querymindisagree returns number disagreements within factor optimal readers familiar realise statement theorem slightly different statement similar theorem theorem querymindisagree input labeling yes output constants edges graph oracle return run querymaxag input accuracy obtain clusm log independently uniformly set pick sample size random replacement optimally cluster making queries oracle let let liu number edges agree nodes let arg maxi liu index cluster maximizes quantity cju cju let set large small clusters large small large let cluster clusters using recursive calls querymindisagree let clusm clustering obtained clusters return better clusm clusm algorithm query version algorithm giotis guruswami even though approximation analysis query algorithm remains algorithm giotis guruswami running time analysis changes significantly let write recurrence relation running time recursive algorithm let denote running time algorithm node graph supposed clustered clusters given precision parameter using results previous subsection running time step running time partitioning set given log steps would cost time recurrence relation running time may written log log log simplifies log far queries concerned write similar recurrence relation log simplifies log completes proof theorem conclusion open problems work give upper lower bounds number queries obtain efficient correlation clustering complete graphs ssac framework ashtiani lower bound results based exponential time hypothesis eth interesting open problem give unconditional lower bounds problems another interesting open problem design query based algorithms faulty oracles setting practical since many contexts may known whether two vertices belong optimal cluster high confidence mitzenmacher tsourakakis designed query based algorithm clustering query oracle similar model answers whether two vertices belong optimal cluster otherwise answers may wrong probability less use algorithm obtain mindisagree faulty oracle however needed stronger query model obtain good clusterings designing efficient algorithm mindisagree faulty 
oracle interesting open problem references nir ailon anup bhattacharya ragesh jaiswal amit kumar approximate clustering queries corr haris angelidakis konstantin makarychev yury makarychev algorithms stable problems acm sigact symposium theory computing pages specifically claim function call parameter rather done allow recursive call step made value precision parameter initial call change approximation analysis crucial running time analysis hassan ashtiani shrinu kushagra shai clustering samecluster queries nips pages pranjal awasthi balcan konstantin voevodski local algorithms interactive clustering icml pages balcan avrim blum clustering interactive feedback international conference algorithmic learning theory pages balcan avrim blum anupam gupta clustering approximation stability journal acm jacm maria florina balcan yingyu liang clustering perturbation resilience siam journal computing nikhil bansal avrim blum shuchi chawla correlation clustering machine learning moses charikar venkatesan guruswami anthony wirth clustering qualitative information journal computer system sciences irit dinur pcp theorem gap amplification acm fedor fomin stefan kratsch marcin pilipczuk pilipczuk yngve villanger tight bounds parameterized complexity cluster editing small number clusters journal computer system sciences ioannis giotis venkatesan guruswami correlation clustering fixed number clusters symposium discrete algorithm pages russell impagliazzo ramamohan paturi complexity journal computer system sciences russell impagliazzo ramamohan paturi francis zane problems strongly exponential complexity journal computer system sciences konstantin makarychev yury makarychev aravindan vijayaraghavan correlation clustering noisy partial information colt pages pasin manurangsi ratio approximating densest corr claire mathieu warren schudy correlation clustering noisy input symposium discrete algorithms pages arya mazumdar barna saha query complexity clustering side information arxiv preprint michael mitzenmacher charalampos tsourakakis predicting signed edges log queries arxiv preprint konstantin voevodski balcan heiko teng xia efficient clustering limited distance information conference uncertainty artificial intelligence pages
| 8 |
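Both query algorithms in the paper above reuse the greedy extension step of Giotis and Guruswami: once the random sample has been clustered (exactly, via same-cluster queries), every remaining vertex is assigned to the sample cluster that maximizes its agreement with the +1/-1 edge labels. The sketch below shows only this extension step, not the full recursive algorithm; `labels` is a hypothetical dictionary of pairwise labels.

```python
def greedy_assign(remaining, sample_clusters, labels):
    """Assign each vertex outside the clustered sample to the cluster that
    maximizes agreement with the sample: +1 edges towards that cluster plus
    -1 edges towards the other sample clusters."""
    def label(u, v):
        return labels.get((u, v), labels.get((v, u)))

    assignment = {}
    for v in remaining:
        best_idx, best_agree = None, None
        for idx, _ in enumerate(sample_clusters):
            agree = 0
            for jdx, other in enumerate(sample_clusters):
                for u in other:
                    l = label(u, v)
                    if (idx == jdx and l == +1) or (idx != jdx and l == -1):
                        agree += 1
            if best_agree is None or agree > best_agree:
                best_idx, best_agree = idx, agree
        assignment[v] = best_idx
    return assignment
```

For a sample of size |S| and k clusters this step costs O(k * |S|) per assigned vertex and makes no oracle queries, which is why the query budget of the overall algorithm is dominated by the sample-partitioning step.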
architecture environmental risk modelling faster robust response natural disasters dario christian schaerer daniele rigo european commission joint research centre institute environment sustainability ispra italy polytechnic school national university asuncion san lorenzo central paraguay politecnico milano dipartimento elettronica informazione bioingegneria milano italy sep abstract demands disaster response capacity european union likely increase impacts disasters continue grow size frequency resulted intensive research issues concerning information modelling multiple sources uncertainty geospatial support one forms assistance frequently required emergency response centres along hazard forecast event management assessment robust modelling natural hazards requires dynamic simulations array multiple inputs different sources uncertainty associated meteorological forecast calibration model parameters software uncertainty also derives data transformation models needed predicting hazard behaviour consequences hand social contributions recently recognized valuable collection mapping efforts traditionally dominated professional organizations architecture overview proposed adaptive robust modelling natural hazards following semantic array programming paradigm also include distributed array social contributors called citizen sensor strategy modelling modelling architecture proposes multicriteria approach assessing array potential impacts qualitative rapid assessment methods based partial open loop feedback control polfc schema complementing traditional accurate assessment discuss computational aspect environmental risk modelling using parallel paradigms high performance computing hpc platforms order implications urgency introduced systems keywords geospatial integrated natural resources modelling management semantic array programming warning system remote sensing parallel application high performance computing partial open loop feedback control corresponding author cite schaerer rigo architecture environmental risk modelling faster robust response natural disasters conference computational interdisciplinary sciences paraguay schaerer rigo architecture environmental risk modelling faster robust response natural disasters ccis introduction context pitfalls sciencepolicy interface europe experienced series particularly severe disasters recent years worrying potential impacts similar disasters future projected scenarios economy society climate change range flash floods severe storms western europe expected increasing intensity trend floods central europe volcanic ash clouds eyjafjallajkull eruption large forest fires portugal mediterranean countries biological invasions emerging plant pests diseases potential interact wildfires impact ecosystem services economy substantial uncertainties underlined recent highlights set context systemic changes key sectors overall may expected least persist next decades general trend demands resilience preparedness disaster response capacity likely increase impacts disasters continue grow size frequency even considering growing exposure societal factors aforementioned examples disturbances often characterised system feedbacks impacts may connect multiple natural resources system systems particular multifaceted context landscape ecosystem dynamics show intense interactions disturbances consequence classical disciplinary approaches might perfectly suitable may easily result unacceptable simplifications within broader context broad perspective also vital investigating 
future patterns scale adapting preparedness planning complexity uncertainty associated interactions along severity variety involved impacts urge robust holistic coordinated transparent approaches time complexity problems involved may force analysis enter region mathematization systems context formal control problem able establish effective interface trivial aspect easily recognised even considering peculiarities well known long time environmental data decision support systems entanglement growingly complex ict aspects infrequent characterisation several pitfalls may degrade usefulness process relatively intuitive poor mathematization simplistic approach might result failure subtle pitfalls may lie even appropriately advanced theoretical approach proposed schaerer rigo architecture environmental risk modelling faster robust response natural disasters ccis mathematization resist silo thinking temptations academic pressures force problem fashionable hot topics control theory robust approximations broad complexity may serve egregiously instead solutions oversimplified problems academic claims towards fully automated scientific workflows computational science maybe including capabilities computational models implementing mathematization kinds claims might easily prompt irony among experienced practitioners transdisciplinary modelling environment wstme research pandora box doubtful net advantages complex highly uncertain sensitive problems policy society wstme problems typically possibly never suitable full automation even family problems humans always part computational process also vital accountability aspects certain level autonomic computing capabilities might essential evolvability robustness wstme particular perhaps higher level semantic awareness computational models ability scale multiple dimensions arrays see next section potential pitfall illusion fully automating wstme domain applicability puristic academic silo although promising relatively simple case studies might intrinsically narrow climbing deal wicked problems typical complex environmental systems discussed pitfalls might deserve brief summary first perhaps risk solving wrong problem precisely neglecting key sources uncertainty unsuitable modelled within warmly supported solution given research group emergency operations risks providing myopic decision support emphasised suggesting inappropriate actions inaction missing precaution due potential overwhelming lack information potential chains impacts due lack computational resources decent perhaps even qualitative approximate rapid assessment overcoming pitfalls still open issue would like contribute debate proposing integrated use mitigation approaches focus general aspects modelling architecture computational science support order stakeholders citizens involved participatory information decision support system assimilates uncertainty precaution since silver bullet seems available mitigating intrinsic complexity uncertainty environmental risk modelling array approaches schaerer rigo architecture environmental risk modelling faster robust response natural disasters ccis integrated computational aspects explicitly connected supervision distributed interaction human expertise follows idea boundary classical management strategies natural resources hazards driven automatic control problem formulations minimize risk score function scenario modelling merely supporting understandable information sorry thing risk score function precisely defined fuzzy modelling management aspects may 
computationally intensive integration transdisciplinary problem integrated natural resources modelling management inrmm environmental risk modelling architecture figure illustrates general modelling conceptualization interactions among natural hazard behaviour related transdisciplinary impacts risk management control strategies taken account special focus many sources uncertainty leads robust modelling architecture based paradigm semantic array programming semap emphasis array input intermediate output array modules dealing arrays hazard models hjf dynamic information forecasts meteorology static parametrisation spatial distribution land cover considered multiplicity derives many sources uncertainty affect estimation implementation software modules building blocks hazard models hjf furthermore emergency modelling support lack timely accurate monitoring systems large spatial extents continental scale may imply noticeable level uncertainty affect possibly even location natural hazards geoparsing uncertainty peculiar information gap may mitigated integrating remote sensing satellite imagery distributed array social contributors citizen sensor exploiting mobile applications apps online social networks remote sensing citizen sensor designed cooperate complementing accurate often less timely geospatial information distributed alert notifications citizens might timely necessarily accurate safe integration implies supervision human expertise even task may supported automatic tools assessing evolution timespan tbegin tend certain hazard event associated array impacts may also complex particular array impacts often irreducible unidimensional quantity monetary cost schaerer rigo architecture environmental risk modelling faster robust response natural disasters ccis figure modular architecture environmental risk modelling based urgent hpc follows semantic array programming paradigm image adapted integrating inputs remote sensing meteo data citizen sensor analysis systems subject environmental risk natural resources management may naturally lead multi criteria control problems might benefit advanced machine learning techniques schaerer rigo architecture environmental risk modelling faster robust response natural disasters ccis mitigating involved huge computational costs indeed multiplicity modelling dimensions states controls arrays parameters scenarios arrays modules account software uncertainty may easily lead exponential increase required computational processes called curse dimensionality viable mitigation strategy might offered hpc tools urgent hpc order sample highdimensional modelling space proper method box nutshell context demands resilience preparedness disaster response capacity likely increase impacts disasters continue grow classical disciplinary approaches might perfectly suitable may result unacceptable simplifications broader context pitfalls mathematization systems contex formal control problem able establish effective interface academic silo thinking stop advertising oversimplification fit control theory hot topics although family problems humans always part computational process despite academic potential illusion fashionable full automation evolvability adapting models new emerging needs knowledge robustness supporting decision processes would still need higher level semantic awareness computational models ability scale multiple dimensions arrays multiplicity uncertainty complexity context boundary classical management strategies natural resources hazards scenario modelling fuzzy 
inrmm key aspect soundness relies explicitly considering multiple dimensions problem array uncertainties involved silver bullet seems available reliably attacking amount uncertainty complexity integration methods proposed mitigating integrated approach array programming easily managing multiplicity arrays hazard models dynamic input information static parametrisation distribute array social contributions citizen sensor abstract thus better scalable modularisation design structure interactions semantic array programming proposed consider also array uncertainties data modelling geoparsing software uncertainty array criteria assess potential impacts associated hazard scenarios unevenly available information emergency event may efficiently exploited means polfc schema demanding computations may become affordable emergency event appropriate parallelisation strategy within schaerer rigo architecture environmental risk modelling faster robust response natural disasters ccis semap simplify wstme modelling nontrivial static dynamic geospatial quantities semap paradigm generic module subject semantic checks sem postconditions invariants inputs outputs control problem associated unevenly available dynamic updates field measurements data related hazard emergency emergency manager may thus interested assessing best control strategy given set impacts associated costs approximately estimated rapid assessment currently available data approach implemented partial open loop feedback control polfc approach minimizing overall costs associated natural hazard event time onwards arg minu utu end cost linked corresponding impact assessment criterion polfc schema within semap paradigm may considered dynamic application system dddas finally emergency manager may communicate updated scenarios emergency evolution means geospatial maps executive summary information order stakeholders able assess updated pattern costs preferred control options critical communication constitutes interface must supportive possible designed exploit web map services wms top underpinning free software wstme may accessed normal browser specific apps concluding remarks nsf cyberinfrastructure council report reads hardware performance growing exponentially gate density doubling every months storage capacity every months network capability every months become clear increasingly capable hardware requirement discovery sophisticated software visualization tools middleware scientific applications created used interdisciplinary teams critical turning flops bytes bits scientific breakthroughs transdisciplinary environmental problems ones dealing complexity supporting emergency might appear seemingly intractable nevertheless approximate based computationally intensive modelling may offer new perspective least able support emergency operations qualitative schaerer rigo architecture environmental risk modelling faster robust response natural disasters ccis quantitative scenarios even partial approximate timely investigation potential interactions many sources uncertainty might help emergency managers base control strategies best available although typically incomplete sound scientific information context key aspect soundness relies explicitly considering multiple dimensions problem array uncertainties involved silver bullet seems available reliably attacking amount uncertainty complexity integration methods proposed inspired promising synergy array programming perfectly suited easily managing multiplicity arrays hazard models dynamic input information 
static parametrisation distribute array social contributions citizen sensor transdisciplinary nature complex natural hazards need unpredictably broad multifaceted readiness robust scalability may benefit disciplined abstract modularisation compose models design structure interactions two aspects define semantic array programming semap paradigm whose application extended geospatial aspects proposed consider also array uncertainties data modelling geoparsing software uncertainty array criteria assess potential impacts associated hazard scenarios unevenly available information emergency event may efficiently exploited means partial open loop feedback control polfc schema already successfully tested integrated approach promising evolution adaptive strategies demanding computations may become affordable emergency event appropriate parallelisation strategy within references sippel otto beyond climatological extremes assessing odds hydrometeorological extreme events europe change warming climate climatic change cirella natural hazard risk assessment management methodologies review europe linkov sustainable cities military installations nato science peace security series environmental security springer netherlands ciscar climate impacts europe integrated economic assessment impacts world international conference climate change effects potsdam institute climate impact research pik ciscar climate impacts europe jrc peseta project vol eur scientific technical research publ eur union dankers feyen climate change impact flood hazard europe assessment based climate simulations gaume compilation data european flash floods journal hydrology schaerer rigo architecture environmental risk modelling faster robust response natural disasters ccis marchi characterisation selected extreme flash floods europe implications flood risk management journal hydrology feser storminess north atlantic northwestern europe review quarterly journal royal meteorological society jongman increasing stress finance due large floods nature climate change self effects consequences large explosive volcanic eruptions philosophical transactions royal society mathematical physical engineering sciences swindles perspective volcanic ash clouds affecting northern europe geology gramling volcano rumbles scientists plan aviation alerts science allard state mediterranean forests fao schmuck forest fires europe middle east north africa publications office european union nijhuis forest fires burn nature boyd consequence tree pests diseases ecosystem services science venette summary international pest risk mapping workgroup meeting sponsored cooperative research program biological resource management sustainable agricultural systems international pest risk mapping workgroup meeting advancing risk assessment models invasive alien species food chain contending climate change economics uncertainty organisation economic development oecd maes mapping assessment ecosystems services analytical framework ecosystem assessments action biodiversity strategy publications office european union european commission communication commission european parliament council european economic social committee committee regions new forest strategy forests sector com final communication commission council european parliament european commission commission staff working document accompanying document communication commission european parliament council european economic social committee committee regions new forest strategy forests sector commission staff working 
document final barredo major flood disasters europe natural hazards barredo upward trend normalised windstorm losses europe natural hazards earth system science evans predictive ecology systems approaches philosophical transactions royal society london series biological sciences phillis kouikoglou hierarchy biodiversity conservation problems ecological modelling langmann role climate forcing volcanic sulphate volcanic ash advances meteorology gottret white assessing impact integrated natural resource management challenges experiences ecology society schaerer rigo architecture environmental risk modelling faster robust response natural disasters ccis hagmann success factors integrated natural resource management lessons practice ecology society zhang scaling issues environmental modelling wainwright mulligan eds environmental modelling finding simplicity complexity wiley estreguil forest landscape europe pattern fragmentation connectivity eur scientific technical research jrc turner disturbance landscape dynamics changing world ecology van westen remote sensing gis natural hazards assessment disaster risk management bishop remote sensing giscience geomorphology vol treatise geomorphology elsevier urban crucial step toward realism responses climate change evolving metacommunity perspective evolutionary applications baklanov environmental risk assessment modelling scientific needs expected advancements ebel davitashvili eds air water soil quality modelling risk impact assessment nato security science series springer netherlands steffen anthropocene global change planetary stewardship ambio white value coordinated management interacting ecosystem services ecology letters rigo software uncertainty integrated environmental modelling role semantics open science geophys res abstr rigo exp behind horizon reproducible integrated environmental modelling european scale ethics practice scientific knowledge freedom research submitted lempert may new decision sciences complex systems proceedings national academy sciences suppl rammel managing complex adaptive systems perspective natural resource management ecological economics van der sluijs uncertainty dissent climate risk assessment perspective nature culture rigo architecture adaptive robust modelling wildfire behaviour deep uncertainty ifip adv inf commun technol guariso web server collection distribution environmental data kosmatin fras mussio crosilla podobnikar eds bridging gap isprs workshop ljubljana february collection abstracts ljubljana guariso werthner environmental decision support systems horwood halsted press guariso decision support systems water management lake como case study european journal operational research integrated participatory water resources management theory elsevier guariso page eds computer support environmental impact assessment proceedings ifip working conference computer support environmental impact assessment cseia como italy october casagrandi guariso impact ict environmental sciences citation analysis environmental modelling software schaerer rigo architecture environmental risk modelling faster robust response natural disasters ccis cole interoperability crisis human factors organisational processes tech royal united services institute sterman models wrong reflections becoming systems scientist system dynamics review weichselgartner kasperson barriers interface toward global environmental change research global environmental change bainbridge ironies automation automatica rigo toward open science european scale 
geospatial semantic array programming integrated environmental modelling geophysical research abstracts stensson jansson autonomous technology sources confusion model explanation prediction conceptual shifts ergonomics russell dealing ghosts managing user experience autonomic computing ibm systems journal anderson making autonomic computing systems accountable problem human computer interaction database expert systems applications proceedings international workshop ieee kephart chess vision autonomic computing computer van der sluijs uncertainty monster interface four coping strategies water science technology frame wicked messy clumsy frameworks sustainability environment planning government policy mcguire silvia effect problem severity managerial organizational capacity agency structure intergovernmental collaboration evidence local emergency management public administration review bea new approach risk implications risk management adams hester errors systems approaches international journal system systems engineering larsson decision evaluation response strategies emergency management using imprecise assessments journal homeland security emergency management innocenti albrito reducing risks posed natural hazards climate change need participatory dialogue scientific community policy makers environmental science policy ravetz science precaution futures rigo integrated natural resources modelling management minimal redefinition known challenge environmental modelling excerpt call shared research agenda toward scientific knowledge freedom maieutike research initiative rigo semantic array programming environmental modelling application mastrave library int congress environmental modelling software managing resources limited plant rigo semantic array programming mastrave introduction semantic computational modelling corti fire news management context european forest fire information system effis proceedings quinta conferenza italiana sul software geografico sui dati geografici liberi gfoss day sheth citizen sensing social signals enriching human experience internet computing ieee schaerer rigo architecture environmental risk modelling faster robust response natural disasters ccis zhang emergence social community intelligence computer adam spatial computing social media context disaster management intelligent systems ieee fraternali putting humans loop social computing water resources management environmental modelling software exp image geometry correction daily forest fire progression map using modis active fire observation citizens sensor ieee earthzine submitted bosco robust modelling landslide susceptibility regional rapid assessment catchment robust fuzzy ensemble ifip adv inf commun technol leo dynamic data driven ensemble wildfire behaviour assessment case study ifip adv inf commun technol ackerman heinzerling pricing priceless analysis environmental protection university pennsylvania law review gasparatos embedded value systems sustainability assessment tools implications journal environmental management rigo programming efficient management reservoir networks proceedings modsim international congress modelling simulation vol model simul soc australia new zealand injecting dynamic data dddas forest fire behavior prediction lecture notes computer science cencerrado support urgent computing based resource virtualization lecture notes computer science joshimoto implementations urgent computing production hpc systems procedia computer science rigo bosco architecture framework integrated 
soil water erosion assessment ifip adv inf commun technol rigo living forest biomass carbon stock robust fuzzy ensemble ipcc tier maps europe ifip adv inf commun technol model large wildfire behaviour prediction europe procedia computer science castelletti design water reservoir policies based inflow prediction mcinerney developing forest data portal support decision making ieee sel top appl earth obs remote sens bastin web services forest data analysis monitoring developments eurogeoss ieee earthzine free open source software underpinning european forest data centre geophysical research abstracts national science foundation cyberinfrastructure council cyberinfrastructure vision century discovery tech nsf national science foundation altay green research disaster operations management european journal operational research adaptive system forest fire behavior prediction computational science engineering cse ieee international conference ieee
| 5 |
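The POLFC schema described in the paper above repeatedly re-solves, at each information update, an open-loop control problem over the remaining horizon using the currently available field data and scenario-based cost estimates, applies only the first control, and re-plans when new information arrives. The following receding-horizon sketch is a generic illustration of that loop: the cost model, the scenario generator and the discrete control set are placeholders and not part of the original architecture.

```python
import itertools, random

def polfc_step(state, horizon, controls, scenarios, cost):
    """Choose the control sequence minimizing the expected cumulative cost over
    the sampled scenarios, and return only its first element (partial open-loop
    feedback control: the remainder is re-planned at the next update)."""
    best_seq, best_cost = None, float("inf")
    for seq in itertools.product(controls, repeat=horizon):
        expected = sum(cost(state, seq, s) for s in scenarios) / len(scenarios)
        if expected < best_cost:
            best_seq, best_cost = seq, expected
    return best_seq[0], best_cost

if __name__ == "__main__":
    # Placeholder example: state is a hazard-intensity estimate, controls are
    # mitigation levels, scenarios are random forecast perturbations.
    random.seed(0)
    state, controls, horizon = 1.0, (0.0, 0.5, 1.0), 3

    def cost(state, seq, scenario):
        # hypothetical cost: damage grows with unmitigated intensity,
        # mitigation effort has a fixed price per unit
        total, x = 0.0, state
        for u, shock in zip(seq, scenario):
            x = max(0.0, x + shock - u)
            total += x + 0.3 * u
        return total

    for step in range(4):   # new field measurements arrive at each step
        scenarios = [[random.gauss(0.1, 0.2) for _ in range(horizon)]
                     for _ in range(50)]
        u0, c = polfc_step(state, horizon, controls, scenarios, cost)
        state = max(0.0, state + random.gauss(0.1, 0.2) - u0)
        print(f"step {step}: apply u={u0:.1f}, expected cost {c:.2f}, "
              f"new state {state:.2f}")
```

In an urgent-HPC setting the inner loop over control sequences and scenarios is what gets parallelised, since the scenario evaluations are independent of one another.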
emergent failures cascades power grids statistical physics perspective tommaso nesti bert zwart cwi amsterdam alessandro zocca sep california institute technology consider complex networks line failures occur indirectly line flows influenced fluctuating input nodes prime example power grid power generated renewable sources examine propagation emergent failures small noise regime combining concepts statistical physics physics power flow particular characterize rigorously explicitly configuration inputs responsible failures cascades analyze propagation failures often type cascading failures complex networks received lot attention recent years despite proposing different mechanisms cascade evolution common feature works cascade assumed triggered external initiating event contingency initial contingency attack chosen either deliberately targeting vulnerable crucial network component thus aiming analysis uniformly random order understand average reliability network distinction led insight complex networks resilient random attacks vulnerable targeted attacks attacks random deterministic lead direct failure target network line node regime parallel statistical physics allows determine likely configuration power injections leads failures possibly cascades results explicit yield fundamental insights way cascades occur though focus power grid approach applicable network fluctuations node inputs trigger line failures give detailed model description network represented connected graph nodes representing buses directed edges modeling transmission lines stochastic fluctuations net power injections around nominal values modeled multivariate gaussian random vector mean present work look networks line failures occur indirectly small fluctuations nodes inspiration drawn power grid potential vulnerability fluctuations renewable energy sources specifically look power grid static system nodes attacked globally indirectly form small fluctuations power injections due interplay network structure correlations power injections power flow physics fluctuations may cumulate line failures emerge possibly triggering cascades highly relevant due increasing penetration renewable energy sources modern power grids susceptibility weather conditions development poses several challenges design control networks intermittent highly correlated power generation causes random fluctuations line power flows possibly yielding outages cascading failures crucial importance develop appropriate physical models give fundamental insight emerging nature failures challenges analysis tackle use ideas statistical physics large deviations theory consider stochastic model network power injections similar proposed introduce positive real parameter describing magnitude noise focus convention net power injected node differently usual assumption fluctuations independent identically distributed allow heterogeneous standard deviations power injections various nodes well dependencies fluctuations different nodes choosing covariance matrix instrumental instance model positive correlations due geographical proximity wind turbines solar panels gaussianity consistent atmospheric physics section wind turbine statistics assuming mean power injection vector zero sum using approximation chapter line power flows matrix encodes power network topology weights modeling susceptances assumption mean power injection vector zero sum guarantee total net power injected network equal zero random vector minor technical issue easily resolved assuming total power injection 
mismatch distributed uniformly among nodes account matrix details see supplemental material view assumptions line power flows also follow multivariate gaussian distribution namely vector describes average line flows covariance matrix describes correlations line flows taking account correlations power injections encoded matrix created network topology due physics power flows kirchhoff laws via matrix line overloads absolute amount power flowing exceeds given line threshold assume overload immediately leads outage corresponding line henceforth refer simply line failure failure line cause global redistribution line power flows according kirchhoff laws could trigger failures cascades without loss generality assume matrix also contains information line thresholds vector normalized line power flows failure line described event consider scenario power grid operates average safely within limits assuming large fluctuations line flows lead failures correspondingly become rare events assess much network robust initiating failure identify vulnerable lines derive exponential decay probabilities single line failure events namely theory large deviations concerned precisely calculating exponential decay rare events probabilities usually referred rate functions entropy functions since line flows normalized interested evaluating rate function describing failure event line single point refer corresponding value failure decay rate thanks fact line power flow gaussian explicitly calculate see example lim log variance line flow thus small approximate probability emergent failure line exp first emergent failure large variance note accounts power injections variability correlations well much network possibly amplifies mitigates decay rates used identify lines susceptible system noise illustrated figure fig failure decay rates ieee test system likelihood line failures visualized using color gradient red lines vulnerable ones large deviations theory provides analytic tools give valuable insight understanding way specific rare event occurs conditionally failure line power injections configuration exhibits sensible large deviation mean characterized arg inf provided solution unique reads sign sign otherwise vector entry equal zeros elsewhere details supplemental material key finding emergent line failure occur due large deviations power injections neighboring nodes cumulative effect small unusual fluctuations power injections entire network summed power flow physics see figure approach allows differentiate different types line failures calculating line power flow profile corresponding likely power injections configuration leading failure line assess whether likely way failure line occur isolated failure line decay rate depends close normalized average power flow threshold joint failure multiple lines together exists line max exp min fig representation likely power injections configuration ieee test system leading failure green line nodes size adjusted proportionally deviations mean color describes whether deviations positive blue negative red flow unique remaining path opposite direction observation made rigorous showing redistribution coefficients every intuitive neighboring lines positive correlated power flows distant lines negative correlations power flows must sum zero kirchhoff law hence likely power injections configuration makes power flow line exceeding line threshold say becoming larger also makes power flows antipodal half network negative beyond line threshold becoming smaller power flow redistributes see 
figure resulting power injections configurae tion redistributes across altered network subgraph original graph line possible lines case joint failure removed possibly increasing stress remaining lines way redistribution happens governed power flow physics assume occurs instantaneously without transient effects power flow redistribution amounts compute new matrix linking power injections new power flows likely power flow configuration redistribution special case isolated failure say line enough calculate vector redistribution coefficients known line overload distribution factors lodf power system engineering indeed rethe likely power flow configuration distribution equal sign second term depends direction power flowed line overload occurred power flow configuration efficiently used determine lines fail high probability consequence original possibly joint failure checking line indices see supplemental material details rigorous probabilistic theory emergent cascading failures developed combining two ingredients statistical physics results describing likely power injections configuration leading first failure power flow redistribution network afterwards particular approach explains several qualitative features cascades power grids observed empirically prominent one propagation failures illustrate framework transparently explain phenomenon using ring network exactly two paths along power flow two nodes upon failure line power originally flowing line must failed fig left likely power injections leading failure black power flows blue right situation power flow redistribution three subsequent failures values blue failure propagation thus emerges interplay power flow configuration line failure network structure determines alternative paths along power could flow line failure figure shows example nonlocal failure propagation ieee test system fig power injection configuration ieee test system likely causes failure green line power redistribution causes red line fail likely power injection configuration leading emergent failure given line used combination power flow redistribution routines generate failures triggered initial scenario repeating procedure lines one obtain graph joint failures built using large deviations approach cliques influence graph maximal fully connected subgraphs used identify clusters cosusceptable lines lines statistically fail often cascade event table percentage joint failures emergent cascades average number failed lines stage stage emergent cascades classical cascades ieee test systems insightful statistics first two stages emergent cascading failures compare classical cascading failures obtained using nominal power injection values rather likely ones deterministic removal initial failing line numerical experiments line thresholds taken proportional average absolute power flow corresponding lines identity matrix shown table emergent cascades high percentage joint failures average number failures first cascade stage much larger one classical cascades one line removed first cascade stage furthermore expected total number failed lines second cascade stage significantly larger emergent cascades classical cascades lastly failures propagate emergent cascades average bit less far classical cascades illustrated statistics failure jumping distance table graph dec dcc dec dcc table average coefficient variation failure jumping distance stage emergent cascades classical cascades distance two lines measured shortest path endpoints approach also gives constructive way build influence 
graph directed edge connects lines failure line triggers simultaneously redistribution line figure shows example influence graph fig influence graph ieee test system black built using first two stages cascade realizations deeply different structure original network blue proposed viewpoint endogenous cascade failures important practical implications terms power system reliability likely power injection configurations leading possibly joint failures leveraged improve current safety criterion uses nominal values power injection configurations albert albert nakarado structural vulnerability north american power grid physical review feb albert statistical mechanics complex networks reviews modern physics albert jeong error attack tolerance complex networks nature jul berg natarajan mann patton gaussian turbulence impact wind turbine loads wind energy bienstock electrical transmission system cascades vulnerability siam philadelphia dec bienstock chertkov harnett chanceconstrained optimal power flow network control uncertainty siam review cetinay kuipers van mieghem topological investigation power flow ieee systems journal pages cohen erez havlin resilience internet random breakdowns physical review letters cohen erez havlin breakdown internet intentional attack physical review letters apr crucitti latora marchiori model cascading failures complex networks physical review apr crucitti latora marchiori rapisarda efficiency networks error attack tolerance physica statistical mechanics applications mar dobson carreras lynch newman complex systems analysis series blackouts cascading failure critical points chaos interdisciplinary journal nonlinear science jun heide greiner robustness networks cascading failures physical review may hines dobson eppstein dual graph random chemistry methods cascading failure analysis hawaii international conference system sciences pages ieee jan hines dobson rezaei cascading power outages propagate locally influence graph actual grid topology ieee transactions power systems kinney crucitti albert latora modeling cascading failures north american power grid european physical journal mirzasoleiman babaei jalili safari cascaded failures weighted networks physical review oct motter cascade control defense complex networks physical review letters motter lai attacks complex networks physical review powell power system load flow analysis mcgraw hill purchala meeus van dommelen belmans usefulness power flow active power flow analysis ieee power engineering society general meeting pages ieee sun mei interaction model simulation mitigation cascading failures ieee transactions power systems mar schaub lehmann yaliraki barahona structure complex networks quantifying relations flow redistribution network science apr soltan mazauric zussman analysis failures power grids ieee transactions control network systems jun stott jardim alsac power flow revisited ieee transactions power systems sun liu chen yuan error attack tolerance evolving networks local preferential attachment physica statistical mechanics applications jan touchette large deviation approach statistical mechanics physics reports jul wang scaglione thomas markovtransition model cascading failures power grids hawaii international conference system sciences pages ieee jan watts simple model global cascades random networks proceedings national academy sciences apr witthaut rohden zhang hallerberg timme critical links nonlocal rerouting complex supply networks physical review letters mar witthaut timme nonlocal failures 
complex supply networks single link additions european physical journal sep witthaut timme nonlocal effects countermeasures cascading failures physical review sep wood wollenberg sheble power generation operation control john wiley sons edition yang nishikawa motter vulnerability cosusceptibility determine size network cascades physical review letters supplemental material power grid model approximation model power grid network connected weighted graph nodes modeling buses edges representing transmission lines choosing arbitrary fixed orientation transmission lines network structure described incidence matrix definite otherwise denote weight edge corresponding susceptance transmission line convention set transmission line denote diagonal matrix defined diag network topology weights simultaneously encoded weighted laplacian matrix graph defined denote matrix entries equal one relation vector power injections phase angles induce network nodes written matrix form matrix acts global slack ensuring net power injection always identically zero exploiting eigenspace structure calculated literature instead commonly used another matrix calculated using inverse obtained deleting first row first column matrix method implicitly choosing average value zero reference nodes voltage phase angles classical one first node used reference setting phase angle equal zero remark two procedure equivalent one interested line power flows latter depend phase angle differences make use approximation commonly used transmission system analysis according real power flows related phase angles via linear relation view line power flow written linear transformation power injections convenient look normalized line power flow vector defined every line threshold define using nominal average power injection vector choosing tolerance parameter relation line power flows normalized power flows rewritten diagonal matrix diag view normalized power flows expressed terms power injections likely power injection configuration given event solution variational problem arg inf explicitly computed sign sign vector also seen conditional expectation random line power flow vector conditional failure event sign namely sign thus particular every sign cov var note case excluded compactness indeed case variational problem two solutions easily explained power flow line mean equally likely overload event occur likely power injection configurations trigger different previous proposition immediately yields large deviation principle also first line failure event reads following statement write stress dependence line power flows noise parameter line power flows corresponding power injection configuration calculated lim log min large deviation principles failure events indeed decay rate event least one line fails equal minimum decay rates failure line likely power injections configuration leads event arg proposition assume every sequence line power flows satisfies large deviations principle lim log proof proposition let sequence multivariate normal vectors let sequence partial sums setting immediately follows denote following section get log inf log lim log lim inf lim log lim optimizers problems easily computed respectively inf min inf inf easy prove ring network homogeneous line thresholds susceptances every general hence also case joint failures likely power flow configuration redistribution general written matrix constructed analogously considering altered graph instead next proposition shows enough look vector determine whether line survived first 
cascade stage fail jointly fail high probability power redistribution second cascade stage note trivially coefficient computed thus identities immediately follow proposition define following statement hold power flow redistribution lim log every line define collection lines fail jointly fek sign every exists lim log let cardinality note trivially always belongs denote graph obtained removing lines let focus first case isolated failure line case graph obtained removing line provided power injections remain unchanged power flows redistribute among remaining lines using concept resistance matrix approximation proven alternative paths still power flow node exists connected words occur scenario line bridge removal results disconnection original graph two components still connected graph power flows redistribution related original line flows network relation proof proposition denote large deviations theory readily follows lim log inf lim log inf define corresponding decay rates inf inf rewrite lim log therefore lim log notice feasible set minimization problem strictly contained problem implying recall denoted unique optimal solution corresponding line power flow vector let optimal solution define clearly feasible also problem case would optimal solution also thus uniqueness strictly convex leads contradiction since assumption construc tion exists hence conclude lim log proof case analogous statement
| 3 |
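The power-grid text above ranks transmission lines by an exponential failure decay rate computed from the mean and variance of Gaussian line power flows, and flags lines with small rates as the most vulnerable. The sketch below illustrates that ranking numerically; the three-bus network, the noise level, the threshold rule, and the specific Gaussian rate-function form (1 − |ν̄_ℓ|)² / (2 Var ν_ℓ) for a normalized flow reaching its unit threshold are illustrative assumptions, not taken from the paper's own code or data.

```python
import numpy as np

# Toy DC power-flow setup (all numbers are illustrative assumptions).
# Three buses, three lines: (0,1), (1,2), (0,2), unit susceptances.
incidence = np.array([[1, -1, 0],
                      [0, 1, -1],
                      [1, 0, -1]], dtype=float)      # lines x nodes
laplacian = incidence.T @ incidence
# Pseudo-inverse handles the Laplacian's zero eigenvalue (slack handling).
ptdf = incidence @ np.linalg.pinv(laplacian)          # injections -> line flows

mean_injection = np.array([1.0, -0.3, -0.7])          # balanced nominal injections
# Isotropic injection noise; the uniform component is annihilated by the PTDF
# (pinv(L) maps the all-ones vector to zero), so no explicit projection is needed.
cov_injection = 0.05 * np.eye(3)

mean_flow = ptdf @ mean_injection                     # average line flows
cov_flow = ptdf @ cov_injection @ ptdf.T              # induced flow covariance

thresholds = 1.2 * np.abs(mean_flow) + 1e-9           # thresholds ~ |average flow|
nu_mean = mean_flow / thresholds                      # normalized mean flows, |nu| < 1
nu_var = np.diag(cov_flow) / thresholds**2            # normalized flow variances

# Assumed Gaussian large-deviation rate for |nu_l| reaching 1:
# I_l = (1 - |nu_mean_l|)^2 / (2 * Var(nu_l)); smaller I_l => more vulnerable line.
decay_rate = (1.0 - np.abs(nu_mean))**2 / (2.0 * nu_var)

for line, rate in sorted(enumerate(decay_rate), key=lambda kv: kv[1]):
    print(f"line {line}: failure decay rate I = {rate:.2f}")
```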
may submitted annals statistics arxiv norm singular subspace geometry applications joshua cape minh tang carey priebe johns hopkins university singular value matrix decomposition plays ubiquitous role throughout statistics related fields myriad applications including clustering classification dimensionality reduction involve studying exploiting geometric structure singular values singular vectors paper contributes literature providing novel collection technical theoretical tools studying geometry singular subspaces using norm motivated preliminary deterministic procrustes analysis consider general matrix perturbation setting derive new procrustean matrix decomposition together flexible machinery developed norm allows conduct refined analysis induced perturbation geometry respect underlying singular vectors even presence singular value multiplicity analysis yields perturbation bounds range popular matrix noise models meaningful associated statistical inference task discuss norm arguably preferred norm certain statistical settings specific applications discussed paper include problem covariance matrix estimation singular subspace recovery multiple graph inference novel procrustean matrix decomposition technical machinery developed norm may independent interest introduction background geometry singular subspaces fundamental importance throughout wide range fields including statistics machine work partially supported xdata program defense advanced research projects agency darpa administered air force research laboratory afrl contract darpa program administered afrl contract work also supported acheson duncan fund advancement research statistics johns hopkins university msc subject classifications primary secondary keywords phrases singular value decomposition perturbation theory spectral methods procrustes analysis statistics cape tang priebe learning computer science applied mathematics network science singular vectors eigenvectors together corresponding subspaces singular values eigenvalues appear throughout various statistical applications including principal component analysis covariance matrix estimation spectral clustering graph inference name singular subspaces geometry also studied random matrix theory literature come profound influence development statistical theory interest behavior random matrices phenomenon eigenvector delocalization well spectral behavior particular matrices undergoing random perturbation overview recent work spectral properties random matrices particular behavior eigenvectors random matrices see recent survey discussion random matrix theory come impact statistics see recent survey computational perspective optimization algorithms often concerned behavior singular vectors subspaces applications signal processing compressed sensing study algorithmic performance manifolds manifold learning especially grassmann stiefel manifolds motivates related interest collection procrustestype problems indeed procrustes analysis occupies established area within theoretical study statistics manifolds arises applications including diffusion tensor imaging shape analysis see extended treatment theoretical numerical aspects problems foundational results matrix theory literature concerning perturbation singular values singular vectors singular subspaces date back original work weyl davis kahan wedin among others indeed results form backbone much linear algebraic machinery since developed purposes statistical application inference see classical references treatment foundational results 
related historical developments overview paper contributes literature providing novel collection technical theoretical tools studying geometry singular subspaces respect subordinate vector norm matrices described focus alignment singular subspaces terms geometric distance measures collections norm singular subspace geometry singular vectors eigenvectors especially classical sin distance prove singular vector perturbation theorems low rank arbitrary rank matrix settings present main theoretical results quite generally followed concrete consequences thereof facilitate direct statistical applications specifically covariance matrix estimation singular subspace recovery multiple graph inference among advantages methods allow singular value multiplicity require population gap spirit theorem special case general framework recover strengthened version recent results wherein authors obtain norm perturbation bound singular vectors low rank matrices exhibiting specific coherence structure way beyond stated theorems paper results immediately yield analogous applications example robust covariance estimation involving random variables procrustes analysis complements recent study perturbation bounds singular subspaces considered tandem demonstrate setting one recovers nearly bounds particular problem yet another consequence work extend complement current spectral methodology graph inference embedding best knowledge obtain among estimation bounds multiple graph inference presence edge correlation setting precisely paper formulates analyzes general matrix decomposition aligned difference real matrices consisting orthonormal columns partial isometries stiefel matrices orthogonal given denotes orthogonal matrix focus limited particular nice choice corresponds optimal procrustes rotation sense made precise later results implications class related problems along matrix decomposition develop technical machinery subordinate vector norm matrices defined max together results allow obtain suite singular vector perturbation bounds rectangular matrices corresponding via additive perturbation framework singular value decomposition cape tang priebe norm provides finer uniform control entries matrix commonly encountered spectral frobenius norm presence additional underlying matrix perturbation structure norm may well greater operational significance preferred norm consider compressed sensing optimization literature example matrices exhibiting bounded coherence property sense form popular class matrices norm shown right choice norm encountered time time means pervasive either spectral frobenius matrix norm recently appeared study random matrices fraction matrix entries modified another recent use norm wherein clustering certain stochastic block model graphs according adjacency spectral embedding shown strongly universally consistent error among aims paper advocate widespread consideration norm sample application covariance matrix estimation proceeding briefly pause present application work methods estimating top singular vectors structured covariance matrix another result applications covariance matrix estimation presented section theorem denote random vector coordinates let independent identically distributed mean zero multivariate normal random column vectors positive semidefinite covariance matrix denote spectral decomposition unitary matrix singular values indexed order diag diag may thought representing signal spike singular values contains noise bulk singular values note largest singular values assumed distinct 
rather assumption simply requires singular value population gap let matrix row observations denote classical sample covariance matrix spectral decomposition given define difference true sample covariance matrices suppose exhibits bounded coherence sense norm singular subspace geometry denotes conventional notation similarly let denote conventional notation respectively let denote random orthogonal matrix corresponding optimal frobenius norm procrustes alignment discussion see section following performance guarantee estimating matrix top singular vectors theorem consider covariance section along suppose max log let var exists constant probability least log similar results hold generally random vector instead assumed distribution remark setting theorem one often case bound written simplified form log remark although theorem stated respect largest singular values covariance matrix analogous results may formulated collections sequential singular values remainder singular values min end see theorem theorem organization rest paper organized follows section establishes notation motivates use norm context procrustes problems presents perturbation model considered paper section collects general main results fall two categories matrix decompositions matrix perturbation theorems section demonstrates paper improves upon complements existing work literature way considering three statistical applications cape tang priebe specifically covariance matrix estimation singular subspace recovery multiple graph inference section offer concluding remarks sections contain technical machinery developed paper well additional proofs main theorems preliminaries notation paper vectors matrices assumed simplicity symbols used assign definitions denote formal equivalence quantity denotes general constant depending either parameter index may change line line unless otherwise specified positive integer let additionally let denote standard notation denote notation possibly underlying probabilistic qualifying statement similarly let denote conventional bigtheta notation respectively column vectors standard euclidean inner product denoted classical vector norms denoted kxkp maxi let denote set real matrices orthonormal columns denotes set orthogonal matrices rectangular matrix denote singular value decomposition svd singular values arranged nonincreasing order given diag paper makes use several standard consistent matrix norms namely denotes spectral normpof kakf denotes frobenius norm maxj denotes maximum absolute column sum maxi denotes maximum absolute row sum also consider matrix norm precisely vector norm matrices given kakmax maxi norm relations central focus paper vector norm matrices defined proposition establishes elementary fact norm corresponds maximum euclidean row norm matrix propositions catalog relationship several aforementioned commonly encountered matrix norms propositions though straightforward contribute machinery obtaining main results paper norm singular subspace geometry norm attractive quantity due part easily interpretable straightforward compute qualitatively speaking small values capture global rows uniform within row matrix behavior much way small values kmax stands contrast matrix norms capture global necessarily uniform matrix behavior example given observe kakf standard relations norms permit quantitative comparison relative magnitudes kmax particular relations quantities depend upon underlying matrix column dimension namely kakmax contrast relationship depends matrix row dimension proposition 
namely consideration dimensionality relations plays important role motivating approach prove new matrix perturbation results particular may case row dimension large example demonstrates bounding may preferred bounding matter bounding larger quantity kakf given discussion matrix norm relations also recall wellknown relation matrix norms allows interface frobenius norms particular matrix kakf rank pause note norm general matrices particular constrained behavior proposition together matrix multiplication standard properties common matrix spectral frobenius matrix substantial amount flexibility bounding matrix products passing norms reason host matrix norm bounds follow naturally matrix decomposition results section relative strength bounds depend upon underlying matrix model assumptions cape tang priebe singular subspaces procrustes let denote corresponding subspaces columns form orthonormal bases respectively classical matrix decomposition natural measure distance subspaces corresp matrices given via canonical principal angles section specifically singular values denoted indexed order canonical angles given main diagonal elements diagonal matrix diag review decomposition canonical angles see example extensive summary relationships sin distances specifically sin sin well various distance measures provided appendix paper focuses sin distance related distance measures geometrically notion distance corresponds discerning extent rotational angular alignment matrices corresponding subspaces analysis lends establishing distance measures generally given two matrices together set matrices norm general version procrustes problem given optimization problem inf bsk paper considers two specific instances inf inf emphasis former motivated insight respect latter case infimum achieved compactness together continuity specified norms therefore let denote procrustes solution dependence upon underlying matrices implicit context unfortunately neither procrustes problems admits analytically tractable minimizer general contrast instead switching frobenius norm one arrives classical orthogonal procrustes problem admit analytically tractable minimizer denote namely achieves inf norm singular subspace geometry singular value decomposition denoted solution given explicitly given observations therefore natural study surrogate quantities towards end sin distance procrustes problems related sense lemma sin sin sin sin alternatively detailed lemma one bound via sin manner providing clearer demonstration performance close performance namely sin sin sin loosely speaking says relative fluctuation spectral procrustes problem sin simply considering relationship similarly observe sin sin sin whereby lower bound suggests careful analysis may yield tighter upper bound meaningful settings wherein proceed link via perturbation framework established section subsequently added interpretation viewed perturbation structured setting formulate procrustean matrix decomposition section decomposing underlying matrices corresponding quantities sin sin together machinery norm careful analysis subsequently derive collection operationally significant perturbation bounds sections improve upon existing results throughout statistics literature cape tang priebe perturbation framework singular value decomposition rectangular matrices matrix shall denote true unobserved underlying matrix whereas represents observed perturbation unobserved additive error consider respective partitioned singular value decompositions given block matrix form matrices 
contain singular values diag remaining singular values main diagonal possibly padded additional zeros use character simplifying abuse notation employed notational consistency quantities defined analogously note framework employed generally example contains collection sequential singular values interest separated remaining singular values main results procrustean matrix decomposition variants section present matrix decomposition variants procedure deriving matrix decomposition based geometric viewpoint explained section theorem general rectangular matrix setting sections matrix admits decomposition norm singular subspace geometry moreover decomposition still holds replacing orthogonal matrices real matrices respectively analogous decomposition given replacing respectively ease reference state symmetric case theorem corollary absence positive assumption diagonal entries correspond eigenvalues corollary special case symmetric matrices theorem becomes remark reiterate note construction orthogonal matrix depends upon perturbed quantity depends upon error consequently unknown random assumed unknown random since make distinct singular value distinct eigenvalue assumption paper general quantity hope recover presence singular value multiplicity indeed viewed estimate orthogonal transformation specific choice natural given aforementioned motivation statistical inference applications often either invariant equivalent modulo orthogonal transformations given presence nonidentifiability example clustering rows equivalent clustering rows matrix consideration weaken strength applicability results practice also prove convenient work following modified versions theorem stated corollaries cape tang priebe corollary decomposition theorem rewritten corollary corollary equivalently expressed general perturbation theorems position obtain wide class perturbation theorems via unified methodology employing theorem variants norm machinery section geometric observations section remainder section devoted presenting several general perturbation theorems section subsequently discusses several specialized perturbation theorems tailored applications statistics let defined section let denote upper bounds respectively define analogously theorem baseline norm procrustes perturbation bound suppose sin sin sin following theorem provides uniform perturbation bound quantities corollary subsequently yields bound response theorem norm singular subspace geometry theorem general perturbation theorem rectangular matrices suppose max constants sin sin instead rank provided max constants bound still holds corollary uniform perturbation bound rectangular matrices suppose max constants applications section presents several applications matrix decomposition perturbation theorems norm machinery three statistical settings corresponding among others recent work respectively emphasize statistical application theorems well theorem obtained via individualized analysis within broader context unified methodology deriving perturbation bounds made clear proofs theorems statistical application considered paper demonstrate results strengthen complement extend existing work preparation first consider following structural matrix property introduced within context matrix recovery cape tang priebe definition definition let subspace dimension let orthogonal projection onto coherence standard basis defined maxkpu columns span subspace dimension natural abuse notation interchange underlying subspace case propositions allow equivalently write observe upper 
lower bounds achieved consisting standard basis vectors vectors magnitude respectively since orthonormal columns unit euclidean norm mass magnitude viewed describing accumulation mass collection orthonormal singular eigen vectors purposes assumption bounded coherence equiv incoherence discussed corresponds existence positive constant property arises naturally example random orthogonal matrix model corresponds recoverability low rank matrix via nuclear norm minimization sampling subset matrix entries study random matrices bounded coherence closely related delocalization phenomenon eigenvectors examples matrices whose row column spaces exhibit bounded coherence found study networks specifically difficult check property holds top eigenvectors edge probability matrices corresponding model balanced stochastic block model among others remark emphasize throughout formulation general results section never assumed matrix bounded coherence either factors rather working norm procrustes setting results consequently particularly strong interpretable combined additional structural matrix property norm singular subspace geometry singular vector perturbation bounds norms authors specifically consider low rank matrices distinct singular values eigenvalues whose unitary factors exhibit bounded coherence matrices theorems provide singular vector eigenvector perturbation bounds vector norm explicitly depend upon underlying matrix dimension within singular value perturbation setting section paper corollary formulates straightforward perturbation bound upon inspection operationally spirit theorem moreover note bound quantity immediately yields bound quantities kmax infw kmax thereby providing bounds perturbed singular vectors orthogonal transformation analogue sign flips distinct singular values similarly also observe controlling dependence one another follows union assumptions implicitly depending upon underlying matrix dimensions note perturbation bounds hold wider range model settings includes exhibiting singular value eigenvalue multiplicity symmetric matrices likewise improve upon theorem make explicit accordance notation theorem theorem let symmetric matrices rank spectral decomposition diag eigenvalues satisfy define min suppose pkeu kmax exists orthogonal matrix kmax pkeu kmax pku suppose exists positive constant exists orthogonal matrix kmax cape tang priebe theorem improvement theorem consider setting theorem permitted allow repeated eigenvalues suppose exists orthogonal matrix kmax suppose exists positive constant exists orthogonal matrix kmax theorems demonstrate refined analysis yields superior bounds respect absolute constant factors factors eigengap assumptions singular subspace perturbation random matrices section provide example interfaces results recent rateoptimal singular subspace perturbation bounds obtained consider setting wherein fixed matrix random matrix independent standard normal entries theorems imply setting high probability following bounds hold left right singular vectors respectively sin sin observe bound stronger sin sin latter quantity difficult control general eye towards latter quantity following theorem demonstrates analysis allows recover upper lower bounds terms sin differ factor cmax log general log additional assumption bounded coherence norm singular subspace geometry theorem let section rank suppose entries independent standard normal random variables exists constant probability least max log sin addition probability least log sin note lower bound sin always holds 
proposition lemma statistical inference random graphs study networks community detection clustering tasks central interest network alternatively graph consisting vertex set edge set may represented example adjacency matrix captures edge connectivity nodes network inhomogeneous independent edge random graph models adjacency matrix viewed random perturbation underlying often low rank edge probability matrix holds notation section matrix corresponds matrix corresponds matrix corresponds viewing matrix containing top eigenvectors estimate matrix top eigenvectors section theorems immediately apply methods related optimization problems random graphs employ spectral decomposition adjacency matrix matrixvalued functions thereof laplacian matrix variants example recent paper presents general dimensionreduction community detection framework incorporates spectral norm distance leading eigenvectors taken context recent work indeed wider network analysis literature paper complements existing efforts paves way expanding toolkit network analysts include procrustean norm machinery much existing literature networks graph models concerns popular stochastic block model sbm variants related cape tang priebe random dot product graph rdpg model first introduced subsequently developed series papers tractable flexible random graph model amenable spectral methods rdpg model graph eigenvalues eigenvectors closely related model generating latent positions particular top eigenvectors adjacency matrix scaled largest eigenvalues form estimator latent positions orthogonal transformation given existing rdpg literature results paper extend treatment norm procrustes matching graphs specifically bounds section imply version lemma unscaled eigenvectors require model parameter distinct eigenvalues procrustes analysis also suggests refinement test statistic formulation graph inference hypothesis testing framework also worth noting level generality allows consideration random graph matrix models allow edge dependence structure property see indeed moving beyond independent edge models represents important direction future work network science development statistical inference graph data definition random matrix said concentrated given trio positive constants unit vectors every exp remark proofs main theorems demonstrate importance bounding quantities kev perturbation framework section note satisfies concentrated property definition quantities easily controlled example union bounds discussion property holds large class random matrix models see network literature current active research directions include development random graph models exhibiting edge correlation development inference methodology multiple graphs purposes paper shall consider stochastic block model introduced omnibus embedding matrix multiple graphs introduced subsequently employed stochastic block model provides simple yet easily interpretable tractable norm singular subspace geometry model dependent random graphs omnibus embedding matrix provides framework performing spectral analysis multiple graphs leveraging graph dissimilarities similarities definition definition let denote set labeled nvertex simple undirected graphs two random graphs said sbm graphs abbreviated marginally sbm random graphs vertex set union blocks disjoint sets respective cardinalities block membership function denotes block block adjacency probabilities given symmetric matrix pair vertices adjacency independent bernoulli trial probability success random variables collectively independent 
except correlation following theorem provides guarantee estimating eigenvectors corresponding largest eigenvalues multiple graph omnibus matrix graphs independent best knowledge theorem first kind theorem let pair sbm graphs definition corresponding pair symmetric binary adjacency matrices let model omnibus matrix adjacency omnibus matrix given denotes matrix kronecker product matrix assignments denotes edge probability matrix cape tang priebe let rank therefore rank suppose maximum expected degree denoted satisfies section let denote matrices whose columns normalized eigenvectors corresponding largest eigenvalues respectively given diagonal matrices respectively probability asymptotically almost surely one log remark implicit dependence upon correlation factor theorem made explicit careful analysis constant factor probability statement present concern discussion summary paper develops flexible procrustean matrix decomposition variants together machinery norm order study perturbation singular subspaces geometry demonstrated widespread applicability framework results host popular matrix models namely matrices independent identically distributed entries section independent identically distributed rows section independent entries section neither independent identically distributed entries section emphasize application discussed paper underlying problem setting demands analysis terms formulation procrustean matrix decomposition use transition norms example using rectangular matrix notation paper recall assumption bounded coherence led importance product term section whereas case normal matrices section central term interest kev similarly context covariance matrix estimation theorem well theorem note discrepancies model specificity assumptions inspired different approaches deriving stated bounds moreover study directly translates via relation inf kmax inf ample open problems applications exist productive consider norm future paper details three specific applications namely norm singular subspace geometry singular vector estimation perturbation section singular subspace recovery perturbation section statistical estimation inference graphs section hope level generality flexibility presented paper facilitate widespread use norm statistics literature end invite reader apply adapt procrustean matrix decomposition purposes proofs proof procrustean matrix decomposition explain derivation matrix decomposition presented theorem proof theorem first observe matrices equivalently written respectively given block matrix formulation section next explicit correspondence resulting eqn along subsequent leftmultiplication matrix motivates introduction projected quantity write matrix shown small spectral norm lemma via proposition ignoring moment matrix represents geometric residual measure closeness matrix orthogonal matrix immediately clear control quantity given dependence perturbed quantity instead replace consider matrix block matrix form section one check together fact orthogonal projection hence idempotent follows introducing quantity yields cape tang priebe note lemma proposition terms comprising matrix product controlled submultiplicatively certain settings shall useful decompose two matrices note second matrix vanishes given earlier matrix assume additional control quantity rewrite matrix product terms corresponding residual quantity natural choice therefore incorporate orthogonal factor specifically introducing produces moving forward matrix becomes leading term interest gathering terms sides equations 
yields theorem corollaries evident given simply identity matrix proofs general perturbation theorems theorem proof theorem assumption implies since weyl inequality singular values theorem follows corollary together proposition lemma theorem proof theorem corollary consider decomposition norm singular subspace geometry subsequently applying proposition lemma yields sin similarly sin assumption max constants note assumption implies weyl inequality singular values thus combining observations bounds rearranging terms yields sin sin whereby first claim follows since rank matrix vanishes since identically zero corollary therefore becomes similarly removes need assumptions respect terms hence bound holds cape tang priebe corollary proof corollary theorem bound sin sin next wedin sin theorem together general matrix fact max assumption max max sin sin min using properties norm therefore kev kev max max max similarly max max max combining observations yields stated bound proof theorem proof theorem follows constant may change line line first adapting proof theorem symmetric norm singular subspace geometry positive matrices yields bound sin sin sin next collect several observations ken proposition sin theorem assumption implies bounded coherence assumption yields together positive constant theorems applied random vectors covariq ance matrix exists constant ken log probability least similarly applying theorems matrix random vectors covariance log probability least combining observations yields probability least rken log log log matrix consider bound ken rken kmax cape tang priebe hyk iyk hen denote random variable vector orlicz norms sup sup khy product gaussian random variables dis tribution particular term hyk iyk centered random variable independent identically distributed fixed upper bound orlicz norm random variable given terms orlicz norm remark namely khyk iyk iyk random vectors mean zero multivariate normal therefore var together observation var khy bernstein inequality proposition follows log ken combining observation hypotheses yields probability least log log log log log probability least hence norm singular subspace geometry proof theorem proof theorem specializing corollary symmetric case rank yields decomposition rewriting decomposition yields applying technical results sections yields bounds keu assumption symmetric therefore furthermore sin lemma sin theorem therefore keu assumption cape tang priebe therefore keu hence kek proof theorem proof theorem note rank implies matrix vanishes therefore rewriting corollary yields decomposition observe proposition lemma sin furthermore proposition lemma yield sin consider matrix observe columns centered multivariate normal random vectors covariance matrix follows row matrix centered multivariate normal random vector covariance matrix denotes identity matrix gaussian concentration applying union bound hypothesis log log probability least norm singular subspace geometry matrix argument implies entry hence arguments log log probability least hypothesis holds probability least hence setting rate optimal bounds given sin sin combining observations yields log log sin sin log assumption absence bounded coherence assumption sin max log max log sin hand provided tion bounded coherence log sin sin proof theorem proof theorem wish bound observe matrix vanishes since rank fact cape tang priebe together corollary implies bound bound weakened yield proceed bound terms right hand side inequality end straightforward calculation reveals max kai asymptotically 
almost surely maximum expected degree denoted satisfies hypothesis furthermore assumption implies asymptotically almost surely combining observations proof lemma result theorem yields relations sin worth noting relations provide bound underlying quantity interest next matrix consider bound note norm singular subspace geometry observe roles interchanged expansion sum independent bounded mean zero random variables taking values hence hoeffding inequality probability tending one log similarly matrix rku particular sum independent mean zero bounded random variables taking values another application hoeffding inequality probability almost one log note always holds assume bounded coherence hypotheses imply lemma behaves hence bounds sin log probability analysis yields cape tang priebe references zhidong bai jack silverstein spectral analysis large dimensional random matrices vol springer konstantinos benidis ying sun prabhu babu daniel palomar orthogonal sparse eigenvectors procrustes problem ieee international conference acoustics speech signal processing icassp rajendra bhatia matrix analysis gtm new york adam bojanczyk adam lutoborski procrustes problem orthogonal stiefel matrices siam journal scientific computing tony cai zongming yihong sparse pca optimal rates adaptive estimation annals statistics tony cai anru zhang perturbation bounds singular subspaces applications statistics preprint appear annals statistics emmanuel benjamin recht exact matrix completion via convex optimization foundations computational mathematics chen joshua vogelstein vince lyzinski carey priebe joint graph inference case study elegans chemical electrical connectomes worm yasuko chikuse statistics special manifolds vol springer science business media chandler davis william morton kahan rotation eigenvectors perturbation iii siam journal numerical analysis ian dryden alexey koloydenko diwei zhou statistics covariance matrices applications diffusion tensor imaging annals applied statistics ian dryden kanti mardia statistical shape analysis applications john wiley sons alan edelman arias steven smith geometry algorithms orthogonality constraints siam journal matrix analysis applications yonina eldar gitta kutyniok compressed sensing theory applications cambridge university press jianqing fan yuan liao martina mincheva large covariance estimation thresholding principal orthogonal complements journal royal statistical society series statistical methodology jianqing fan philippe rigollet weichen wang estimation functionals sparse covariance matrices annals statistics jianqing fan weichen wang yiqiao zhong eigenvector perturbation bound application robust covariance estimation donniell fishkind daniel sussman minh tang joshua vogelstein carey priebe consistent partitioning stochastic block model model parameters unknown siam journal matrix analysis applications john gower garmt dijksterhuis procrustes problems oxford university press paul holland kathryn blackmond laskey samuel leinhardt stochastic blockmodels first steps social networks roger horn charles johnson matrix analysis cambridge university press norm singular subspace geometry vladimir koltchinskii karim lounici new asymptotic results principal component analysis preprint elizaveta levina roman vershynin optimization via approximation community detection networks annals statistics jing lei alessandro rinaldo consistency spectral clustering stochastic block models annals statistics linyuan xing peng spectra random graphs electronic journal combinatorics vince 
lyzinski information recovery shuffled graphs via graph matching preprint vince lyzinski youngser park carey priebe michael trosset fast embedding jofc using raw stress criterion appear journal computational graphical statistics vince lyzinski daniel sussman minh tang avanti athreya carey priebe perfect clustering stochastic blockmodel graphs via adjacency spectral embedding electronic journal statistics sean rourke van wang random perturbation low rank matrices improving classical bounds preprint eigenvectors random matrices survey journal combinatorial theory series debashis paul alexander aue random matrix theory statistics review journal statistical planning inference carey priebe david marchette zhiliang sancar adali manifold matching joint optimization fidelity commensurability brazilian journal probability statistics elizaveta rebrova roman vershynin norms random matrices local global problems preprint karl rohe sourav chatterjee bin spectral clustering highdimensional stochastic blockmodel annals statistics mark rudelson roman vershynin delocalization eigenvectors random matrices independent entries duke mathematical journal stewart sun matrix perturbation theory academic press daniel sussman minh tang donniell fishkind carey priebe consistent adjacency spectral embedding stochastic blockmodel graphs journal american statistical association daniel sussman minh tang carey priebe consistent latent position estimation vertex classification random dot product graphs ieee transactions pattern analysis machine intelligence minh tang avanti athreya daniel sussman vince lyzinski youngser park carey priebe semiparametric hypothesis testing problem random graphs journal computational graphical statistics minh tang avanti athreya daniel sussman vince lyzinski carey priebe nonparametric hypothesis testing problem random dot product graphs appear bernoulli minh tang carey priebe limit theorems eigenvectors normalized laplacian random graphs preprint ulrike von luxburg tutorial spectral clustering statistics computing cape tang priebe wedin perturbation bounds connection singular value decomposition bit numerical mathematics hermann weyl das asymptotische verteilungsgesetz der eigenwerte linearer partieller differentialgleichungen mit einer anwendung auf die theorie der hohlraumstrahlung mathematische annalen jianfeng yao zhidong bai shurong zheng large sample covariance matrices data analysis cambridge university press stephen young edward scheinerman random dot product graph models social networks international workshop algorithms models webgraph tengyao wang richard samworth useful variant kahan theorem statisticians biometrika supplement supplementary material provide technical proofs pertaining norm singular subspace geometry modification theorem material plays essential role proofs main theorems technical tools norm consider vector norm matrices defined max let denote row following proposition shows corresponds maximum euclidean norm rows proposition max kai proof definition inequality together yield max kai since max max max max kai barring trivial case let denote standard basis vector index given arg kai noting define norm vector max max max kai establishes desired equivalence norm singular subspace geometry remark norm said subordinate respect vector norms since note however submultiplicative matrices general example proposition min proof first inequality obvious since max max max max second inequality holds application equality together vector norm relationship particular max max max 
max spectral norm symmetry remark relationship proposition sharp indeed second inequality take particular tall rectangular matrices spectral norm much larger norm proposition cape tang priebe proof subordinate property yields hence maximizing unit vectors yields equation contrast eqn follows inequality coupled fact vector norms dual one another explicitly max max max max max max proposition kav kav moreover need equal proof statement follows proposition submultiplicativity together observation contrast matrices exhibit singular subspace geometric bounds technical deterministic lemmas let denote corresponding procrustes solution section follows use fact sin chapter lemma let arbitrary following relations hold respect terms sin distance sin sin sin norm singular subspace geometry proof matrix represents residual orthogonally projecting onto subspace spanned columns note several intermediate steps computation yield hand proposition follows sin second matrix may viewed residual measure extent almost optimal rotation matrix optimal respect frobenius norm procrustes problem unitary invariance together interpretation canonical angles denoted cos yields kuu mini cos thus mini maxi sin mini maxi sin lemma quantity bounded follows sin sin sin moreover together lemma min sin sin cape tang priebe proof lower bound follows setting lemma together definition lemma together triangle inequality sin sin proof lemma establishes inf sin completes proof modification theorem prove modified version theorem stated terms sin rather sin although original theorem implies bound quantity sin able remove multiplicative factor depending rank proof approach combines original argument together classical results statement theorem proof interface notation section notation section theorem modification theorem let symmetric matrices eigenvalues respectively write fix assume min let let orthonormal columns satisfying xvj sin proof let diagonal matrices defined diag diag also define diag let diag observe since norm singular subspace geometry inequality due weyl corollary properties spectral norm summary finally application theorem follows sin combining two inequalities yields result department applied mathematics statistics johns hopkins university charles street baltimore maryland usa cep
| 10 |
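The excerpt above revolves around two concrete computations: the two-to-infinity norm of a matrix, which it identifies with the maximum Euclidean row norm, and the Frobenius-optimal orthogonal Procrustes rotation obtained from the singular value decomposition of the product of the estimated and true bases. A minimal sketch of both, on a made-up low-rank-plus-noise symmetric matrix, follows; the dimensions, noise scale, and helper names are invented for illustration and are not the paper's notation.

```python
import numpy as np

def two_to_infinity_norm(a):
    """Maximum Euclidean row norm, i.e. the 2->infinity operator norm of a matrix."""
    return float(np.max(np.linalg.norm(a, axis=1)))

def procrustes_rotation(u_hat, u):
    """Orthogonal W minimizing ||u_hat @ W - u||_F, via the SVD of u_hat.T @ u."""
    w1, _, w2t = np.linalg.svd(u_hat.T @ u)
    return w1 @ w2t

def top_eigvecs(sym, k):
    """Eigenvectors of a symmetric matrix for its k largest-magnitude eigenvalues."""
    vals, vecs = np.linalg.eigh(sym)
    return vecs[:, np.argsort(np.abs(vals))[::-1][:k]]

rng = np.random.default_rng(0)
n, r = 500, 3

# True basis from a rank-r signal; perturbed basis from a symmetric noisy copy.
low_rank = rng.standard_normal((n, r))
signal = low_rank @ low_rank.T
noise = rng.standard_normal((n, n))
observed = signal + 0.5 * (noise + noise.T)

u = top_eigvecs(signal, r)
u_hat = top_eigvecs(observed, r)

w = procrustes_rotation(u_hat, u)
print("||U_hat W - U||_{2->inf} =", two_to_infinity_norm(u_hat @ w - u))
print("||U_hat W - U||_2        =", np.linalg.norm(u_hat @ w - u, 2))
# The 2->infinity norm never exceeds the spectral norm and, for tall thin bases
# with delocalized error, is typically much smaller -- which is what makes the
# row-wise bounds discussed above attractive.
```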
study language usage evolution open source software siim karus harald gall university tartu estonia university zurich switzerland university zurich switzerland gall abstract use programming languages java open source software oss well studied however many popular languages xsl xml received minor attention paper discuss trends oss development observed considering multiple programming language evolution oss based revision data oss projects tracked evolution language usage artefacts documentation files binaries graphics files systems several different languages artefact types including java xml xsl makefile groovy html shell scripts css graphics files javascript jsp ruby phyton xquery opendocument files php etc used found amount code written different languages differs substantially findings summarized follows javascript css files often coevolve xsl java developers every second developer work xml generally observed significant increase usage xml xsl recent years found java hardly ever language used developer fact developer works different artefact types different languages project average categories subject descriptors software engineering distribution maintenance enhancement restructuring reverse engineering reengineering version control programming languages language classifications languages extensible language computing milieux history computing software people introduction lot effort put studying use procedural languages languages java even less common languages perl python ruby received fair share attention however looking statistics used languages language far common ones mentioned earlier strikes according tracks open source software oss repositories actively developed oss projects contain xml less contain html languages present less projects even xml also language lines code changed per month use xml oss projects however received considerable attention far xml language little meaning would interesting understand language used looking file types could investigate issue even general question languages file types used together therefore coevolving oss projects formulated address research question studied oss software repositories years study focused two levels file type couplings developer commit level developer level developers projects studied regarding language experience projects addressed following questions general terms management measurement documentation experimentation human factors languages languages artefacts commonly used oss development proportions many file types developer typically work usage patterns file types language usage consequence language expertise requirements developers changed observation period design keywords programming language open source software evolution software archives permission make digital hard copies part work personal classroom use granted without fee provided copies made distributed profit commercial advantage copies bear notice full citation first page copy otherwise republish post servers redistribute lists requires prior specific permission fee conference month city state country copyright acm commit level files appearing together commits studied addressed following questions patterns observed oss projects distinct dependencies languages artefact types commonly edited together dependencies file types used projects changed observation period additionally general level oss projects studied interested common languages artefact http table overview oss projects used study project name type period studied cocoon business commons business esb 
business httpd business zope business wsas business wsf business bibliographic desktop bizdev desktop dia desktop docbook desktop desktop exist desktop desktop desktop desktop gnucash desktop groovy desktop desktop subversion desktop tei desktop valgrind desktop types oss projects observations clearly show trends javascript css files often xsl almost every java developer every second developer works xml years significant increase xsl xml usage observed showing technological shifts due framework development paper organized follows section oss projects used study introduced described section details findings developers section discusses findings different types language usage oss projects threats validity outlined section related work discussed section conclude results give brief outlook onto future work dataset study development patterns dataset oss projects used projects split desktop type business server type projects nature projects offering business functionality web services considered business type projects projects mainly used desktop environments considered type desktop table shows periods studied number developers devs number different artefact types used number revs files projects used study number files stated table includes files including deleted art types files course projects present latest revision corresponding project projects chosen would represent wide spectrum development projects terms type duration development team size usage scenario whilst business type projects commons esb wsas wsf belong larger complex called bizdev bibliographic utilities openoffice rest projects mostly independent docbook represent documentation development tools exist groovy tei subversion natural language toolkit firebug valgrind projects software project development aids libraries httpd zope cocoon application development platforms gnucash accounting application dia diagramming solution better understand well dataset represents population dataset compared graphs publicly available cases usage displayed steep decrease usage share java presented sudden emergence strong yet longer growing presence share commits xml files increasing reached highest share file types used main difference dataset used data lower usage html dataset dataset used study accordingly exhibited higher share xml java compared data distribution figure distribution major file types worked projects per year project configuration build tools ant maven store project build configuration distribution subtypes shown figure major artefacts worked projects dataset different years shown figure identified classified major file types common file extensions repository archive audio awk binary command script css data dtd graphics groovy html java javascript jsp makefile manifest office extension opendocument openxml patch diff pdf perl php plaintext postscript project properties python resources rich text ruby sed shell script sql sqml tex wsdl xml xml schema xquery xsl languages present files files classified extensions however exceptions category plaintext includes files extensions files named readme install todo copying copyright authors license acknowledgements news notes changelog changes files contain project documentation plain text format category project contains xml files root element project files mostly used ides store category manifest contains files extension files named category properties contains files extension properties xml files root element properties files used java projects store application configuration category 
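The categorisation rules summarised in this passage (with further special cases listed immediately after this sketch) amount to a mapping from file name and extension to an artefact category. The following is a minimal, illustrative sketch only: the exact extension lists, the `manifest.mf` rule, and the helper names are assumptions loosely following the categories described in the study, not the authors' actual tooling.

```python
import os
import xml.etree.ElementTree as ET

# Extension lists and special-case rules below are assumptions based on the
# categories described in the text; they are not the study's real classifier.
PLAINTEXT_NAMES = {"readme", "install", "todo", "copying", "copyright", "authors",
                   "license", "acknowledgements", "news", "notes", "changelog", "changes"}
EXTENSION_MAP = {".java": "java", ".js": "javascript", ".css": "css", ".xsl": "xsl",
                 ".html": "html", ".sh": "shell script", ".properties": "properties",
                 ".rb": "ruby", ".py": "python", ".groovy": "groovy"}

def xml_root_tag(path):
    """Local name of the XML root element, or None if the file cannot be parsed."""
    try:
        tag = ET.parse(path).getroot().tag
        return tag.rsplit("}", 1)[-1].lower()   # drop a namespace prefix, if any
    except (ET.ParseError, OSError):
        return None

def classify_artefact(path):
    """Map one repository file to an artefact category, as sketched in the text."""
    name = os.path.basename(path).lower()
    stem, ext = os.path.splitext(name)
    if stem == "makefile" or ext == ".mk":
        return "makefile"
    if stem in PLAINTEXT_NAMES and ext in ("", ".txt"):
        return "plaintext"
    if name == "manifest.mf":                    # assumed manifest rule
        return "manifest"
    if ext == ".xml":
        root = xml_root_tag(path)
        if root == "project":
            return "project"                     # IDE / build descriptor files
        if root == "properties":
            return "properties"
        return "xml"
    return EXTENSION_MAP.get(ext, "other")
```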
perl additionally contains extensionless text files begin category shell scripts additionally contains extensionless text files begin category sgml additionally includes catalog files every file belong one category example xsl xaml xhtml etc files counted xml files neither files included categories due exceptions files named belong category manifest another special general group files without figure proportion developers generating different types artefacts different years table abundant file types used together developers extensions includes folders due differences repositories present data data gathered may contains revision information february april years developers study habits developers find language usage sets commonly present projects extracted developer information revision data revision control systems cvs svn listed file types used developer analyzed data languages used identified plaintext files files without extensions mostly changes directory structure edited developer makefiles xml used developers making artefact types share third fourth position java files edited developers followed popularity project files html files surprisingly files used fewer developers xsl files considering xsl gained popularity lost said recent years active xsl developers developers ratio developers using different file types throughout study period shown figure note year figures account first quarter year data collection point popular artefact type used developers figure distribution project file subtypes dataset last years marks revisions explicit namespace figure additional languages used least commits developers table displays common artefacts commonly used developer common combination file types used developer java xml explained languages top four artefact types encountered among language pairs used developers plaintext xml popular second languages interestingly xsl developers modified xml files could caused xsl applied either xml files extensions xml xsl used transform documents created runtime received third party xml schema editors also active xml development cases followed wsdl archive file properties file javascript css developers xsl commonly seen together languages used web development html xml schema css javascript graphics files one keep mind making commits certain type files necessarily mean developer expertise responding field commits could deferred developers necessities solved help developers identification expertise complex task studied works like sets commonly languages along popularity languages allow identify major classes developers languages use three major classes defined popular languages developers java developers xml developers developers developers frequent users plaintext files used developers makefiles expected plaintext files commonly used document projects makefiles chosen technology control build process files without extensions modified developers explained decent folder structure fourth common language used developers shell scripts developers followed closely xml developers details abundant file types used developers seen table matrix shows developers using file type specified rows percentage developers also using file type specified column developers archive files also worked xsl files late developers worked written makefiles created types artefacts since less half developers written code dropping percentage developers using climbed steadily makefiles continuously become less popular dropping see figure commonly used language developers apart almost always 
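The co-usage matrix described here (row = file type used by a developer, column = percentage of those developers who also use the column's file type) can be computed directly from per-commit authorship data. A minimal sketch, assuming the revision history has already been reduced to (developer, artefact type) pairs; all names are illustrative.

```python
from collections import defaultdict
from itertools import product

def co_usage_matrix(dev_type_pairs):
    """dev_type_pairs: iterable of (developer, artefact_type) pairs.
    Returns {(a, b): percentage of developers using a who also use b}."""
    types_by_dev = defaultdict(set)
    for dev, artefact_type in dev_type_pairs:
        types_by_dev[dev].add(artefact_type)

    all_types = sorted({t for types in types_by_dev.values() for t in types})
    users = {t: {d for d, types in types_by_dev.items() if t in types}
             for t in all_types}

    matrix = {}
    for a, b in product(all_types, repeat=2):
        if users[a]:
            matrix[(a, b)] = 100.0 * len(users[a] & users[b]) / len(users[a])
    return matrix

# e.g. matrix[("java", "xml")] gives the share of Java developers who also
# committed XML files, mirroring the kind of table described above.
```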
makefiles commonly used file type plaintext see figure nevertheless popularity plaintext files slowly decreasing among developers use xml made strong impression since adoption reached developers could related widespread adoption xml standards xml replacing makefile based building environments java developers total java developers also worked xml files making xml popular language used together java second popular language used together java project files used java developers top three also includes files without extensions directory structure modifications used java developers next popular file types used significantly less see table usage java rise developers used last study period shown figure java developers use different types artefacts developers graph also displays java developers writing xsl less frequently used xsl become popular general explained xsl written developers focused xsl less java wrong assume popularity xml java projects mainly due project build files fact half files found java projects types also use binary files including files java developers dropped could result using separate library repositories instead files revision control repository figure additional languages used least commits java developers xml developers knowledge developing xml files also steady rise developers used period xml developers come different areas work variety different artefacts shown fact lots different artefact types used commits xml developers figure popular file type modified xml developers files without extensions used xml developers slowly losing popularity among xml developers since good practice files explicitly defined namespace used verify files however files specify namespace often encountered namespaces http http showing xml often used project domain specific languages confirmed popular root elements refentry either http namespace namespace specified root elements elementspec http namespace root elements common root element xml files used java project xml files files classified project category mostly contained files without explicit namespace see figure xsl commonly accompanied xml used steady developers since introduction another gain since first commit commits general tell file types used projects first commit made developer tells lot initial experience developers expect developers prefer file types languages familiar joining development team first commit also shows patterns developers get involved build contribution expected developer using different languages first commit needs understand project architecture build practices better developer starts changing lines single file number different types files developers first figure additional languages used least commits xml developers commit usually less four almost half commits single file type also trend using fewer files file types first commit towards end study period see figure makefile popular choices first commit language later years xml java taken lead number different file types present first commit number files developers first commit lower first commits files single type whilst commits made single file type analysis common file type combinations shows first commits included makefiles files without extensions contained file types common single file type commits files without extension java xml considering commits contained makefiles files without extensions contained two contained also files java xml files encountered commits contained common single type commit file types java xml html xsl accounted commits shows 
developers expand competences learning deploying new languages project however languages tightly coupled causing files different file types changed time developers started xml makefiles java rarely used one language first commit find file types languages used together files file types commit analyzed common file types committed together identified project types business desktop separately business type projects commonly encountered combination present business projects combination java xml commits made java files accompanied changes xml files changes files changes java files strongest bidirectional relation found javascript xsl files files types also cases change css file accompanied change xsl file much less frequent even case web file types changes css files accompanied changes javascript files cases one reason could projects studied xsl mainly used generating reports data presentations web applications usually results xsl used place writing html directly thus gets changed often user interface development testing hand rate less presentation type artefacts javascript css graphics files indicates business type projects dataset used xml business document transformations often used generating presentations commits contained multiple files type frequently commits files type graphics graphics commits commits xml schema php java binary xsl commits graphics files contained one graphics file xml schema files often changed along java xml files xml schema file similarly wsdl files changed xml java files often wsdl file web file types css javascript often committed xsl files kind means graphics developers xml schema developers likely work patches developers also found changes binary files files average accompanied changes files almost four file types commits java files accompanied average artefacts types could caused developers committing compiled files along source code frequently file types also include xml schema file types wsdl file types files usually cochanging types php file types file types java file types commonly encountered file types multiple file type commits order frequency java average present commits files type project commits figure number file types developer first commit year table artefact types commit together business type projects files type xml xsl files without extensions details business type projects seen table table shows many commits containing artefacts type listed row header contained artefacts type listed columns trends summary observed following language usage trends dataset java xml files coevolve often compared file language types whereas files rarely file type binary files xml java files cases wsdl files often java xml files javascript files xsl files xml schema files wsdl xml java files xsl files basically javascript files desktop type projects desktop projects historically common language dataset files representing languages file types related development commonly changed together example changes files accompanied changes files without extensions folders linux executables cases changes makefiles cases similar observation made graphics files committed together makefiles cases changes files extensions cases opposed business type projects changes java files accompanied changes xml files cases details cooccurrences seen table common pattern observed groovy files java files half cases graphics commits diversity artefacts file types committed graphic files average followed command scripts file types javascript file types binary files file types binary files committed 
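The commit-level co-change rates reported in this part of the study (how often a commit touching files of type A also touches files of type B) are asymmetric by construction, so they are naturally computed over ordered pairs. A minimal sketch, assuming each commit is available as the set of artefact types it touched; names are illustrative.

```python
from collections import Counter
from itertools import permutations

def co_change_rates(commits):
    """commits: iterable of sets of artefact types changed together in one commit.
    Returns {(a, b): fraction of commits containing a that also contain b}."""
    contains = Counter()    # number of commits touching type a
    together = Counter()    # number of commits touching both a and b (ordered pair)
    for types in commits:
        for a in types:
            contains[a] += 1
        for a, b in permutations(types, 2):
            together[(a, b)] += 1
    return {(a, b): together[(a, b)] / contains[a] for (a, b) in together}

# Asymmetry matters here: the share of Java commits that also change XML files
# can differ from the share of XML commits that also change Java files.
```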
together java xml files third cases independent file types java file types average xsl file types average multiple file type commits often contained files without extensions xml java xsl files multiple graphics files cases commits graphics files contained one graphics file file types often changed bulk multiple files commit commits binary php java files trends summary observed following language trends binary files java xml files files makefiles command scripts makefiles shell script xml files css files xml groovy java files javascript css xsl files ruby files xsl one major differences business server type projects desktop oss projects observed much lower java xml files either direction half likely business type projects hand css files cochanged xml files twice often desktop projects trends similar oss project types investigated threats validity threats validity work confounding selection bias generalisability table artefact types commit together desktop type projects confounding internal threat explanations given observations might event society changed characteristics developers languages used companies campaigns push technologies directly relate dataset makes relationships difficult impossible identify impact events might end attributed change could find correlation threat avoided built eclipse show likely different files changed together tool exclude files however aim visualise patterns emerging specific projects regarding files try describe file type artefact type level tool useful monitoring software development processes contrast paper explains general patterns spanning oss software projects selection threat internal bias external generalisability internal selection might biased towards certain projects motivation accept dataset studied somewhat elite collection projects single developer projects account half population oss projects validated representativeness dataset data provided found general characteristics datasets similar despite threat bias found differences artefact popularity rankings biggest difference observed popularity html code similarity gives high confidence generalisability representativeness results study dattero conducted survey looked differences developer gender discovered female developers likely work deprecated technologies also found female developers tend less experienced familiar languages opposed languages male developers familiar numbers similar findings however also saw average number different file types usually representing different technologies used developers decreased period studied related work idea studying addressed research far however studies often language specific rarely look different file types even studies encompassing multiple file types limited specific file types example zimmerman studied lines different files evolve project study limited textual files focused visualisation clustering files based change history different patterns evolution oss outlined nakakoji determined three main types oss types determine software evolves developers behave projects studied spanned types business projects largely gnucash bibliographic exist providing stable services confused soa study also shows projects development speed cycle along projects transform one type another used explain fluctuations language file type shares time seen study software repositories used studying various aspects software development like developer role identification core associate framework hotspot detection works complementary help developing better understanding sotware 
development process software also shown number size projects growing exponentially projects becoming diverse expanding new domains conclusions future work investigated revision data oss projects tracked evolution multiple programming language usage findings summarized language developer perspective first multiple programming language usage study confirmed data popular widely used language oss software projects xml followed java xml increased popularity steadily last decade lost high share various languages java among popular ones despite becoming popular years java able grow share significantly last years xsl maintained share last years commonly files usually type ranked order intensity java xml plaintext files makefiles pair file types business type projects studied javascript xsl rate measured common commits cases java xml files especially project specific types likely edited person java files project definition files based projects analyzed found xsl important generating user interfaces document transformations similar characteristics data available html code currently biggest analysed listing open source projects closest representing population feasible incorporate projects listed data would need analysed would exceed capabilities processing timely fashion acknowledgments research conducted visit first author software evolution architecture lab university zurich thank members lab valuable advice work also partially funded erdf via estonian centre excellence computer science references meier exist open source native xml database web database systems vol erfurt germany annual international workshop web databases erfurt germany oct alonso devanbu gertz expertise identification visualization cvs msr proceedings international working conference mining software repositories leipzig germany brunnert alonso riehle enterprise people skill discovery using tolerant retrieval visualization advances information retrieval european conference research ecir vol rome italy capiluppi lago morisio characteristics open source projects software maintenance reengineering proceedings seventh european conference benevento italy zimmermann kim zeller whitehead mining version archives lines msr proceedings international workshop mining software repositories shanghai china second developers found fewer file types used new developers first commits even though developers began experience multiple file types developers worked least five different file types period studied java developers worked xml files developers later years von klenze burch diehl exploring evolutionary coupling eclipse eclipse proceedings oopsla workshop eclipse technology exchange san diego california study languages used developers showed decreasing importance makefiles plaintext files developers importance xml increased almost language whilst document type definition language deprecated xml schema seem replace neither language implying standardised schemas preferred project specific ones nakakoji yamamoto nishinaka kishida evolution patterns software systems communities iwpse proceedings international workshop principles software evolution orlando florida characteristics developer language usage saw knowing multiple languages required developers developers must also understand different coding paradigms procedural languages often used rule template based extensible languages needed know code write makefiles increased variety languages used newer projects lack distinct leaders languages introduced need familiar development future work address 
better describe population including projects ideal dataset would dattero galup programming languages gender communications acm vol january ramaswamy mining cvs repositories understand project developer roles mining software repositories icse workshops msr fourth international workshop minneapolis usa thummalapenta xie spotweb detecting framework hotspots via mining open source repositories web proceedings international working conference mining software repositories leipzig germany deshpande riehle total growth open source open source development communities quality international federation information processing vol
| 6 |
deep hair matting mobile devices jan alex cheng edmund irina wenzhangzhi parham modiface university toronto reality emerging technology many application domains among beauty industry live virtual beauty products great importance paper address problem live hair color augmentation achieve goal hair needs segmented quickly accurately show modified mobilenet cnn architecture used segment hair instead training network using large amounts accurate segmentation data difficult obtain use crowd sourced hair segmentation data data much simpler obtain segmentations noisy coarse despite show system produce accurate hair mattes running fps ipad pro tablet segmentation matting augmented reality deep learning neural networks ntroduction image segmentation important problem computer vision multitude applications among segmentation hair live color augmentation beauty applications fig use case however presents additional challenges first unlike many objects simple shape hair complex structure realistic color augmentation coarse hair segmentation mask insufficient one needs hair matte instead secondly many beauty applications run mobile devices web browsers powerful computing resources available makes challenging achieve performance paper addresses challenges introduces system accurately segment hair fps mobile device line recent success convolutional neural networks cnns semantic segmentation hair segmentation methods based cnns make two main contributions first modern cnns run realtime even powerful gpus may occupy large amount memory target performance mobile device first contribution show adapt recently proposed mobilenets architecture hair segmentation fast compact enough used mobile device absence detailed hair segmentation ground truth train network noisy coarse data coarse segmentation result however insufficient hair color augmentation purposes realistic color augmentation accurate hair matte needed figure automatic hair matting coloring input image output hair matte produced method recolored hair second contribution propose method obtaining accurate hair mattes without need accurate hair matte training data first show modify baseline network architecture capacity capturing details next adding secondary loss function promotes perceptually appealing matting results show network trained yield detailed hair mattes using coarse hair segmentation training data compare approach simple guided filter show yields accurate sharper results evaluate method showing achieves accuracy running mobile device remainder paper discuss related work sec describe approach sec iii evaluate method sec conclude sec elated work similar work general image segmentation hair segmentation work divided two categories first category approaches uses features segmentation yacoob employ simple color models classify hair analogous method employed aarabi also making use facial feature locations skin information khan use advanced features random forests classification approaches however prove insufficiently robust applications spatially consistent segmentation results popular method formulate segmentation random field inference lee build markov random field figure fully convolutional mobilenet architecture hair segmentation image pixels huang build model superpixels instead wang alternative method overlapping image patches first segmented independently combined recently given success deep neural networks dnns many areas including semantic segmentation hair segmentation methods emerged guo aarabi use heuristic method mine highconfidence 
positive negative hair patches image train separate dnn per image used classify remaining pixels inspired recent success fully convolutional networks fcn semantic segmentation chai qin employ fcns hair segmentation due coarseness raw fcn segmentation results similar methods post process results using dense crfs additionally extra matting step obtain hair mattes finally propose cnn architecture generic image matting yielding results approach follows trend addressing several issues aforementioned methods build upon architecture single forward pass takes around even powerful gpu much longer mobile device adding dense crf inference matting futher increases moreover occupies approximately memory much mobile applications instead show adapt recently proposed compact mobilenets architecture segmentation yield matting results without expensive post processing methods finally may possible obtain detailed hair matting data using labeling techniques show train network without need data iii pproach section describes contributions detail firstly describe modifications original mobilenet architecture challenges obtaining training data hair segmentation secondly illustrate method hair matting without use matting training data fully convolutional mobilenet hair segmentation inspired first tried use modified network hair segmentation however forward pass network took seconds per frame network occupied memory incompatible mobile use case therefore use mobilenets instead faster compact modified original mobilenet architecture fully convolutional network segmentation name hairsegnet first remove last three layers avg pool softmax refer table next similar preserve fine details increase output feature resolution changing step size last two layers step size due use weights imagenet dilate kernels layers updated resolution scale factor original resolution namely kernels layers increased factor dilated kernels images still needed data using hair coloring app users manually mark hair getting data cheap resulting hair segmentation labels noisy coarse fig illustrates issue note image hair labeled sparsely image photograph pet submitted manually clean data keeping images human faces sufficiently good hair masks considerably faster marking hair scratch fixing incorrect segmentations figure training data hair segmentation top images bottom masks data noisy coarse images poor masks images layers increased factor dilated yields final resolution next build decoder takes cnn features input upsamples hair mask original resolution tried upsampling using transposed convolution layers saw gridding artifacts resulting masks therefore upsampling performed simplified version inverted mobilenet architecture stage upsample previous layer factor replicating pixel neighborhood apply separable depthwise convolution followed pointwise convolutions filters followed relu number filters large effect accuracy filters yielding slightly better performance based experiments see sec previous block repeated three times yielding output conclude adding convolution softmax activation output channels hair nonhair network trained minimizing binary cross entropy loss predicted ground truth masks full architecture illustrated fig resulting architecture considerably compact occupying importantly forward pass takes implemented tensorflow ipad pro using recently released optimized coreml library apple time reduced per frame training deep neural networks requires large amount data large datasets general semantic segmentation datasets much less popular hair 
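The decoder described here (three stages, each upsampling by a factor of two via pixel replication and then applying a depthwise separable convolution with ReLU, followed by a 1x1 convolution with a two-way softmax) can be written compactly in a Keras-style API. The sketch below is illustrative only, assuming tf.keras; the input feature map, the 64-channel width, and the three-stage count follow the text, while everything else (names, encoder wiring, dilation changes) is an assumption.

```python
from tensorflow.keras import layers

def decoder_block(x, filters=64):
    """One upsampling stage: x2 nearest-neighbour upsample, then a depthwise
    separable convolution (3x3 depthwise + 1x1 pointwise) with ReLU."""
    x = layers.UpSampling2D(size=2, interpolation="nearest")(x)
    x = layers.DepthwiseConv2D(3, padding="same")(x)
    x = layers.Conv2D(filters, 1, activation="relu")(x)
    return x

def build_segmentation_head(encoder_features):
    """Decoder sketch: three upsampling stages (1/8 -> full resolution) followed
    by a 1x1 convolution with a 2-way softmax (hair / non-hair)."""
    x = encoder_features            # assumed: MobileNet features at 1/8 resolution
    for _ in range(3):
        x = decoder_block(x, filters=64)
    return layers.Conv2D(2, 1, activation="softmax")(x)
```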
segmentation moreover unlike objects like cars relatively simple shape hair shape complex therefore obtaining precise ground truth segmentation hair even challenging cope challenge use network imagenet entire network hair segmentation data nevertheless several thousands training hair matting second contribution show obtain accurate hair matting results solve matting problem using cnn name hairmattenet manner approach faces two challenges first need architecture capacity learn high resolution matting details network sec may suitable since results still generated incremental upsampling relatively layer secondly cnn needs learn hair matting using coarse segmentation training data address first issue adding skip connections layers encoder corresponding layers decoder similar many modern network architectures way shallower layers encoder contain weak features combined lowres powerful features deeper layers layers combined first applying convolution incoming encoder layers make output depth compatible incoming decoder layers three outer skip connections inner skip connection merging layers using addition resolution deepest encoder layer resolution taken skip connection second issue addressed adding loss function promotes perceptually accurate matting output motivated alpha matting evaluation work rhemann secondary loss measures consistency image mask edges minimized two agree specifically define gradient consistency loss mmag mmag normalized image mask gradients respectively mmag mask gradient magnitude loss added original binary cross entropy loss weight making overall loss wlc combination two losses maintains balance true training masks generating masks adhere image edges fig illustrates new architecture combination two loss functions figure fully convolutional mobilenet architecture hair matting skip connections added increase network capacity capturing high resolution detail gradient consistency loss added alongside standard binary cross entropy loss promote detailed matting results compare hairmattenet simple coarse segmentation mask hairsegnet guided filter qin used similar approach employed advanced matting method fast enough applications mobile devices guided filter filter linear runtime complexity image size takes process image ipad pro fig compares masks without filter former clearly capturing details individual hair strands becoming apparent however filter adds detail locally near edges mask cnn moreover edges refined masks visible halo around becomes even apparent hair color lower contrast surroundings halo causes color bleeding hair recoloring hairmattenet yields sharper edges fig captures longer hair strands without unwanted halo effect seen guided filter additional bonus hairmattenet runs twice fast compared hairsegnet taking per frame mobile device without need extra postprocessing matting step due use skip connections help capturing high resolution detail hairmattenet maintains original mobilenet encoder structure deepest layers resolution layers many depth channels become expensive figure hair segmentation matting input image hairsegnet hairsegnet guided filter hairmattenet process increased resolution resolution makes processing much faster compared resolution hairsegnet xperiments evaluate method three datasets first dataset consisting training validation testing images three subsets include original images flipped versions since target hair matting mobile devices data detecting face cropping region around based scale expected typical selfies compare method existing 
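The gradient consistency loss is only sketched in the extracted text above, so the following is one plausible reconstruction from the quantities it names: normalized image gradients, normalized mask gradients, and the mask gradient magnitude used as a weight, combined with the binary cross-entropy term through a balancing weight. The exact normalisation is an assumption.

```latex
% Plausible reconstruction of the gradient consistency loss described above:
% (I_x, I_y) and (M_x, M_y) are the normalized image and mask gradients,
% M_{mag} is the mask gradient magnitude, and w balances the two terms.
L_C \;=\; \frac{\sum_{i} M_{mag,i}\,\bigl[\,1 - \left(I_{x,i} M_{x,i} + I_{y,i} M_{y,i}\right)^{2}\bigr]}{\sum_{i} M_{mag,i}},
\qquad
L \;=\; L_M + w\,L_C .
```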
approaches evaluate two public datasets lfw parts dataset hair dataset guo aarabi former consists images training validation test images pixels labeled three categories hair skin background generated superpixel level latter consists images since contains images train use crowdsourced training data evaluating set make dataset consistent training data similar manner using face detection cropping adding flipped images well since cases faces detected resulting dataset consists images training done using batch size using adadelta method keras learning rate use regularization weight convolution layers depthwise convolution layers last convolution layer regularized set loss balancing weight threeclass lfw data hair class contributing gradient consistency loss train model epochs select best performing epoch using validation data training dataset takes hours nvidia geforce gtx gpu less hour lfw parts due much smaller training set size quantitative evaluation quantitative performance analysis measure performance iou accuracy averaged across test images measure consistency image hair mask edges also report gradient consistency loss eqn recall manual sec filtered images rather correcting masks result quality hair annotation still poor therefore prior evaluation data manually corrected test masks spending minutes per annotation yielded slightly better ground truth three variants method evaluated relabeled data table shows results three methods perform similarly ground truth comparison measures however hairmattenet clear winner gradient consistency loss category indicating masks adhere much better image edges lfw parts dataset report performance best performing method qin achieve mobile device use accuracy measure evaluation since measure used arguably especially since lfw parts annotated superpixel level ground truth may good enough analysis dataset guo aarabi report performance hnn best performing method dataset obtained similar performance reported authors performance model perf iou acc dataset hairsegnet hairsegnet hairmattenet lfw parts dataset hairmattenet hairmattenet guo aarabi dataset hairmattenet hnn table uantitative evaluation depth perf iou acc table ecoder layer depth experiments validation data qualitative evaluation evaluate method publicly available selfie images qualitative analysis results seen fig hairsegnet fig yields good coarse masks hairsegnet guided filter fig produces better masks undesirable blur around hair boundaries accurate sharpest results achieved hairmattenet fig failure mode guided filter postprocessing hairmattenet objects vicinity hair eyebrows case dark hair bright background light hair addition highlights inside hair cause hair mask hairmattenet especially apparent last three examples column network architecture experiments decoder layer depth using validation data experimented number decoder layer channels observed large effect accuracy table illustrates experiments number channels decoder channels yielding best results according measures experiments done using skip connections architecture fig without using gradient consistency loss input image size howard observed mobilenets perform better given higher image resolution given goal accurate hair matting experimented increasing resolution beyond highest resolution mobilenet trained imagenet image fig fig shows qualitative comparison masks inferred using hairmattenet figure qualitative evaluation input image hairsegnet hairsegnet guided filter hairmattenet images images results look accurate around hair edges longer 
hair strands captured long hair strand falling nose first image however issues mentioned previous section emphasized well hair mask bleeding regions inside mask becoming due hair highlights addition processing larger image significantly expensive ummary paper presented hair matting method performance mobile devices shown given noisy coarse data modified mobilenet architecture trained yield accurate matting results apply proposed architecture hair matting general applied segmentation tasks future work explore fully automatic methods training noisy data without need manual filtering addition explore improvements matting quality capturing longer hair strands segmenting light hair keeping hair mask homogeneous preventing bleeding regions maintaining performance mobile devices eferences howard zhu chen kalenichenko wang weyand andreetto adam mobilenets efficient convolutional neural networks mobile vision applications arxiv preprint sun tang guided image filtering tpami vol yacoob davis detection analysis hair tpami vol aarabi automatic segmentation hair images ism khan mauro leonardi semantic segmentation faces icip lee anguelov sumengen gokturk markov random field models hair face segmentation chen papandreou kokkinos murphy yuille semantic image segmentation deep convolutional nets fully connected crfs iclr koltun efficient inference fully connected crfs gaussian edge potentials nips price cohen huang deep image matting cvpr simonyan zisserman deep convolutional networks image recognition arxiv preprint coreml https rhemann rother wang gelautz kohli rott perceptually motivated online benchmark image matting cvpr kae sohn lee augmenting crfs boltzmann machine shape priors image labeling cvpr zeiler adadelta adaptive learning rate method arxiv preprint chollet keras https figure network resolution comparison huang narayana towards unconstrained face recognition cvprw ieee wang lao compositional exemplarbased model hair segmentation accv wang tang good parts hair shape modeling cvpr guo aarabi hair segmentation using heuristicallytrained neural networks tnnls long shelhamer darrell fully convolutional networks semantic segmentation cvpr chai shao weng zhou autohair fully automatic hair modeling single image tog vol qin kim manduchi automatic skin hair masking using fully convolutional networks icme
| 1 |
efficient diverse ensemble discriminative kourosh meshgi shigeyuki oba shin ishii graduate school informatics kyoto university kyoto japan nov oba ishii abstract ensemble discriminative tracking utilizes committee classifiers label data samples turn used retraining tracker localize target using collective knowledge committee committee members could vary features memory update schemes training data however inevitable committee members excessively agree large overlaps version space remove redundancy effective ensemble learning critical committee include consistent hypotheses differ covering version space minimum overlaps study propose online ensemble tracker directly generates diverse committee generating efficient set artificial training artificial data sampled empirical distribution samples taken target background whereas process governed shrink overlap classifiers experimental results demonstrate proposed scheme outperforms conventional ensemble trackers public benchmarks typical ensemble state conventional update partial update diversified update figure version space examples ensemble classifiers hypotheses consistent previous labeled data represents different classifier version space next time step models updated new data boxed updating data tend make hypothesis overlapping random subsets training data given hypotheses update without considering rest data hypotheses cover random areas version space random subsets training data plus artificial generated data proposed trains hypothese mutually uncorrelated much possible encouraging cover unexplored area version space introduction one popular approaches discriminative tracking utilizes classifier perform classification task using object detectors pipeline several samples obtained frame video sequence classified labeled target detector information used classifier closed feedback loop approach advantages overwhelming maturity object detection literature terms accuracy speed yet struggles keep target evolution rises issues proper strategy rate extent model update adapt object appearance changes methods update decision boundary opposed object appearance model generative trackers fections target detection model update throughout tracking manifest accumulating errors essentially drifts model real target distribution hence leads target loss tracking failure imperfections caused labeling noise selflearning loop sensitive schemes improper update frequency assumption target distribution equal weights training samples misclassification sample due drastic target transformations visual artifacts occlusion model errors degrades target localization accuracy also confuses classifier trained erroneous label typically classifier retrained using output earlier tracking episodes loop amplitudes training noise classifier accumulate error time problem amplifies tracker lacks forgetting mechanism unable obtain external scaffolds researchers believe necessity teacher train classifier inspired use ensemble tracking disabling updates occlusions label verification schemes break loop using auxiliary classifiers ensemble tracking framework provides effective frameworks tackle one challenges frameworks loop broken labeling process performed leveraging group classifiers different views subsets training data memories main challenge ensemble methods decorrelate ensemble members diversify learned models combining outputs multiple classifiers useful disagree inputs however individual learners training data usually highly correlated see figure contributions propose 
diversified ensemble discriminative tracker dedt object tracking construct ensemble using various subsamples tracking data maintain ensemble throughout tracking possible devising methods update ensemble reflect target changes keeping diversity achieve good accuracy generalization addition breaking loop avoid potential drift ensemble applied framework auxiliary classifier however avoid unnecessary computation boost accuracy tracker effective data exchange scheme required demonstrate learning ensembles randomized subsets training data along artificial data diverse labels framework achieve superior accuracy paper offers following contributions propose novel ensemble update scheme generates necessary samples diversify ensemble unlike model update schemes ignore correlation classifiers ensemble method designed promote diversity propose framework accommodates short memory mixture effective collaboration classification modules optimized data exchange modules borrowing concept active learning literature note different elaborated method two classifiers cast weighted vote label target pass samples struggle figure schematic system proposed tracker dedt labels obtained sample using homogeneous ensemble classifiers committee samples committee highest disagreement upon uncertain samples queried auxiliary classifier different type classifier location target estimated using labeled target member ensemble updated random subset uncertain samples generating diversity set sec ensemble diversified yielding effective ensemble one learn however tracker ensemble passes disputed samples auxiliary classifier trained data periodically provide effect memory resistant abrupt changes outliers label noise evaluation results dedt dataset demonstrates competitive accuracy method compared tracking prior work ensemble tracking using linear combination several weak classifiers different associated weights proposed seminal work avidan following study constructing ensemble boosting online boosting boosting multiinstance boosting led enhancement performance ensemble trackers despite popularity boosting demonstrates low endurance label noise alternative techniques bayesian ensemble weight adjustment proposed alleviate shortcoming recently ensemble learning based cnns gained popularity researchers make ensembles cnns shares convolutional layers different loss functions output feature map repeatedly subsampling different nodes layers fully connected layers cnn build ensemble furthermore proposed exploit power ensembles feature adjustment ensembles addition ensemble members ensemble diversity empirically ensembles tend yield better results significant diversity among models zhou categorizes diversity generation heuristics manipulation data samples based sampling approaches bagging boosting manipulation input features online boosting random subspaces random ferns random forests combining using different layers neurons interconnection layout cnns iii manipulation learning parameter manipulation error representation literature also suggests fifth category manipulation error function encourages diversity ensemble classifier selection based fisher linear discriminant training data selection principled ordering training examples reduce cost labeling lead faster increases performance classifier therefore strive use training examples based usefulness avoid using including noisy ones outliers may result higher accuracy starting easiest examples curriculum learning pruning adversarial excluding misclassified samples next rounds training 
sorting samples training value proposed approaches literature however common setting active learning algorithm selects training examples label step highest gains performance view may require focus learning hardest examples first example following criteria highest uncertainty active learner select samples closest decision boundary labeled next concept useful visual tracking measure uncertainty caused bags samples active learning ensembles qbc one popular active learning approaches constructs committee models representing competing hypotheses label samples defining utility function ensemble disagreement entropy method selects informative samples queried oracle collaborating classifier form query optimization process built upon randomized component learning algorithm qbc involves gibbs sampling usually interactable situations extending qbc use deterministic classifiers different subsets data construct ensemble abe mamitsuka proposed practical set hypotheses consistent data called version space selecting informative samples labeled qbc attempts shrink version space however committee hypotheses effectively samples version space consistent hypotheses productive sample images tiny imperceptible perturbations fool classifier predicting wrong labels high confidence lection end crucial promote diversity ensemble qbag qboost algorithms classifiers trained random subsets similar dataset degrade diversity ensemble reducing number necessary labeled samples unified sample learning feature selection procedure reducing sampling bias controlling variance improvements active learning provides discriminative trackers moreover using diversity data diversify committee members promoting classifiers unique misclassifications samples active learning employed promote diversity ensemble tracking detection definition tracker tries determine state target frame finding transformation previous state formulation tracker employs classifier separate target background realized evaluating possible candidates expected target candidate whose appearance resembles target usually considered new target state finally classifier updated reflect recent information end first several samples obtained transformation ytj previous target state ytj sample indicates location ytj frame image patch contained sample evaluated classifier scoring function calculate score sjt score utilized obtain label sample typically thresholding score sjt otherwise serves lower upper thresholds respectively finally target location obtained comparing samples classification scores obtain exact target state sample highest score selected new target ytj argmax sjt subset samples labels used classifier model hxt set samples labels model update function defines subset samples tracker considers model update ensemble discriminative tracker employs set classifiers instead one classifiers hereafter called committee represented typically homogeneous independent popular ensemble trackers utilize majority voting committee utility function sjt sign used label samples model classifier updated independently meaning committee members trained similar set samples common label diverse ensemble discriminative tracker propose diverse ensemble tracker composed diverse ensemble classifiers committee memory object detector serves auxiliary classifier information exchange channel governed active learning allows effective diversification ensemble improving generalization tracker accelerating convergence distribution target appearance leveraged complementary nature memory auxiliary 
tracker facilitate effective model update one way diversify ensemble increase number examples disagree upon using bagging boosting construct ensemble fix sample set ignores critical need diversity data randomly sampled shared data distribution however committee member exists set samples distinguish committee members one way obtain samples generate training samples artificially differ maximally current ensemble diversified ensemble covers larger areas version space space consistent hypotheses samples current frame however radical update ensemble may render classifier susceptible drastic target appearance changes abrupt motion occlusions case given nature target classifier adapt rapidly target changes yet keep memory target target goes got occluded known dilemma addition samples ensemble unanimous external teacher maybe deemed required amend shortcomings auxiliary classifier utilized label samples ensemble dispute upon classifier samples less frequently ensemble realizing means appearance object may change significantly negative sample current frame looks similar positive example previous frames longer memory tracker active query optimization employed query label informative samples auxiliary classifier observed effectively balance equilibrium tracker well figure presents schematic proposed tracker formalization approach committee comes solid vote sample sample labeled accordingly however committee disagrees sample label queried auxiliary classifier sign sjt sjt otherwise sjt derived uncertain samples list defined sjt committee members updates using proposed mechanism using uncertain samples finally maintain memory slower update rate auxiliary classifier updated every frames samples algorithm summarizes proposed tracker diversifying ensemble update model updates construct diverse ensemble either replace weakest oldest classifier ensemble creates new ensemble iteration former lacks flexibility adjust rate target change latter involves high level computation redundancy alleviate shortcomings create ensemble first frame update frame keep memory target diversify improve effectiveness ensemble diversifying update procedure follows members ensemble updated random subsets size uncertain data make adept handling samples generate temporary ensemble note certain samples committee unanimous label adding training set committee classifiers redundant input committee models auxiliary model input target position previous frame output target position current frame sample transformation ytj calculate committee score sjt sjt sample label uncertain sign else sign sjt uniformly resample data calculate prediction error calculate empirical distribution samples draw samples calculate class membership probability set labels samples calculate new prediction error diversity sets applied mod target transformation ytj argmax sjt calculate target position algorithm diverse ensemble discriminative tracker label prediction original ensemble calculated labels given whole tracker composed ensemble auxiliary classifier prediction error obtained empirical distribution training data calculated govern creation artificial data iterative process committee members samples drawn assuming attribute independence given sample class membership probabilities temporary ensemble calculated labels sampled distribution probability selecting label inversely proportional temporary ensemble prediction set artificial samples diverse labels called diversity set committee member classifier temporary ensemble updated obtain diverse ensemble 
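The flattened algorithm listing above (committee scoring, thresholded labelling, querying the auxiliary classifier on disputed samples, argmax localization, and per-member updates on random subsets of the uncertain set) can be read as the per-frame loop sketched below. This is a hedged, illustrative sketch only: all names, signatures, and the place where the artificial diversity data would be injected are assumptions, not the authors' implementation.

```python
import random

def dedt_step(frame, prev_state, committee, auxiliary, sampler, t,
              tau=0.5, subset_size=20, aux_period=10):
    """One frame of the tracking loop sketched in the algorithm above.
    Assumed interfaces: committee members expose score()/update(),
    the auxiliary (long-memory) classifier exposes predict()/update()."""
    samples = sampler(frame, prev_state)        # candidate patches around prev_state
    scores, labels, uncertain = [], [], []
    for patch in samples:
        s = sum(c.score(patch) for c in committee) / len(committee)
        scores.append(s)
        if abs(s) > tau:                        # committee votes with confidence
            labels.append(1 if s > 0 else -1)
        else:                                   # disputed sample: query long memory
            lbl = auxiliary.predict(patch)
            labels.append(lbl)
            uncertain.append((patch, lbl))

    # new target state: candidate with the highest committee score
    new_state = samples[max(range(len(samples)), key=scores.__getitem__)]

    # each member retrains on its own random subset of the disputed samples;
    # the artificial "diversity set" of the update scheme would be added here
    for c in committee:
        subset = random.sample(uncertain, min(subset_size, len(uncertain)))
        if subset:
            patches, lbls = zip(*subset)
            c.update(patches, lbls)

    if t % aux_period == 0:                     # long-memory model updates less often
        auxiliary.update(samples, labels)
    return new_state
```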
calculate prediction error update increases total prediction error ensemble artificial data rejected new data generated denotes step function returns iff argument otherwise procedure creates samples member committee distinguish members ensemble using contradictory label therefore improving ensemble diversity accepts using artificial data improves ensemble accuracy implementation details several parameters system number committee members parameters sampling step number samples effective search radius holding time auxiliary classifier larger values results temporary committee higher degree overlap thus less diverse whereas smaller values tend miss latest changes target larger number artificial samples result diversity ensemble reduce chance successful update lowering prediction error ensemble parameters tuned using simulated annealing optimization set implementation used lazy classifiers hog feature ensemble reused calculations caching mechanism accelerate classification method empirical distribution data gaussian distribution determinedly estimating mean standard variation given training set hog addition localize target samples positive ensemble scores considered one highest sum confidence scores selected next target position auxiliary classifier detector features detector dictionary parameters lazy classifier thresholds rest parameters except adjusted control speed tracker adjusted using dedt achieved speed fps pentium ghz implementation cpu table quantitative evaluation trackers different visual tracking challenges using auc success plot first second third best methods shown color data available author website attribute def occ ipr opr tld strk meem muster staple srdcf ccot table comparison trackers based success rates iou first second third best methods shown color tld strk meem muster staple srdcf ccot success avg fps regions respectively also compare trackers success rate conventional thresholds iou effect diversification demonstrate effectiveness proposed diversification method compare dedt tracker two different versions tracker firs version dedtbag ensemble classifiers updated uniformpicked subsets uncertain data step section version committee members updated artificially generated data steps section three algorithms use samples update classifiers addition overall performance tracker measure diversity ensemble using elaborated statistically independent classifiers expectation classifiers tend classify sample correctly positive values commit errors different samples negative ensemble classifiers averaged statistics pairs classifiers experiments perform benchmark videos along partial subsets dataset distinguishing attribute evaluate tracker performance different situations attributes illumination variation scale variation occlusions occ deformation def motion blur fast motion ipr rotation opr low resolution background clutter defined based biggest challenges tracker may face throughout tracking comparison used success precision plots area curve provides robust metric comparing tracker performances result algorithms reported average five independent runs precision plot compares number frames tracker certain pixels displacement whereas overall performance tracker measured area surface success plot success tracker time determined normalized overlap tracker target estimation ground truth also known iou exceeds threshold success plot graphs success tracker different values threshold calculated length sequence denotes area region stands intersection union qav number cases classifier 
classified sample foreground classifier detected background etc figure effect diversification procedure employed proposed tracker figure illustrates effectiveness diversification mechanism contrast merely generating data update classifiers uninformed subsamples data experiment results dedt qav dedt qav qav concluded steps proposed diversification crucial maintain accurate diverse ensemble qav qav shows diversity better random diversity obtain however reveals merely using artificial data without samples gathered tracker provide enough data accurate model update effect using artificial data first look using synthesized data train ensemble keep track real object may seem proper experiment look closest patch real image frame video synthesized sample use diversity data end frame dense sampling frame performed hog image patches calculated closest match generated sample using euclidean distance selected obtained tracker referred performance compared original dedt figure shows use computationallyexpensive version algorithm improve performance significantly however noted generating adversarial samples ensemble diversity data individual committee members expected increase accuracy ensemble yet scope current research may considered future direction research figure activeness effect labeling thresholds performance proposed algorithm semble auxiliary classifier chance interpret figure prudent note forces ensemble label samples without assistance auxiliary classifier increasing ensemble starts query highly disputed samples auxiliary classifier desired design value increases excessively ensemble queries even slightly uncertain samples auxiliary classifier rendering tracker prone labeling noise classifier addition tracker loses ability update rapidly case abrupt change target appearance location leading degraded performance tracker extreme case tracker reduces single object detector modeled auxiliary classifier information exchange one way form querying informative labels auxiliary classifier way labeled samples committee certain samples observed exchange essential construct robust accurate tracker moreover data exchange breaks loop also manages equilibrium tracker view lower values correspond tracker higher values make conservative comparison figure effect using artificial data versus real data employed proposed tracker effect activeness labeling thresholds control activeness data exchange committee auxiliary classifier therefore allowing ensemble get assistance collaborator implementation two values treated independently sake argument assume figure compares effects different values also random data exchange scheme labeler gets label sample establish fair comparison popular discriminative trackers according recent large benchmark recent literature selected tld strk meem muster staple srdcf ccot figure presents success precision plots dedt along trackers sequences shown plot dedt usually keeps localization error pixels table presents area curve success plot sequences subcategories focusing certain challenge visual tracking shown dedt figure quantitative performance comparison proposed tracker dedt trackers using success plot top precision plot bottom competitive precision compared ccot employs deep feature maps performs better rest investigated trackers dataset performance dedt comparable ccot case illumination variation deformation rotation motion blur superior performance handling background clutter indicates effectiveness target background detection flexibility accommodating rapid target changes former 
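The ensemble-diversity numbers reported here are averages of the pairwise Q statistic, built from counts of samples the two classifiers handle correctly or incorrectly together. A minimal sketch following the standard Kuncheva-and-Whitaker definition cited in the text; input encoding and names are assumptions.

```python
from itertools import combinations

def q_statistic(correct_i, correct_k):
    """Pairwise Q statistic from two binary correctness vectors
    (1 = classifier handled the sample correctly, 0 = it did not)."""
    n11 = sum(1 for a, b in zip(correct_i, correct_k) if a and b)
    n00 = sum(1 for a, b in zip(correct_i, correct_k) if not a and not b)
    n10 = sum(1 for a, b in zip(correct_i, correct_k) if a and not b)
    n01 = sum(1 for a, b in zip(correct_i, correct_k) if not a and b)
    denom = n11 * n00 + n01 * n10
    return (n11 * n00 - n01 * n10) / denom if denom else 0.0

def q_average(correctness_per_classifier):
    """Average Q over all classifier pairs; values near 0 suggest a diverse
    committee, values near 1 a strongly correlated one."""
    pairs = list(combinations(correctness_per_classifier, 2))
    return sum(q_statistic(ci, ck) for ci, ck in pairs) / len(pairs)
```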
attributed effective ensemble tracking latter known effect combining long memory observed handling extreme rotations ensemble heavily relies auxiliary tracker although brings superior performance category better representation ensemble model may reduce reliance tracker auxiliary proposed algorithm shows performance scenario compared trackers srdcf ccot although provide localization targets able keep tracking finding highlights importance research dcf trackers finally qualitative comparison dedt versus trackers presented figure conclusion figure sample tracking results evaluated algorithms several challenging video sequences sequences red box depicts dedt trackers blue ground truth illustrated yellow dashed box top bottom sequences shaking basketball soccer drastic illumination changes scaling rotations background clutter noise severe occlusions study proposed diverse ensemble discriminative tracker dedt maintains diverse committee classifiers label samples queries disputed labels informative memory auxiliary classifier generating artificial data diverse labels intended diversify ensemble classifiers efficiently covering version space increasing generalization ensemble result improve accuracy addition using concept labeling updating stages tracker label noise problem decreased updating classifiers informative samples using diverse committee turn problem equal weights samples addressed good approximation target location acquired even without dense sampling active learning scheme also manages balance memory recalling label memory memory clear label due forgetting label insufficient data also reduces dependence tracker single classifier auxiliary classifier yet breaking selflearning loop avoid accumulative model drift result experiment benchmark demonstrates competitive tracking performance proposed tracker compared next step investigate strategies detect generate challenging samples ensemble adversarial samples ensemble accelerate model construction especially rapidly changing scenarios references abe mamitsuka query learning strategies using boosting bagging icml avidan support vector tracking pami avidan ensemble tracking pami babenko yang belongie visual tracking online multiple instance learning cvpr bai sclaroff betke monnier randomized ensemble tracking iccv bai tang robust tracking via weakly supervised ranking svm cvpr bengio louradour collobert weston curriculum learning icml bertinetto valmadre golodetz miksik torr staple complementary learners tracking proceedings ieee conference computer vision pattern recognition pages beygelzimer dasgupta langford importance weighted active learning proceedings annual international conference machine learning pages acm cohn ghahramani jordan active learning statistical models journal artificial intelligence research collins liu leordeanu online selection discriminative tracking features pami dalal triggs histograms oriented gradients human detection computer vision pattern recognition cvpr ieee computer society conference volume pages ieee danelljan bhat khan felsberg eco efficient convolution operators tracking arxiv preprint danelljan hager shahbaz khan felsberg learning spatially regularized correlation filters visual tracking iccv pages danelljan robinson khan felsberg beyond correlation filters learning continuous convolution operators visual tracking eccv torre black robust principal component analysis computer vision iccv felzenszwalb girshick mcallester ramanan object detection discriminatively trained partbased models pami gall yao 
razavi van gool lempitsky hough forests object detection tracking action recognition pami goodfellow shlens szegedy explaining harnessing adversarial examples arxiv preprint grabner grabner bischof tracking via boosting bmvc volume page grabner leistner bischof boosting robust tracking eccv han sim adam branchout regularization online ensemble tracking convolutional neural networks proceedings ieee international conference computer vision pages hare saffari torr struck structured output tracking kernels iccv yang lau wang yang visual tracking via locality sensitive histograms cvpr pages henriques caseiro martins batista exploiting circulant structure kernels eccv pages springer hong chen wang mei prokhorov tao tracker muster cognitive psychology inspired approach object tracking cvpr kalal mikolajczyk matas pami kiani galoogahi fagg lucey learning correlation filters visual tracking arxiv kiani galoogahi sim lucey correlation filters limited boundaries cvpr kristan matas leonardis felsberg visual object tracking challenge results iccvw krogh vedelsby neural network ensembles cross validation active learning advances neural information processing systems kuncheva whitaker measures diversity classifier ensembles relationship ensemble accuracy machine learning lampert peters active structured learning object detection pages springer lapedriza pirsiavash bylinskii torralba training examples equally valuable arxiv leistner saffari bischof miforests multipleinstance learning randomized trees eccv lin yang yan new visual tracking challenge pami wang dong yan liu zha active sample learning feature selection unified approach arxiv preprint porikli convolutional neural net bagging online visual tracking computer vision image understanding issaranon forsyth safetynet detecting rejecting adversarial examples robustly arxiv melville mooney constructing diverse classifier ensembles using artificial training examples ijcai volume pages melville mooney diverse ensembles active learning proceedings international conference machine learning page acm meshgi oba ishii active discriminative tracking using collective memory mva meshgi oba ishii robust discriminative tracking via avss nam baek han modeling propagating cnns tree structure visual tracking arxiv preprint oron levi avidan locally orderless tracking ijcv oza online bagging boosting smc rao yao bai qiu liu online random ferns robust visual tracking pattern recognition icpr international conference pages ieee saffari leistner godec bischof robust boosting priors eccv saffari leistner santner godec bischof random forests iccvw salaheldin maher helw robust tracking diverse ensembles random projections proceedings ieee international conference computer vision workshops pages salti cavallaro stefano adaptive appearance modeling video tracking survey evaluation ieee tip santner leistner saffari pock bischof prost parallel robust online simple tracking cvpr settles active learning morgan claypool publishers seung opper sompolinsky query committee colt pages acm tang brennan zhao tao using support vector machines iccv vezhnevets barinova avoiding boosting overfitting removing confusing samples ecml vijayanarasimhan grauman active visual category learning ijcv visentini kittler foresti classifier selection adaptive object tracking mcs pages springer wang ouyang wang stct sequentially training convolutional networks visual tracking proceedings ieee conference computer vision pattern recognition pages wang hua han discriminative tracking metric learning eccv 
pages lim yang online object tracking benchmark cvpr pages ieee zhang sclaroff meem robust tracking via multiple experts using entropy minimization eccv zhang song visual tracking via online weighted multiple instance learning zhang zhang yang compressive tracking eccv pages springer zhang zhang yang robust object tracking via active feature selection ieee csvt zhou ensemble methods foundations algorithms crc press
| 1 |
feb inverse prior optimal posterior contraction multiple hypothesis testing ray bai malay ghosh university florida march abstract study problem estimating sparse unknown mean vector entries corrupted gaussian white noise bayesian framework continuous shrinkage priors expressed normal densities popular obtaining sparse estimates article introduce new fully bayesian prior known inverse igg prior prove posterior distribution contracts around true near minimax rate mild conditions process prove sufficient conditions minimax posterior contraction given van der pas necessary optimal posterior contraction show igg posterior density concentrates rate faster horseshoe sense classify true signals also propose hypothesis test based thresholding posterior mean taking loss function expected number misclassified tests show test procedure asymptotically attains optimal bayes risk exactly illustrate simulations data analysis igg excellent finite sample performance estimation classification keywords phrases normal means problem sparsity nearly black vectors posterior contraction multiple hypothesis testing heavy tail shrinkage estimation malay ghosh email ghoshm distinguished professor department statistics university florida ray bai email graduate student department statistics university florida introduction normal means problem revisited suppose observe random observation setting large sparsity common phenomenon unknown mean vector nonzero model primarily interested separating signals noise giving robust estimates signals simple framework basis number problems image reconstruction genetics wavelet analysis johnstone silverman example wish reconstruct image millions pixels data pixels typically needed recover objects interest genetics may tens thousands gene expression data points significantly associated phenotype interest instance wellcome trust confirmed seven genes association type diabetes applications demonstrate sparsity fairly reasonable assumption shrinkage priors shrinkage priors widely used obtaining sparse estimates priors typically take form density positive reals scalemixture densities typically contain heavy mass around zero posterior density heavily concentrated around however also retain heavy enough tails order correctly identify prevent overshrinkage true signals shrinkage priors comprise wide class shrinkage priors priors take form global shrinkage parameter shrinks origin local scale parameters control degree individual shrinkage examples priors include bayesian lasso park casella horseshoe prior carvalho prior strawderman berger neg prior griffin brown prior bhattacharya generalized double pareto gdp family armagan prior bhadra three parameter beta normal tpbn mixture family introduced armagan generalizes several shrinkage priors tpbn family places beta prime density also known inverted beta prior positive constants examples priors fall tpbn family include horseshoe prior strawdermanberger prior gamma neg prior priors studied extensively context sparse normal means estimation many authors shown posterior distribution priors contracts near minimax rate past posterior contraction results relied tuning estimating global parameter achieve rate either priori specified specific rate decay van der pas ghosh chakrabarti estimated data empirical bayes placing prior van der pas van der pas bhattacharya moving beyond framework van der pas provided conditions posterior distribution shrinkage prior form achieves minimax posterior contraction rate provided posteriori independent result quite 
general covers wide variety priors including normalgamma prior griffin brown lasso prior bhadra thorough discussion optimal posterior contraction given section addition robust estimation often interested identifying true signals entries within essentially conducting simultaneous hypothesis tests assuming true model mixture density bogdan studied risk properties large number multiple testing rules specifically bogdan considered symmetric loss function taken expected total number misclassified tests imposing regularity conditions induce sparsity bound type type error probabilities away zero one bogdan arrived simple closed form asymptotic bayes risk loss termed asymptotically bayes optimal risk sparsity abos risk provided necessary sufficient conditions number classical multiple test procedures benjamini hochberg procedure could asymptotically match abos risk provided true generated point density thorough discussion decision theoretic framework presented section testing rules induced shrinkage priors specifically priors also studied decision theoretic framework assuming come model datta ghosh showed thresholding rule based posterior mean horseshoe prior could asymptotically attain abos risk multiplicative constant ghosh generalized result general class shrinkage priors form including distribution tpbn family gdp family priors ghosh chakrabarti later showed thresholding rule class priors could even asymptotically attain abos risk exactly bhadra also extended rule prior showing testing rule based prior could asymptotically attain abos risk multiplicative constant aforementioned papers global parameter treated either tuning parameter decays zero set empirical bayes estimate van der pas article introduce new fully bayesian shrinkage prior goal twofold observed vector entries would like achieve robust estimation robust testing rule identifying true signals tackle problems introduce new shrinkage prior known inverse igg prior igg prior number attractive theoretical properties extremely mild conditions specification appropriate hyperparameters igg posterior density able attain near minimax contraction rate work differs existing literature several notable ways first igg special case tpbn prior density however show achieve near minimax posterior contraction simply specifying dependent hyperparameters rather tuning estimating shared global parameter prior therefore fall framework theoretical results differ many existing results based priors moreover prior example shrinkage prior necessarily satisfy conditions optimal contraction given van der pas thus proving conditions necessary posterior contraction finally justify use igg showing posterior concentrates rate faster known bayes estimator including horseshoe densities sense addition show testing rule classifying signals asymptotically achieves optimal bayes risk exactly previously ghosh chakrabarti demonstrated testing rules based priors could asymptotically attain optimal bayes risk exactly result required tuning estimating global parameter igg prior avoids placing appropriate values dependent upon sample size hyperparameters instead organization paper follows section introduce igg prior show mimics traditional shrinkage priors placing heavy mass around zero also establish various concentration properties igg prior characterize tail behavior crucial establishing theoretical results section discuss behavior posterior igg prior show class sparse normal mean vectors posterior distribution igg prior contracts around true near minimax rate mild conditions 
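Since the testing framework above rests on the two-groups model and its Bayes-optimal rule under the additive symmetric loss, a small simulation makes the benchmark concrete. The sketch below assumes unit noise variance and illustrative values for the mixing proportion and slab variance; the classification rule is the exact posterior-probability threshold at 1/2, which coincides with the Bayes oracle rule under the symmetric loss.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

n, p_signal, psi2 = 10_000, 0.02, 25.0   # illustrative sparsity and slab variance

# two-groups model: theta_i ~ p*N(0, psi2) + (1-p)*delta_0, and y_i = theta_i + N(0, 1)
is_signal = rng.random(n) < p_signal
theta = np.where(is_signal, rng.normal(0.0, np.sqrt(psi2), n), 0.0)
y = theta + rng.normal(size=n)

# exact posterior probability that y_i came from the slab component
dens_slab = norm.pdf(y, scale=np.sqrt(1.0 + psi2))
dens_null = norm.pdf(y, scale=1.0)
post_incl = p_signal * dens_slab / (p_signal * dens_slab + (1 - p_signal) * dens_null)

# Bayes rule under the symmetric 0-1 loss: declare a signal when the probability exceeds 1/2
declared = post_incl > 0.5
misclassified = np.sum(declared != is_signal)
print(f"misclassified tests: {misclassified} out of {n}")
```

Thresholding rules induced by one-group shrinkage priors are then judged by how closely their expected number of misclassifications tracks this oracle count as the problem grows.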
moreover posterior concentrates faster rate known bayes estimator section introduce thresholding rule based posterior mean demonstrate asymptotically attains abos risk exactly section present simulation results demonstrate igg prior excellent performance estimation classification finite samples finally section utilize igg prior analyze prostate cancer data set notation use following notations rest paper let two sequences real numbers indexed sufficiently large write denote lim inf lim sup denote exists constant independent cbn provided sufficiently large write moreover sufficiently large positive constant independent write abnn write thus throughout paper also use denote standard normal random variable cumulative distribution function probability density function respectively inverse igg prior suppose observed task estimate vector consider putting prior form ind denotes beta prime density scale mixture prior special case tpbn family priors global parameter fixed one easily sees posterior mean given using simple transformation variables also see posterior density shrinkage factor proportional exp clear amount shrinkage controlled shrinkage factor appropriately chosen one obtain sparse estimates example obtain standard density distinguish work previous results note beta prime density rewritten product independent inverse gamma gamma densities reparametrize follows figure marginal density igg prior hyperparameters comparison shrinkage priors prior marginal density density dir specified prior bayesian hierarchy ind noted rate parameter could replaced positive constant representation gives important intuition behavior igg prior namely small values places mass around zero proposition shows marginal distribution single igg prior singularity zero proposition endowed igg prior marginal distribution unbounded singularity zero proof see appendix proposition gives insight choose hyperparameters namely see small values igg prior induce sparse estimates shrinking observations zero illustrate section tails igg prior still heavy enough identify signals significantly far away zero figure gives plot marginal density igg prior figure shows small value igg singularity zero igg prior also appears slightly heavier mass around zero shrinkage priors maintaining tail robustness section provide theoretical argument shows shrinkage profile near zero igg indeed aggressive previous known bayesian estimators concentration properties igg prior consider igg prior given allow hyperparameter allowed vary namely allow even mass placed around zero also fix lie interval emphasize hyperparameter depends rewrite prior ind rest paper label particular variant igg prior iggn prior described section shrinkage factor plays critical role amount shrinkage observation section characterize tail properties posterior distribution demonstrates iggn prior shrinks estimates zero still heavy enough tails identify true signals following results assume iggn prior theorem exi proof see appendix corollary fixed theorem fix exi proof see appendix corollary fixed theorem fixed theorem fix exp proof see appendix corollary fixed every fixed corollary fixed every fixed since corollaries illustrate observations shrunk towards origin iggn prior however corollaries demonstrate big enough posterior mean assures tails igg prior still sufficiently heavy detect true signals use concentration properties established theorem provide sufficient conditions posterior mean posterior distribution iggn prior contract around true minimax rate section concentration 
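To make the scale-mixture form and the shrinkage factor tangible, the sketch below samples local variances from a beta-prime density, written as a ratio of independent gamma draws (equivalently a gamma times an inverse-gamma), and approximates the posterior shrinkage weight for a single observation by self-normalized importance sampling over prior draws. The hyperparameter values and the unit scale constants are illustrative, not the paper's sample-size-dependent choices.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def sample_beta_prime(a, b, size):
    # if X ~ Gamma(a, 1) and Y ~ Gamma(b, 1), then X / Y follows a beta-prime law
    return rng.gamma(a, 1.0, size) / rng.gamma(b, 1.0, size)

def posterior_shrinkage(y, a, b, n_draws=200_000):
    """Monte Carlo estimate of E[kappa | y] with kappa = 1 / (1 + lambda^2),
    for the model y | theta ~ N(theta, 1), theta | lambda^2 ~ N(0, lambda^2)."""
    lam2 = sample_beta_prime(a, b, n_draws)
    # the marginal likelihood of y given lambda^2 is N(0, 1 + lambda^2); using it
    # as an importance weight on prior draws yields posterior expectations
    w = norm.pdf(y, scale=np.sqrt(1.0 + lam2))
    w /= w.sum()
    kappa = 1.0 / (1.0 + lam2)
    return np.sum(w * kappa)

for y in (0.5, 2.0, 5.0):
    k = posterior_shrinkage(y, a=0.75, b=0.01)
    print(f"y = {y:4.1f}  E[kappa | y] = {k:.3f}  posterior mean ~ {(1 - k) * y:.3f}")
```

Small observations are shrunk almost entirely to zero while large ones are left essentially untouched, which is the qualitative behaviour formalized by the corollaries above.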
properties also help construct multiple testing procedure based section posterior behavior igg prior sparse normal vectors nearly black sense suppose observe let denote subset given say sparse nearly black let true mean vector seminal work donoho showed estimator corresponding minimax risk respect norm denoted given inf sup log throughout paper denotes expectation respect distribution effectively states presence sparsity estimator loses logarithmic factor ambient dimension penalty knowing true locations zeroes moreover implies need number replicates order true sparsity level consistently estimate order performance bayesian estimators compared frequentist ones say bayesian point estimator attains minimax risk order constant sup log examples potential choices include posterior median posterior mean johnstone silverman posterior mode pertains particular point estimate fully bayesian interpretation say posterior distribution contracts around true rate least fast minimax risk sup log every hand another seminal paper ghosal showed posterior distribution contract faster minimax rate log qnn around truth hence optimal rate contraction posterior distribution around true must minimax optimal rate multiplicative constant words use fully bayesian model estimate nearly black normal mean vector minimax optimal rate benchmark posterior distribution capture true ball squared radius log multiplicative constant subsequent section first prove iggn prior satisfy conditions posterior contraction given van der pas shrinkage priors provide sufficient conditions rate decay iggn prior posterior mean attains minimax risk posterior distribution contracts minimax rate order hold iggn prior true sparsity level must known however unknown iggn prior still attain concentration rates minimax posterior contraction igg prior shrinkage priors priors priori independent van der pas gave sufficient conditions posterior contracts minimax rate given variety densities known satisfy conditions including horseshoe priors lasso normalgamma prior inverse gaussian prior first demonstrate igg prior prior fail satisfy conditions proceeding provide conditions achieve minimax posterior contraction rate first restate theorem van der pas proposition van der pas suppose observed assume prior form suppose let arbitrary positive sequence tending suppose following conditions scale prior hold write function uniformly regular varying exist constants depend suppose constants suppose constant let qnn log qnn let constant log assume csn conditions sup log posterior distribution prior contracts minimax contraction rate briefly summarize condition proposition assumes posterior recovers nonzero means optimal rate ensuring tails decay faster exponential rate condition ensures puts finite mass values finally condition describes decay away neighborhood zero one easily checks igg prior satisfies first two conditions appropriately chosen necessarily satisfy third one show lemma lemma suppose observe suppose sequence prior density scale term igg prior hyperparameters fails satisfy condition proposition proof see appendix lemma shows iggn prior satisfy conditions given van der pas however show theorem igg posterior fact contract minimax rate provided appropriate rate decay placed therefore shown conditions given proposition sufficient minimax posterior contraction sparse normal means problem necessary next study mean square error mse posterior variance igg prior provide upper bound results assume true belongs set nearly black vectors defined suitably chosen 
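The nearly black benchmark can also be checked numerically: the sketch below generates a mean vector with a small number of nonzero coordinates, applies the universal hard threshold sqrt(2 log n) as a simple reference estimator, and compares the realized squared error loss with 2 s_n log(n / s_n). The signal size and sparsity level are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

n, s_n, signal = 100_000, 100, 6.0          # illustrative dimension, sparsity, signal size
theta = np.zeros(n)
theta[:s_n] = signal                         # a nearly black mean vector
y = theta + rng.normal(size=n)

# universal hard thresholding at sqrt(2 log n) as a simple reference estimator
t = np.sqrt(2.0 * np.log(n))
theta_hat = np.where(np.abs(y) > t, y, 0.0)

loss = np.sum((theta_hat - theta) ** 2)
benchmark = 2.0 * s_n * np.log(n / s_n)
print(f"squared error loss: {loss:.1f}   minimax benchmark 2*s*log(n/s): {benchmark:.1f}")
```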
rate upper bounds equal multiplicative constant minimax risk utilizing bounds also show posterior distribution iggn prior able contract around rates since priors independently placed denote resulting vector posterior means ith individual posterior mean therefore bayes estimate squared error loss theorem gives upper bound mean squared error theorem suppose let denote posterior mean vector mse satisfies sup log log provided proof see appendix minimax result donoho also lower bound sup log choice therefore leads upper bound mse order log multiplicative constant based observations immediately following corollary corollary suppose known set conditions theorem sup log corollary shows posterior mean igg prior performs well point estimator able attain minimax risk possibly multiplicative constant although igg prior include point mass zero proposition corollary together show pole zero igg prior mimics point mass well enough heavy tails ensure large observations next theorem gives upper bound total posterior variance corresponding iggn prior theorem suppose prior conditions theorem total posterior variance satisfies log sup log provided proof see appendix proven theorems ready state main theorem concerning optimal posterior contraction theorem shows igg competitive popular priors like globallocal shrinkage priors considered ghosh chakrabarti prior considered bhattacharya denote posterior mean vector theorem suppose suppose true sparsity level known prior qnn sup log sup log every proof straightforward application markov inequality combined results theorems leads follows markov inequality combined result theorem theorem shows mild regularity conditions posterior distribution igg prior contracts around true mean vector corresponding bayes estimates least fast minimax risk since posterior contract around truth faster rate log ghosal posterior distribution igg prior conditions theorem must contract around true minimax optimal rate multiplicative constant remark conditions needed attain minimax rate posterior contraction quite mild namely require need make assumptions size true signal size sparsity level comparison castillo van der vaart showed prior gaussian slab contracts rate log qnn bhattacharya showed given dir prior prior posterior contracts around minimax rate provided provided log iggn prior removes restrictions moreover minimax contraction result rely tuning estimating global tuning parameter many previous authors done instead appropriate selection hyperparameters bayesian hierarchy product density reality true sparsity level rarely known best obtain contraction rate log suitable modification theorem leads following corollary corollary suppose suppose true sparsity level unknown prior sup log sup log every shown posterior mean attains minimax risk multiplicative constant posterior density captures true ball squared radius log multiplicative constant quantify shrinkage profile around zero terms risk bounds show risk bound fact sharper known shrinkage priors risk bounds section established choice allows iggn posterior contract near minimax rate provided figure suggests shrinkage around zero aggressive iggn prior known shrinkage priors set small values section provide theoretical justification behavior near zero carvalho bhadra showed true data generating model bayes estimate sampling density horseshoe estimators converge true model rate terms distance true model posterior density argue result horseshoe estimators squelch noise better shrinkage estimators however section show iggn prior able shrink 
noise even aggressively appropriate chosen let true parameter value sampling model let log denote divergence density proof utilizes following result clarke barron proposition clarke barron let posterior distribution corresponding prior observing data according ther sampling model define posterior predictive density assume risk bayes estimator define satisfies log denotes measure set using proposition shown carvalho bhadra global parameter fixed true parameter horseshoe risk satisfies log log positive constant rate sense risk lower maximum likelihood estimator mle rate log next theorem establishes igg prior achieve even faster rate convergence sense appropriate choices theorem suppose true sampling model igg prior optimal rate convergence satisfies inequality log log log log proof see appendix since log see theorem iggn posterior density hyperparameters optimal convergence rate convergence rate faster horseshoe converge rate log log log knowledge sharpest known bound risk bayes estimator result provides rigorous explanation observation igg seems shrink noise aggressively shrinkage priors theorem justifies use choice hyperparameter igg prior also provides insight choose hyperparameter shows constant large set large theorem thus implies order minimize distance igg posterior density pick small since require order achieve contraction rate theoretical results suggest set small optimal posterior concentration multiple testing igg prior asymptotic bayes optimality sparsity suppose observe identify true signals conduct simultaneous tests assumed generated true model represents diffuse slab density point mass mixture model often considered theoretical ideal generating sparse vector statistical literature indeed carvalho referred model gold standard sparse problems model equivalent assuming follows random variable whose distribution determined latent binary random variable denotes event true corresponds event false assumed bernoulli random variables distribution mass assumed follow distribution marginal distributions given following model testing problem equivalent testing simultaneously versus consider symmetric loss individual test total loss multiple testing procedure assumed sum individual losses incurred test letting denote probabilities type type errors ith test respectively bayes risk multiple testing procedure model given bogdan showed rule minimizes bayes risk test rejects denotes marginal density denotes log log rule known bayes oracle makes use unknown parameters hence attainable finite samples reparametrizing threshold becomes log log bogdan considered following asymptotic scheme assumption sequences vectors satisfies following conditions log bogdan provided detailed insight threshold summarizing briefly type type errors zero inference essentially better tossing coin assumption bogdan showed corresponding asymptotic optimal bayes risk particularly simple form given ropt tbo ptbo terms tend zero testing procedure risk said asymptotically bayes optimal sparsity abos ropt optimal testing rule based igg estimator noted earlier posterior mean depends heavily shrinkage factor concentration properties igg prior proven sections sensible thresholding rule classifies observations signals noise based posterior distribution shrinkage factor consider following testing rule ith observation reject shrinkage factor based iggn prior within context multiple testing good benchmark test procedure whether abos whether optimal risk asymptotically equal bayes oracle risk adopting framework bogdan let rigg 
denote asymptotic bayes risk testing rule compare abos risk defined next theorem illustrates presence sparsity rule fact abos theorem suppose observations distribution sequence vectors satisfies assumption suppose wish test using classification rule suppose way lim rigg ropt rule based iggn prior abos proof see appendix shown thresholding rule based iggn prior asymptotically attains abos risk exactly provided decays zero certain rate relative sparsity level example prior mixing proportion known set hyperparameter conditions classification rule abos satisfied work ultimately moves testing problem beyond framework previously datta ghosh ghosh bhadra ghosh chakrabarti shown horseshoe priors asymptotically attain bayes oracle risk possibly multiplicative constant either specifying rate global parameter estimating empirical bayes estimator case igg prior prove thresholding rule based posterior mean abos without utilizing shared global tuning parameter simulation studies computation selection hyperparameters letting full conditional distributions rest rest rest gig gig denotes generalized inverse gaussian gig density therefore igg model implemented straightforwardly gibbs sampling utilizing full conditionals simulations set light theorems choices ensure igg posterior contract around true least rate keeping small denote igg prior hyperparameters simulation studies described run iterations gibbs sampler discarding first simulation study sparse estimation illustrate performance prior use bhadra specify sparsity levels set signals equal values either total eight simulation settings randomly generate vectors settings compute average squared error loss corresponding posterior median across replicates compare results average squared error loss posterior median horseshoe estimators since shrinkage priors singularities zero priors use fully bayesian approach ghosh prior specify dir prior scale component along bhattacharya results presented table table shows various sparsity signal strength settings posterior median lowest estimated squared error loss nearly simulation settings performs better igg table comparison average squared error loss posterior median estimate across replications results reported horseshoe horseshoe settings empirical results confirm theoretical properties proven section illustrate finite samples igg prior often outperforms popular shrinkage priors empirical results also lend strong support use inverted beta prior scale density shrinkage priors however results suggest obtain better estimation allow vary sample size rather keeping fixed horseshoe priors simulation study multiple testing multiple testing rule adopt simulation framework datta ghosh ghosh fix sparsity levels total simulation settings sample size generate data model log apply thresholding rule using classify model either signals noise estimate average misclassification probability thresholding rule replicates taking plot figure theoretical posterior inclusion probabilities model given along shrinkage weights corresponding posterior inclusion probability figure comparison posterior inclusion probabilities posterior shrinkage weights prior circles figure denote theoretical posterior inclusion probabilities triangles correspond shrinkage weights figure clearly shows small values sparsity level shrinkage weights close proximity posterior inclusion probabilities theoretical results established section justify use using approximation corresponding posterior inclusion probabilities sparse situations therefore motivates use 
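The Gibbs sampler described above is straightforward to skeleton once the local scale is decomposed into an inverse-gamma and a gamma component. The sketch below fixes one such parametrization (theta_i | lambda_i^2, xi_i ~ N(0, lambda_i^2 xi_i), lambda_i^2 ~ IG(a, 1), xi_i ~ Ga(b, 1), unit noise variance) and derives the conjugate updates for that parametrization only; the exact form of the paper's full conditionals, stated there in terms of generalized inverse Gaussian densities, should be taken from the paper itself.

```python
import numpy as np
from scipy.stats import geninvgauss

rng = np.random.default_rng(3)

def sample_gig(p, alpha, beta, size):
    # draw from the GIG density proportional to x^(p-1) * exp(-(alpha*x + beta/x)/2)
    return geninvgauss.rvs(p, np.sqrt(alpha * beta), scale=np.sqrt(beta / alpha),
                           size=size, random_state=rng)

def igg_gibbs(y, a, b, n_iter=5000, burn_in=2500):
    n = len(y)
    lam2, xi = np.ones(n), np.ones(n)
    keep = []
    for it in range(n_iter):
        v = lam2 * xi                          # prior variance of theta_i
        shrink = v / (1.0 + v)                 # posterior weight on y_i (unit noise variance)
        theta = rng.normal(shrink * y, np.sqrt(shrink))
        # lambda_i^2 | theta_i, xi_i ~ InverseGamma(a + 1/2, 1 + theta_i^2 / (2 xi_i))
        lam2 = 1.0 / rng.gamma(a + 0.5, 1.0 / (1.0 + theta ** 2 / (2.0 * xi)))
        # xi_i | theta_i, lambda_i^2 has a GIG form under this parametrization
        xi = sample_gig(b - 0.5, 2.0, theta ** 2 / lam2 + 1e-12, n)
        if it >= burn_in:
            keep.append(theta)
    return np.mean(keep, axis=0)

y = np.concatenate([rng.normal(7.0, 1.0, 10), rng.normal(0.0, 1.0, 190)])
print(igg_gibbs(y, a=0.75, b=0.1)[:12].round(2))
```

With the conditional updates replaced by the paper's own expressions, the same loop produces the posterior means and shrinkage weights used in the simulations.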
prior corresponding decision rule identifying signals noisy data figure shows estimated misclassification probabilities decision rule prior along estimated bayes oracle procedure dirichletlaplace horseshoe bayes oracle rule defined decision rule minimizes expected number misclassified signals known bayes oracle therefore serves lower bound whereas line corresponds situation reject null hypotheses without looking data rule use log bogdan theoretically established abos property procedure choice misclassification probability oracle igg sparsity figure estimated misclassification probabilities thresholding rule based igg posterior mean nearly good bayes oracle priors use classification rule reject scale parameter age model horseshoe priors specify halfcauchy prior global parameter since shared global parameter posterior depends data carvalho first introduced thresholding rule horseshoe ghosh later extended rule general class shrinkage priors includes generalized double pareto priors based ghosh simulation results horseshoe performs similarly better aforementioned priors include priors comparison study results provide strong support theoretical findings section strong justification use classify signals figure illustrates misclassification probability igg prior practically indistinguishable bayes oracle gives lowest possible thresholding rule based igg table comparison false discovery rate fdr different classification methods dense settings lowest fdr different methods prior priors also appears quite competitive compared bayes oracle bhadra proved prior asymptotically matches bayes oracle risk multiplicative constant treated tuning parameter prove case endowed prior also appear theoretical justification thresholding rule prior literature hand theorem provides theoretical support use igg prior confirmed empirical study figure also shows performance rule horseshoe degrades considerably becomes dense sparsity level horseshoe misclassification rate close marginally better rejecting null hypotheses without looking data phenomenon also observed datta ghosh ghosh appears dense setting many noisy entries moderately far zero horseshoe prior shrink aggressively enough towards zero order testing rule classify true noise prior seems alleviate adding additional prior bayes hierarchy table report false discovery rate fdr dense settings different methods see fdr quite bit larger horseshoe methods table also shows prior tight control fdr dense settings although igg prior constructed specifically control fdr see practice provide excellent control false positives finally demonstrate shrinkage properties corresponding prior along horseshoe posterior expectation flat igg figure posterior mean plot laplace priors figure plot posterior expectations prior posterior expectations priors posterior expectations amount posterior shrinkage observed terms distance line posterior expectation figure clearly shows near zero noisy entries aggressively shrunk towards zero prior priors poles zero confirms findings theorem proved shrinkage profile near zero aggressive prior sense priors meanwhile figure also shows signals left mostly unshrunk confirming igg shares tail robustness priors aggressive shrinkage noise explains igg performs better estimation demonstrated section analysis prostate cancer data set demonstrate practical application igg prior using popular prostate cancer data set introduced singh data set gene expression values genes subjects normal control subjects prostate cancer patients aim identify genes significantly 
different control cancer patients problem reformulated normal means problem first conducting gene transforming test statistics using inverse normal cumulative distribution function cdf transform denotes cdf student distribution degrees freedom model allows implement igg prior conduct simultaneous testing identify genes significantly associated prostate cancer additionally also estimate argued efron interpreted effect size ith gene prostate cancer efron first analyzed model particular data set obtaining empirical bayes estimates ron based twogroups model analysis use posterior means estimate strength association implement model model use classification rule identify significant genes comparison also fit model priors benchmark procedure fdr set selects genes significant comparison genes procedure prior selects genes significant priors select genes respectively indicating conservative estimates genes flagged significant procedure included genes igg prior classifies significant hand prior conclusions diverge procedure seven genes genes deemed significant table shows top genes selected efron estimated effect size prostate cancer compare efron empirical bayes posterior mean estimates posterior mean estimates igg priors results confirm tail robustness igg prior shrinkage priors shrink estimated effect size significant genes less aggressively efron procedure table also shows large signals igg posterior slightly less shrinkage large signals posterior roughly amount posterior posterior shrinks test statistics least large signals igg estimates still quite similar gene ron table effect size estimates top genes selected efron igg models twogroups empirical bayes model efron concluding remarks paper introduced new shrinkage prior called inverse prior estimating sparse normal mean vectors prior shown number good theoretical properties including heavy probability mass around zero heavy tails enables igg prior perform selective shrinkage attain near minimax contraction around true igg posterior also converges true model faster rate horseshoe posterior densities sense igg fall class priors utilize beta prime density prior scale component model however results suggest added flexibility allowing parameters density vary sample size rather keeping fixed added flexibility leads excellent empirical performance obviates need estimate global tuning parameter moreover thresholding posterior mean igg used identify signals investigated asymptotic risk properties classification rule within decision theoretic framework bogdan established asymptotically optimal theoretical properties multiple testing since specify estimate global parameter paper appears first article establish abos property shrinkage prior fall framework simulation studies demonstrate igg strong finite sample performance obtaining sparse estimates correctly classifying entries either signals noise setting hyperparameters igg prior outperforms popular shrinkage estimators finally demonstrated practical application igg prior prostate cancer data set recent years bayesian shrinkage priors gained great amount attention computational efficiency ability mimic mixtures obtaining sparse estimates paper contributes large body methodological theoretical work possible future directions research example igg prior adapted statistical problems sparse covariance estimation variable selection covariates many others conjecture igg would satisfy many optimality properties model selection consistency optimal posterior contraction etc utilized contexts despite absence 
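For reference, the transformation of the gene-level t-statistics into approximately normal test statistics used in the prostate analysis is a one-liner; the degrees of freedom in the example call are a placeholder, since they depend on the two group sizes.

```python
import numpy as np
from scipy.stats import t as student_t, norm

def t_to_z(t_stats, df):
    """Map two-sample t-statistics to z-scores via the inverse-normal transform
    z_i = Phi^{-1}(F_{t, df}(t_i)), so that null genes behave like N(0, 1) draws."""
    return norm.ppf(student_t.cdf(np.asarray(t_stats, dtype=float), df))

# illustrative call; df is a placeholder determined by the two group sizes
print(t_to_z([-2.1, 0.3, 4.5], df=100).round(3))
```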
global parameter igg model adapts well sparsity performing well sparse dense settings seems stark contrast remarks made authors like carvalho argued shrinkage priors contain shared global parameters enjoy benefits adaptivity nevertheless could investigate theoretical empirical performance improved even incorporating global parameter igg framework leave interesting problems future research acknowledgments authors would like thank anirban bhattacharya xueying tang sharing codes modified generate figures proofs section proof proposition joint distribution prior proportional exp exp exp exp exp thus exp exp thus marginal density proportional expression bounded constant depends integral expression clearly diverges therefore diverges infinity monotone convergence theorem proof theorem posterior distribution iggn proportional exp since exp strictly decreasing exi exp exp exi exi proof theorem note since increasing additionally since increasing using facts exp exp exi exi exi exi proof theorem first note since increasing therefore letting denote normalizing constant depends exp exp exp exp also since increasing exp exp exp exp combining exp proofs section proof lemma enough show second term lefthand side satisfies utilize prior formulation prior scale term since log assumption large additionally inequality log exp combining exp exp log large thus holds proving theorems first state two lemmas lemmas denote posterior mean single observation arguments follow closely van der pas datta ghosh ghosh chakrabarti except arguments rely controlling rate decay tuning parameter empirical bayes estimator case since dealing fully bayesian model degree posterior contraction instead controlled positive sequence hyperparameters lemma let posterior mean single observation drawn suppose constants fixed bounded function depending satisfying following satisfies lim qsup log proof lemma fix first observe consider two terms separately fact increasing say use change variables second equality next observe since say exp use theorem second inequality let combining every fixed observe fixed strictly decreasing therefore fixed qsup log log log since implies lim qsup log next observe fixed eventually decreasing maximum therefore sufficiently large qsup log letting lim log log lim log lim log log log fact lim log otherwise follows lim qsup log otherwise combining lim qsup log otherwise since clear real number lager expressed form example taking hence given choose obtain clearly depends following see uniformly bounded condition also satisfied completes proof remark conditions lemma see fixed lim equation shows igg prior large observations almost remain unshrunk matter sample size critical ability properly identify signals data present second lemma bounds posterior variance lemma let posterior mean single observation posterior variance bounded proof lemma first prove law iterated variance fact since rewrite inequality follows fact next show holds may alternatively represent final inequality holds lemmas crucial proving theorems provide asymptotic upper bounds mean squared error mse posterior mean iggn prior posterior variance theorems ultimately allow provide sufficient conditions posterior mean posterior distribution iggn prior contract minimax rates proof theorem define qen split mse consider nonzero means zero means separately nonzero means using inequality fact get define log let fix choose using lemma exists function depending lim sup using fact together obtain sup using fact follows sup combining get noting holds uniformly 
combine conclude qen log zero means corresponding mse split follows using theorem log use integration parts third inequality using fact log used identity first equality mill ratio second inequality combining qen log immediately follows qen log qen log required result follows observing qen taking supremum completes proof theorem proof theorem define qen decompose total variance consider nonzero means zero means separately nonzero means log first inequality follows lemma last one follows zero means log first inequality follows lemma last one follows combining get qen log qen log required result follows observing qen taking supremum completes proof theorem proofs section proof theorem using beta prime representation igg prior exp denotes beta function transformation ables exp define set exp exp exp exp bound integral term note exp exp therefore combining use fact last two inequalities following clarke barron optimal rate convergence comes setting reflects ideal case independent samples therefore apply proposition substituting invoking lower bound found ultimately gives upper bound risk log log log log log proofs section establish conditions hold must first find lower upper bounds type type error probabilities respectively rule error probabilities given respectively true true end first prove following lemmas lemma give upper lower bounds proof methods follow datta ghosh ghosh chakrabarti except arguments rely control sequence hyperparameters rather specifying rate estimate global parameter framework lemma suppose observations distribution sequence vectors satisfies assumption suppose wish test using classification rule upper bound probability type error ith test given log proof lemma theorem event event implies log exi therefore noting using mill ratio log log log log true log lemma suppose observations distribution sequence vectors satisfies assumption suppose wish test using classification rule suppose sufficiently large lower bound probability type error ith test given proof lemma definition probability type error ith decision given true theorem exp follows exp thus using definition noting exp last inequality used fact fact log term final equality greater zero sufficiently large lemma suppose lemma assume way bpnn sufficiently large upper bound probability type error ith test given terms tend zero proof lemma definition probability type error given true fix using inequality obtain coupled theorem obtain sufficiently large exp therefore true log exp log log true final equality used fact second log term second last equality bounded quantity note therefore fact lim second condition assumption log assumption bpnn implies therefore fourth condition assumption fact log log log log log log thus using lemma suppose lemma lower bound probability type error ith test given terms tend zero proof lemma definition probability type error ith decision given true theorem exi therefore true exi true log true since second condition assumption lim facts sufficiently large log log log log second last equality used assumption second fourth conditions assumption proof theorem since posteriori independent type type error probabilities every test lemmas large enough log taking limit terms using sandwich theorem lim ith test assumptions hyperparameters lemmas therefore asymptotic risk classification rule rigg bounded follows rigg therefore rigg rigg lim sup rbo ropt opt lim inf supremum grid clearly numerator term therefore thus one lim inf rigg rigg lim sup ropt ropt classification rule abos rigg ropt references 
armagan clyde dunson generalized beta mixtures gaussians zemel bartlett pereira weinberger editors advances neural information processing systems pages armagan dunson lee generalized double pareto shrinkage statistica sinica benjamini hochberg controlling false discovery rate practical powerful approach multiple testing journal royal statistical society series methodological berger robust generalized bayes estimator confidence region multivariate normal mean ann bhadra datta polson willard estimator signals bayesian bhattacharya pati pillai dunson priors optimal shrinkage journal american statistical association pmid bogdan chakrabarti frommlet ghosh asymptotic sparsity multiple testing procedures ann carvalho polson scott handling sparsity via horseshoe van dyk welling editors proceedings twelth international conference artificial intelligence statistics volume proceedings machine learning research pages hilton clearwater beach resort clearwater beach florida usa pmlr carvalho polson scott horseshoe estimator sparse signals biometrika castillo van der vaart needles straw haystack posterior concentration possibly sparse sequences ann clarke barron asymptotics bayes methods ieee transactions information theory datta ghosh asymptotic properties bayes risk horseshoe prior bayesian donoho johnstone hoch stern maximum entropy nearly black object journal royal statistical society series methodological efron future indirect evidence statist ghosal ghosh van der vaart convergence rates posterior distributions ann ghosh chakrabarti asymptotic optimality onegroup shrinkage priors sparse problems bayesian ghosh tang ghosh chakrabarti asymptotic properties bayes risk general class shrinkage priors multiple hypothesis testing sparsity bayesian griffin brown inference prior distributions regression problems bayesian griffin brown priors sparse regression modelling bayesian johnstone silverman needles straw haystacks empirical bayes estimates possibly sparse sequences ann park casella bayesian lasso journal american statistical association bayesian estimation sparse signals continuous prior ann statist appear singh febbo ross jackson manola ladd tamayo renshaw amico richie lander loda kantoff golub sellers gene expression correlates clinical prostate cancer behavior cancer cell strawderman proper bayes minimax estimators multivariate normal mean ann math van der pas salomond conditions posterior contraction sparse normal means problem electron van der pas van der vaart adaptive posterior contraction rates horseshoe electron van der pas kleijn van der vaart horseshoe estimator posterior concentration around nearly black vectors electron wellcome trust association study cases seven common diseases shared controls nature
| 10 |
dec learning transferable architectures scalable image recognition barret zoph google brain vijay vasudevan google brain jonathon shlens google brain quoc google brain barretzoph vrv shlens qvl abstract cation represents one important breakthroughs deep learning successive advancements benchmark based convolutional neural networks cnns achieved impressive results significant architecture engineering developing neural network image classification models often requires significant architecture engineering paper attempt automate engineering process learning model architectures directly dataset interest approach expensive dataset large propose search architectural building block small dataset transfer block larger dataset key contribution design new search space enables transferability experiments search best convolutional layer cell dataset apply cell imagenet dataset stacking together copies cell parameters although cell searched directly imagenet architecture constructed best cell achieves among published works accuracy imagenet model better accuracy best architectures billion fewer flops reduction computational demand previous model evaluated different levels computational cost accuracies models exceed models instance smaller network constructed best cell also achieves accuracy better models mobile platforms architecture constructed best cell achieves error rate also finally image features learned image classification also transferred computer vision problems task object detection learned features used framework surpass achieving map coco dataset paper consider learning convolutional architectures directly data application imagenet classification addition difficult important benchmark computer vision features derived imagenet classifiers great importance many computer vision tasks example features networks perform well imagenet classification provide performance transferred computer vision tasks labeled data limited introduction approach inspired recently proposed neural architecture search nas framework uses policy gradient algorithm optimize architecture configurations even though nas attractive method search good convolutional network architectures applying directly imagenet dataset computationally expensive given size dataset therefore propose search good architecture far smaller dataset automatically transfer learned architecture imagenet achieve transferrability designing search space complexity architecture independent depth network size input images concretely convolutional networks search space composed convolutional layers cells identical structure different weights searching best convolutional architectures therefore reduced searching best cell structure searching best cell structure two main benefits much faster searching entire network architecture cell likely generalize problems experiments approach significantly accelerates search best architectures using factor learns architectures successfully transfer imagenet imagenet classification important benchmark computer vision seminal work using convolutional architectures imagenet main result best architecture found achieves accuracy transferred imagenet classification without much tion imagenet architecture constructed best cell achieves among published works accuracy result amounts improvement accuracy best architectures billion fewer flops architecture achieves error rate also additionally simply varying number convolutional cells number filters convolutional cells create convolutional architectures different computational 
demands thanks property cells generate family models achieve accuracies superior models equivalent smaller computational budgets notably smallest version learned model achieves accuracy imagenet better previously engineered architectures targeted towards mobile embedded vision tasks finally show image features learned image classification generically useful transfer computer vision problems experiments features learned imagenet classification combined framework achieve coco object detection task largest well models largest model achieves map better previous method work makes use search methods find good convolutional architectures dataset interest main search method use work neural architecture search nas framework proposed nas controller recurrent neural network rnn samples child networks different architectures child networks trained convergence obtain accuracy validation set resulting accuracies used update controller controller generate better architectures time controller weights updated policy gradient see figure main contribution work design novel search space best architecture found dataset would scale larger higherresolution image datasets across range computational settings one inspiration search space recognition architecture engineering cnns often identifies repeated motifs consisting combinations convolutional filter banks nonlinearities prudent selection connections achieve results repeated modules present inception resnet models observations suggest may possible controller rnn predict generic convolutional cell expressed terms motifs cell sample architecture probability train child network architecture convergence get validation accuracy controller rnn scale gradient update controller figure overview neural architecture search controller rnn predicts architecture search space probability child network architecture trained convergence achieving accuracy scale gradients update rnn controller stacked series handle inputs arbitrary spatial dimensions filter depth approach overall architectures convolutional nets manually predetermined composed convolutional cells repeated many times convolutional cell architecture different weights easily build scalable architectures images size need two types convolutional cells serve two main functions taking feature map input convolutional cells return feature map dimension convolutional cells return feature map feature map height width reduced factor two name first type second type convolutional cells normal cell reduction cell respectively reduction cell make initial operation applied cell inputs stride two reduce height width operations consider building convolutional cells option striding figure shows placement normal reduction cells imagenet note imagenet reduction cells since incoming image size compared cifar reduction normal cell could architecture empirically found beneficial learn two separate architectures use common heuristic double number filters output whenever spatial activation size reduced order maintain roughly constant hidden state dimension importantly much like inception resnet models consider number motif repetitions number initial convolutional filters free parameters tailor scale image classification problem varies convolutional nets structures normal reduction cells searched controller rnn structures cells searched provides good results although exhaustively searched space due computational limitations steps controller rnn selects operation apply hidden states collected following set operations based prevalence cnn 
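A toy version of the controller update loop clarifies how validation accuracy drives the search. The sketch below replaces the RNN controller by independent softmax distributions over a handful of discrete choices and updates them with plain REINFORCE and a moving-average baseline; the actual system trains an RNN controller with PPO over a pool of workers, so the reward function, choice set, and learning rate here are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(4)

n_decisions, n_options = 5, 4            # toy search space: 5 independent discrete choices
logits = np.zeros((n_decisions, n_options))
baseline, ema, lr = 0.0, 0.95, 0.5

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def reward(arch):
    # stand-in for training a child network and reading off validation accuracy:
    # in this toy problem option 2 is "best" for every decision
    return np.mean(arch == 2) + 0.05 * rng.normal()

for step in range(500):
    probs = np.apply_along_axis(softmax, 1, logits)
    arch = np.array([rng.choice(n_options, p=p) for p in probs])
    r = reward(arch)
    advantage = r - baseline
    baseline = ema * baseline + (1 - ema) * r
    # REINFORCE: gradient of log prob of a sampled category is onehot - softmax
    for d in range(n_decisions):
        grad_log_p = -probs[d]
        grad_log_p[arch[d]] += 1.0
        logits[d] += lr * advantage * grad_log_p

print("most likely architecture:", np.argmax(logits, axis=1))
```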
literature figure scalable architectures image classification consist two repeated motifs termed normal cell reduction cell diagram highlights model architecture imagenet choice number times normal cells gets stacked reduction cells vary experiments within search space defined follows search space cell receives input two initial hidden states outputs two cells previous two lower layers input image controller rnn recursively predicts rest structure convolutional cell given two initial hidden states figure predictions controller cell grouped blocks block prediction steps made distinct softmax classifiers corresponding discrete choices elements block step select hidden state set hidden states created previous blocks step select second hidden state options step step select operation apply hidden state selected step step select operation apply hidden state selected step step select method combine outputs step create new hidden state algorithm appends hidden state set existing hidden states potential input subsequent blocks controller rnn repeats prediction steps times corresponding blocks convolutional cell experiments selecting identity convolution average pooling max pooling convolution conv conv convolution dilated convolution max pooling max pooling convolution conv step controller rnn selects method combine two hidden states either addition two hidden states concatenation two hidden states along filter dimension finally unused hidden states generated convolutional cell concatenated together depth provide final cell output allow controller rnn predict normal cell reduction cell simply make controller predictions total first predictions normal cell second predictions reduction cell finally work makes use reinforcement learning proposal nas intensively however also possible use random search search models search space random search instead sampling decisions softmax classifiers controller rnn sample decisions uniform distribution experiments find random search worse reinforcement learning dataset although value using reinforcement learning gap smaller found original work result suggests new search space well designed random search perform reasonably well compare reinforcement learning random search section experiments results section describe experiments method described learn convolutional cells summary architecture searches performed using classification task controller rnn trained using proximal policy optimization ppo employing global workqueue system generating pool child networks controlled rnn experiments pool workers workqueue consisted gpus please see appendix complete details architecture learning algorithm controller system softmax layer select second hidden state select operation first hidden state select operation second hidden state controller hidden layer select one hidden state select method combine hidden state new hidden layer add repeat times conv maxpool hidden layer hidden layer figure controller model architecture recursively constructing one block convolutional cell block requires selecting discrete parameters corresponds output softmax layer example constructed block shown right convolutional cell contains blocks hence controller contains softmax layers predicting architecture convolutional cell experiments number blocks result search process days yields several candidate convolutional cells note search procedure almost faster previous approaches took additionally demonstrate resulting architecture superior accuracy figure shows diagram top performing normal cell 
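The five prediction steps per block translate directly into a sampler over cell specifications, which is also how the random-search baseline mentioned above is obtained: every decision is drawn uniformly instead of from the controller's softmaxes. The operation list below is a representative subset of the candidate operations listed in the text, and the number of blocks is a free parameter of the sketch.

```python
import random

# representative subset of the candidate operations listed above
OPERATIONS = [
    "identity", "3x3 separable conv", "5x5 separable conv", "7x7 separable conv",
    "3x3 average pooling", "3x3 max pooling", "5x5 max pooling",
    "1x1 conv", "3x3 conv", "3x3 dilated conv",
]
COMBINERS = ["add", "concat"]

def sample_cell(num_blocks=5, seed=None):
    """Uniformly sample one cell: each block picks two existing hidden states,
    an operation for each, and a way to combine the two results. The new hidden
    state becomes available as an input to later blocks."""
    rng = random.Random(seed)
    hidden_states = ["h[i-1]", "h[i]"]       # outputs of the two previous cells
    blocks = []
    for _ in range(num_blocks):
        in1, in2 = rng.choice(hidden_states), rng.choice(hidden_states)
        op1, op2 = rng.choice(OPERATIONS), rng.choice(OPERATIONS)
        comb = rng.choice(COMBINERS)
        blocks.append((in1, op1, in2, op2, comb))
        hidden_states.append(f"block{len(blocks) - 1}")
    return blocks

for block in sample_cell(seed=0):
    print(block)
```

Sampling two such specifications, one for the normal cell and one for the reduction cell, yields one candidate architecture; the controller simply replaces the uniform draws with learned distributions.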
reduction cell note prevalence separable convolutions number branches compared competing architectures subsequent experiments focus convolutional cell architecture although examine efficacy convolutional cells imagenet experiments described appendix report results well call three networks constructed best three searches nasneta demonstrate utility convolutional cells employing learned architecture family imagenet classification tasks latter family tasks explored across orders magnitude computational budget learned convolutional cells several may explored build final network given task number cell repeats number filters initial convolutional cell selecting number initial filters use common heuristic double number filters whenever stride finally define simple notation indicate two parameters networks indicate number cell repeats number filters penultimate layer network respectively particular note previous architecture search used gpus days resulting method paper uses gpus across days resulting former effort used nvidia gpus whereas current efforts used faster nvidia discounting fact use faster hardware estimate current procedure roughly efficient results image classification task image classification set figure test accuracies best architectures reported table along models seen table large model cutout data augmentation achieves error rate averaged across runs slightly better previous best record best single run model achieves error rate results imagenet image classification performed several sets experiments imagenet best convolutional cells learned emphasize merely transfer architectures train imagenet models weights scratch results summarized table figure first set experiments train several image classification systems operating resolution images different experiments scaled computational demand create models roughly par computational cost polynet show family models achieve performance fewer floating point operations parameters comparable architectures second demonstrate adjusting scale model achieve performance smaller computational budgets exceeding streamlined cnns operating regime note residual connections convolutional cells models learn skip connections empirically found manually inserting residual connections cells help performance training setup imagenet similar please see appendix details table shows convolutional cells discovered generalize well imagenet concat add add concat max add sep add iden tity sep add sep iden tity avg add avg sep add avg sep avg iden tity add sep sep add sep max normal cell sep add avg sep reduction cell figure architecture best convolutional cells blocks identified input white hidden state previous activations input image output pink result concatenation operation across resulting branches convolutional cell result blocks single block corresponds two primitive operations yellow combination operation green note colors correspond operations figure model depth params error rate densenet densenet densenet cutout nas nas cutout cutout table performance neural architecture search models results nasnet mean accuracy across runs lems particular model based convolutional cells exceeds predictive performance corresponding model importantly largest model achieves new performance imagenet based single predictions surpassing previous best published result among unpublished works model par best reported result significantly fewer floating point operations figure shows complete summary results comparison published results note family models based convolutional cells 
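The scaling recipe described above (repeat the normal cell N times per stage, insert a reduction cell between stages, and double the filter count whenever the spatial resolution is halved) can be sketched as follows; this is an illustrative skeleton under those assumptions, not the released NASNet code, and the numbers in the usage comment are placeholders.

```python
def build_backbone(num_repeats, initial_filters, num_stages=3):
    """Assemble a scalable network skeleton from the two learned cells: N repeated
    normal cells per stage, with a reduction cell between stages that halves the
    spatial size and doubles the filter count (the common heuristic noted above).
    Cells are represented abstractly as (name, filter_count) tags."""
    layers = []
    filters = initial_filters
    for stage in range(num_stages):
        for _ in range(num_repeats):
            layers.append(("normal_cell", filters))
        if stage < num_stages - 1:        # no reduction cell after the final stage
            filters *= 2                  # double filters whenever resolution is halved
            layers.append(("reduction_cell", filters))
    return layers

# e.g. build_backbone(num_repeats=6, initial_filters=32) gives an 18-normal-cell
# skeleton with two reduction cells; the numbers are placeholders, not a published model
```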
provides envelope broad class architectures finally test well best convolutional cells may perform setting mobile devices table settings number floating point operations severely constrained predictive performance must weighed latency requirements device limited computational resources mobilenet shufflenet provide polynet xception senet accuracy precision accuracy precision shufflenet mobilenet senet polynet xception shufflenet mobilenet operations millions parameters millions figure accuracy versus computational demand left number parameters right across top performing published cnn architectures imagenet ilsvrc challenge prediction task computational demand measured number multiplyadd operations process single image black circles indicate previously published results red squares highlight proposed models model image size parameters top acc top acc inception inception xception inception resnet polynet senet table performance architecture search published models imagenet classification indicate number composite operations single image note composite operations calculated image size reported table model size calculated implementation model parameters top acc top acc inception shufflenet table performance imagenet classification subset models operating constrained computational setting operations per image models use images sults obtaining accuracy respectively images using operations architecture constructed best convolutional cells achieves superior predictive performance curacy surpassing previous models comparable computational demand summary find learned convolutional cells flexible across model scales achieving performance across almost ders magnitude computational budget efficiency architecture search methods improved features object detection primary advance best reported object detection system introduction novel loss pairing loss image featurization may lead even performance gains additionally performance gains achievable ensembling multiple inferences across multiple model instances image crops accuracy epochs image classification networks provide generic image features may transferred computer vision problems one important problems spatial localization objects within image validate performance family networks test whether object detection systems derived lead improvements object detection address question plug family networks pretrained imagenet object detection pipeline using opensource software platform retrain resulting object detection pipeline combined coco training plus validation dataset excluding images perform single model evaluation using rpn proposals per image words pass single image single network evaluate model coco dataset report mean average precision map computed standard coco metric library perform simple search learning rate schedules identify best possible model finally examine behavior two object detection systems employing best performing nasneta image featurization well image featurization geared towards mobile platforms network resulting system achieves map exceeding previous mobileoptimized networks employ table best nasnet network resulting network operating images spatial resolution achieves map exceeding equivalent object detection systems based lesser performing image featurization see appendix example detections images comparisons finally increasing spatial resolution input image results best reported single model result object detection surpassing best previous best results provide evidence nasnet provides superior generic image features may 
transferred across computer vision tasks figure figure appendix show four examples object detection results produced framework top unique models top unique models top unique models top unique models top unique models top unique models number models sampled figure measuring efficiency random search reinforcement learning learning neural architectures measures total number model architectures sampled validation performance epochs proxy training task emphasize absolute performance proxy task important see text relative gain initial state pair curves measures mean accuracy across top ranking models identified algorithm open question proposed method training efficiency architecture search algorithm section demonstrate effectiveness reinforcement learning architecture search image classification problem compare random search considered strong baseline blackbox optimization given equivalent amount computational resources define effectiveness architecture search algorithm increase model performance initial architecture identified search method importantly emphasize absolute value model performance proxy task less important artificially reflects irrelevant factors employed architecture search process number training epochs specific model construction thus employ increase model performance proxy judging convergence architecture search algorithm figure shows performance reinforcement learning random search model architectures sampled note best model identified significantly better best model found measured proxy classification task additionally finds entire range models superior quality random search observe mean performance model resolution map map shufflenet tdm short side retinanet short side table object detection performance coco datasets across variety image featurizations results object detection framework single crop image top rows highlight image featurizations bottom rows indicate computationally heavy image featurizations geared towards achieving best results results employ subset validation images models identified versus take results indicate although may provide viable search strategy significantly improve ability learn neural architectures related work proposed method related previous work hyperparameter optimization especially recent approaches designing architectures neural fabrics diffrnn metaqnn deeparchitect flexible class methods designing architecture evolutionary algorithms yet much success large scale xie yuille also transferred learned architectures imagenet performance models accuracy notably previous table concept one neural network interact second neural network aid learning process learning learn attracted much attention recent years approaches scaled large problems like imagenet exception recent work focused learning optimizer imagenet classification achieved notable improvements design search space took much inspiration lstms neural architecture search cell modular structure convolutional cell also related previous methods imagenet vgg inception conclusion work demonstrate learn scalable convolutional cells data transfer multiple image classification tasks learned architecture quite flexible may scaled terms computational cost parameters easily address variety problems cases accuracy resulting model exceeds models ranging models designed mobile applications models designed achieve accurate results key insight approach design search space decouples complexity architecture depth network resulting search space permits identifying good architectures small dataset 
transferring learned architecture image classifications across range data computational scales resulting architectures approach exceed performance imagenet datasets less computational demand humandesigned architectures imagenet results particularly important many computer vision problems object detection face detection image localization derive image features architectures imagenet classification models instance find image features obtained imagenet used combination fasterrcnn framework achieves object detection results finally demonstrate use resulting learned architecture perform imagenet classification reduced computational budgets outperform streamlined architectures targeted mobile embedded platforms references andrychowicz denil gomez hoffman pfau schaul freitas learning learn gradient descent gradient descent advances neural information processing systems pages kiros hinton layer normalization arxiv preprint baker gupta naik raskar designing neural network architectures using reinforcement learning international conference learning representations bergstra bardenet bengio algorithms optimization neural information processing systems bergstra bengio random search hyperparameter optimization journal machine learning research bergstra yamins cox making science model search hyperparameter optimization hundreds dimensions vision architectures international conference machine learning chen monga bengio jozefowicz revisiting distributed synchronous sgd international conference learning representations workshop track chen xiao jin yan feng dual path networks arxiv preprint chollet xception deep learning depthwise separable convolutions proceedings ieee conference computer vision pattern recognition clevert unterthiner hochreiter fast accurate deep network learning exponential linear units elus international conference learning representations deng dong socher feifei imagenet hierarchical image database ieee conference computer vision pattern recognition ieee devries taylor improved regularization convolutional neural networks cutout arxiv preprint donahue jia vinyals hoffman zhang tzeng darrell decaf deep convolutional activation feature generic visual recognition international conference machine learning volume pages duan schulman chen bartlett sutskever abbeel fast reinforcement learning via slow reinforcement learning arxiv preprint finn abbeel levine metalearning fast adaptation deep networks international conference machine learning floreano mattiussi neuroevolution architectures learning evolutionary intelligence fukushima neural network model mechanism pattern recognition unaffected shift position biological cybernetics page gastaldi regularization residual networks international conference learning representations workshop track dai hypernetworks international conference learning representations zhang ren sun deep residual learning image recognition ieee conference computer vision pattern recognition zhang ren sun identity mappings deep residual networks european conference computer vision hochreiter schmidhuber long memory neural computation hochreiter younger conwell learning learn using gradient descent artificial neural networks pages howard zhu chen kalenichenko wang weyand andreetto adam mobilenets efficient convolutional neural networks mobile vision applications arxiv preprint shen sun networks arxiv preprint huang liu weinberger densely connected convolutional networks ieee conference computer vision pattern recognition huang sun liu sedra weinberger deep networks 
stochastic depth european conference computer vision huang rathod sun zhu korattikara fathi fischer wojna song guadarrama modern convolutional object detectors ieee conference computer vision pattern recognition ioffe szegedy batch normalization accelerating deep network training reducing internal covariate shift international conference learning representations jozefowicz zaremba sutskever empirical exploration recurrent network architectures international conference learning representations krizhevsky learning multiple layers features tiny images technical report university toronto krizhevsky sutskever hinton imagenet classification deep convolutional neural networks advances neural information processing system lecun bottou bengio haffner gradientbased learning applied document recognition proceedings ieee malik learning optimize neural nets arxiv preprint lin girshick hariharan belongie feature pyramid networks object detection proceedings ieee conference computer vision pattern recognition lin goyal girshick focal loss dense object detection arxiv preprint lin maire belongie hays perona ramanan zitnick microsoft coco common objects context european conference computer vision pages springer loshchilov hutter sgdr stochastic gradient descent warm restarts international conference learning representations mendoza klein feurer springenberg hutter towards neural networks proceedings workshop automatic machine learning pages miconi neural networks differentiable structure arxiv preprint miikkulainen liang meyerson rawal fink francon raju navruzyan duffy hodjat evolving deep neural networks arxiv preprint negrinho gordon deeparchitect automatically designing training deep architectures arxiv preprint pinto doukhan dicarlo cox highthroughput screening approach discovering good forms biologically inspired visual representation plos computational biology ravi larochelle optimization model fewshot learning international conference learning representations real moore selle saxena suematsu kurakin evolution image classifiers international conference machine learning ren girshick sun faster towards object detection region proposal networks advances neural information processing systems pages saxena verbeek convolutional neural fabrics advances neural information processing systems schaul schmidhuber metalearning scholarpedia schroff kalenichenko philbin facenet unified embedding face recognition clustering proceedings ieee conference computer vision pattern recognition pages schulman wolski dhariwal radford klimov proximal policy optimization algorithms arxiv preprint shrivastava sukthankar malik gupta beyond skip connections modulation object detection arxiv preprint simonyan zisserman deep convolutional networks image recognition international conference learning representations snoek larochelle adams practical bayesian optimization machine learning algorithms neural information processing systems snoek rippel swersky kiros satish sundaram patwary ali adams scalable bayesian optimization using deep neural networks international conference machine learning srivastava hinton krizhevsky sutskever salakhutdinov dropout simple way prevent ral networks overfitting journal machine learning research stanley ambrosio gauci encoding evolving neural networks artificial life szegedy ioffe vanhoucke alemi impact residual connections learning international conference learning representations workshop track szegedy liu jia sermanet reed anguelov erhan vanhoucke rabinovich going deeper convolutions ieee 
conference computer vision pattern recognition szegedy vanhoucke ioffe shlens wojna rethinking inception architecture computer vision ieee conference computer vision pattern recognition ulyanov vedaldi lempitsky instance normalization missing ingredient fast stylization arxiv preprint wang tirumala soyer leibo munos blundell kumaran botvinick learning reinforcement learn arxiv preprint weyand kostrikov philbin geolocation convolutional neural networks european conference computer vision wichrowska maheswaranathan hoffman colmenarejo denil freitas learned optimizers scale generalize arxiv preprint wierstra gomez schmidhuber modeling systems internal state using evolino genetic evolutionary computation conference williams simple statistical algorithms connectionist reinforcement learning machine learning xie yuille genetic cnn arxiv preprint xie girshick aggregated residual transformations deep neural networks proceedings ieee conference computer vision pattern recognition zhang loy lin polynet pursuit structural diversity deep networks proceedings ieee conference computer vision pattern recognition zhang zhou mengxiao sun shufflenet extremely efficient convolutional neural network mobile devices arxiv preprint zoph neural architecture search reinforcement learning international conference learning representations appendix experimental details dataset architecture search dataset consists rgb images across classes train test images partition random subset images training set use validation set controller rnn images whitened undergone several data augmentation steps randomly crop patches upsampled images size apply random horizontal flips data augmentation procedure common among related work controller architecture controller rnn lstm hidden units layer softmax predictions two convolutional cells typically associated architecture decision predictions controller rnn associated probability joint probability child network product probabilities softmaxes joint probability used compute gradient controller rnn gradient scaled validation accuracy child network update controller rnn controller assigns low probabilities bad child networks high probabilities good child networks unlike used reinforce rule update controller employ proximal policy optimization ppo learning rate training ppo faster stable encourage exploration also use entropy penalty weight implementation baseline function exponential moving average previous rewards weight weights controller initialized uniformly training controller distributed training use workqueue system samples generated controller rnn added global workqueue free child worker distributed worker pool asks controller new work global workqueue training child network complete accuracy validation set computed reported controller rnn experiments use child worker pool size means networks trained gpus concurrently time upon receiving enough child model training results controller rnn perform gradient update weights using ppo sample another batch architectures global workqueue process continues predetermined number architectures sampled experiments predetermined number architectures means search process terminated child models trained additionally update controller rnn minibatches architectures search top architectures chosen train convergence determine best architecture details architecture search space performed preliminary experiments identify flexible expressive search space neural architectures learn effectively generally strategy preliminary experiments involved 
explorations identify run architecture search convolutions employ relu nonlinearity experiments elu nonlinearity showed minimal benefit ensure shapes always match convolutional cells convolutions inserted necessary unlike depthwise separable convolution employ batch normalization relu depthwise pointwise operations convolutions followed ordering relu convolution operation batch normalization following whenever separable convolution selected operation model architecture separable convolution applied twice hidden state found empirically improve overall performance training stochastic regularization performed several experiments various stochastic regularization methods naively applying dropout across convolutional filters degraded performance however training nasnet models found stochastically dropping path edge yellow box figure cell fixed probability effective regularizer similar dropout full parts model training test time scale path probability keeping path training interestingly found linearly increasing probability dropping path course training significantly improve final performance cifar imagenet experiments training cifar models cifar models use single period cosine decay models use momentum optimizer momentum rate set models also use weight decay architecture trained fixed epochs architecture search process additionally found beneficial use cosine learning rate decay epochs cifar models trained helped differentiate good architectures also found cifar models use small architecture search process allowed models train quite quickly still finding cells work well stacked training imagenet models use imagenet ilsvrc challenge data large scale image classification dataset consists images labeled across classes overall training testing procedures almost identical imagenet models trained evaluated images using data augmentation procedures described previously use distributed synchronous sgd train imagenet model workers backup workers tesla gpu use rmsprop decay epsilon evaluations calculated using running average parameters time decay rate use label smoothing value imagenet models done additionally models use auxiliary classifier located way network loss auxiliary classifier weighted done empirically found network insensitive number parameters associated auxiliary classifier along weight associated loss models also use regularization learning rate decay scheme exponential decay scheme used dropout applied final softmax matrix probability additional experiments present two additional cells performed well cifar imagenet search spaces used cells slightly different used nasneta model figure concatenate unused hidden states generated convolutional cell instead hiddenstates created within convolutional cell even currently used fed next layer note hiddenstates input cell numbers must match cell valid also allow addition followed layer normalization instance normalization predicted two combination operations within cell along addition concatenation figure concatenate unused hidden states generated convolutional cell like allow prediction addition figure architecture convolutional cell blocks identified input white hidden state previous activations input image convolutional cell result blocks single block corresponds two primitive operations yellow combination operation green concatenate output hidden states output hidden state used hidden state future layers cell takes hidden states thus needs also create output hidden states output hidden state therefore labeled represent next four layers order 
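A minimal sketch of the scheduled drop-path regularizer described above, assuming the linear schedule and the rescaling of surviving paths during training; the function operates on plain numbers standing in for branch activations and is not tied to any particular framework.

```python
import random

def drop_path(branch_outputs, base_drop_prob, training_progress):
    """Scheduled drop-path: drop each candidate path within a cell with a probability
    that grows linearly over the course of training, rescaling surviving paths so the
    expected magnitude is unchanged. `training_progress` is the fraction of training
    completed, in [0, 1]; plain numbers stand in for branch activations."""
    drop_prob = base_drop_prob * training_progress   # linearly increasing schedule
    keep_prob = 1.0 - drop_prob
    kept = []
    for value in branch_outputs:
        if random.random() < keep_prob:
            kept.append(value / keep_prob)            # rescale surviving paths
        else:
            kept.append(0.0)                          # path dropped for this step
    # (a fuller implementation would typically guarantee that at least one path survives)
    return kept
```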
followed layer normalization instance normalization like example object detection results finally present examples object detection results coco dataset figure figure seen figures featurization works well gives accurate localization objects figure example detections showing improvements object detection previous model featurization top featurization bottom figure architecture convolutional cell blocks identified input white hidden state previous activations input image output pink result concatenation operation across resulting branches convolutional cell result blocks single block corresponds two primitive operations yellow combination operation green figure example detections best performing featurization trained coco dataset top middle images courtesy http bottom image courtesy jonathan huang
| 1 |
implementing efficient solutions sat solvers oct takahisa toda takehide soh abstract solutions sat allsat short variant propositional satisfiability problem despite significance allsat relatively unexplored compared variants thus survey discuss major techniques allsat solvers faithfully implement conduct comprehensive experiments using large number instances various types solvers including one public softwares experiments reveal solver characteristics implemented solvers made publicly available researchers easily develop solver modifying codes compare existing methods introduction propositional satisfiability sat short decide boolean formula satisfiable sat ubiquitous computer science significance attracted attention many researchers theory practice many applications motivated empirical studies particular development sat solvers softwares solve satisfiability fundamental task sat solvers solve many instances possible realistic amount time end various practical algorithms elegant implementation techniques developed many variants sat solutions sat allsat short model enumeration studied paper given cnf formula generate partial satisfying assignments form logically equivalent dnf formula compared neighboring areas allsat relatively unexplored mentioned literature also supported fact recent papers almost publicly available even taken major handbooks related satisfiability recent application allsat data mining fundamental task data mining generate interesting patterns given database examples include frequent itemsets maximal frequent itemsets closed itemsets transaction databases although algorithms exceptions clasp picosat relsat although support solution generation positioned answer set solver single solution sat solver sat solver rather allsat solvers respectively toda soh generating various patterns proposed basically specialized target patterns means different patterns require new algorithms reason framework based declarative paradigm recently proposed basic flow constrains patterns generated formulated logical formulae solved generic solver hence users simply model problems design algorithms among much related work approach problems encoded cnf formulae solved allsat solvers studied advantage declarative paradigm ability handle new patterns flexible manner need see details algorithms solvers based thereby opened wider users instead inferior efficiency approaches practice necessary balance efficiency flexibility therefore improving solver performance essential declarative framework besides data mining many studies application allsat particular formal verification network verification predicate abstraction backbone computation image preimage computation unbounded model checking considering find important clarify techniques allsat solvers improve firm basis however following issues existing researches allsat several methods proposed globally compared thus difficult decide method effective kinds allsat instances experiments carried comprehensive benchmarks public allsat solver makes difficult compare existing techniques thus would like survey major techniques allsat solvers try complement past references gathering organizing existing techniques add novel techniques evaluate solvers conduct experimental comparisons including clasp one softwares solution generation support implemented solvers made publicly available expectation improvement solvers evaluation easily done allsat research stimulated paper organized follows section provides related work allsat section provides necessary notions 
terminology results section surveys major techniques allsat solvers including original ideas indicated adding asterisks titles section provides experimental results section concludes paper implementing efficient solutions sat solvers related work another variant sat dualization boolean functions given dnf formula boolean function compute complete dnf formula dual function since cnf formula easily obtained interchanging logical disjunction logical conjunction well constants main part convert cnf complete dnf hence essential difference allsat resulting dnf formula must complete dualization terms complexity seems recent empirical study exceptions practical algorithms restricted form dualization presented implementations though arbitrary boolean functions another variant problem counting number total satisfying assignments called propositional model counting sat good applications probabilistic inference problems hard combinatorial problems solvers available although sat apparently similar allsat techniques connected components component caching inherent counting applicable allsat preliminaries necessary notions terminology results concerning boolean functions satisfiability solvers binary decision diagrams presented section boolean basics literal boolean variable negation clause finite disjunction literals term finite conjunction literals propositional formula conjunctive normal form cnf short finite conjunction clauses disjunction normal form dnf short finite disjunction terms identify clauses sets literals cnf formulae sets clauses applies terms dnfs dual boolean function function defined implicant boolean function term considered boolean function order boolean functions introduced implicant prime removal literal results dnf formula complete consists prime implicants dualization repository keisuke murakami takeaki uno http accessed hypergraph transversal computation binary decision diagrams takahisa toda erato minato discrete structure manipulation system project japan science technology agency hokkaido university http accessed toda soh assignment set boolean variables partial function satisfying assignment cnf formula assignment cnf formula evaluates assignment total complete total function variables assigned values boolean formula satisfiable satisfying assignment simplicity say literal assigned value assignment underlying variable makes literal evaluate fear confusion identify assignment function set form identify assignment way assignments literals used interchangeably throughout paper example consider sequence literals means selected order values assigned respectively satisfiability solvers propositional satisfiability problem sat short problem deciding exists satisfying assignment cnf formula algorithm shows basic framework modern sat solvers based simplicity techniques lazy data structures variable selection heuristics restarting deletion policy learnt clauses omitted see details basic behavior algorithm search satisfying assignment way solver finds candidate assignment assigning values variables assignment turns unsatisfying solver proceeds next candidate backtracking extension assignment triggered decide stage unassigned variable selected value assigned spread deduce stage assignments variables deduced recent decision decision assignments given decide stage decision variables assigned values consider decision tree branches decision assignment decision level depth decision tree maintained variable algorithm decision level variable denoted one assigned value literal notation defined 
underlying variable denotes seen assignment given level deduce stage described clause unit one literals assigned value remaining one unassigned remaining literal called unit literal unit clauses important deduce stage assignments underlying variables unit literals necessarily determined unit literals evaluate assignments determined unit clauses clauses may become unit hence implications deduced implementing efficient solutions sat solvers algorithm dpll procedure conflict driven clause learning denotes decision level variable input cnf formula empty assignment output sat satisfied unsat otherwise decision level true propagate deduce stage conflict happens return unsat analyze diagnose stage else variables assigned values report return sat else decide stage select unassigned variable value end end end unsatisfied clause exists unit clause exists process called unit propagation function propagate performs unit propagation implied assignments given stage implied variables assigned values decision level implied variable notations literal representing implied assignment defined way example consider cnf formula consists following clauses assume decision assignment implied assignment obtained assume decision assignment implied assignments obtained toda soh figure conflict graph arcs labeled antecedents target vertices order assume decision assignment cnf formula satisfied noted middle unit propagation may encounter unsatisfied clause case called conflict soon conflict happens unit propagation halts even though unit clauses still remain conflict case assigned variables assigned prior decision means assignments examined thereby cnf formula must unsatisfiable case solver halts reporting unsat decision made least enter diagnose stage resolve conflict diagnose stage cause conflict met analyzed new clause learnt result added cnf formula solver guided fall conflict conflicts related efficiently modern solvers maintain implication graph search represents implication relation assignments unit propagation specifically implication graph directed acyclic graph vertices correspond literals representing assignments variables arcs correspond implications unit clause unit literal yields unit propagation arcs assignments underlying variables implied assignment added unsatisfied clause exists arcs assignments underlying variables clause special vertex added implication graph might implemented whenever variable implied associated clause determined assignment unit clause clause called antecedent implementing efficient solutions sat solvers example consider cnf formula given example assume turn decision assignments order resulting implication graph shown fig case results conflict becomes unsatisfied conflict graph subgraph implication graph obtained restricting vertices paths subset vertices conflict graph hereafter cut corresponding set arcs connect vertices examples illustrated dotted curves fig ready describe clause learning scheme performed function analyze consider cuts decision assignments one side called reason side special vertex side called conflict side take negation literals reason side incident arcs cut form clause conflict clause clause considered cause conflict indeed variables assigned values following literals reason side incident arcs cut implications conflict side derived conflict must take place therefore avoid conflict necessary variables assigned values least one literals negated condition formulated conflict clause obtained illustrated fig many choices cuts induce conflict clauses among conflict 
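As an illustration of the deduce stage just described, the following self-contained Python sketch performs unit propagation on a clause set given as lists of signed integers; it is a didactic version without the lazy data structures used by real solvers.

```python
def unit_propagate(clauses, assignment):
    """Repeatedly find unit clauses under the current assignment, add the implied
    literals, and report a conflict when some clause has every literal falsified.
    Literals are nonzero integers (negative = negation); `assignment` is a set of
    literals assumed to be consistent."""
    assignment = set(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(lit in assignment for lit in clause):
                continue                              # clause already satisfied
            open_lits = [lit for lit in clause if -lit not in assignment]
            if not open_lits:
                return "conflict", assignment         # every literal is falsified
            if len(open_lits) == 1:                   # unit clause: forced assignment
                assignment.add(open_lits[0])
                changed = True
    return "ok", assignment

# e.g. unit_propagate([[1, 2], [-1, 3], [-3, -2]], {-2}) implies 1 and then 3
```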
clauses contain exactly one literal current decision level known effective unique implication point uip short vertex conflict graph every path decision current decision level passes note least one uip exists decision current decision level uip first uip scheme find uip closest example consider conflict graph given fig middle curve gives first uip hence conflict clause first uip scheme efficiently performed traversing conflict graph reverse order implications based implementation implication graph stated traversal easy decide current literal uip consider cut induces uip literals reason side assignments determined prior conflict side since traversal reverse order implications uip located keeping track number unvisited vertices immediate neighbors visited vertices remains decide backtrack level cancel assignments set current decision level decide backtracking level two choices suppose decision level resulted conflict solution extends toda soh current assignment chronological backtracking cancels assignments level including decision attempts find solution extending assignment level conflict clause way chronological backtracking undoes assignments higher lower level drawback solution higher levels hard get levels hand nonchronological backtracking jumps lower level canceling assignments attempts find solution upward extending assignment level backtracking level commonly determined largest level variable conflict clause current level canceled assignment levels possibility becoming solution left general though mean solver loses opportunity find solutions example consider conflict graph given fig since conflict clause backtrack level hence assignments canceled search restarts however solution could obtained backtracking level remark conflict clause learnt assignment unit literal implied hence function analyze adds implied assignment current assignment function effect assignments taken subsequent propagation binary decision diagrams binary decision diagram bdd short graphical representation boolean functions compressed form follow notation terminology knuth book figure shows example bdd exactly one node indegree called root branch node label two children node labels taken variable indices children consists child child arc child called arc illustrated dotted arrow arc means assigning value variable similarly arc child called arc illustrated solid arrow arc turn means assigning value variable two sink nodes denoted paths root mean satisfying unsatisfying assignments respectively bdds called ordered node branch node child index less bdds called reduced following reduction operations applied branch node whose arcs point redirect incoming arcs eliminate fig two branch nodes subgraphs rooted equivalent merge fig implementing efficient solutions sat solvers figure bdd representation function node elimination node merging figure reduction rules paper ordered reduced bdds simply called bdds ordered bdds need fully reduced distinguished ordinary bdds calling obdds note node bdd obdd conventionally identified subgraph rooted also forms bdd obdd techniques solutions sat solvers order implement efficient allsat solver carefully determine appropriate suite techniques considering various factors characteristics details mentioned partly scattered past references survey major existing techniques allsat solvers try complement past references gathering organizing existing techniques add novel techniques section organized follows three major types solvers presented subsections subsection starts overview provides 
specific techniques ends configuration implemented solvers added asterisk title specific technique contains original ideas blocking solvers overview one easiest ways implementing allsat solver repeatedly run ordinary sat solver black box find satisfying assignments one one specific procedure follows toda soh run sat solver cnf formula unsatisfiable halt report found total satisfying assignment compute clause form set notation add step clause obtained step called blocking clause since blocking clause complement term corresponding extended cnf formula satisfied later search thereby new solution found repetition furthermore since total assignment blocked example execute procedure cnf formula suppose solver returns satisfying assignment blocking clause added solver returns satisfying assignment blocking clause added time solver returns unsat means satisfying assignments found since blocking clause size equal number variables unit propagation likely slow hence arguably better consider blocking clauses consist decisions call blocking clause convenience since decisions determine assignments blocking clauses effect literals however know literals decisions need modify solver code algorithm pseudo code obtained modifying algorithm convenience simply call blocking procedure distinguish procedures presented later lines changed line solver halts means variables implied without decision solutions found line assignments except determined without decision canceled line solver backtracks root level blocking clause blocks single assignment many number blocking clauses total satisfying assignments since stored likely result space explosion slow unit propagation considered serious issue since unit propagation modern sat solvers occupies majority whole processing time another disadvantage whenever implementing efficient solutions sat solvers algorithm blocking procedure denotes decision level input cnf formula empty assignment output satisfying assignments decision level true propagate deduce stage conflict happens halt analyze diagnose stage else variables assigned values report halt compute blocking clause else decide stage select unassigned variable value end end end solution found solver enforced restart scrach extended cnf formula resuming search hand blocking implementation might considered good choice instances even one solution hard find instances small number solutions easily implementable blocking clause mechanism realized outside solver small modification solver code demonstrated algorithm benefit powerful techniques modern sat solvers clause learning nonchronological backtracking remark blocking clauses added cnf formula deleted afterward must treated way conflict clauses otherwise solutions would rediscovered many times allowed paper toda soh simplifying satisfying assignments simplification satisfying assignments obtain satisfying assignment cnf formula smaller still makes evaluate done canceling assignments redundant variables redundant means either value assigned without effect value simplified assignment partial general represents set total assignments including original assignment possibly satisfying assignments variable lifting refers number simplification techniques since topic literature details interested readers referred well references therein recent results see also literature simplification allows obtain single solution possibly exponentially many solutions compact form partial assignment desirable obtain partial assignment minimum size however minimization known computationally hard 
practice compromise assignment means combine simplification blocking mechanism algorithm suffices perform simplification line take complement decisions simplified assignment noted simplified blocking clause empty variables except implied ones turn redundant means remaining solutions covered thus case solver must halted thanks simplification number blocking clauses may largely reduced leads good effect unit propagation cost performing simplification example consider cnf formula given example assignments given order last decision redundant removing obtain partial assignment consider decision assignments variables implied assigned either value flipping assignments obtain simplified blocking clause continuing search algorithm whenever solution found solver enforced backtrack root level due variable selection heuristic different assignment examined region search space solution found remains incomplete may give rise unnecessary propagations conflicts later search region restart however essential allsat solving particular simplification used indeed simplification performed new clause added say assignment smaller variables assigned values also assigned values values coincide computing minimal satisfying assignment requires quadratic time still expensive implementing efficient solutions sat solvers state implications literals decisions becomes inconsistent necessary deduce implications straightforward answer continue search problem due backtracking addressed simple technique called progress saving stores recent canceled decisions array simulates backtracking proposed specifically time solver enters decide stage checks assignment selected variable stored simulates previous decision exists otherwise follows default heuristic although technique proposed context sat also applicable allsat example continuing example suppose added blocking clause backtracked root level progress saving enabled point assignments root level canceled yet previous decisions stored array selected previous decision made implementation implemented programs based blocking procedure according whether simplification continuation techniques selected simplification technique used method related set covering model minimal satisfying cube sake efficiency satisfying assignments computed basic idea given total assignment select small number decision variables make cnf formula evaluate done following way select decision variables related implications least one variable clause satisfied selected variables select arbitrary decision variable makes current clause satisfied simplified satisfying assignment consists assignments selected decision variables implied variables flipping assignments selected decisions obtain blocking clause blocks total assignments represented simplified assignment implemented continuation technique backtracking line decisions stored array backtracking simulate decisions whenever possible order decision levels clearly decisions assumed due blocking clause conflict contradiction previous decision happen point solver continues search enter conflict resolution decide stage solvers overview give basic idea allsat procedure without aid blocking clauses like blocking procedure modify toda soh algorithm chronological backtracking procedure denotes decision level input assignment current decision level output updated objects decision assignment level opposite value return solution found conflict conflict figure snapshots solver state three different cases assignments given left right top bottom separated line level integers 
specify assignments variable assigned value decision assignments asterisk superscript assignments null antecedent asterisk subscript algorithm main feature employ chronological backtracking instead backtracking chronological backtracking used bit different ordinary one described section shown fig differences insert flipped decision register implication graph incomming arc reason implied flipped decisions absence blocking clauses say literal seen assignment null antecedent reason implied incomming arc implication graph chronological backtracking given hereafter abbreviated convenience performed function backtrack collectively call number procedures allsat solving based procedure contrast blocking procedure presented section fear confusion chronological backtracking always means important point approach make compatible clause learning consider conflict graph clause implementing efficient solutions sat solvers learning phase due conflict graph may contain several roots assignments null antecedent decision level see example literals fig since ordinary first uip scheme commonly assumes unique root decision level implementations based assumption get stuck literals null antecedent resolve problem two techniques presented later example look fig variables assigned values without conflict means solution found following assignments level canceled flipped decision inserted assignment level since propagation takes place new decision made subsequent propagation results conflict decision one null antecedent current decision level ordinary first uip scheme suffices case conflict met conflict clause learnt solver backtracks level assignment implied conflict clause subsequent propagation results conflict time assignment null antecedent decision level major advantage approach matter many solutions exist performance unit propagation deteriorate thanks absence blocking clauses instead find total satisfying assignments one one hence limit number solutions generated realistic mount time first uip scheme grumberg introduced notion sublevels presented first uip scheme compatible approach basic idea divide single decision level sublevels specifically new sublevel defined whenever performed sublevels undefined decision levels undefined ordinary first uip scheme applied current sublevel conflict clause obtained approach may contain many literals current sublevel yet decision level among literals conflict clause null antecedent necessary avoiding rediscovery solutions removed exist however expected literals reduced decision first uip scheme present alternative first uip scheme need require sublevels scheme realized small modification simply stop literals null antecedent attempt find first uip case oppotunity rediscover solutions ordinary chronological backtracking hence flipped decision inserted technique explained detail backtracking level limit introduced toda soh algorithm procedure decision first uip scheme input cnf formula empty variable assignment output satisfying assignments decision level lim limit level true propagate deduce stage conflict happens halt lim resolve lim resolve stage else variables assigned values report halt backtrack lim else decide stage select unassigned variable value end end end current decision level specific procedure traverse conflict graph reverse order implications construct conflict clause repeating following procedure first uip appears current literal current decision level add negated literal incomming arcs current literal current literal null antecedent add negated literal case 
incoming arcs current literal source vertices yet visited first uip found add negated literal compared first uip scheme decision levelbased scheme might considered better first simple secondly conflict clause contains unique literal current decision level except null antecedent however noted conflict clauses obtained decision scheme necessarily smaller unique implication point thus conflict clause may contain literals implementing efficient solutions sat solvers figure implication graph conflict case fig current decision level pseudo code modified scheme omitted modification would straightforward example continuing example consider conflict case fig figure illustrates difference two schemes scheme finds first uip learns conflict clause decision scheme finds first uip learns conflict clause note conflict clause either case become unit backtracking though menace algorithmic correctness flipped decision remark recall function analyze performed assignment unit literal set current assignment function procedure conflict clause necessarily unit seen example hence function analyze procedure adds conflict clause consider induced assignment algorithm pseudo code approach using clause learning decision first uip scheme recall function backtrack backtrack chronologically following algorithm choices generic function named resolve called simple way realizing resolve stage perform clause learning based either first uip scheme perform elaborate methods introduced later resolve also replaced since variable lim used one methods introduced needed conflict directed backjumping grumberg augmented approach conflict resolution means restricted backtracking backtracking method considered form conflict directed backjumping cbj short toda soh cbj studied one tree search algorithms constraint satisfaction problem basic idea described consider scenario conflict happens decision level backtracking level conflict happens case obtain conflict clause former conflict latter conflict obtain conflict clause uip first uip former analysis perform resolution obtain resulting clause backtrack level preceding highest level pseudo code given algorithm almost faithfully rephrased setting code given literature call propagate end loop explicitly written original code consider necessary call unit propagation considers effect recent flipped decision inserted result backtrack note effect assignment implied recent conflict clause considered nonblocking procedure assumes function analysis simply records conflict clause insert implied assignment separately inserted line halt means procedure halted backtracking level limit combination cbj present alternative conflict resolution means backtracking backtrack level limit knowledge method first presented though context answer set programming thus would like import idea procedure since procedure record blocking clauses must backtrack arbitrary level even though backtracking level one legitimately derived conflict clause however opportunity rediscovering found solutions derived level use variable lim holds safe level backtracked first level current assignment previous satisfying assignment differ pseudo code given algorithm call approach nonchronological backtracking level limit denoted underline means backtrack level limited since lim always less equal performed lim lim backtracking entail inserting flipped decision see example lim implies solution yet found hence opportunity rediscover solutions backtracking implementing efficient solutions sat solvers algorithm conflict resolution based 
backjumping input cnf formula assignment current decision level output updated objects stack empty stack true conflict happens halt analyze push learnt conflict clause stack backtrack else stack empty clause popped stack unit clause unit unit literal add unit seen assignment antecedent propagate conflict happens halt conflict clause recent conflict unit uip resolution push stack highest level backtrack end end else break end propagate end return furthermore present cbj obtained replacing else part performed cbj lim less current decision level perform otherwise perform cbj mentioned combination backjumping backtracking proposed however pseudo code almost corresponds algorithm different stated paper toda soh algorithm conflict resolution based backtracking level limit input cnf formula assignment current decision level level limit lim output updated objects lim analyze diagnose stage lim lim lim else backtrack lim end return lim step selected lim equals current decision level words conflict happens current assignment diverged previous satisfying assignment hence preferentially applied one decisions diverging point made designed expectation decisions made diverging point effectively prunes search space since likely frequently applied approach denoted implementation implemented programs based nonblocking procedure according two first uip schemes selected conflict resolution methods cbj selected means performing clause learning caching solvers overview formula caching refers number techniques memorize formulae avoid recomputation subproblems examples include caching technique probabilistic planning conflict clauses sat component caching cachings sat blocking clauses allsat another type formula caching formulae associated propositional languages fbdd obdd subset studied context knowledge compilation work revealed correspondence exhaustive dpll search propositional languages also proposed speeding compilation exploiting techniques modern sat solvers correspondence although exhaustive dpll search simply used efficiency compilation approach compilation turn contribute speeding exhaustive dpll search actually cnf implementing efficient solutions sat solvers formula compiled bdd satisfying assignments generated simply traversing possible paths root sink node seems like taking long way around allsat solving however thanks caching mechanism recomputation many subproblems saved connection allsat mentioned however primary concern compilation suitable languages required queries restricted allsat knowledge comparisons conducted various compilers application allsat solver explicitly mentioned literature allsat solver released however comparisons allsat solvers conducted yet power remains unknown similar caching techniques appear areas preimage computation unbounded model checking satisfiability discrete optimization paper deals caching method records pairs formulae obdds call caching formulabdd caching embedded either blocking procedure nonblocking done without almost loss optimizations employed underlying procedure exception variables must selected fixed order decide stage effect far negligible terms efficiency single solution sat however confirmed experiments caching solvers exhibit quite good performance whole provides efficient solution method instances huge number solutions possibly solved means caching mechanism give basic idea caching using simple bdd construction method caching method elaborated later implementing top sat solver first introduce terminology subinstance assignment cnf formula derived 
applying assignments defined current subinstance refers subinstance induced current assignment consider following procedure cnf formula empty variable assignment initial arguments unsatisfied clause exists current subinstance return variables assigned values return smallest index unassigned variable result obtained recursive call combination blocking procedure presented past work toda soh result obtained recursive call return node label references child child since different assignments yield subinstances logically equivalent want speed procedure applying dynamic programming need quickly decide current subinstance solved unsolved compute bdd solutions instance memorize associating instance otherwise result obtained form bdd recomputation avoided however approach involves equivalence test cnf formulae computationally intractable includes satisfiability testing hence consider weaker equivalence test encode subinstances formulae two subinstances logically equivalent encoded formulae identical decide current subinstance solved suffices search encoded formula set registered pairs requirements caching work simply sentence italic encoding meets noted test sound acceptance always correct decision however prioritize efficiency encoding excessively logically equivalent subinstances likely result formulae wrong decision examples cachings include induced cutsets separators defined variant cutsets definition cutset cnf formula set clauses literals underlying variables satisfying cutwidth maximum size cutset definition separator cnf formula set variables clause cutset literal underlying variable satisfying pathwidth maximum size separtor example look cnf formula illustrated fig cutset consists separator consists cutwidth pathwidth following proposition states clauses variables cutsets separators meet requirement caching respectively proof omitted see implementing efficient solutions sat solvers figure cnf formula left obdd cnf right arcs sink node omitted cutsets associated arcs underlined clauses mean satisfied proposition let cnf formula variables ordered according indices let assignments less variables assigned values variables unassigned satisfied clauses cutset coincide subinstance logically equivalent variables assigned value separator coincide subinstance logically equivalent example figure illustrates obdd constructed using cutsets caching cutsets associated arcs satisfied clauses underlined two arcs set satisfied clauses proposition implies target vertices merged safely noted weaker equivalence test may reject logically equivalent subintances subgraphs constructed bdd correspond subinstances merged means constructed bdd fully reduced obdd though obdd give fig happens fully reduced important point caching approach balance quality efficiency weaker equivalence test quality refers many correct decisions made theoretically holds correct decision separator approach always implies correct decision cutset approach hand efficiency refers much time taken create formulae toda soh subinstances substantially amounts evaluating clauses variables cutsets separators respectively terms efficiency evaluating clauses cutset would require time linear total size clauses due lazy evaluation mechanism implemented top modern sat solvers hand evaluating variables separator requires time linear number variables argument say instances small cutwidth evaluation cost cutset negligible compared separator hence cutset better choice instances many clauses separator used instead embedding caching allsat procedure demonstrate 
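The two weaker equivalence tests just described can be made concrete with a short Python sketch. Under a fixed variable order 1..n and a prefix assignment to x1..xi, the cutset key records which crossing clauses are already satisfied and the separator key records the values of the crossing variables on the assigned side. The clause representation and the exact definitions of cutset and separator used here are a plausible reading of the text above, not code taken from the solvers.

```python
# A CNF is a list of clauses; a clause is a list of non-zero integers
# (DIMACS-style literals).  The cutset at level i is taken to be the set of
# clauses containing both a variable <= i and a variable > i; the separator
# at level i is taken to be the set of variables <= i occurring in those
# clauses.  Both key formats are illustrative.

def cutset(cnf, i):
    return [idx for idx, cl in enumerate(cnf)
            if any(abs(l) <= i for l in cl) and any(abs(l) > i for l in cl)]

def separator(cnf, i):
    return sorted({abs(l) for idx in cutset(cnf, i)
                          for l in cnf[idx] if abs(l) <= i})

def cutset_key(cnf, i, assignment):
    """Key = which cutset clauses are already satisfied by the prefix
    assignment (a dict var -> bool on variables 1..i)."""
    def satisfied(cl):
        return any((l > 0) == assignment[abs(l)]
                   for l in cl if abs(l) in assignment)
    return frozenset(idx for idx in cutset(cnf, i) if satisfied(cnf[idx]))

def separator_key(cnf, i, assignment):
    """Key = values of the separator variables under the prefix assignment."""
    return tuple(assignment[v] for v in separator(cnf, i))

if __name__ == "__main__":
    cnf = [[1, 2], [-2, 3], [3, 4], [-1, -4]]
    a = {1: True, 2: False}               # prefix assignment to x1, x2
    print(cutset_key(cnf, 2, a))           # which crossing clauses are satisfied
    print(separator_key(cnf, 2, a))        # values of the separator variables
```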
embed caching concrete allsat procedure take procedure example knowledge first time combination presented combination blocking procedure caching omitted assumed far conflict clauses blocking clauses blocking procedure added cnf assume separately maintained makes unchanged throughout execution allsat procedure accordingly cutset separator level unchanged algorithm pseudo code procedure formulabdd caching embedded caching mechanism consists encode stage extend stage enroll stage encode stage function makef ormula receives cnf current assignment index largest assigned variable computes formula current subinstance specific encoding presented see section examples encoding note variables assigned values must hence case let function makef ormula return special formula representing true line search entry key holds registered pairs exists current subinstance already solved result obdd associated key appears subgraph hence extend stage function extendobdd augments obdd adding path root root following current assignment returns pair extended obdd added path since stage straightforward omit pseudo code enroll stage associate formulae solved subinstances corresponding obdds insert pairs important points identify subinstances solved find obdds let subinstance implementing efficient solutions sat solvers smallest unassigned variable since unit propagation performed beginning repetition without loss generality assume decision variable means formula made thereby let dli assignment decision level point without loss generality assume exist one solutions extending otherwise obdd created adding least one path node corresponds decision variable clearly node reachable root path following root obdd solutions found exhausted search space backtracking lower level dli performed directly triggered occurrence conflict implying solution left discovery last solution backtracking lower level also performed without exhausting solutions however case happens solution yet found hence distinguish summarizing backtracking level less dli least one solution found solved root obdd located end path following part assignment backtracking function associate charge enroll stage observation called whenever backtracking performed procedure given follows scan nodes recently added path assignment taken contradicts current assignment note scanned nodes correspond subinstances turns satisfiable assignment induced part current assignment scanned node test backtracking level less decision level formula test passed pair inserted label function resolveplus behaves way reslove except updated time backtracking performed lines example figure illustrates algorithm constructs obdd cnf formula given fig refreshing obdds constructed obdd may become large stored memory though would practice much better size list representation solutions worst case equal except constant factor present simple technique resolve problem let number variables introduce threshold obdd size insert following procedure bdd extended backtracking equivalent finding entry label toda soh figure progress obdd construction paths added one one obdd augmented left right step thick arcs represent path added gray nodes mean subintances corresponding solved performed size obdd larger equal current obdd dumped file secondary storage objects caching mechanism refreshed initial states since caching almost independent underlying procedure refreshing procedure simply attempts examine unprocessed assignments caching empty implementation implemented programs nonblocking procedure programs 
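A compact way to see why the encode/extend/enroll machinery pays off is to write exhaustive DPLL with formula caching as a memoized recursion that returns BDD nodes. The Python sketch below does this over a fixed variable order; for simplicity the cache key is the clause set of the current subinstance itself (a strong but expensive encoding), whereas the method described above uses the cheaper cutset or separator keys, and unit propagation, clause learning and the enroll-on-backtrack bookkeeping of the real solver are omitted. It illustrates the caching idea only, not the paper's implementation.

```python
# Exhaustive DPLL over a fixed variable order x1..xn, memoized on the current
# subinstance, returning OBDD nodes.  Sinks are 0 (unsatisfiable) and 1
# (satisfiable, all clauses gone); internal nodes go through a unique table.
from functools import lru_cache

def build_obdd(cnf, n):
    nodes = {}                                  # (var, lo, hi) -> node id

    def mk(var, lo, hi):
        if lo == hi:                            # redundant test: skip the node
            return lo
        return nodes.setdefault((var, lo, hi), len(nodes) + 2)

    def restrict(clauses, var, val):
        """Apply var := val; return the reduced clause set, or None on conflict."""
        out = []
        for cl in clauses:
            if (var if val else -var) in cl:    # clause satisfied, drop it
                continue
            reduced = tuple(l for l in cl if abs(l) != var)
            if not reduced:                     # clause falsified
                return None
            out.append(reduced)
        return tuple(sorted(set(out)))

    @lru_cache(maxsize=None)                    # the formula cache
    def solve(i, clauses):
        if clauses is None:
            return 0
        if i > n:
            return 1
        lo = solve(i + 1, restrict(clauses, i, False))
        hi = solve(i + 1, restrict(clauses, i, True))
        return mk(i, lo, hi)

    root = solve(1, tuple(sorted(tuple(c) for c in cnf)))
    return root, nodes

if __name__ == "__main__":
    root, nodes = build_obdd([[1, 2], [-1, 3]], n=3)
    print("root:", root, "internal nodes:", len(nodes))
```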
blocking procedure according caching selected cutset separator experiments implementation environment solvers implemented top clasp taken potassco far aware clasp picosat relsat sat solvers support enumeration solutions among used clasp comparison achieved better performance picosat relsat support quiet mode solution generation generated solutions may large stored experiments performed xeon ram running red hat enterprise linux gcc compiler version execution allsat solver time limit memory limit set seconds respectively either limit exceeded solver enforced halt solvers simply touch found solutions never output potsdam answer set solving collection bundles tools answer set programming developed university potsdam http accessed implementing efficient solutions sat solvers algorithm procedure caching denotes decision level variable input cnf formula empty variable assignment output obdd satisfying assignments decision level lim limit level obdd set pairs set formulae indices sequence bdd nodes true propagate deduce stage conflict happens return lim resolveplus lim resolve enroll stage else min assigned value makeformula encode stage entry key exists obdd node associated extendobdd extend stage return associate enroll stage backtrack lim else decide stage select value end end end types compared solvers blocking solver solver caching solver clasp first three types variations according techniques used see end subsection section among solvers type selected solver solved instances selected solvers called representative solvers follows blocking nosimple cont blocking solver simplification unselected continuation selected toda soh nonblocking dlevel solver decision level first uip scheme backtracking level limit selected bdd cut nonblocking dlevel caching solver cutset caching selected implemented top nonblocking dlevel throughout section fear confusion abbreviated blocking nonblocking bdd notation solvers configurations introduced way known variable orderings significantly affect performance bdd compilation hence used software mince version decide static variable order execution caching solvers execution mince failed instances case used original order time required deciding variable order included although instances time limit exceeded preprocessing negligible many instances problem instances used total cnf instances satisfiable classified follows satlib satlib benchmark problems instances taken satlib sat competition benchmarks application track instances crafted track instances taken sat competition iscas circuit benchmarks dimacs cnf format instances taken among instances released repository selected instances satisfiability could decided seconds either one sat solvers clasp minisat minisat result satisfiable satlib random instances excluded comparison running time figure shows cactus plot representative solvers solved instances ranked respect times required solve point represents solved instance rank horizontal coordinate required time vertical coordinate since one wants solve many instances satisfiability library holger hoos thomas darmstadt university technology http accessed may competition http accessed atpg system huan chen joao marquessilva http accessed may implementing efficient solutions sat solvers clasp time sec solved figure cactus plot representative solvers respect running time possible given amount time thought gentler slope plotted points efficient solver caching solver clearly outperforms solvers followed solver clasp blocking solver order figures depict differences solvers 
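As a side note, cactus plots of the kind referred to here are straightforward to reproduce from per-instance running times; the Python sketch below uses matplotlib with made-up numbers (the solver names, times and limit are placeholders, not the measured results).

```python
# Minimal cactus plot: for each solver, sort the solving times of the
# instances it finished within the limit and plot time against rank.
# The data is invented for illustration only.
import matplotlib.pyplot as plt

TIME_LIMIT = 3600.0   # seconds; instances at or over the limit count as unsolved

results = {           # solver -> per-instance running times (seconds)
    "solver A": [3, 10, 12, 50, 400, 3600, 3600],
    "solver B": [5, 7, 30, 90, 200, 900, 3600],
}

for name, times in results.items():
    solved = sorted(t for t in times if t < TIME_LIMIT)
    plt.plot(range(1, len(solved) + 1), solved, marker="o", label=name)

plt.xlabel("number of solved instances")
plt.ylabel("time (sec)")
plt.legend()
plt.savefig("cactus.png")
```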
types figure observe continuation search effective yet simplification degrades performance instances simplification enables solver find large number solutions however instances limited current implementation powerful enough make possible solve instances handled without simplification figure narrower horizontal range figures solvers exhibit quite similar performance distinguished otherwise surprising almost efficient elaborated backtracking methods decision scheme equal efficient scheme figure shows nonblocking procedure clearly better underlying solver caching mechanism embedded almost difference caching methods comparison maximum memory usage figure shows cactus plot maximum memory usage solved instances turn ranked respect maximum memory usage point represents solved instance rank horizontal coordinate required memory vertical coordinate terms memory consumption caching solver worst solver clasp exhibit stable performance toda soh time sec solved figure cactus plot blocking solvers time sec solved figure cactus plot solvers horizontal scale narrowed make difference clear rapid increase curves solver clasp due large cnf formulae although caching solver consumes much memory days unusual even laptop computers several giga bytes ram advantage caching solver impaired much comparison scalability number solutions shown table representative solver following limit number solutions within seconds time limit implementing efficient solutions sat solvers time sec solved figure cactus plot caching solvers maximum memory usage clasp solved figure cactus plot representative solvers respect maximum memory usage vertical axis logarithmic scale blocking one million solutions clasp one hundred million solutions ten billion solutions bdd one quadrillion solutions distribution solved instances instance series table shows distribution solved instances instance series toda soh table distribution solved instances respect number solutions total blocking clasp nonblocking bdd almost series differences number solved instances solvers explained scalability exceptions instances clearly harder instances solvers unable find even one solution many instances series solved instances less solutions exceptions bdd table shows distribution instances including unsolved instances although clasp could find relatively many solutions solvers could find less solutions majority instances terms ability find solutions instances could say best solver clasp blocking solvers solvers tie ranked second caching solver match instances due fixed variable ordering favorite ranges instances representative solvers illustrated fig according two factors hardness instances numbers solutions instances solver placed way vertical position corresponds ability find solutions instances blocking solver solver benefit sat solver horizontal position corresponds scalability number solutions hence indicated range refers instances leftward downward well within noted shrinking vertical axis would suitable real performance implementing efficient solutions sat solvers table distribution solved instances instance series number instances series enclosed parenthesis ais bmc flat gcp hanoi inductive logistics parity ssa iscas total blocking clasp nonblocking bdd reality solvers whole exhibits poor performance hard instances comparisons publicly available sat solvers also comparison publicly available sat solvers sharpsat version version relsat version check performance developed allsat solvers sat result instances relsat sharpsat bdd cut nonblocking dlevel solved 
toda soh table distribution instances including unsolved instances according number found solutions within time limit total blocking clasp nonblocking bdd figure favorite ranges instances representative solvers respectively since relsat supports solution count used solution counter comparison even sat formulabdd caching solver shows better performance implementing efficient solutions sat solvers conclusion surveyed discussed major techniques existing allsat solvers classified types solvers blocking solver nonblocking solver caching solver faithfully implemented released solvers publicly researchers easily develop solver modifying codes compare existing methods conducted comprehensive experiments total instances taken satlib sat competition iscas benchmarks apart implemented solvers used clasp one softwares solution generation support experiments revealed following solver characteristics seconds time limit see also fig caching solver powerful solved instances including instances one quadrillion solutions maximum memory usage amounts several tens giga bytes worst case though controllable refreshing caches cost low cache hit rate bad hard instances due fixed variable ordering solver ranked next best followed clasp solver clasp handle instances ten billion solutions one hundred million solutions low maximum memory usage mega bytes several tens mega bytes respectively although solvers exhibit relatively similar performance difference clasp able find moderately many number solutions even hard instances though powerful enough make possible solve instances handled means blocking solver limited instances one million solutions blocking clauses deteriorate performance unit propagation however benefit techniques sat solvers thereby suitable finding small number solutions hard instances conclude caching solver superior terms exact allsat solving various kinds instances however since solutions necessary practical applications duplicated solutions may allowed recommended select appropriate solver accordance types instances applications references akers binary decision diagrams ieee trans june toda soh fadi aloul igor markov karem sakallah mince static global heuristic sat search bdd manipulation ucs andersen hadzic hooker tiedemann constraint store based multivalued decision diagrams christian editor principles practice constraint programming volume lecture notes computer science pages springer berlin heidelberg bacchus dalmao pitassi algorithms complexity results sat bayesian inference foundations computer science proceedings annual ieee symposium pages oct roberto bayardo pehoushek counting models using connected components proceedings aaai national conference pages roberto bayardo robert schrag using csp lookback techniques solve sat instances proceedings fourteenth national conference artificial intelligence ninth conference innovative applications artificial intelligence aaai pages aaai press paul beame russell impagliazzo toniann pitassi nathan segerlind formula caching dpll acm trans comput theory march bergman cire van hoeve hooker discrete optimization decision diagrams informs journal computing appear biere biere heule van maaren walsh handbook satisfiability volume frontiers artificial intelligence applications ios press amsterdam netherlands netherlands armin biere picosat essentials jsat jrg brauer andy king jael kriener existential quantification incremental sat ganesh gopalakrishnan shaz qadeer editors computer aided verification volume lecture notes computer science pages springer berlin 
heidelberg bryant algorithms boolean function manipulation computers ieee transactions aug xinguang chen peter van beek backjumping revisited artif int march edmund clarke daniel kroening natasha sharygina karen yorav predicate abstraction programs using sat formal methods system design implementing efficient solutions sat solvers crama hammer boolean functions theory algorithms applications encyclopedia mathematics applications cambridge university press adnan darwiche new advances compiling cnf decomposable negation normal form proceedings eureopean conference artificial intelligence ecai including prestigious applicants intelligent systems pais valencia spain august pages rina dechter daniel frost backtracking constraint satisfaction problems artificial intelligence thomas eiter kazuhisa makino georg gottlob computational aspects monotone dualization brief survey discrete applied mathematics memory leonid khachiyan niklas niklas srensson extensible enrico giunchiglia armando tacchella editors theory applications satisfiability testing volume lecture notes computer science pages springer berlin heidelberg ganai gupta ashar efficient unbounded symbolic model checking using circuit cofactoring computer aided design international conference pages nov martin gebser benjamin kaufmann neumann torsten schaub clasp answer set solver chitta baral gerhard brewka john schlipf editors logic programming nonmonotonic reasoning volume lecture notes computer science pages springer berlin heidelberg martin gebser benjamin kaufmann neumann torsten schaub answer set enumeration chitta baral gerhard brewka john schlipf editors logic programming nonmonotonic reasoning volume lecture notes computer science pages springer berlin heidelberg orna grumberg assaf schuster avi yadgar memory efficient sat solver application reachability analysis alanj andrewk martin editors formal methods design volume lecture notes computer science pages springer berlin heidelberg tias guns siegfried nijssen luc raedt itemset mining constraint programming perspective artificial intelligence toda soh aarti gupta zijiang yang pranav ashar anubhav gupta image computation application reachability analysis hunt warrena stevend johnson editors formal methods design volume lecture notes computer science pages springer berlin heidelberg jiawei han hong cheng dong xin xifeng yan frequent pattern mining current status future directions data mining knowledge discovery jinbo huang adnan darwiche using dpll efficient obdd construction holgerh hoos davidg mitchell editors theory applications satisfiability testing volume lecture notes computer science pages springer berlin heidelberg jinbo huang adnan darwiche language search artif intell res jair jabbour jerry lonlac lakhdar sais yakoub salhi extending modern sat solvers models enumeration proceedings ieee international conference information reuse integration iri redwood city usa august pages said jabbour joao lakhdar sais yakoub salhi enumerating prime implicants propositional formulae conjunctive normal form eduardo leite editors logics artificial intelligence volume lecture notes computer science pages springer international publishing said jabbour lakhdar sais yakoub salhi boolean satisfiability sequence mining proceedings acm international conference conference information knowledge management cikm pages new york usa acm hoonsang jin hyojung han fabio somenzi efficient conflict analysis finding satisfying assignments boolean circuit nicolas halbwachs lenored zuck editors tools algorithms 
construction analysis systems volume lecture notes computer science pages springer berlin heidelberg hoonsang jin fabio somenzi prime clauses fast enumeration satisfying assignments boolean circuits proceedings annual design automation conference dac pages new york usa acm kang park unbounded symbolic model checking design integrated circuits systems ieee transactions feb implementing efficient solutions sat solvers donald knuth art computer programming volume fascicle bitwise tricks techniques binary decision diagrams professional edition lee representation switching circuits programs bell system technical journal bin michael hsiao shuo sheng novel sat allsolutions solver efficient preimage computation design automation test europe conference exposition date february paris france pages nuno lopes nikolaj bjorner patrice godefroid george varghese network verification light program verification technical report september stephen majercik michael littman using caching solve larger probabilistic planning problems proceedings fifteenth national conference artificial intelligence tenth innovative applications artificial intelligence conference aaai iaai july madison wisconsin pages sharad malik lintao zhang boolean satisfiability theoretical hardness practical success commun acm august joao janota lynce computing backbones propositional theories proceedings conference ecai european conference artificial intelligence pages amsterdam netherlands netherlands ios press sakallah grasp search algorithm propositional satisfiability computers ieee transactions may kenl mcmillan applying sat methods unbounded symbolic model checking brinksma kimguldstrand larsen editors computer aided verification volume lecture notes computer science pages springer berlin heidelberg morgado good learning implicit model enumeration tools artificial intelligence ictai ieee international conference pages nov doronb motter igorl markov compressed search satisfiability davidm mount clifford stein editors algorithm engineering experiments volume lecture notes computer science pages springer berlin heidelberg toda soh keisuke murakami takeaki uno efficient algorithms dualizing hypergraphs discrete applied mathematics knot pipatsrisawat adnan darwiche lightweight component caching scheme satisfiability solvers theory applications satisfiability testing sat international conference lisbon portugal may proceedings pages patrick prosser hybrid algorithms constraint satisfaction problem computational intelligence qadir hasan applying formal methods networking theory techniques applications communications surveys tutorials ieee firstquarter kavita ravi fabio somenzi minimal assignments bounded model checking kurt jensen andreas podelski editors tools algorithms construction analysis systems volume lecture notes computer science pages springer berlin heidelberg shuo sheng michael hsiao efficient preimage computation using novel atpg proceedings conference design automation test europe volume date pages washington usa ieee computer society marc thurley sharpsat counting models advanced component caching implicit bcp armin biere carlap gomes editors theory applications satisfiability testing sat volume lecture notes computer science pages springer berlin heidelberg takahisa toda hypergraph transversal computation binary decision diagrams vincenzo bonifaci camil demetrescu alberto editors experimental algorithms volume lecture notes computer science pages springer berlin heidelberg takahisa toda dualization boolean functions using 
ternary decision diagrams international symposium artificial intelligence mathematics isaim fort lauderdale usa january takahisa toda koji tsuda bdd construction solutions sat efficient caching mechanism proceedings annual acm symposium applied computing sac pages new york usa acm frank van harmelen frank van harmelen vladimir lifschitz bruce porter handbook knowledge representation elsevier science san diego usa implementing efficient solutions sat solvers yinlei subramanyan tsiskaridze malik allsat using minimal blocking clauses vlsi design international conference embedded systems international conference pages jan shuyuan zhang sharad malik rick mcgeer verification computer switching networks overview supratik chakraborty madhavan mukund editors automated technology verification analysis lecture notes computer science pages springer berlin heidelberg toda graduate school information systems university chofugaoka chofu tokyo japan address soh information science technology center kobe university rokkodai nada kobe japan address soh
| 8 |
mar approximation algorithms tsp neighborhoods plane adrian dumitrescu university milwaukee joseph mitchell stony brook university stony brook jsbm august abstract euclidean tsp neighborhoods tspn given collection regions neighborhoods seek shortest tour visits region generalization classical euclidean tsp tspn also paper present new approximation results tspn including approximation algorithm case arbitrary connected neighborhoods comparable diameters ptas important special case disjoint unit disk neighborhoods nearly disjoint disks methods also yield improved approximation ratios various special classes neighborhoods previously studied give algorithm case neighborhoods infinite straight lines introduction salesman wants meet set potential buyers buyer specifies connected region plane neighborhood within willing meet salesman example neighborhoods may disks centered buyers locations radius disk specifies maximum distance buyer willing travel meeting place salesman wants find tour shortest length visits buyers neighborhoods finally returns initial departure point variant problem address paper departure point specified tour neighborhoods found problem known tsp neighborhoods tspn generalization classic euclidean traveling salesman problem tsp regions neighborhoods single points consequently related work tsp long rich history research combinatorial optimization studied extensively many forms including geometric instances see problem known even points euclidean plane work done author visiting faculty member stony brook university partially supported grants hrl laboratories nasa ames national science foundation corporation sandia national labs sun microsystems recently shown geometric instances tsp including euclidean tsp approximation scheme developed arora mitchell later improved rao smith arkin hassin first study approximation algorithms geometric tspn gave algorithms several special cases including parallel segments equal length translates convex region translates connected region generally regions diameter segments parallel common direction ratio longest shortest diameter bounded constant general case connected polygonal regions mata mitchell obtained log algorithm based guillotine rectangular subdivisions time bound total complexity regions gudmundsson levcopoulos recently obtained faster method fixed guaranteed perform least one following outputs tour length log times optimum time log outputs tour length times optimum time far approximation algorithm known general connected regions recently shown tspn approximated within factor unless fact inapproximability factor vertex cover problem graphs degree bounded stated result based implies factor larger time since paper first appeared shown berg tspn algorithm case regions regions connected disjoint convex fat also schwartz safra improved lower bounds hardness approximation several variants tspn problem jonsson given time algorithm case regions lines plane previous time algorithm appears section summary results geometric tspn including paper obtain several approximation results extend approaches initiated obtain first algorithm tspn connected regions similar diameter solves among others open problem posed provide approximation algorithm tspn segments length arbitrary orientation give approximation scheme ptas case disjoint unit disks case nearly disjoint disks nearly size algorithm based applying method new charging scheme fact ptas case neighborhoods nice point lying constant number neighborhoods contrasted fact tspn arbitrary regions 
construction proof utilizes skinny neighborhoods intersect extensively also give modest improvements earlier approximation bounds cases parallel segments equal length translates convex region translates connected region one know advance one accomplished present simple algorithms achieve guarantee case equal disks case infinite straight lines preliminaries input algorithms set regions closed subset plane bounded finite union arcs constantdegree algebraic curves degenerate regions simply points since regions assumed closed include points lie curves form boundary assumption regions simply connected means region ideally region subset plane lying inside simple closed continuous curve together curve however deal regions hence definition examples allowable regions include simple polygons whose boundaries unions finite number straight line segments circular disks regions bounded straight segments circular arcs infinite straight lines etc let denote total number arcs specifying regions total combinatorial complexity input tour circuit closed continuous curve visits region length tour denoted euclidean length curve avoid ambiguity size finite set denote tsp neighborhoods tspn problem goal compute tour whose length guaranteed close shortest possible length tour let denote optimal tour let denote length algorithm outputs tour whose length guaranteed said algorithm approximation ratio family algorithms parameterized running polynomial time fixed said approximation scheme ptas outline paper section use simple packing arguments yield approximation algorithms tspn disks section presents ptas tspn disks section give approximation algorithm tspn regions diameter finally section give approximation algorithm case regions infinite straight lines conclude short list open problems future research disks begin giving simple arguments corresponding algorithms achieve constant approximation ratio tspn set disks size without loss generality assume disks unit radius results carry naturally disks nearly size corresponding changes approximation factor first consider case disjoint unit disks algorithm simple natural using known ptas results tsp points compute time log tour center points disks refer center tour clearly valid region tour claim proposition given set disjoint unit disks one compute tour whose length satisfies optimal tour running time dominated computing approximate tour points proof put since visits disks area swept disk radius whose center moves along covers unit disks area bounded follows thus center tour length obtained going along making detour length visit center disk point first visits disk hence length computed tour bounded claimed large approximation ratio small constant values problem solved exactly using brute force note algorithm outputs tour center points expect approximation ratio smaller see consider large square place almost touching unit disks along perimeter inside outside disks touch perimeter also optimal disk tour except four corners square length disk center tour roughly two times perimeter square disks nearly size ratio maximum minimum radius bounded constant large approximation ratio next consider case disks overlap first compute maximal independent set disks next compute tour center points disks finally output tour obtained following tour taking detours around boundaries disks illustrated figure specifically select arbitrary disk one intersection points start point clockwise along whenever boundary disk encountered follow clockwise around boundary disk encounter tour finally reach 
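Two small additions may help make the preceding disk arguments concrete. First, a hedged reconstruction of the packing bound behind the proposition for disjoint unit disks: the numerical constants did not survive the extraction above, so the values below are inferred from the argument (sweep a disk of radius 2 along the optimal tour, so that every visited unit disk is covered), not quoted from the paper.

```latex
% T* = optimal tour, n = number of disjoint unit (radius-1) disks,
% T* \oplus D(2) = set of points within distance 2 of T*.
\[
  n\pi \;\le\; \mathrm{area}\bigl(T^{*}\oplus D(2)\bigr) \;\le\; 4|T^{*}| + 4\pi
  \quad\Longrightarrow\quad
  n \;\le\; \tfrac{4}{\pi}|T^{*}| + 4 ,
\]
\[
  |T_{\mathrm{centers}}| \;\le\; |T^{*}| + 2n \;\le\; \Bigl(1+\tfrac{8}{\pi}\Bigr)|T^{*}| + 8 ,
\]
% so the tour returned by the PTAS on the center points has length at most
% (1+eps)(1+8/pi)|T*| + O(1), i.e. roughly 3.55 times optimal.
```

Second, the maximal independent set used in the first step of the overlapping-disk variant just introduced can be computed greedily; the Python sketch below is an illustrative brute-force version, not the paper's implementation.

```python
# Greedy maximal independent set of unit disks: scan the disks in any order
# and keep a disk whenever it does not intersect an already kept one.  Two
# radius-1 disks intersect iff their centers are at distance <= 2.
from math import dist   # Python 3.8+

def maximal_independent_disks(centers, radius=1.0):
    kept = []
    for c in centers:
        if all(dist(c, k) > 2 * radius for k in kept):
            kept.append(c)
    return kept

if __name__ == "__main__":
    centers = [(0, 0), (1, 0), (3, 0), (3, 1.5), (7, 2)]
    print(maximal_independent_disks(centers))
    # -> [(0, 0), (3, 0), (7, 2)]; every discarded disk meets a kept one
```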
continue clockwise around continue counterclockwise around taking detours clockwise around disks encounter along way tour finally ends return second time way tour traverses boundary disk exactly therefore visits disks well remark method constructing feasible tour disks results slightly better ratio another natural strategy extending full disk tour traversing time one encounters boundary disk traverse entire circumference exactly directly point tour exits continue along denote optimal disk tour length optimal disk tour subset disks constant approximation ratio derived following three inequalities figure construction tour tour center points maximal independent set disks starting follow thick solid tour back follow thick dashed tour back around third inequality follows fact second case disjoint unit disks considered check first inequality decompose parts assuming one disk cutting segment two consecutive centers middle let length one parts tour corresponding disk lengths two segments adjacent center write arc lengths boundary traversed writing ratio length corresponding part length part get maximum attained thus satisfied cases putting together get proposition given set unit disks possibly overlapping one compute tour whose length satisfies optimal tour running time dominated computing approximate tour points large approximation ratio note approximation ratio obtained approach disjoint unit disks unit disks better weaker approximation ratio given end section translates convex region applies course case unit disks ptas disjoint equal disks section present approximation scheme tspn case regions disks nearly disks given powerful methods developed obtain ptas various geometric optimization problems euclidean tsp natural suspect techniques may apply tspn indeed one may expect tspn ptas based applying existing methods however know recent result goes wrong basic issue must address order apply techniques able write recursion solve appropriate succinct subproblem dynamic programming subproblem responsible solving problems involving points subproblem made responsible constructing kind inexpensive network points inside subproblem defined rectangle interconnect network boundary nicely controlled way constant complexity connection case methods problem regions cross subproblem boundaries know subproblem responsible visit region region visited outside subproblem afford enumerate subset regions cross boundary subproblem responsible many subsets leading many subproblems thus need new idea approach employ new type structural result based general method subdivisions particular show transform optimal tour one special class tours recursively special structure permitting succinct specification subset regions crossing subproblem boundary subproblem responsible must visit regions interior order bound increase tour length performing transformation must charge added tour length small fraction length optimal tour done proving bounds ptas method tsp order charging must assume special structure class neighborhoods tspn regions disks similar structure allowing relate tour length area show approach applies disjoint disks equal radii generalizations readily made case nearly equal radii constant upper bound ratio radii case modestly overlapping disks become decreased size constant factor keeping center points begin definitions largely following notation let embedding planar graph let denote total euclidean length edges assume without loss generality restricted unit square int consider rectangle window rectangle correspond 
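The span just defined is easy to state in code. Assuming the endpoints of the subsegments in which the cut meets the edge set inside the window are listed top to bottom, the m-span joins the m-th endpoint from the top to the m-th from the bottom and is empty when there are at most 2(m-1) endpoints; the threshold and the function below are one reading of the definition above, not text from the paper.

```python
# m-span of a cut: given the p endpoint coordinates in decreasing order,
# return the pair (m-th from the top, m-th from the bottom), or None if
# p <= 2(m-1).  When p = 2m-1 the two coincide and the span degenerates to
# a single point (zero length), as noted in the definition.

def m_span(endpoints_desc, m):
    p = len(endpoints_desc)
    if p <= 2 * (m - 1):
        return None
    return endpoints_desc[m - 1], endpoints_desc[p - m]

if __name__ == "__main__":
    ys = [9.0, 8.5, 6.0, 4.2, 3.0, 1.1]   # endpoint heights, top to bottom
    print(m_span(ys, 1))   # (9.0, 1.1): the full vertical extent
    print(m_span(ys, 2))   # (8.5, 3.0)
    print(m_span(ys, 4))   # None: too few endpoints
```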
subproblem dynamic programming algorithm let line intersecting refer cut assume without loss generality vertical intersection int cut int consists possibly empty set subsegments subsegments possibly singleton points let number endpoints subsegments along let points denoted order decreasing along positive integer define respect follows otherwise defined line segment joining mth endpoint endpoints refer figure note segment may zero length case figure definition respect window highlighted thick shaded vertical segment intersection disks intersect consists possibly empty set subsegments one disk intersected let disks order decreasing positive integer define respect follows otherwise defined possibly line segment joining bottom endpoint top endpoint refer figure line cut respect particular trivially cut since empty case say satisfies property respect window either fully contain disk exists cut splits recursively satisfies property respect figure definition respect window highlighted thick shaded vertical segment theorem let embedded connected planar graph edge set total length let given set disks radius intersects assume contained unit square positive integer exists planar graph satisfies property respect edge set length proof convert new graph adding new set edges whose total length construction recursive stage show exists cut respect current window initially unit square afford add say point cut respect along int least edges intersected side line perpendicular say subsegment respect points segment respect important property points along following assume without loss generality horizontal consider line segment lies along edge top side bottom side top side seen point points subsegment charge length bottoms first subsegments edges lie tops first subsegments edges lie since know least edges blocking charge length half charging units charge half charging levels think edges walls effective blocking light light walls stopped hits mth wall line illuminated light shone boundary along direction units charge refer type charge red levels charge say point cut respect along int least disks nonempty intersection side line perpendicular say subsegment respect points segment respect chargeable length within cut defined sum lengths portion portion refer figure figure definition points respect four subsegments lie within shaded regions comprise set points respect horizontal cuts important property points along horizontal following points subsegment charge length bottoms first disks lie tops first disks lie since know least disks blocking charge length units charge half upwards charging bottoms levels disks half downwards charging tops levels disks units charge refer type charge blue charge cut favorable chargeable length within least large sum lengths attempt take advantage fact may overlap among may portions two segments already part argument taking advantage facts may improve slightly constants bounds existence favorable cut guaranteed following key lemma whose proof similar key lemma lemma window favorable cut proof show must favorable cut either horizontal vertical let denote cost vertical line passing point cost means sum lengths thus area region points dark respect horizontal cuts area region points respect horizontal cuts refer figure similarly define cost horizontal line let assume without loss generality claim exists horizontal favorable cut claim exists horizontal cut chargeable length least large cost meaning length portion plus portion least see note computed switching horizontally rather vertically 
order rof integration slicing ther regions length horizontal line length intersection horizontal line chargeable length horizontal line words length portion horizontal line thus since get thus values exists horizontal line cut satisfying claim lemma instead would get vertical cut satisfying claim know must favorable cut charge cost making red charge bottoms tops segments lie points making blue charge bottoms tops disks lie points recurse side cut two new windows portion charged red one side due cut within levels boundary windows either side hence within levels boundary future windows found deeper recursion contain portion thus portion ever charged red side two directions horizontal vertical portion ever pay times length red charge per unit length perimeter segment bounding charge rate box worst case achieved segment slope thus total red charge similarly disk ever boundary charged blue per unit length two directions horizontal vertical since charged rate bounding box whose perimeter get total blue charge appeal lower bound previous section based area argument radius disks argument shows get note area argument uses fact connected thus total blue charge also important note always charging red portions original edge set new edges added never charged since lie window boundaries therefore serve make portion future cut overall total increase length caused adding along favorable cuts bounded goal adding obtain succinct representation disks straddle boundary window visited within window visited outside window segment visits constant number disks corresponding side window however one remaining issue respect segments need argue afford within charging scheme connect input edge set dynamic programming optimization find shortest possible planar graph property obeys certain connectivity constraints well properties guarantee graph eulerian subgraph spanning disks optimal graph compute uses segments visit corresponding disks boundaries windows define subproblems remark clarification added journal publication previously phrased find shortest possible connected planar graph property rephrased find shortest possible planar graph property obeys certain connectivity constraints particular add connections endpoints point corresponding disk closest endpoint know connection length per endpoint since disks diameter total adds length assuming stabs least three disks length least implying charge extra connections way charge hand stabs one two disks afford skip addition altogether keep track dynamic program necessary information couple extra disks specifying whether visited within window remark clarification added journal publication thanks sophie spirkl inquiry input argue afford add connections since length added proportional bridge length dynamic programming algorithm computes planar network property obeys certain connectivity constraints network connected except possibly may connected rest connected network objective function dynamic program requires minimize total length network counting lengths bridges serve constant number times since know afford add length proportional total lengths know add connections mentioned resulting overall connected network make eulerian appropriate doubling bridge segments usual way mentioned appropriately close optimal length corollary tspn set disjoint disks ptas true set nearly disjoint nearly disks constant upper bound ratio largest smallest radius constant factor disks become disjoint radii multiplied factor keeping center points proof consider case disjoint disks radius 
generalization nearly disjoint nearly disks straightforward impose regular grid disk let denote resulting set grid points consider optimal tour length simple polygon perturb vertices lies grid point resulting new tour visiting every disk whose length using fact previous assuming get length section solve problem constant time brute force theorem implies convert consists set edges planar graph obeying property increasing total length much particular length apply dynamic programming algorithm running time compute planar graph prescribed set properties satisfies property necessary dynamic program claimed efficiency visits least one grid point region contains eulerian subgraph spans disks third condition allows extract tour end outline dynamic programming algorithm details similar modification account subproblem defined rectangle whose coordinates among grid points together constant amount information solution subproblem interacts across boundary solution outside information includes following four sides specify bridge segment segments endpoints among cross side done exactly case euclidean tsp points four sides specify disk bridge segment corresponding disks intersected disk bridge segment specify single bit whether disk visited within subproblem visited outside window specify required connection pattern within particular indicate subsets specified edges crossing boundary required connected within done exactly detailed euclidean tsp point sets order end graph eulerian subgraph spanning disks use trick done double bridge segments disk bridge segments require number connections side bridge segment satisfy parity condition exactly allows extract tour planar graph results dynamic programming algorithm gives shortest possible graph obeys specified conditions result polynomial time one compute shortest possible graph special class graphs graph spans regions theorem guarantees length resulting graph close within factor length optimal solution tspn also know remarks previously afford add connections assure connected rest connected network thus extract tour eulerian subgraph desired solution finally mention running time improved constant independent using notion guillotine subdivisions dependence constant exponential multiplicative constant concealed suspect techniques rao smith based arora method used improve time bound log possibly also addressing problem higher dimensions leave future work connected regions diameter section give approximation algorithm tspn problem applies case regions diameter nearly diameter diameter region distance two points region farthest apart without loss generality assume regions unit diameter general method use carefully select representative point region compute almost optimal tour representative points approach initiated also employ specialized version combination lemma describe algorithm similar many aspects one region compute diameter segment case multiple segments select one arbitrarily computing diameter segment region done efficiently time linear complexity region classify regions two types selected diameter almost horizontal mean slope selected diameter almost vertical mean others use algorithm two region types prove constant ratio achievable class suitable transformation applied regions type apply combination lemma obtain approximation regions lemma combination lemma given regions partitioned two types almost horizontal unit diameters almost vertical unit diameters respectively constants bounding error ratios approximate optimal tours regions types approximate 
optimal tour regions error ratio bounded omit proof simplified version argument bound approximation ratio still remark combination lemma implicitly assumes two types diameters regions parallel direction assumption hold case algorithm gives approximation optimal tour regions type regions type readily handled rotating obtain type regions diameters nearly ratio largest smallest bounded constant still get approximation algorithm constant ratio however since ratio rather large even diameters omit calculations set lines cover set regions region intersected least one line covering set refer set lines covering lines figure illustration step algorithm algorithm input set regions type step construct cover regions minimum number vertical lines procedure works greedy fashion namely leftmost line far right possible right tangent region obtain cover intervals projection regions computed greedy cover set intervals found removing intervals covered previous line still uncovered intervals another covering line repeatedly added cover time representative point region arbitrarily selected corresponding covering line inside region topmost boundary point intersection region covering line way representative points one per region selected illustration procedure appears figure important remark representative points necessarily selected diameters since diameter may entirely contained region assuming regions type greedy cover effect obtaining large enough horizontal distance two consecutive covering lines remark greedy covering algorithm set closed intervals line known output cover minimum size property carries vertical line cover step proceed according following three cases case greedy cover contains one covering line compute smallest perimeter rectangle intersects regions considered domain let denote width height respectively consider graph four vertices four edges four sides let vertical segments height partition three parts add graph double edge corresponding doubled edge corresponding resulting graph eulerian multigraph since node degrees even vertices edges output euler tour tour general shortest possible tour visiting regions suffices purposes approximation case greedy cover contains two covering lines move possible rightmost vertical covering line left much possible still covering regions recompute representative points obtained way set distance two covering lines clearly case compute rectangle width vertical sides along two covering lines minimal height includes representative points two covering lines let denote height output tour perimeter case similar case compute smallest perimeter rectangle touches intersects regions considered domain let denote width height respectively note let vertical segments height partition eight parts consider edges together doubled copies edges define eulerian multigraph vertices edges output euler tour case greedy cover contains least three covering lines compute tour representative points output tour regions simple polygons touching rectangle determined four contact points region boundary arcs brute force procedure examining possible arcs computes time total running time approximation algorithm either bounded complexity step complexity computing tours points depending size greedy cover theorem given set connected regions diameter plane approximation optimal tour computed polynomial time proof let optimal region tour address cases distinguished previous algorithm use repeatedly following simple fact see positive following inequality holds case write diag diagonal rectangle 
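Step 1 of the algorithm above, the greedy cover by vertical lines, is the classical greedy point-cover of intervals applied to the horizontal projections of the regions; the short Python sketch below, with illustrative data, shows that step in isolation before the case analysis of the proof continues.

```python
# Greedy cover of intervals by points (equivalently, of the regions'
# horizontal projections by vertical lines): repeatedly place the next line
# as far right as possible, i.e. at the right endpoint of the uncovered
# interval that ends first.  The standard greedy argument gives a cover of
# minimum size.  Intervals are (left, right) pairs of x-coordinates.

def greedy_vertical_cover(intervals):
    lines = []
    for left, right in sorted(intervals, key=lambda iv: iv[1]):
        if not lines or left > lines[-1]:    # not yet stabbed by a chosen line
            lines.append(right)
    return lines

if __name__ == "__main__":
    projections = [(0.0, 1.0), (0.5, 1.4), (1.2, 2.1), (3.0, 3.9), (3.5, 4.0)]
    print(greedy_vertical_cover(projections))   # -> [1.0, 2.1, 3.9]
```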
first argue visits regions since regions covered unique covering line lie vertical strip width horizontal projection type region least hence region intersected either boundary one two vertical segments inside since segments partition three subrectangles width region intersected perimeter lies entirely inside consequently valid region tour first give lower bound since length tour touching four sides rectangle least twice length diagonal rectangle see length case distinguish two case recall case since region unit diameter optimal tour may lie inside distance one boundary must touch four sides rectangle width height hence since case horizontal projection region least rectangle width partitioned subrectangles width vertical segments thus since visits regions case region intersected perimeter lies entirely inside hence intersected one seven vertical segments similar calculation yields case partition optimal tour blocks starts arbitrary point intersection leftmost covering line ends last intersection second left covering line intersects different covering line notice cross left leftmost covering line general blocks determined last point intersection covering line crosses different covering line figure hypothetical optimum tour partitioned six blocks example crosses second vertical line twice crossing third vertical line last point intersection second line consider bounding box smallest perimeter rectangle includes write width height two cases consider case intersects regions stabbed two consecutive covering lines say distance implies lies two covering lines without loss generality touches lower side point touches upper side point since horizontal projection region least let specify starting ending points see figure namely vertical distance start point lower horizontal side vertical distance end point upper horizontal side considering reflections respect two horizontal sides get bounded length polygonal line cdb cdb figure case lines show exists partial tour path representative points regions visits length bounded positive constant starts ends take adef points lines unit distance corners see figure put together get tour representative points length smaller approximate tour representatives algorithm actually computes length case intersects regions stabbed three consecutive covering lines denote horizontal distance implies lies three covering lines touch rightmost line assume starts ends previous case touches lower side point touches upper side point let supporting line side horizontal distance right max case touches upper side touches shown figure case similar get bound let distinguish two case lower bound used earlier still valid show partial tour representative points regions visits take adef bghif see figure points lines unit vertical distance upper side lower side put together get tour representative points length smaller tour representatives length figure case lines case use different lower bound get considering reflections respect two horizontal sides respect see figure take adef bghif case put together get tour representative points length smaller tour representatives length overall approximation ratio algorithm derived combination lemma comparison point similarities differences techniques arkin hassin algorithm analysis based first three different algorithms presented parallel segments translates convex region translates arbitrary connected region second two algorithms refinements first representative points chosen differently case presented single algorithm works regions type 
consideration cases slightly different corresponding cases allows handle larger class inputs namely regions type case distinguishes subcases distinguish cases treated slightly differently finally analysis algorithms based similar ideas divide optimal tours blocks analysis differs able address single algorithm larger class inputs special cases note calculations approximation ratio connected regions diameter give improved bounds three cases addressed parallel equal segments translates convex region translates connected region reason cases improved new approximation ratios similar expressions old ones algorithm similar algorithms cases exemplify case parallel equal segments omit details rest algorithm computes greedy cover segments assumed unit length using vertical lines proceeds according cardinality cover cases treated ratio bottleneck case case one covering line optimal tour easy obtain case two covering lines smallest aligned rectangle touches segments output tour case three covering lines algorithm computes almost optimal tour representative points algorithm analysis divided two proof theorem first intersects segments covered two consecutive covering lines lower bound equation valid upper bound equation adjusted dropping constant term equal second intersects segments covered three consecutive covering lines lower bound equation valid upper bound equation adjusted dropping constant term equal also overall approximation ratio obtained parallel equal segments lines consider case regions defining tspn instance infinite straight lines plane interesting case allows exact solution polynomial time proposition given set infinite straight lines plane shortest tour visits computed polynomial time proof convert problem instance watchman route problem simple polygon watchman route polygon tour inside polygon every point polygon visible point along tour watchman route problem asks watchman route minimum length problem known algorithm see well let rectangle contains vertices arrangement one two points intersection line boundary extend narrow spike outward point along fixed distance let simple polygon vertices union spikes illustrated figure make observation tour visits lines sees polygon required watchman route problem consequently solve tspn set lines solving watchman route problem polygon figure proof proposition given high running time watchman route algorithms interest consider efficient algorithms may approximate optimal solution end present approximation algorithm let input set lines minimum touching circle disk circle minimum radius intersects lines algorithm computes outputs minimum touching circle show provides tour length usual denotes optimal tour first argue approximation ratio leave later presentation algorithm start assume simplicity two lines parallel though assumption later removed observation optimal tour set lines possibly degenerate convex polygon proof easy see optimal tour must polygonal consisting finite union straight line segments optimal tour polygon obtain contradiction optimality since boundary convex hull shorter also visits lines since observation determined lines inscribed circle triangle formed lines distinguish two cases case acute triangle well known acute triangle minimum perimeter inscribed triangle vertex side triangle pedal triangle whose vertices feet altitudes given triangle see case optimal tour visits lines pedal triangle denote perimeter clearly lower bound denote radius circumscribed circle radius inscribed circle fact acute triangle proof angles equivalent 
sin sin sin simplification well known inequality geometry acute triangle page fact acute triangle proof equality found page claim acute triangle proof putting together get length output tour bounded follows case obtuse triangle case length altitude corresponding obtuse angle say clearly lower bound fact triangle proof let denote area triangle side lengths know definition altitude also know elementary geometry recalling radius inscribed circle triangle thus obtain using triangle inequality using inequality length output tour bounded follows thus cases approximation ratio determined lines two parallel distance say form generalized triangle clearly lower bound also radius thus describe algorithm computing distance point coordinates line equation let lines equations finding minimum touching circle amounts finding center coordinates minimum radius equivalent solving following linear program min subject takes time consequently proved theorem given set infinite straight lines plane visits computed time tour conclusion several open problems remain including approximation algorithm arbitrary connected regions plane regions disconnected giving geometric version tsp approximation bounds obtained higher dimensions packing arguments disjoint disks lift higher dimensions methods readily generalize particularly intriguing special case generalization case infinite straight lines said tspn set lines planes ptas general regions plane acknowledgements thank estie arkin michael bender several useful discussions tspn problem thank anonymous referees detailed comments suggestions greatly improved paper references arkin hassin approximation algorithms geometric covering salesman problem discrete appl arora nearly linear time approximation schemes euclidean tsp geometric problems acm asano ghosh shermer visibility plane handbook computational geometry sack urrutia editors pages elsevier science publishers amsterdam bentley fast algorithms geometric traveling salesman problems orsa berg gudmundsson katz levcopoulos overmars van der stappen tsp neighborhoods varying size proc annual european symposium algorithms appear september bottema geometric inequalities groningen berman karpinski tighter inapproximability results technical report eccc carlsson jonsson nilsson finding shortest watchman route simple polygon discrete comput geom garey johnson computers intractability guide theory freeman new york gudmundsson levcopoulos fast approximation algorithm tsp neighborhoods nordic gudmundsson levcopoulos hardness result tsp neighborhoods technical report department computer science lund university sweden jonsson traveling salesman problem lines plane inform process reinelt rinaldi traveling salesman problem network models handbook operations science ball magnanti monma nemhauser editors pages elsevier science amsterdam lawler lenstra rinnooy kan shmoys editors traveling salesman problem john wiley sons new york mata mitchell approximation algorithms geometric tour network design problems proc annu acm sympos comput pages megiddo linear programming linear time dimension fixed acm mitchell guillotine subdivisions approximate polygonal subdivisions simple approximation scheme geometric tsp related problems siam mitchell guillotine subdivisions approximate polygonal subdivisions part iii faster approximation schemes geometric network optimization manuscript university stony brook mitchell approximation algorithms geometric optimization problems proc ninth canadian conference computational geometry queen university kingston 
Canada, August.
Mitchell: Geometric shortest paths and network optimization. In: Handbook of Computational Geometry (Sack and Urrutia, editors), Elsevier Science Publishers, Amsterdam.
Papadimitriou: The Euclidean traveling salesman problem. Theoret. Comput. Sci.
Rademacher and Toeplitz: The Enjoyment of Mathematics. Princeton University Press; translation of "Von Zahlen und Figuren", Springer, Berlin.
Rao and Smith: Approximating geometrical graphs via "spanners" and "banyans". Proc. Annu. ACM Sympos. Theory of Computing.
Reinelt: Fast heuristics for large geometric traveling salesman problems. ORSA J. Comput.
Schwartz and Safra: On the complexity of approximating TSP with neighborhoods and related problems. Manuscript, submitted July.
Tan: Fast computation of shortest watchman routes in simple polygons. Inform. Process. Lett.
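Editorial note on the lines case discussed shortly before the references: the minimum touching circle of a set of lines is characterized there as the solution of a small linear program, which the paper solves in time linear in the number of lines via Megiddo's fixed-dimension LP algorithm (cited above). The sketch below only illustrates that program with a general-purpose solver; the function name and the example lines are mine, not the paper's, and each input line is assumed normalized so that \(a_i^2+b_i^2=1\).

```python
# Minimum-radius disk intersecting every line a_i x + b_i y + c_i = 0.
# Variables (x, y, r); minimize r subject to |a_i x + b_i y + c_i| <= r,
# written as two linear constraints per line.
import numpy as np
from scipy.optimize import linprog

def min_touching_circle(lines):
    lines = np.asarray(lines, dtype=float)
    a, b, off = lines[:, 0], lines[:, 1], lines[:, 2]
    ones = np.ones_like(a)
    A_ub = np.vstack([np.column_stack([a, b, -ones]),      #  a x + b y - r <= -c
                      np.column_stack([-a, -b, -ones])])   # -a x - b y - r <=  c
    b_ub = np.concatenate([-off, off])
    res = linprog(c=[0.0, 0.0, 1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None), (None, None), (0, None)])
    return res.x[:2], res.x[2]

# Example: the lines x = 0, y = 0 and x + y = 1; the optimum is the inscribed circle
# of the triangle they bound, center ~(0.293, 0.293), radius ~0.293.
center, radius = min_touching_circle([(1, 0, 0), (0, 1, 0),
                                      (np.sqrt(0.5), np.sqrt(0.5), -np.sqrt(0.5))])
```

For three lines in general position the optimum is the inscribed circle of the triangle they bound, matching the observation used in the approximation analysis.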
| 8 |
using trusted data train deep networks labels corrupted severe noise dan hendrycks mantas mazeika duncan wilson kevin gimpel feb abstract growing importance massive datasets advent deep learning makes robustness label noise critical property classifiers sources label noise include automatic labeling large datasets labeling label corruption data poisoning adversaries latter case corruptions may arbitrarily bad even bad classifier predicts wrong labels high confidence protect sources noise leverage fact small set clean labels often easy procure demonstrate robustness label noise severe strengths achieved using set trusted data clean labels propose loss correction utilizes trusted examples dataefficient manner mitigate effects label noise deep neural network classifiers across vision natural language processing tasks experiment various label noises several strengths show method significantly outperforms existing methods introduction robustness label noise set become increasingly important property supervised learning models advent deep learning need labeled data makes inevitable examples highquality labels especially true data sources admit automatic label extraction web crawling images tasks labels expensive produce semantic segmentation parsing additionally label corruption may arise data poisoning steinhardt natural malicious label corruption known sharply degrade performance classification systems zhu consider scenario access large set examples potentially corrupted labels determine equal contribution university chicago foundational research institute toyota technological institute chicago correspondence mantas mazeika mantas true glc forward confusion matrix figure label corruption matrix top left three matrix estimates corrupted dataset entry cij probability label class corrupted class cij estimate matches true corruption matrix closer confusion matrix forward method comparisons method descriptions section much gained access small set examples labels considered gold standard scenario realistic usually case number trusted examples gathered validation test sets could gathered necessary leverage additional information trusted labels propose new loss correction empirically verify number vision natural language datasets label corruption specifically demonstrate recovery extremely high levels label noise including dire case untrusted data majority labels corrupted severe corruption occur adversarial situations like data poisoning number classes large comparison loss corrections employ trusted data patrini method significantly using trusted data train deep networks accurate problem settings moderate severe label noise relative recent method also uses trusted data method far generally accurate results demonstrate systems weather label corruption access small number gold standard labels code available https related work performance machine learning systems reliant labeled data shown degrade noticeably presence label noise nettleton pechenizkiy case adversarial label noise degredation even worse reed accordingly modeling correcting learning noisy labels well studied natarajan biggio verleysen methods mnih hinton larsen patrini sukhbaatar allow label noise robustness modifying model architecture implementing loss correction unlike mnih hinton focus binary classification aerial images larsen assume labels symmetric noise labels independent patrini sukhbaatar consider label noise problem setting asymmetric labels sukhbaatar authors introduce stochastic matrix measuring label corruption note 
inability calculated without access true labels propose method forward loss correction forward loss correction adds linear layer end model loss adjusted accordingly incorporate learning label noise work patrini also make use forward loss correction mechanism propose estimate label corruption estimation matrix relies strong assumptions clean labels contra sukhbaatar patrini make assumption training model access small set clean labels use create label noise correction assumption leveraged others purpose label noise robustness notably veit xiao tenuously relates work field learning zhu chapelle veit labels used train label cleaning network estimating residuals noisy clean labels classification setting setting focus work propose distilling predictions model trained clean labels second network trained noisy labels predictions first work differs two train neural networks clean labels alone gold loss correction examples given untrusted dataset assume examples potentially corrupted examples true data distribution classes corruption according label noise distribution also given trusted dataset examples drawn refer trusted fraction concretely web scraper labeling images metadata may produce untrusted set examples would form trusted dataset gold standard leveraging trusted data focus investigation stochastic matrix correction approach used sukhbaatar patrini approach stochastic matrix applied softmax output classifier resulting new softmax output trained match noisy labeling stochastic matrix engineered approximate label noising procedure approach bring original output close distribution clean labels moderate assumptions explore two avenues utilizing trusted dataset improve approach first involves directly using trusted data training final classifier could applied existing stochastic matrix correction methods run ablation studies demonstrate effect second avenue involves using additional information conferred clean labels obtain better matrix use approach first approximation one could use normalized confusion matrix classifier trained untrusted dataset evaluated trusted dataset demonstrate however work well estimate used method describe method makes use estimate matrix corruption probabilities cij estimate obtained use train modified classifier recover estimate desired conditional distribution call method gold loss correction glc named make use trusted gold standard labels estimating corruption matrix estimate probabilities make use identity using trusted data train deep networks left hand side equality approximated let parameters training neural network network let softmax output vector given example true label term right reduces case conditionally independent given reduces case conditionally independent given still approximate know forces integrating gives approximate integral left expectation empirical distribution given explicitly let subset label denote estimate bij estimate corruption matrix glc second equality comes noting known preceding discussion implies approximation relies good estimate number trusted examples class training corrected classifier follow method sukhbaatar patrini train corrected classifier given softmax output classifier reinitialize define new outputs train model noisy labels crossentropy loss conditionally independent given nonsingular follows given perfect estimate find using work well practice even singular corruption matrices improve method using data trusted set train corrected classifier examples trusted set encountered training temporarily set identity matrix turn 
correction effect allowing label correction handle degree label noise menon summary method algorithm algorithm old oss orrection glc loss input trusted data untrusted data loss train network zeros number classes fill add kth column end end initialize new model train output model experiments empirically demonstrate glc variety datasets architectures several types label noise description generating corrupted labels suppose dataset examples sample set datapoints probabilistically remaining examples form corrupt according true corruption matrix note knowledge untrusted examples corrupted know potentially corrupted generate untrusted labels true labels first obtain corruption matrix example true label sample corrupted label categorical distribution parameterized ith row comparing loss correction methods glc differs previous loss corrections label noise reasonably assumes access annotation source therefore compare loss correction methods ask method performs starting dataset label noise words additional information method uses knowledge examples trusted potentially using trusted data train deep networks uniform trusted corruption strength imdb flip trusted corruption strength trusted corruption strength test error corruption strength uniform trusted corruption strength flip trusted test error test error corruption strength sst flip trusted test error mnist flip trusted test error test error test error glc distillation forward gold forward correction test error flip trusted corruption strength corruption strength figure error curves compared methods across range corruption strengths different datasets corrupted datasets architectures noise corrections mnist mnist dataset contains grayscale images digits training set images test set images preprocessing rescale pixels unit range train fully connected network dimension network optimized adam epochs using batches size learning rate regularization use weight decay layers cifar two cifar datasets contain color images ten classes classes superclasses partition classes semantically similar sets use superclasses hierarchical noise datasets training images testing images datasets train wide residual network zagoruyko komodakis depth train epochs using widening factor stochastic gradient descent restarts loshchilov hutter imdb imdb large movie reviews dataset maas contains highly polarized movie reviews internet movie database split evenly train test sets pad clip reviews length tokens learn word vectors scratch vocab size train lstm hidden dimensions data train using adam optimizer kingma epochs batch size suggested learning rate regularization use dropout srivastava linear output layer keep probability twitter twitter part speech dataset gimpel contains tweets annotated pos tags training set tweets test set use pretrained word vectors token concatenate word vectors fixed window centered token form training test set use window size train fully connected network hidden size use nonlinearity hendrycks gimpel train using adam optimizer epochs batch size learning rate regularization use weight decay linear output layer sst stanford sentiment treebank dataset consists single sentence movie reviews reviews training set test set use binarized labels sentiment classification moreover pad clip reviews length tokens learn word vectors scratch vocab size classifier model affine output layer use adam optimizer epochs batch size learning rate regularization mnist using trusted data train deep networks corruption type uniform uniform uniform flip flip flip mean uniform 
uniform uniform flip flip flip mean uniform uniform uniform flip flip flip hierarchical hierarchical hierarchical mean percent trusted trusted forward correction correction forward gold distillation confusion correction matrix glc table vision dataset results percent trusted trusted fraction multiplied unless otherwise indicated values percentages representing area error curve computed test points best mean result shown bold use weight decay output layer forward loss correction forward correction trainmethod patrini also obtains ing classifier noisy labels using resulting softmax probabilities however method make use trusted fraction training data instead uses argmax percentile softmax probabilities given class heuristic detecting example truly member said class original paper replace argmax softmax probabilities given class experiments estimate used train corrected classifier way glc forward gold examine effect training trusted labels done glc augment forward estimate identity method replacing trusted examples refer resulting method forward gold seen intermediate method forward glc distillation distillation method involves training neural network large trusted dataset using network provide soft targets untrusted data way labels distilled neural network classifier decisions untrusted inputs less reliable original noisy labels network utility limited thus obtain reliable neural network large trusted dataset necessary new classifier trained using labels convex combination soft targets original untrusted labels uniform flip hierarchical corruption matrices consider three types corruption matrices corrupting uniformly classes cij flipping label different class corrupting uniformly classes semantically similar order create uniform corruption different strengths take convex combination identity matrix matrix refer coefficient corruption strength uniform corruption flip corruption strength involves row giving column probability mass entries along diagonal probability mass twitter imdb sst using trusted data train deep networks corruption type uniform uniform uniform flip flip flip mean uniform uniform uniform flip flip flip mean uniform uniform uniform flip flip flip mean percent trusted trusted correction forward forward gold distillation confusion matrix glc table nlp dataset results percent trusted trusted fraction multiplied unless otherwise indicated values percentages representing area error curve computed test points best mean result bolded nally realistic corruption hierarchical corruption corruption apply uniform corruption semantically similar classes example bed may corrupted couch beaver examples deemed semantically similar share superclass coarse label specified dataset creators experiments analysis results train models described section uniform hierarchical label corruptions various fractions trusted data dataset assess performance glc compare loss correction methods two baselines one train network trusted data without label corrections one network trains data without label corrections additionally report results variant glc uses normalized confusion matrices elaborate discussion record errors test sets corruption strengths since compute model accuracy numerous corruption strengths cifar experiments involves training wide residual networks tables report area error curves across corruption strengths baselines corrections sample error curves displayed figure across experiments glc obtains better area error curve forward distillation methods rankings methods baselines mixed mnist 
training trusted data alone outperforms methods save glc confusion matrix performs significantly worse even large trusted fractions interestingly forward gold performs worse forward several datasets observe behavior turning corresponding component glc believe may due variance introduced training difference signal provided forward method estimate clean labels glc provides superior estimate thus may better able leverage training clean labels additional results svhn supplementary materials weak classifier labels next benchmark glc use noisy labels obtained weak classifier models scenario label noise arising classification system weaker one access information true labels one wishes transfer one system example scraping image labels surrounding text web pages provides valuable signal labels would train classifier without correcting label noise using trusted data train deep networks percent trusted mean mean trusted correction forward forward gold distillation confusion matrix glc table results obtaining noisy labels sampling softmax distribution weak classifier percent trusted trusted fraction multiplied unless otherwise indicated values percent error attained indicated correction best average result dataset shown bold weak classifier label generation obtain labels train wide residual networks clean labels ten epochs sample softmax distributions temperature fix resulting labels results noisy labels use place labels obtained uniform flip hierarchical corruption methods weak classifiers obtain accuracies despite presence highly corrupted labels able significantly recover performance use trusted set note unlike previous corruption methods weak classifier labels one corruption strength thus performance measured percent error rather area error curve results displayed table analysis results overall glc outperforms methods weak classifier label experiments distillation method performs better glc small margin highest trusted fraction performs worse lower trusted fractions indicating glc enjoys superior data efficiency highlighted glc attaining error rate trusted fraction original error rate noted however training correction attains error experiment suggesting weak classifier labels low bias improvement conferred glc significant higher trusted fractions discussion future directions confusion matrices intuitively reasonable alternative glc estimate confusion matrix one would train classifier untrusted examples obtain confusion matrix trusted examples rownormalize matrix train corrected classifier glc however glc far method estimating particular classes confusion matrix requires least trusted examples estimate entries whereas glc requires trusted examples another problem using confusion matrices normalized confusion matrices give biased estimate limit due using argmax class scores rather randomly sampling class leads vastly overestimating value dominant entry row seen figure correspondingly found glc outperforms confusion matrices significant margin across nearly experiments smaller gap performance datasets number classes smaller results displayed main tables also found smoothing normalized confusion matrices necessary stabilize training data efficiency seen glc works small trusted fractions corroborate data efficiency turning dataset xiao massive dataset humanannotated noisy labels use compare data efficiency glc distillation trusted labels present dataset consists million noisily labeled clothing images obtained crawling online marketplaces images humanannotated examples take subsamples trusted set glc 
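For readers who want a concrete reference point for the method analysed in these experiments, here is a minimal PyTorch-style sketch of the Gold Loss Correction as I read the method section and Algorithm 1. It is a paraphrase under that reading, not the authors' released code, and the helper names (estimate_corruption_matrix, glc_loss, trusted_loader) are illustrative only.

```python
# Sketch of GLC: (1) train a model on the untrusted (noisy) labels; (2) estimate the corruption
# matrix C_hat[k, j] ~ p(noisy label = j | true label = k) by averaging that model's softmax
# outputs over trusted examples with gold label k; (3) train a new model whose softmax output is
# multiplied by C_hat before the cross-entropy with the noisy labels, while trusted examples are
# trained against their gold labels with the identity used in place of C_hat.
import torch
import torch.nn.functional as F

def estimate_corruption_matrix(model, trusted_loader, num_classes, device="cpu"):
    C = torch.zeros(num_classes, num_classes, device=device)
    counts = torch.zeros(num_classes, device=device)
    model.eval()
    with torch.no_grad():
        for x, y_gold in trusted_loader:                      # gold (clean) labels
            probs = F.softmax(model(x.to(device)), dim=1)
            y_gold = y_gold.to(device)
            for k in range(num_classes):
                mask = (y_gold == k)
                C[k] += probs[mask].sum(dim=0)
                counts[k] += mask.sum()
    return C / counts.clamp(min=1).unsqueeze(1)               # row-normalized estimate of C

def glc_loss(logits, labels, trusted_mask, C_hat):
    # `labels` holds gold labels where trusted_mask is True and noisy labels elsewhere.
    probs = F.softmax(logits, dim=1)
    noisy_probs = probs @ C_hat                               # predicted distribution over noisy labels
    out = torch.where(trusted_mask.unsqueeze(1), probs, noisy_probs)
    return F.nll_loss(torch.log(out.clamp(min=1e-12)), labels)
```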
distillation first pretrained resnet untrusted training examples four epochs use estimate corruption matrix thereafter network four epochs combined trusted untrusted sets using respective method fine tuning freeze first seven layers train using gradient descent nesterov momentum cosine learning rate schedule preprocessing randomly crop resolution use mirroring also upsample trusted dataset finding give better performance methods percent accuracy using trusted data train deep networks distillation glc number trusted examples figure data efficiency method compared distillation shown figure glc outperforms distillation large margin especially lower numbers trusted examples distillation requires classifier trusted data alone generalizes poorly examples contrast estimating matrix done examples correspondingly find advantage decreases number trusted examples increases trusted labels performance saturates evident figure consider extreme train entire trusted set resnext xie untrusted training examples estimate corruption matrix resnext training examples use gradient descent nesterov momentum first two epochs tune output layer learning rate thereafter tune whole network learning rate two epochs another two epochs apply loss correction entire network learning rate two epochs continue training based upon validation set previous work xiao obtain setting however method obtains accuracy procedure forward method obtains accuracy estimation datasets classifier improving may poor estimate presenting glc see extent bottleneck estimation could impact performance whether simple methods improving could help ran several variants glc experiment label flipping corruption trusted fraction describe variants averaged area error curve five random initializations first variant replaced glc estimate true corruption matrix used generating noisy labels demonstrated guo modern deep neural network classifiers tend overconfident softmax distributions found case estimate despite higher entropy noisy labels used temperature scaling confidence calibration method proposed paper calibrate suppose know base rates corrupted labels base rate true labels corrupted trusted set posit labels thus may obtain superior estimate corruption matrix argmin kbt computing new estimate subject found using true corruption matrix provides benefit percentage points area error curve neither confidence calibration base rate incorporation able change performance original glc indicates glc robust use uncalibrated networks estimating improving performance may difficult without directly improving performance neural network used estimate better performance corruption uniform corruption use experiments example corruption sense mutual information zero corruption strength equals found training trusted dataset resulted superior performance corruption setting especially twitter indicates may possible devise loss trusted untrusted examples using information theoretic would improve performance measures obtained regimes conclusion work shown impact small set trusted examples classifier label robustness proposed gold loss correction glc method handling label noise method leverages assumption model access small set correct labels yield accurate estimates noise distribution experiments glc surpasses previous label robustness methods across various natural language processing vision domains showed considering several using trusted data train deep networks corruptions numerous strengths consequently glc powerful label corruption correction references biggio nelson 
laskov support vector machines adversarial label noise acml chapelle olivier schlkopf bernhard zien alexander learning mit press edition verleysen michel classification presence label noise survey ieee trans neural netw learn syst may gimpel kevin schneider nathan connor brendan das dipanjan mills daniel eisenstein jacob heilman michael yogatama dani flanigan jeffrey smith noah tagging twitter annotation features experiments proceedings annual meeting association computational linguistics human language technologies short papers volume hlt stroudsburg usa association computational linguistics guo chuan pleiss geoff sun weinberger kilian calibration modern neural networks corr url http hendrycks dan gimpel kevin bridging nonlinearities stochastic regularizers gaussian error linear units june kingma diederik jimmy adam method stochastic optimization corr url http larsen nonboe hansen design robust neural network classifiers acoustics speech signal processing proceedings ieee international conference volume may wang yining singh aarti vorobeychik yevgeniy data poisoning attacks collaborative filtering corr url http yuncheng yang jianchao song yale cao liangliang luo jiebo jia learning noisy labels distillation corr url http loshchilov ilya hutter frank sgdr stochastic gradient descent restarts corr url http maas andrew daly raymond pham peter huang dan andrew potts christopher learning word vectors sentiment analysis proceedings annual meeting association computational linguistics human language technologies menon aditya krishna van rooyen brendan natarajan nagarajan learning binary labels instancedependent corruption corr url http mnih volodymyr hinton geoffrey learning label aerial images noisy data proceedings international conference machine learning natarajan nagarajan dhillon inderjit ravikumar pradeep tewari ambuj learning noisy labels burges bottou welling ghahramani weinberger eds advances neural information processing systems curran associates nettleton david albert fornells albert study effect different types noise precision supervised learning techniques artif intell rev april patrini giorgio rozza alessandro menon aditya nock richard lizhen making deep neural networks robust label noise loss correction approach september pechenizkiy tsymbal puuronen pechenizkiy class noise supervised learning medical domains effect feature extraction ieee symposium medical systems cbms reed scott lee honglak anguelov dragomir szegedy christian erhan dumitru rabinovich andrew training deep neural networks noisy labels bootstrapping december srivastava nitish hinton geoffrey krizhevsky alex sutskever ilya salakhutdinov ruslan dropout simple way prevent neural networks overfitting journal machine learning research steinhardt jacob koh pang wei liang percy certified defenses data poisoning attacks nips sukhbaatar sainbayar bruna joan paluri manohar bourdev lubomir fergus rob training convolutional networks noisy labels june using trusted data train deep networks veit andreas alldrin neil chechik gal krasin ivan gupta abhinav belongie serge learning noisy datasets minimal supervision corr url http xiao tong xia tian yang huang chang wang xiaogang learning massive noisy labeled data image classification ieee conference computer vision pattern recognition cvpr june xie saining girshick ross piotr zhuowen kaiming aggregated residual transformations deep neural networks arxiv preprint zagoruyko sergey komodakis nikos wide residual networks may zhu learning literature survey zhu xingquan xindong class 
noise attribute noise quantitative study artificial intelligence review november using trusted data train deep networks svhn mnist additional results figures corruption type uniform uniform uniform flip flip flip mean uniform uniform uniform flip flip flip mean uniform uniform uniform flip flip flip mean uniform uniform uniform flip flip flip hierarchical hierarchical hierarchical mean percent trusted trusted forward correction correction forward gold distillation confusion correction matrix glc table vision dataset results percent trusted trusted fraction multiplied unless otherwise indicated values percentages representing area error curve computed test points best mean result shown bold twitter imdb sst using trusted data train deep networks corruption type uniform uniform uniform flip flip flip mean uniform uniform uniform flip flip flip mean uniform uniform uniform flip flip flip mean percent trusted trusted correction forward forward gold distillation confusion matrix glc table nlp dataset results percent trusted trusted fraction multiplied unless otherwise indicated values percentages representing area error curve computed test points best mean result bolded percent trusted mean mean trusted correction forward forward gold distillation confusion matrix glc table results obtaining noisy labels sampling softmax distribution weak classifier percent trusted trusted fraction multiplied unless otherwise indicated values percent error attained indicated correction best average result dataset shown bold using trusted data train deep networks corruption strength uniform trusted corruption strength flip trusted corruption strength uniform trusted distillation forward method forward gold confusion data corruption strength flip trusted corruption strength test error distillation forward method forward gold confusion data corruption strength flip trusted uniform trusted corruption strength hierarchical trusted corruption strength mnist uniform trusted distillation forward method forward gold confusion data corruption strength flip trusted corruption strength mnist uniform trusted corruption strength uniform trusted distillation forward method forward gold confusion data corruption strength flip trusted corruption strength corruption strength mnist uniform trusted distillation forward method forward gold confusion data corruption strength corruption strength mnist flip trusted corruption strength mnist flip trusted test error test error test error test error test error test error mnist flip trusted corruption strength hierarchical trusted corruption strength test error uniform trusted test error test error test error test error test error hierarchical trusted test error test error test error flip trusted test error test error distillation forward method forward gold confusion data test error test error uniform trusted test error corruption strength corruption strength using trusted data train deep networks corruption strength svhn uniform trusted distillation forward method forward gold confusion data corruption strength svhn uniform trusted corruption strength corruption strength svhn flip trusted twitter uniform trusted corruption strength imdb uniform trusted distillation forward method forward gold confusion data corruption strength svhn flip trusted distillation forward method forward gold confusion data corruption strength twitter flip trusted corruption strength imdb uniform trusted corruption strength sst uniform trusted corruption strength twitter uniform trusted test error test error 
corruption strength twitter flip trusted corruption strength sst uniform trusted corruption strength twitter uniform trusted test error test error corruption strength twitter flip trusted corruption strength sst uniform trusted corruption strength test error test error test error corruption strength imdb flip trusted corruption strength sst flip trusted corruption strength sst flip trusted corruption strength imdb flip trusted corruption strength imdb flip trusted test error test error test error test error distillation forward method forward gold confusion data test error distillation forward method forward gold confusion data test error test error test error test error test error test error imdb uniform trusted test error test error distillation forward method forward gold confusion data test error test error svhn flip trusted test error svhn uniform trusted test error corruption strength sst flip trusted corruption strength corruption strength
| 9 |
feb algebraic approach openness conjecture demailly mattias jonsson mircea abstract reduce openness conjecture demailly singularities plurisubharmonic functions purely algebraic statement contents introduction background plurisubharmonic functions proof main results references introduction paper study singularities plurisubharmonic psh functions important complex analytic geometry see instance specifically study openness conjecture demailly reduce conjecture purely algebraic statement let germ psh function point complex manifold easy see set real numbers exp locally integrable interval nonempty result skoda openness conjecture see remark asserts interval open define complex singularity exponent sup exp locally integrable conjecture stated follows conjecture function exp locally integrable date april key words phrases plurisubharmonic function graded sequence log canonical threshold valuation mathematics subject classification primary secondary first author partially supported nsf grant second author partially supported nsf grant mattias jonsson mircea fact demailly made following slightly precise conjecture easily implies conjecture see remark conjecture every open neighborhood defined estimate vol log throughout paper write exists sufficiently small demailly also proved conjecture implies stronger openness statement namely local integrability exp open condition respect topology see conjecture conjectures easily verified dimension one proof twodimensional case given higher dimensions open paper reduce conjecture purely algebraic conjecture conjecture let algebraically closed field characteristic zero let graded sequence ideals polynomial ring maximal ideal exists quasimonomial valuation computes lct let briefly explain meaning terms see details log canonical threshold lct ideal algebrogeometric analogue complex singularity exponent graded sequence ideals sequence always assume nonzero define lct sup lct lim lct limit nonzero similarly valuation set sup lim limit nonzero one show lct inf infimum quasimonomial valuations valuations monomial suitable coordinates suitable blowup spec log discrepancy finally say quasimonomial valuation computes lct infimum achieved conjecture holds dimension two see higher dimensions open main result theorem conjecture holds dimension algebraically closed field characteristic zero conjecture holds dimension openness conjecture fact prove slightly general result log canonical threshold replaced general jumping numbers sense see theorem result following consequence let psh function complex manifold recall multiplier ideal analytic ideal sheaf whose stalk point given set holomorphic germs locally integrable coherent ideal sheaf define increasing locally stationary limit show suitable generalization conjecture implies see remark note one formulate version conjecture general setting dealing arbitrary graded sequences regular excellent connected schemes shown general conjecture follows special case conjecture one also formulate similar conjecture subadditive sequences recall sequence nonzero ideals subadditive case graded sequences define lct valuation consider whether computes lct say controlled growth quasimonomial valuations left inequality obvious subadditive systems usually arise multiplier ideals controlled growth see proposition also proposition show conjecture implies fact equivalent following statement conjecture let subadditive sequence ideals excellent regular domain equicharacteristic zero controlled growth maximal ideal positive integer mpj exists 
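Many of the displayed formulas in this introduction lost their symbols in extraction. For the reader's convenience, here is a rendering, in standard notation, of the asymptotic invariants attached to a graded sequence of ideals \(\mathfrak{a}_\bullet=(\mathfrak{a}_m)_{m\ge 1}\) as they are used throughout; this is my reconstruction of the usual conventions and should be checked against the original displays.
\[
v(\mathfrak{a}_\bullet)\;=\;\inf_{m\ge 1}\frac{v(\mathfrak{a}_m)}{m}\;=\;\lim_{m\to\infty}\frac{v(\mathfrak{a}_m)}{m},
\qquad
\operatorname{lct}(\mathfrak{a}_\bullet)\;=\;\sup_{m\ge 1} m\cdot\operatorname{lct}(\mathfrak{a}_m)\;=\;\lim_{m\to\infty} m\cdot\operatorname{lct}(\mathfrak{a}_m),
\]
the first pair of equalities coming from Fekete's lemma applied to the subadditive sequence \(m\mapsto v(\mathfrak{a}_m)\); moreover
\[
\operatorname{lct}(\mathfrak{a}_\bullet)\;=\;\inf_{v}\frac{A(v)}{v(\mathfrak{a}_\bullet)},
\]
the infimum being over nontrivial (equivalently, quasimonomial, or even divisorial) valuations, with \(A(v)\) the log discrepancy. A quasimonomial valuation is said to compute \(\operatorname{lct}(\mathfrak{a}_\bullet)\) when it achieves this infimum.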
quasimonomial valuation computes lct form conjecture use proof theorem let indicate strategy proof suppose psh germ point complex manifold associate sequence ideals letting analytic multiplier ideal psh function follows subadditive using techniques due demailly one show controlled growth singularities closely approximate complex singularity exponent lower semicontinuous function point define log canonical locus germ analytic set defined assume smooth fact restriction let localization ring holomorphic germs prime ideal defined since latter ring excellent theorem subadditive sequence ideals excellent regular local ring applying conjecture conclude exists mattias jonsson mircea quasimonomial valuation computing lct valuation monomial suitable algebraic coordinates regular scheme admitting projective birational map spec analytify latter map interpret quasimonomial valuation analytic invariant kiselman number using basic properties psh functions kiselman numbers obtain desired volume estimates conjecture already mentioned conjecture holds dimension two obtain new proof openness conjecture dimension two fact proof quite similar one strategy loc cit consider subspace semivaluations satisfying maximal ideal one equip natural topology compact hausdorff also structure tree studied detail see also psh germ one associate lower semicontinuous function whose minimum equal turns minimum must occur semivaluation either quasimonomial associated germ analytic curve cases one deduce volume estimate conjecture using simplified version arguments higher dimensions analogue space studied shown computed using quasimonomial valuations however isolated singularity origin seems difficult define suitable lower semicontinuous functional directly minimum equal idea instead work generic point log canonical locus quite make sense analytic category reason pass algebraic arguments using subadditive sequence algebraic category localization arguments work quite well extensively used idea studing psh functions using valuations systematically developed appears already work lelong kiselman recent work singularities psh functions see also paper organized follows review facts sequences ideals log canonical thresholds algebraic setting adapt statements setting complex analytic manifolds also prove equivalence conjectures discuss plurisubharmonic functions kiselman numbers multiplier ideal sheaves demailly approximation procedure finally main results proved acknowledgment thank rashkovskii spotting mistake proof theorem earlier version paper also thank referee careful reading several useful remarks background algebraic setting start recalling basic algebraic facts details refer even though much follows standard material let excellent regular domain equicharacteristic zero openness conjecture cases consider localization prime ideal ring germs holomorphic functions point complex manifold quasimonomial valuations valuation mean rank valuation valuation divisorial exists projective birational morphism spec regular prime divisor positive number orde orde denotes order vanishing along generally consider projective birational morphism spec regular reduced simple normal crossing divisor subset irreducible component nonnegative numbers zero unique valuation following holds local coordinates generic point write either min call valuation quasimonomial say morphism spec adapted general unique hand always choose rationally independent see lemma finally note quasimonomial valuations also known abhyankar valuations see log discrepancy using 
notation define log discrepancy quasimonomial valuation orddj spec relative canonical divisor particular orddj orddj one show log discrepancy quasimonomial valuation depend choices made see furthermore one extend definition log discrepancy arbitrary valuations case log discrepancy infinite see log canonical thresholds jumping numbers proper nonzero ideal define log canonical threshold lct inf infimum nonzero valuations enough consider quasimonomial even divisorial valuations quantity arn opposed convention consider trivial valuation identically zero quasimonomial mattias jonsson mircea lct called arnold multiplicity generally nonzero ideal define lctq inf arnq lctq lctq jumping number sense jumping numbers appear way note lct lctr infimum resp attained divisorial valuation associated prime divisor log resolution resp make convention lctq respectively graded sequences recall definitions asymptotic invariants graded sequences ideals proofs details refer see also sequence ideals graded sequence example valuation positive real number putting obtain graded sequence refer laz examples graded sequences ideals assume graded sequences nonzero sense nonzero follows definition graded sequence ideals valuation fekete lemma subadditivity property implies inf lim limit nonzero similarly nonzero ideal lctq sup lctq lim lctq limit nonzero also put arnq lctq jumping number lctq positive may infinite one show see corolloray case one ideal lctq inf infimum nonzero valuations enough fact consider quasimonomial even divisorial valuations subadditive sequences let review corresponding notions case subadditive sequences referring details sequence nonzero ideals called subadditive implies valuations hence sup lim openness conjecture subadditive sequence controlled growth quasimonomial valuation fact enough impose condition divisorial valuations particular every quasimonomial valuation every subadditive system every nonzero ideal define lctq inf lctq lim lctq also put arn lct every subadditive sequence lctq inf infimum valuations see corollary clear definition lctq unless moreover controlled growth lctq indeed one easily see arnq arnq arnq arn subadditive sequences arise algebraically asymptotic multiplier ideals graded sequence ideals asymptotic multiplier ideal exponent subadditive sequence controlled growth see proposition furthermore lctq lctq every nonzero ideal every valuation see proposition computing jumping numbers graded sequences graded sequence ideals nonzero ideal say nonzero valuation computes lctq achieves infimum lctq note lct every hence every computes lctq follows focus case lctq every valuation computes lctq must satisfy shown theorem every graded sequence every nonzero ideal valuation computes lctq one contrast conjecture every excellent regular domain equicharacteristic zero every graded sequence ideals every nonzero ideal quasimonomial valuation computes lctq one also consider following special case conjecture conjecture algebraically closed field characteristic zero every nonzero ideal every graded sequence ideals maximal ideal quasimonomial valuation computes lctq mattias jonsson mircea note ideal equal conjecture specializes conjecture introduction theorem conjecture holds rings dimension conjecture holds rings dimension note conjectures trivially true dimension one also true dimension two proof modeled ideas given computing jumping numbers subadditive sequences turn analogous considerations subadditive sequences sequence valuation computes lctq achieves infimum following conjecture 
extends conjecture introduction case arbitrary jumping numbers conjecture let nonzero ideal subadditive sequence ideals excellent regular domain equicharacteristic zero controlled growth maximal ideal positive integer mpj exists quasimonomial valuation computes lctq key requirement conjecture valuation quasimonomial next proposition shows drop requirement find valuation computing log canonical threshold analogue corresponding result graded sequences mentioned proposition assumptions conjecture nonzero valuation computes lctq remark similar result appears see also proof argument follows verbatim proof thm treated case graded lctq assertion trivial may take quasimonomial valuation since case hence may assume lctq assumption therefore need focus valuations normalizing may assume let fix arnq suppose every hence therefore implies thus lctq inf set valuations space valuations carries natural topology subspace compact proposition moreover lower semicontinuous comparison note ring equicharacteristic zero iff spec scheme proof thm involves extra step reduction case ideals maximal ideal case worry step since part hypothesis openness conjecture function functions continuous proposition corollary last assertion make use hypothesis controlled growth follows function lower semicontinuous hence achieves infimum point equivalence conjectures main result conjecture implies openness conjecture see theorem first step show conjectures equivalent proposition one conjectures holds rings dimension two conjectures hold rings proof already mentioned theorem gives equivalence conjectures hand easy see conjecture holds dimension conjecture indeed let conjecture let case subadditive sequence controlled growth every mpj furthermore quasimonomial valuation computes lctq since lctq lctq follows computes lctq therefore conjecture holds dimension assume conjecture holds dimension consider nonzero subadditive sequence conjecture dim may assume lctq since otherwise assertion proved trivial note also lctq since controlled growth follows proposition nonzero valuation computes lctq particular finite positive put graded sequence ideals conjecture quasimonomial valuation computes lctq enough show case also computes lctq follows easily definition inf see example lemma first deduce lctq hence furthermore since computes lctq hand using fact obtain therefore gives assumption computes lctq hence equality also computes lctq mattias jonsson mircea remark running argument proof proposition see conjectures introduction equivalent sense one holds rings dimension one converse conjectures partial converse equivalent conjectures show quasimonomial valuation computes jumping number result used sequel formulation proof freely use terminology proposition let excellent regular connected separated scheme let quasimonomial valuation exists nonzero ideal graded sequence computes lctq proof thm suffices find nonzero ideal following statement holds every valuation valx sense ideals log discrepancy respect replacing open neighborhood center may assume spec affine since quasimonomial exists proper birational morphism regular algebraic local coordinates respect monomial let associated prime divisors pick large enough ordei also write nonzero claim principal ideal job indeed suppose valx satisfies particular since monomial coordinates ordei definition also ordei equations definition imply ordei ordei ordei last inequality follows choice completes proof openness conjecture remark follows thm choice also computes lctq subadditive sequence well lctq graded 
sequence defined analytic setting let complex manifold talking open sets always refer classical topology unless mentioned otherwise ideal always mean coherent analytic ideal sheaf point denotes ring germs holomorphic functions note isomorphic ring convergent power series variables dim hence excellent regular local ring see thm denote maximal ideal valuation mean valuation sense subadditive sequence ideals sequence everywhere nonzero ideals write corresponding subadditive sequence inside say controlled growth controlled growth everywhere nonzero ideal subadditive sequence ideals define lctqx arnqx lctqx thus arnqx sup supremum quasimonomial valuations note simply write respectively generally shall consider following situation let germ complex submanifold point complex manifold let localization along ideal excellent regular local ring maximal ideal consider subadditive system ideals defined near nonzero ideal set lctqx arnqx lctqx supremum quasimonomial valuations note recover previous situation arnqx sup analytification birational morphisms let consider projective birational morphism spec schemes regular analytify since projective exists closed embedding spec restriction projection right hand side onto spec thus cut finitely many homogeneous equations analytification procedure hoc functorial nevertheless related construction complex manifold associated smooth complex projective variety mattias jonsson mircea coefficients coefficients written let neighborhood defined set possibly empty analytic subset contain shrinking may assume either empty contains define complex manifold analytic subset cut equations complex manifold induced projection proper modification shall say construction later given point define way projective birational morphism spec namely spec defined homogeneous polynomials shrinking increasing keeping may obtain prime divisor image contains suppose ideal spec log resolution may assume birational morphism spec log resolution bijection set prime divisors orde set prime divisors orde log canonical locus key proof theorem localize locus log canonical threshold small possible let let resp subadditive system ideals resp nonzero ideal defined neighborhood assume small enough submanifold lemma assume controlled growth lctqy lctqx lctqx proof let first prove lctqx fix pick log resolution spec ideal qbm base change spec spec induces log resolution ideal qbm orde arnqx max orde orde maximum set prime divisors orde hand arnqx given expression maximum subset prime divisors contains clear arnqx arnqx dividing letting yields arnqx arnqx hence lctqx lctqx prove reverse inequality pick consider log resolution spec ideal qbm gives rise open neighborhood analytic subset containing openness conjecture log resolution spec qbm bijection set prime divisors orde set prime divisors orde implies arnqy arnqx since quantities calculated using divisors thus arnqy arnqx arnqx inequality definitional exists quasimonomial even divisorial valuation arnqy gives arny arnqx arnqy second inequality follows assumption controlled growth letting first obtain arnqx hence lctqx completes proof plurisubharmonic functions let complex manifold function plurisubharmonic psh connected component upper semicontinuous subharmonic every holomorphic map unit disc germ psh function point defined obvious way basic example psh function log maxi holomorphic functions ideal define psh germ log setting log log max generators choice generators affects log bounded additive term two ideals log log log facts psh functions see dem 
jumping numbers singularity exponents let psh germ point complex manifold let nonzero ideal define cqx sup exp locally integrable definition depend choice generators used define also write called complex singularity mattias jonsson mircea exponent whereas cqx jumping number sense nonzero ideal cqx log lctqx right hand side defined see proposition following generalizations conjectures conjecture cqx function exp locally integrable conjecture also due demailly paraphrased semicontinuity statement multiplier ideals see remark conjecture cqx open neighborhood defined vol log log clear conjecture implies conjecture neither conjecture depends choice generators following result variation theorem introduction shall prove theorems theorem conjecture holds algebraically closed fields characteristic zero conjecture holds complex manifolds dimension kiselman numbers recall analytic version monomial valuations due kiselman special case generalized lelong numbers introduced demailly setting differs slightly references give details convenience reader let complex manifold dimension connected submanifold codimension distinct smooth connected hypersurfaces meet transversely along also suppose given positive real numbers situation associate psh function kiselman number preparation definition pick point local analytic coordinates locally let polydisc radius distinguished boundary eti eti also write let psh germ pick small enough defined open neighborhood polydisc log set sup sup loc cit case treated proof works general case openness conjecture clearly increasing argument since upper semicontinuous less obvious fact convex see note continuous closed set since defined convex open neighborhood set define new function setting log limit well defined convexity lim lemma function following properties nonnegative continuous concave increasing argument depend choice long defined open neighborhood log iii depend choice local coordinates long depend choice point long defined neighborhood proof alleviate notation shall write relevant part subscripts fact nonnegative continuous concave increasing follows continuous convex increasing clear proves suppose clear since increasing prove reverse inequality first suppose set mini log log log log log log implies continuity get hence turn iii suppose let another set local analytic coordinates write easy see small enough exists log log gives reverse inequality follows symmetry thus iii holds mattias jonsson mircea finally prove thus pick point set local coordinates pick log defines local coordinates log log log log implies similarly thus locally constant completes proof since connected assume number called kiselman number along weight along explained lemma depend choice coordinates defining hypersurfaces however given coordinates follows convexity estimate log max near inequality easily deduce lemma suppose psh functions defined near zariski general point write near max min proof inequality follows immediately definition note implies max min reverse inequality follows remark also true need fact remark using construction define germs complex submanifolds point complex manifold remark choice hypersurfaces play role case kiselman number equal lelong number along kiselman numbers quasimonomial valuations let complex manifold point germ complex submanifold allow case assume codimension let localization ideal let maximal ideal let quasimonomial valuation want associate kiselman number suitable modification consider projective birational morphism spec adapted pthe sense thus exist prime 
divisors simple normal crossing singularities irreducible component kiselman number called refined lelong number whereas demailly calls directional lelong number openness conjecture monomial weight assumption implies let generic point pick functions thus functions regular zariski open subset using construction conventions shrinking little projective birational morphism spec gives rise complex manifold proper modification complex subvariety containing shrinking increasing necessary exists open subset functions holomorphic following properties hold sets dian complex submanifolds oftcodimension one meeting transversely along connected submanifold dian let dan denote kiselman number respect data see definition germ psh function define note definition priori depends lot choices made however proposition definition depend choices made long birational morphism spec adapted shall prove result using multiplier ideals see remark treat following special case lemma nonzero ideal log proof note sides depend continuously weight hence may assume rationally independent view lemma may also assume generated single element must prove log consider zariski general closed point pick functions define local algebraic coordinates write consider expansion formal power series mattias jonsson mircea since rationally independent exists unique minimizing definition since point generically chosen corresponds point also denoted complex manifold may assume holomorphic near pick holomorphic open polydisk first series taylor series holomorphic function analytic coordinates series converges locally uniformly polydisk kuk every series converges locally uniformly holomorphic function second series converges locally uniformly kuk assumption holomorphic function constantly equal zero moving keeping little translating coordinates accordingly may assume let use notation log log log set log implies log shown multiplier ideal sheaves demailly approximation psh function complex manifold associated multiplier ideal sheaf ideal sheaf whose stalk point set holomorphic germs locally integrable coherence nontrivial result due nadel proved using see thm recall definition jumping number cqx relative ideal given consider colon ideal ideal sheaf whose stalk point given locally integrable since coherent lemma cqx iff consequence function cqx lower semicontinuous analytic zariski topology proof first statement clear hence set cqx equal support coherent sheaf particular analytic subset follows set cqx also analytic subset concludes proof openness conjecture remark conjecture equivalent semicontinuity statement multiplier ideals indeed define increasing locally stationary limit conjecture precisely says holomorphic functions generating ideal sheaf log corresponding psh function defined log side defined see proposition lemma log integer proof view assumption log last inclusion holds since ideal fix psh function set follows subadditive sequence ideals following result known case see theorem theorem allows understand singularities terms proposition let point let germ proper complex submanifold define following properties hold every nonzero ideal cqx lctqx subadditive sequence controlled growth iii every quasimonomial valuation remark iii compute kiselman number pullback suitable proper modification latter analytification blowup spec see since quantity depend choices made see uniquely defined thus obtain proof proposition proof proposition relies fundamental approximation procedure due demailly refer details follows let psh function defined 
pseudoconvex domain containing consider hilbert space natural inner product fact every elements generate stalk multiplier ideal sheaf define sup log mattias jonsson mircea psh follows theorem constant depending nonzero ideal also lctqy cqy cqy cqy lctqy two equalities follow whereas first inequality results second inequality proved thm case proof works general case proof proposition clearly follows letting remains prove iii use notation write follows proposition show grant moment letting tend infinity see proving iii particular well defined independently choices made established proposition since arbitrary quasimonomial valuation also see controlled growth proving remains prove since sides depend continuously weight may assume rationally independent argue proof lemma recycling notation proof thus expansion unique fix define sequence disjoint open subsets log log large following estimates log openness conjecture second estimate follows let nonvanishing holomorphic volume form near write near dun log log near log moreover volume estimated log note large enough biholomorphic contained thus get exp exp yields strict inequality proof main results ready prove theorem introduction variant theorem consider germ psh function point complex manifold dimension let nonzero ideal cqx let small open neighborhood defined open neighborhood also fix nonvanishing holomorphic volume form neighborhood compute volumes respect positive measure mattias jonsson mircea analytic reduction set cqy cqx cqy note lower semicontinuity cqy see lemma proper analytic subset decreasing intersection using fact defined neighborhood deduce existence lemma order prove theorem suffices assume smooth log near integer proof replace zariski general point indeed cqy zariski general point estimate vol log log holds every neighborhood point dense subset also holds every neighborhood particular may assume smooth pick generators shrinking may assume generators defined associated psh function log defined negative integer define max log cqx cqx claim allow replace complete proof indeed estimate holds replaced must also hold prove claim pick consider colon ideal coherent ideal sheaf whose stalk given locally integrable fact implies zero locus equal hence nullstellensatz implies exists ivn pick integer large enough pick define borel subsets log log log log log log log openness conjecture follows choice inclusion ivn guarantees possibly shrinking vol indeed set exp log log shrinking vol vol last equality follows setting hand fact cqx implies vol inclusion gives vol cqx letting get cqx must cqx cqx hence cqx cqx establishing claim completing proof lemma remark proof lemma viewed analytic analogue arguments end proof let particular smooth let localization ideal regular local ring maximal ideal dimension equal codimension hence bounded also excellent ring indeed isomorphic ring convergent power series variables hence excellent see theorem excellence preserved localization set subadditive system ideals controlled growth see proposition lemma may assume log hence lemma implies mpj definition proposition see lctqy cqy every lemma shows recall assume conjecture holds rings dimension proposition implies conjecture also holds rings dimension thus find quasimonomial valuation consider projective birational morphism spec defines log resolution adapted thus given data mattias jonsson mircea analytify following let denote kiselman number respect data dan see know proposition iii remark thus yields use notation pick zariski general point log log near orddi 
see end also log log see finally recall log max fix define disjoint open subsets following estimates log using estimates imply log open set set log estimate volume follows vol dui exp exp estimate together concludes proof theorem choosing throughout arguments see also remark also obtain proof theorem openness conjecture references berndtsson subharmonicity properties bergman kernel functions associated pseudoconvex domains ann inst fourier boucksom favre jonsson valuations plurisubharmonic singularities publ res inst math sci dem demailly complex analytic algebraic geometry book available demailly nombres lelong acta math demailly regularization closed positive currents intersection theory alg geom demailly numerical criterion ample line bundles differential geom demailly ein lazarsfeld subadditivity property multiplier ideals michigan math demailly semicontinuity complex singularity exponents metrics fano orbifolds ann sci norm sup ein lazarsfeld smith uniform approximation abhyankar valuations smooth function fields amer math ein lazarsfeld smith varolin jumping coefficients multiplier ideals duke math favre jonsson valuative tree lecture notes mathematics berlin favre jonsson valuative analysis planar plurisubharmonic functions invent math favre jonsson valuations multiplier ideals amer math soc guenancia toric plurisubharmonic functions analytic adjoint ideal sheaves notions convexity progress mathematics boston valuative multiplier ideals preprint valuations log canonical thresholds preprint jonsson dynamics berkovich spaces low dimensions appear berkovich spaces applications france jonsson valuations asymptotic invariants sequences ideals ann inst fourier kiselman nombre lelong analyse complexe des sciences tunis des sciences techniques monastir kiselman attenuating singularities plurisubharmonic functions ann polon math lagerberg new generalization lelong number laz lazarsfeld positivity algebraic geometry ergebnisse der mathematik und ihrer grenzgebiete folge vol berlin lelong plurisubharmonic functions positive differential forms gordon breach new york dunod paris matsumura commutative algebra mathematics lecture note series reading mattias jonsson mircea multiplicities graded sequences ideals algebra nadel multiplier ideal sheaves existence metrics positive scalar curvature proc nat acad sci usa nadel multiplier ideal sheaves metrics positive scalar curvature ann math rashkovskii relative types extremal problems plurisubharmonic functions int math res art siu analyticity sets associated lelong numbers extension positive closed currents inventiones math skoda analytiques ordre fini infini dans bull soc math france dept mathematics university michigan ann arbor usa address mattiasj mmustata
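Editor's note on the row above: the extraction has stripped its formulas, so as a hedged aid here is a minimal LaTeX sketch of the central definition and of the openness statement the row concerns. The constant 2c in the exponent follows the usual Demailly-Kollar normalization and is an assumption, not something recoverable from the garbled text.

```latex
% Complex singularity exponent of a psh germ u at a point x:
\[
  c_u(x) \;=\; \sup\bigl\{\, c>0 \;:\; e^{-2cu}\in L^1_{\mathrm{loc}}
  \ \text{on some neighborhood of } x \,\bigr\}.
\]
% Openness conjecture (Demailly--Koll\'ar): the set of admissible exponents is open,
\[
  \bigl\{\, c>0 \;:\; e^{-2cu}\in L^1_{\mathrm{loc}}\ \text{near } x \,\bigr\}
  \;=\; \bigl(0,\,c_u(x)\bigr).
\]
% For u = \log(|f_1|+\cdots+|f_m|), with f_1,\dots,f_m generating an ideal
% \mathfrak a, the exponent c_u(x) recovers the log canonical threshold
% \mathrm{lct}_x(\mathfrak a).
```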
| 0 |
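The jumping numbers in the row above are phrased through multiplier ideals, whose definition was also garbled. A hedged LaTeX sketch of one equivalent formulation follows; again the normalization is assumed from the standard literature rather than read off the row.

```latex
% Nadel multiplier ideal of cu at x (standard definition, assumed normalization):
\[
  \mathcal J(cu)_x \;=\; \bigl\{\, f\in\mathcal O_{X,x} \;:\;
  |f|^{2}\,e^{-2cu}\in L^1_{\mathrm{loc}}\ \text{near } x \,\bigr\},
\]
% and the jumping number of u at x relative to a nonzero ideal q can then be written
\[
  c^{\,q}_u(x) \;=\; \sup\bigl\{\, c>0 \;:\; q \subseteq \mathcal J(cu)_x \,\bigr\},
\]
% which reduces to the complex singularity exponent c_u(x) when q = \mathcal O_{X,x}.
```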
presentations cusped arithmetic hyperbolic lattices alice mark julien paupert oct april abstract present general method compute presentation cusped hyperbolic lattice applying classical result macbeath suitable horoball cover corresponding symmetric space applications compute presentations picard modular groups quaternionic lattice entries hurwitz integer ring introduction discrete subgroups lattices semisimple lie groups form rich class finitely generated groups acting curved metric spaces case real rank one associated symmetric space negatively curved special interest essentially two main families constructions lattices arithmetic one hand geometric arithmetic lattices roughly speaking obtained taking matrices entries lying integer ring number field general definition complicated give arithmetic lattices consider paper simplest type margulis celebrated superrigidity arithmeticity theorems irreducible lattices arithmetic type semisimple lie group real rank least family involves geometric constructions polyhedra reflections types involutions isometries prototype type construction given coxeter groups constant curvature geometries generated reflections across hyperplanes groups classical classified coxeter spaces whereas hyperbolic counterparts studied vinberg others still completely understood however construction groups come equipped data including presentation abtract coxeter group fundamental domain action symmetric space arithmetic lattices given global description global structure sense well understood work siegel borel tits prasad others however concrete information presentation fundamental domain readily accessible arithmetic construction one obtain geometric information volume prasad celebrated volume formula computing constants appearing formula usually involves work see example sto presentations arithmetic lattices lattices general known presentations provide useful geometric algebraic information groups explicit index subgroups effective selberg lemma used example sto cohomology group quotient space see instance picard modular groups course representations instance one interested deformations larger lie group presentations given steinberg ste following magnus case classical possibly dates gauss see also siegel rank one swan gave presentations bianchi groups pgl denotes ring integers positive integer following bianchi original construction act isometries real hyperbolic lattices pgl presentations related picard modular groups found recently simplest cases ffp one reasons associated symmetric space complex hyperbolic complicated particular pinched negative curvature particular feature spaces absence totally geodesic real hypersurfaces makes constructions fundamental domains difficult obvious walls use bound domains presentations obtained fact obtained constructing fundamental domains using polyhedron theorem approach seems become complicated considering complicated groups picard modular groups higher values constructions appeared using similar strategy zhao gave generating sets picard modular groups far obtaining presentation finding set whose translates covers space without control intersections cycles fact use covering argument closely related one uses cover fundamental prism ideal boundary isometric spheres see lemma paper present method obtaining presentations cusped hyperbolic lattices noncocompact lattices semisimple lie groups real rank one based classical result macbeath theorem gives presentation group acting homeomorphisms topological space given open subset whose cover 
apply finding suitable horoball based cusp point whose cover analyzing triple intersections associated cyles obtain presentation main tools analysis come additional arithmetic structure get assuming fact integral lattice sense contained number field finitely generated division algebra crucial tool use notion level two boundary points see definition gives notion distance points using algebraic data importantly levels measure relative sizes horospheres based correpsonding boundary points allows control whether horospheres intersect given height see lemma applications method compute presentations picard modular groups given expect treat cases method computationally intensive treated elsewhere also compute presentation quaternion hyperbolic lattice call hurwitz modular group ring lattice sometimes denoted psp hurwitz integers acting symmetric space far know first presentation ever found quaternion hyperbolic space dimension groups studied dvv see also paper organized follows section discuss generalities horoball coverings hyperbolic spaces levels cusp points integral lattices outline apply macbeath theorem context section discuss horosphere intersections detail particular quantitative relation levels heights horospheres integral lattices sections apply method compute presentations picard hurwitz modular groups respectively would like thank daniel allcock suggesting method many helpful comments matthew stover daniel allcock pointing mistake earlier version paper horoball coverings lattice presentations adapted horoball coverings covering complex let negatively curved symmetric space hyperbolic space hnk refer reader general properties spaces isometry groups particular isometries spaces roughly classified following types elliptic fixed point parabolic fixed point exactly one loxodromic fixed point exactly two let lattice isom godement compactness criterion states contains parabolic isometries assume cusp point point fixed parabolic element cusp group subgroup form cusp point assume given covering open horoballs see definition collection horoballs moreover assume horoball based cusp point cusp point basepoint unique horoball giving bijection cusp points horoballs call covering horoball covering since lattice finitely many cusp points modulo action follows horoball covering finite union horoballs given horoball covering covering complex associated simplicial vertex set edge connecting pair vertices triangle triple vertices simplicial complex sometimes called nerve covering remark quotient covering complex action finite simplicial use following classical result macbeath theorem let group acting homeomorphisms topological space let open subset whose cover connected set generates moreover admits presentation generating set relations lattice isom horoball covering may remarked write finite union horoballs say minimally one apply macbeath theorem possibly enlarging horoball order union connected simplicity exposition henceforth asume single cusp horoball covering consists single horoballs case examples considered paper case process obtaining presentation fom covering complex closely related complex groups structure quotient covering complex difference need take account edge face stabilizers levels proximal cusp complex recall hyperbolic space hnk admits following projective model briefly recall consider vector space endowed hermitian form signature use convention scalars vectors right whereas matrices act vectors left let let kpn denote projectivization one defines hnk kpn endowed distance bergman 
metric given zihw note side independent choice lifts projectivization matrix group preserving hermitian form see note usually denoted psp boundary infinity identified kpn would like measure distances points using hermitian form one way use integral lifts vectors rational coordinates follows assume integral lattice sense contained number field division algebra ring integers hermitian form defined say integral vector primitive integral submultiple following sense unit point projective image vector primitive integral lift lift primitive integral vector lemma principal ideal domain primitive integral lifts unique multiplication unit lemma matrix primitive integral vector moreover imaginary quadratic principal ideal domain one standard basis vectors primitive integral vector primitive integral vector proof let integral assuming primitive would exist also integral matrix obtained replacing would also det det contradiction since latter integer det unit let primitive integral vector principal ideal domain single cusp see therefore exists mapping must unit hence conclude definition given two points level denoted lev two primitive integral lifts respectively given preferred point depth point level lemma principal ideal domain proximal cusp complex level denoted complex whose vertices cusp points edge connecting vertices whenever lev triangle triple distinct edges levels give convenient way distinguish orbits edges triangles covering complex following observation follows lemmas lemma principal ideal domain two points lev lev importantly levels allow find optimal height horosphere based preferred point orbit covers relies following result follows corollary lemma exists decreasing function point depth integral satisfying set empty fact see corollary function given definition covering depth unique maximal height covers ucov ucov denotes reduction modulo vertex stabilizer choose preferred cusp point general take siegel model see section consider cusp stabilizer since lattice well known acts cocompactly horospheres based let denote covering depth ucov corresponding covering height horoball cover let compact fundamental domain action practice choose affinely convex polytope heisenberg coordinates see section assume given finite presentation may reduce procedure macbeath theorem finitely many additional generators relations follows let denote points depth assume simplicity ordered way first form system representatives action assume moreover found element possible principle since assumed single cusp generators group generated follows easily part macbeath theorem lemma point depth one note notation theorem using open set cover indeed lemma either point depth one relations rephrase part macbeath theorem context let satisfy first assume conjugating element may asume corresponding relation taking image sides relation gives practice detect relations finding points depth sent points depth generators one recovers relation follows triple exist obtain relation identifying element word generators assume one relations already considered corresponding relation obtained using point correpsonding group element summarizing discussion gives lemma notation admits presentation method practice give outline method use apply macbeath theorem find explicit affine fundamental domain action presentation find covering depth consider corresponding covering complex find points depth denote points find explicit triple exist obtain relation identifying element word generators macbeath theorem lemma order avoid tedious repetition similar 
arguments straightforward computations give one detailed proof step picard modular groups give detailed arguments quaternionic lattice substantially different usually choose difficult case instructive various cases similar difficulty steps routine state results except step quaternionic lattice cover detail lemmas give detailed argument proof step picard modular group lemma step group depth lemma seems general strategy step find relevant matrices paper combining two tricks luckily cover cases need first trick use stabilizers vertical complex line heisenberg group easy find matrix stabilizes vertical axis carry vertical lines conjugating horizontal translation second trick hit relevant integral points group elements already know see land point trying reach toy example psl order illustrate method steps psl exactly complicated picard hurwitz modular groups results either elementary state without proof presentation fundamental domain cusp stablilizer cusp stabilizer presentation fundamental domain action concretely use following generator covering depth points covering depth psl points depth integral lift denote generators following element maps point relations list table relations obtained psl applying generators points depth described part section second relation obtained following corresponding cycle points gives latter element computed giving relation image vertex cycle points relation table action generators vertices psl horosphere intersections main reference section use siegel model hyperbolic space hnk projective model described section associated hermitian form given hyperbolic space hnk parametrized follows denoting projectivization map hnk parametrization boundary infinity hnk corresponds compactification coordinates called horospherical coordinates point hnk definition fixed level set called horosphere height based called horoball height based punctured boundary hnk naturally identified generalized heisenberg group heis defined set equipped group law denotes usual euclidean classical heisenberg group identification hnk heis given action heis hnk element heis acts vector following heisenberg translation matrix given element heisenberg rotation given following matrix additional class isometries fixing coming action diagonal matrices case recall convention matrices act vectors left scalars act vectors right unit quaternion diagonal matrix acts isometry hyperbolic space given conjugating horospherical coordinates result multiplying vector form left normalizing right qvq hnh reason relevant projectivization acting rather heisenberg translations rotations well conjugation unit quaternions preserve following distance function heis called cygan metric defined heis fact restriction hnk incomplete distance function hnk called extended cygan metric see defined hnk dxc define cygan spheres cygan balls extended cygan spheres extended cygan balls usual way relative distance functions apply macbeath theorem argue images horoball based certain height cover equivalently cover horosphere following result follows proposition allows control traces images terms cygan spheres depending arithmetic data lemma let extended cygan sphere center radius horosphere based height corollary let number field principal ideal domain point depth satisfying cygan sphere centered radius particular proof since lift first column vector lift since integral lift fact lemma primitive lift therefore depth denoting result follows lemma second part statement follows using radius formula extended cygan metric equation also use 
following observation lemma ffp covering arguments lemma extended cygan balls affinely convex horospherical coordinates finally considering action discrete subgroup isom relative cygan metric convenient consider vertical horizontal components defined follows see case homomorphism heis given projection first factor decomposition heis induces short exact sequence isom isom heis isom isometries relative euclidean metric denoting isom gives short exact sequence picard modular groups section use method described section compute following presentations picard modular groups denote mod itv action well understood see ffp section values using unpublished notes refer papers presentations fundamental domains state lemmas figure covering prism cygan balls depth modular group presentation fundamental domain cusp stabilizer lemma cusp stabilizer admits following presentation let convex hull points horospherical coordinates affine fundamental domain acting concretely use following generators recall covering depth points denote open extended cygan ball centered radius see equation definition extended cygan metric recall height balls depth appear sense corollary lemma let horosphere height based prism covered intersections following extended cygan balls depth omit proof similar proof lemma much simpler see figure note order cover need balls depth particular none depths even though present height consider however necessary pass height observed experimentally height balls depth cover points depth corollary covering depth inspection see points depth horospherical coordinates depth depth one depth integral lifts representatives points generators following elements map point corresponding relations list table relations obtained applying generators points depth described part section successively eliminating obtain presentation details straightforward left reader modular group presentation fundamental domain cusp stabilizer lemma cusp stabilizer admits following presentation let affine convex hull points horospherical coordinates fundamental domain acting concretely use following generators denoting image vertex cycle points relation table action generators vertices covering depth points denote open extended cygan ball centered radius see equation definition extended cygan metric recall height balls depth appear sense corollary lemma let horosphere height based prism covered intersections following extended cygan balls depth depth omit proof similar proof lemma simpler see figure note order cover need balls depth particular none depth even though present height consider however necessary pass height observed experimentally height balls depth cover points depth corollary covering depth inspection see points depth horospherical coordinates depth depth figure covering prism cygan balls depth depth one integral lifts representatives points generators following elements map point corresponding relations list table relations obtained applying generators points depth described part section successively eliminating obtain presentation details straightforward left reader picard modular group presentation fundamental domain cusp stabilizer lemma cusp stabilizer admits following presentation let affine convex hull points horospherical coordinates fundamental domain acting image vertex cycle points relation table action generators vertices concretely use following generators denoting covering depth points denote open extended cygan ball centered radius see equation definition extended cygan metric recall height balls depth 
appear sense corollary lemma let horosphere height based prism covered intersections following extended cygan balls depth depth figure covering prism cygan balls depth depth proof figure shows prism relevant cygan balls prove result dissecting prism affine polyhedra lies one extended cygan balls reminiscent proof proposition consider following points horospherical coordinates see figure denoting hull affine hull horospherical coordinates subset claim following affinely convex pieces contained corresponding open extended cygan sphere hull hull hull hull hull hull hull hull verify claims check numerically vertices indeed belongs ball question using equation extend whole covex lemma example point indeed belongs dxc dxc dxc result follows prism union affinely convex pieces see figure corollary covering depth note covering argument needed balls depth particular none depth even though present height consider however necessary pass height observed experimentally height balls depth cover points depths inspection see points depth following horospherical coordinates give lemma detailed justifcation points depth depth depth one depth one second third depth containing points taken mod integral lifts representatives points lemma points depth exactly proof illustrate general procedure finding points depth specialize present case depths points natural numbers solution first begin assuming found points depth less case points depth find multiplication unit two possibilities either figure affine cell decomposition prism consider standard lift point note calculation transparent rewrite first coordinate vector form point depth must satisfy following depth general depth less words coordinates coordinates must step next calculations make sure satisfied find projection possible find compute use list point horo coords get rid ofnall ones level previous level left level points generators following elements map point corresponding relations list table relations obtained applying generators points depth described part section successively eliminating setting obtain presentation details straightforward left reader hurwitz quaternion modular group section use method described section compute following presentation hurwitz modular group sometimes denoted psp recall hurwitz integer ring denote generators relations defined lemma iri iri tvi tvi tvj tvi tvj tvk itvj tvj tvi tvj itvi tvj iti presentation coarse fundamental domain section study action use notation equations namely respectively denote heisenberg translation heisenberg rotation conjugation note unit quaternion purely imaginary contains following heisenberg translations tvq lemma admits following presentation tvi tvj tvk image vertex cycle points relation rtv rtv rtv rtv table action generators vertices following sets relations tvw tvw runs tvi tvj tvk runs runs runs tvi tvj tvk entry column row table tvi tvj tvk tvi tvj tvk tvi tvj tvk tvj tvi tvj tvk tvj tvk tvi tvi tvj tvk tvi tvi tvj tvi tvj tvk tvi proof obtain presentation identify subgroups observe normalize build presentation via sequence extensions using following procedure suppose group subgroups normal extension suppose also know presentations hsn hsk admits presentation hsn set consists relations form knk runs elements runs elements expressed word generators three subgroups identify rotation conjugation translation subgroups rotation subgroup consists heisenberg rotations hurwitz integral unit quaternion isomorphic binary tetrahedral group order see admits presentation hri conjugation subgroup 
consists conjugations unit quaternions elements group also correspond hurwitz unit integral quaternions acts thus group isomorphic quotient binary tetrahedral group tetrahedral group alternating group elements admits presentation hci translation subgroup consists heisenberg translations admits presentation tvi tvj tvk rotation subgroup normalized conjugation subgroup extension rotation subgroup conjugation subgroup finite group order obtain four relations conjugating translation subgroup normalized subgroup conjugates translation generators rotation conjugation generators listed table translations along left side along top table entry element written word translation generators contains relations action translation subgroup relatively straightforward however finite group fixing complicated namely group units order generated say acts faithfully rotations kernel conjugation together actions produce group product order described geometric compatibility action finite group translation subgroup clear reason make explicit fundamental domain action rather use larger subset geometrically simpler sufficient purposes sense cover set sometimes called coarse fundamental domain define hull lemma cover proof proceed several steps first considering horizontal projections defined short exact sequence subgroups claim fundamental domain clear heisenberg translations act ordinary translations horizontal factor claim fundamental domain seen subdividing cube cubes half group hri isomorphic classical quaternion group order acts subcubes orbits claim follows choosing adjacent representatives orbits claim fundamental domain first note union two hull obtained negating first coordinate obtained cutting cube along diagonal hyperplane example obtained taking half cube cut along diagonal hyperplane note hyperplane equidistant hyperplane likewise hyperplane equidistant hyperplane since subgroup index claim follows previous claim claim fundamental domain tvi tvj tvk follows previous claim additional generators tvi tvj tvk vertical translations units heisenberg coordinates claim translates tvi tvj tvk cover recall unit quaternion matrix acts conjugation horizontal vertical factors since conjugate two others opposites claim follows considering action vertical factor covering depth points denote open extended cygan ball centered radius see equation definition extended cygan metric dxc recall height balls depth appear sense corollary lemma let horosphere height based prism covered intersections following extended cygan balls depth proof recall base union two hull obtained negating first coordinate claim contained separate pieces horizontal hyperplane claim verified showing denote bounded done previously checking vertices numerically using equation extending result convex hull lemma vertices note base vertices vertical direction spanned check using equation base vertices vertical vertices except top vertex corresponding likewise top vertices ball finally check midpoints top vertices lower level spanned contained intersection example midpoint satisfies dxc dxc computations similar likewise replacing gives coordinate negating first contained corollary covering depth fact suspect covering depth however covering argument much delicate height particular cygan balls used suffice inspection see points depth horospherical coordinates depth depth depth depth integral lifts points generators following elements map point corresponding relations list tables relations obtained applying generators points depth described part section 
successively eliminating generators obtain presentation details straightforward left reader give details obtain relations tvi tvj tvk tvi tvj entirely straightforward substituting using table obtain tvk tvi tvj itvi tvj tvi tvj use relations tvw tvw relations table get itvi itvj tvk itvj tvk use relation itvi comes eliminating using make subsitution itvi tvj tvk itvj tvk use relations tvi tvw get itvi tvj tvk relation tvi tvj tvk comes substituting expression relation table relation tvi tvj comes first observing table table conjugate since equal obtain get relation appears presentation substitute using table using expression obtained references allcock new reflection groups duke math belolipetsky volumes arithmetic quotients ann scuola norm sup pisa sci iii image vertex cycle points relation tvi tvk tvi tvj tvj tvk tvi tvj table action generators vertices degenerate cycles bianchi sui gruppi sostituzioni lineari con coefficienti appartenenti corpi quadratici immaginari math ann cartan sur groupe comment math helv conway smith quaternions octonions geometry arithmetic symmetry peters chen greenberg hyperbolic spaces contributions analysis academic press new york dvv diaz verjovsky vlacci quaternionic kleinian modular groups arithmetic hyperbolic orbifolds quaternions ffp falbel francsics parker geometry modular group math ann falbel parker geometry modular group duke math image vertex tvi tvj cycle points relation tvj tvi tvj tvj tvi tvk tvi tvj tvi tvi tvj tvk tvi tvj tvk tvi tvj tvi tvj tvi tvj tvi tvj tvj tvi tvi tvj table action generators vertices nondegenerate cycles garland raghunathan fundamental domains lattices rank semisimple lie groups ann math goldman complex hyperbolic geometry oxford mathematical monographs oxford university press invariants arithmetic ball quotient surfaces math nachr kim parker geometry quaternionic hyperbolic manifolds math proc camb phil soc macbeath groups homeomorphisms simply connected space ann math philippe invariants globaux des espaces hyperboliques quaternioniques phd thesis bordeaux prasad volumes quotients groups inst hautes etudes sci publ math paupert real reflections commutators complex hyperbolic space groups geom dyn siegel discontinuous groups ann math ste steinberg consequences elementary relations sln cont math sto stover volumes picard modular surfaces proc amer math soc swan generators relations certain special linear groups advances math yasaki integral cohomology certain picard modular surfaces number theory woodward integral lattices hyperbolic manifolds phd thesis university york zhao generators euclidean picard modular groups trans amer math soc zink die anzahl der spitzen einiger arithmetischer untergruppen gruppen math nachr alice mark julien paupert school mathematical statistical sciences arizona state university paupert
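The covering lemmas in the row above are proved by decomposing the prism on the horosphere into affine cells and checking numerically that the vertices of each cell lie inside an assigned extended Cygan ball; affine convexity of the balls then certifies that each whole cell is covered. A minimal Python sketch of that check is given below. The function name `extended_cygan_dist` and the data layout are placeholders: the exact normalization of the extended Cygan metric cannot be recovered from the garbled text, so the distance function is left for the reader to supply.

```python
# Sketch of the vertex-based covering check used in the covering lemmas above.
# `extended_cygan_dist` is a placeholder: supply the paper's extended Cygan
# distance in horospherical coordinates (its normalization is not recoverable
# from the garbled text).

def extended_cygan_dist(p, q):
    """Extended Cygan distance between two points in horospherical coordinates."""
    raise NotImplementedError("insert the paper's formula for d_XC here")

def cell_covered(cell_vertices, ball_center, ball_radius):
    """A cell is certified covered if every vertex lies in the open ball;
    by the affine-convexity lemma this extends to the whole affine cell."""
    return all(extended_cygan_dist(v, ball_center) < ball_radius
               for v in cell_vertices)

def prism_covered(cells, assignment):
    """cells: list of vertex lists; assignment: matching (center, radius) pairs."""
    return all(cell_covered(cell, center, radius)
               for cell, (center, radius) in zip(cells, assignment))
```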
| 4 |
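The presentation method in the hyperbolic-lattices row above rests on Macbeath's covering theorem, whose statement the extraction has garbled. Below is a hedged LaTeX reconstruction of the theorem as it is usually quoted; the precise topological hypotheses (path-connectedness for generation, simple connectedness for the presentation) are taken from the standard literature and should be read as assumptions rather than as the row's own wording.

```latex
% Macbeath's theorem (as commonly stated).  Let G act by homeomorphisms on a
% path-connected, simply connected space X, and let V \subseteq X be open with
% G\cdot V = X.  Set
\[
  E \;=\; \{\, g \in G \;:\; gV \cap V \neq \emptyset \,\}.
\]
% Then E generates G, and G admits the presentation
\[
  G \;\cong\; \bigl\langle\, E \;\bigm|\; g\cdot h = gh \ \text{whenever}\
  V \cap gV \cap ghV \neq \emptyset \,\bigr\rangle .
\]
% In the row above this is applied with X a rank-one symmetric space and V a
% union of horoballs based at the cusp points, so the relations arise from the
% triple intersections of horoballs (the "cycles" analysed there).
```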
digital neuromorphic architecture efficiently facilitating complex synaptic response functions applied liquid state machines michael aaron kristofor craig jonathon david pamela john conrad james mar sandia national laboratories albuquerque usa email ajhill kdcarls cmviney jwdonal jhnaegl cdjame jbaimon lewis rhodes labs concord usa email drfollett plfollett tufts university medford usa neural networks represented weighted connections synapses neurons poses problem primary computational bottleneck neural networks multiply inputs multiplied neural network weights conventional processing architectures well suited simulating neural networks often requiring large amounts energy time additionally synapses biological neural networks binary connections exhibit nonlinear response function neurotransmitters emitted diffuse neurons inspired neuroscience principles present digital neuromorphic architecture spiking temporal processing unit stpu capable modeling arbitrary complex synaptic response functions without requiring additional hardware components consider paradigm spiking neurons temporally coded information opposed rate coded neurons used neural networks paradigm examine liquid state machines applied speech recognition show liquid state machine temporal dynamics maps onto flexibility efficiency stpu instantiating neural algorithms ntroduction learning algorithms achieving state art performance many application areas speech recognition image recognition natural language processing information concepts dog person image represented synapses weighted connections neurons success neural network dependent training weights neurons network however training weights neural network often high computational complexity large data sets requiring long training times one contributing factors computational complexity neural networks multiplications work supported sandia national laboratories laboratory directed research development ldrd program hardware acceleration adaptive neural algorithms haana grand challenge project sandia national laboratories laboratory managed operated sandia corporation wholly owned subsidiary lockheed martin corporation department energys national nuclear security administration contract input vector multiplied synapse weight matrix conventional computer processors designed process information manner neural algorithm requires multiply recently major advances neural networks deep learning coincided advances processing power data access however reaching limits moore law terms much efficiency gained conventional processing architectures addition reaching limits moore law conventional processing architectures also incur von neumann bottleneck processing unit program data memory exist single memory one shared data bus contrast conventional processing architectures consist powerful centralized processing unit operate mostly serialized manner brain composed many simple distributed processing units neurons sparsely connected operate parallel communication neurons occurs synaptic connection operate independently neurons involved connection thus multiplications implemented efficiently facilitated parallel operations additionally synaptic connections brain generally sparse information encoded combination synaptic weights temporal latencies spike synapse biological synapses simply weighted binary connection rather exhibit synaptic response function due release dispersion neurotransmitters space neurons biological neurons communicate using simple data packets generally accepted binary spikes 
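The introduction above argues that the multiply-accumulate step of rate-coded networks is the main computational bottleneck, and that with binary spikes the same synaptic update becomes a sparse accumulation carried out independently at each synapse. The toy NumPy comparison below is not taken from the paper; the sizes and the 5% activity level are arbitrary placeholders, chosen only to make the contrast concrete.

```python
import numpy as np

# Toy contrast between a rate-coded layer update (dense multiply-accumulate)
# and a spike-driven update (pure accumulation over the weight rows of the
# inputs that actually spiked).
rng = np.random.default_rng(1)
W = rng.normal(size=(256, 64))       # synaptic weights, pre x post (placeholder size)
rates = rng.random(256)              # rate-coded activations
spikes = rng.random(256) < 0.05      # sparse binary spikes (~5% active, placeholder)

post_rate = rates @ W                # one multiply per synapse
post_spike = W[spikes].sum(axis=0)   # no multiplies: add rows of spiking inputs
```

In the spike-driven case the work scales with the number of spikes rather than with the number of synapses, which is the efficiency argument the architecture described below builds on.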
contrast neuron models used traditional artificial neural networks ann commonly rate coded neurons rate coded neurons encode information neurons magnitude output larger output represents higher firing rate use rate coded neurons stems assumption firing rate fig high level overview stpu stpu composed set leaky integrate fire neurons neuron associated temporal buffer inputs mapped neuron time delay neuronal encoding transformation addresses connectivity efficacy temporal shift functionality stpu mimics functionality biological neurons neuron important piece information whereas temporally coded neurons encode information based spike one neuron arrives another neuron temporally coded information shown powerful rate coded information biologically accurate based neuroscience principles present spiking temporal processing unit stpu novel neuromorphic hardware architecture designed mimic neuronal functionality alleviate computational restraints inherent conventional processors neuromorphic architectures shown strong energy efficiency powerful scalability aggressive utilizing principles observed brain build upon efforts leveraging benefits low energy consumption scalability run time speed ups include efficient implementation arbitrarily complex synaptic response functions digital architecture important synaptic response function strong implications spiking recurrent neural networks also examine liquid state machines lsms show constructs available stpu facilitate complex dynamical neuronal systems examine stpu context lsms stpu general neuromorphic architecture algorithms implemented stpu section present stpu high level comparison neuromorphic architectures presented section iii present lsms section section examine lsms map onto stpu show results running lsm stpu conclude section piking emporal rocessing nit section describe spiking temporal processing unit stpu components stpu map functionality biological neurons design stpu based following three neuroscience principles observed brain brain composed simple processing units neurons operate parallel sparsely connected neuron local memory maintaining temporal state information encoded connectivity efficacy signal propagation characteristics neurons overview biological neuron components map onto stpu shown figure stpu derives dynamics leaky integrate fire lif neuron model lif neuron maintains membrane potential state variable tracks stimulation time step based following differential equation dvj wkj tkl variable time constant dynamics index presynaptic neuron wkj weight connecting neuron neuron tkl time lth spike neuron synaptic delay neuron lth spike dynamic synaptic response function input spike lif model neuron fire exceeds threshold synapses input neurons destination neurons defined weight matrix given time weights inputs neurons change time unique stpu lif neuron local temporal memory buffer composed memory cells model synaptic delays biological neuron fires latency associated arrival spike soma postsynaptic neuron due time required propagate axon presynaptic neuron time propagate dendrite soma postsynaptic neuron temporal buffer represents different synaptic junctions dendrites lower index value temporal buffer constitutes dendritic connection closer soma shorter axon length one larger index value thus synapses stpu specified weight wkjd source input neuron destination neuron dth cell temporal buffer allows multiple connections neurons different synaptic delays time step summation product inputs synaptic weights occurs added current value 
position temporal buffer wkjd temporary state temporal buffer value cell temporal buffer shifted one position values bottom buffer fed lif neuron biological neurons neuron fires near binary spike propagated axon synapse defines connection neurons purpose synapse transfer electric activity information one neuron another neuron direct electrical communication take place rather chemical mediator used presynaptic terminal action potential emitted spike causes release neurotransmitters synaptic cleft space pre postsynaptic neurons synaptic vescles neurotransmitters cross synaptic cleft attach receptors postsynaptic neuron injecting positive negative current postsynaptic neuron chemical reaction neurotransmitters broken receptors postsynaptic neuron released back synaptic cleft presynaptic neuron reabsorbs broken molecules synthesize new neurotransmitters terms electrical signals propagation activation potentials axon digital signal shown figure however chemical reactions occur synapse release reabsorb neurotransmitters modeled analog signal behavior synapse propagating spikes neurons important ramifications dynamics liquid equation synaptic response function represented following zhang dirac delta function used synaptic response function convenient implementation digital hardware however dirac delta function exhibits static behavior zhang show dynamical behavior modeled synapse using response presynaptic spike tkl time constant response heaviside step function normalizes firstorder response function dynamical behavior also implemented using dynamic model tkl fig spike propagation along axon across synapse spike propagated axon generally accepted binary spike upon arrival synapse spike initiates chemical reaction synaptic cleft stimulates postsynaptic neuron chemical reaction produces analog response fed soma postsynaptic neuron stpu arbitrary synaptic response functions modeled efficiently using temporal buffer synaptic response function discretely sampled encoded weights connecting one neuron another mapped corresponding cells temporal buffer time constants second order response normalizes dynamical response function zhang showed significant improvements accuracy dynamics liquid using dynamical response functions implementing exponential functions hardware expensive terms resources needed implement exponentiation considering stpu composed individual parallel neuronal processing units neuron would need exponentiation functionality including hardware mechanisms neuron exponentiation would significantly reduce number neurons orders magnitude limited resources fpga rather explicitly implement exponential functions hardware use temporal buffer associated neuron exponential function discretely sampled value sample assigned connection weight wkjd presynaptic neuron corresponding cell temporal buffer postsynaptic neuron thus single weighted connection two neurons expanded multiple weighted connections two neurons shown graphically figure use temporal buffer allows efficient implementation digital signal propagation axon neuron table igh level omparison stpu rue orth nnaker platform stpu truenorth spinnaker interconnect neuron model synapse model mesh lif mesh unicast binary mesh multicast mesh enabled due temporal buffer available neuron stpu truenorth provides highly programmable lif facilitate additional neural dynamics spinnaker provides flexibility neuron model however complex biological models synapse model neuron computationally expensive programmable stpu via temporal buffer discretely sampling 
arbitrary synapse model model spinnaker optimized simpler synaptic models complex synaptic models incur cost computational complexity analog signal propagation neurons synapse iii omparison euromorphic rchitectures stpu first neuromorphic architecture four prominent neuromorphic architectures ibm truenorth chip stanford neurogrid heidelberg brainscales machine manchester spiking neural network architecture spinnaker stanford neurogrid heidelberg brainscales analog circuits truenorth spinnaker digital circuits stpu also digital system focus comparison truenorth spinnaker truenorth chip leverages highly distributed crossbar based architecture designed high composed cores neuron highly parametrized lif neuron truenorth core binary crossbar existence synapse encoded junction individual neurons assign weights particular sets input axons crossbar architecture allows efficient multiplication truenorth allows routing neurons core programmed spike destination addressed single row particular core could core enabling recurrence different core crossbar inputs coupled via delay buffers insert axonal delays neuron natively able connect multiple cores connect single neuron different temporal delays work around neuron replicated within core mapped different cores multiple temporal delays two neurons stpu obvious mechanism implementation spinnaker massively parallel digital computer composed simple arm cores emphasis flexibility unlike stpu truenorth spinnaker able model arbitrary neuron models via instruction set provided arm core spinnaker designed sending large numbers small data packages many destination neurons spinnaker designed modeling neural networks could potentially used generally due flexibility stpu architecture falls truenorth spinnaker architectures stpu implements less parameterized lif neuron truenorth however routing neural spikes flexible allows multicast similar spinnaker rather unicast used truenorth key distinguishing feature stpu temporal buffer associated neuron giving stpu routing summary comparison stpu truenorth spinnaker shown table iquid tate achines liquid state machine lsm algorithm mimics cortical columns brain conjectured cortical microcircuits nonlinearly project input streams state space representation used input areas brain learning achieved cortical microcircuits sparse representation fading state microcircuit forgets time lsms may able mimic certain functionality brain noted lsms try explain brain operates machine learning lsms variation recurrent neural networks fall category reservoir computing along echo state networks lsms differ echo state machines type neuron model used lsms use spiking neurons echo state machines use rate coded neurons transfer function lsms operate temporal data composed multiple related time steps lsms composed three general components input neurons randomly connected leakyintegrate fire spiking neurons called liquid readout nodes read state liquid diagram lsm shown figure input neurons connected random subset liquid neurons readout neurons may connected neurons liquid subset connections neurons liquid based probabilistic models brain connectivity pconnection represent two neurons euclidean distance variables two chosen constants paper use grid define positions neurons liquid liquid functions temporal kernel casting input data higher dimension lif neurons allow temporal state carried one time step another lsms avoid problem training recurrent neural models apping lsm onto stpu fig liquid state machine composed three components set input 
neurons set recurrent spiking neurons set readout neurons plastic synapses read state neurons liquid table parameters synapses connections neurons liquid parameter type equation value equation synaptic weight training synaptic weights liquid readout nodes similar extreme machine learning use random neural network data assumed temporal integration encompassed liquid thus liquid lsm acts similarly kernel support vector machine streaming data employing temporal kernel general weights connections liquid change although studies looked plasticity liquid readout neurons neurons plastic synapses allowing synaptic weight updates via training using neurons firing state liquid temporal aspect learning temporal data transformed static learning problem temporal integration done liquid additional mechanisms needed train readout neurons classifier used often linear classifier sufficient training readout neurons done batch manner lsms successfully applied several applications including speech recognition vision cognitive neuroscience practical applications suffer fact traditional lsms take input form spike trains transforming numerical input data spike data data represented temporally nontrivial section implement lsm stpu previous implementations lsms hardware however cases fpga vlsi chip designed specifically hardware implementation lsm roy also zhang present vlsi hardware implementation lsm schrauwen implement lsm fpga chip contrast work stpu developed general neuromorphic architecture neuroscience work algorithms developed stpu spike sorting using spikes median filtering currently stpu simulator implemented matlab well implementation fgpa chip matlab simulator correspondence hardware implementation given constructs provided stpu lsm liquid composed lif neurons maps naturally onto stpu use synaptic response function equation based work zhang found response function produced dynamics liquid allowing neural signals persist longer input sequence finished lead improved classification results following zhang synaptic properties liquid including parameters connection probabilities liquid neurons defined equation synaptic weights given table two types neurons excitatory inhibitory observed brain liquid made network nuerons excitatory neurons inhibitory probability synapse existing two neurons weights neurons dependent types considered neurons denotes presynaptic postsynaptic neurons connected synapse example denotes connection excitatory presynaptic neuron inhibitory postsynaptic neuron excitatory neurons increase action potential target neurons positive synaptic weights inhibitory neurons decrease action potential negative synaptic weights connections generated neurons liquid neurons randomly connected according equation parameters given table input neuron randomly connected subset neurons liquid weight chosen uniformly random implement synaptic response function equation sampled discrete time steps multiplied synaptic weight value neurons specified table discretely sampled weights encoded via multiple weights corresponding cells temporal buffer postsynaptic neuron implementation synaptic delay set set excitatory neurons inhibitory neurons set respectively neurons set table iii eparation values average spiking rates classification accuracy different synaptic response functions synaptic res trainsep trainrate testsep testrate svm experiments dirac delta evaluate effect different parameters liquid state machine use data set spoken digit recognition arabic digits dataset composed time series cepstral 
coefficients mfccs utterances digit speakers repetitions per digit mfccs taken male female native arabic speakers ages dataset partitioned training set speakers test set speakers scale variables evaluate performance lsm examine classification accuracy test set measure separation liquid training set good separation within liquid state vectors trajectories class distinguishable measure separability liquid set state vectors liquid perturbed given input sequence follow definition norton ventura distance sep variance separation ratio distance classes divided class variance difference mean difference mass every pair classes center norm number classes center mass class given class variance mean variance state vectors thepinputs center mass class investigate various properties liquid state machine namely synaptic response function input encoding scheme liquid topology readout training algorithm also consider impact liquid neuron spike exceeds thus significant impact dynamics liquid beginning base value used zhang consider effects decreasing values default parameters use reservoir size feed magnitude inputs input neurons current injection linear svm train synapses readout neurons synaptic response functions first investigate effect synaptic response function using default parameters using average separation values average spiking rates classification accuracy linear svm given table iii highlighted bold synaptic function equation achieves largest separation values training testing lowest average spike rate highest classification accuracy average spike rate significantly higher response function response val plastic readout neurons connected neurons liquid training done using linear classifier average firing rate neurons liquid examine effect various linear classifiers time fig visualization response functions response function counterintuitive since response function perpetuates signal liquid longer however examining response functions shown figure shows response function larger initial magnitude quickly subsides secondorder response function lower initial magnitude slower decay giving consistent propagation spike time adjusting value accommodate behavior response function bottom three rows int table iii shows improvement made separation values spiking rate classification accuracy despite improvement response function achieve better performance response function classification accuracy response function get better separation score translate better accuracy input encoding schemes traditional lsms input temporally encoded form spike train unfortunately datasets temporally encoded rather numerically encoded spike train input aligns neuroscience practically encode information temporally brain therefore examine three possible encoding schemes rate encoding magnitude numeric value converted rate spike train rate fed liquid bit encoding magnitude numeric value converted bit representation given precision current injection rate encoding requires time steps encode single input converting magnitude rate similar binning information loss bit encoding requires one time step however requires inputs per standard input convert magnitude table separation values average spiking rates liquid classification accuracy examining liquid topologies different input encoding schemes values largest separation values accuracies encoding scheme bold encoding scheme table separation values average spike rate liquid using different liquid topologies largest separation values accuracies topology bold num neurons current injection 
bit encoding rate encoding representation set compared current injection execution time increases linearly number time steps rate encoding table shows separation values first row encoding scheme average spiking rates second row accuracy linear svm test set third row input encoding schemes various values average spiking rate gives percentage neurons firing liquid time series provides insight sparse spikes within liquid table shows representative subset values used bold values represent highest separation value classification accuracy encoding scheme results show value significant effect separation liquid well classification accuracy svm expected dynamics liquid dictated neurons fire lower threshold allows spikes indicated increasing values average spiking rates values decrease overall using rate encoding produces greatest values separation however significant variability values change rate encoding greatest accuracy svm achieved low separation value encoding schemes separation classification accuracy appear correlated greatest classification accuracy achieved current injection liquid topology topology liquid lsm determines size liquid influences connections within liquid distance neurons impacts connections made neurons cubic liquid densely connected compared column liquid section examine using liquids grids consider different values separation values average spike rates accuracy linear svm given table values provided largest separation values topology configuration encoding scheme combination current injection encoding scheme bit encoded rate encoded value significant impact separation liquid classification accuracy greatest separation values classification accuracies topology highlighted bold topologies current injection achieves highest classification accuracy interestingly separation values across encoding schemes topologies correlate accuracies within encoding scheme topology however accuracy generally improves separation increases current injection different topologies appear significant impact classification accuracy except topology decrease accuracy may due increased number liquid nodes used input svm converse true bit encoding topology achieves highest accuracy possibly due increased number inputs due bit representation input readout training algorithms plastic synapses trained significant effect performance lsm traditionally lsms use linear classifier based assumption liquid transformed state space problem linearly separable linear models represented set weights implemented neuromorphic hardware using linear model liquid classification done stpu avoiding overhead going chip make prediction consider four linear classifiers linear svm linear discriminant analysis lda ridge regression logistic regression algorithms use default parameters set statistics machine learning toolbox matlab examine classification linear classifiers table lassification accuracy test set different linear classifiers greatest accuracy topology bold linear model linear svm lda ridge regress logistic regress topologies values achieved highest classification accuracy linear svm previous experiments also limit examining current injection input scheme current injection consistently achieved highest classification accuracy results shown table lda consistently achieves highest classification accuracy considered classifiers highest classification accuracy achieved onclusion uture ork paper presented spiking temporal processing unit novel neuromorphic processing architecture well suited efficiently implementing neural 
networks synaptic response functions arbitrary complexity facilitated using temporal buffers associated neuron architecture capabilities stpu including complex synaptic response functions demonstrated implementing functional mapping implementation lsm onto stpu architecture neural algorithms grow scale conventional processing units reach limits moore law neuromorphic computing architectures stpu allow efficient implementations neural algorithms however neuromorphic hardware based spiking neural networks achieve low energy thus research needed understand develop algorithms eferences deng hinton kingsbury new types deep neural network learning speech recognition related applications overview proceedings international conference acoustics speech signal processing icassp ciresan meier schmidhuber deep neural networks image classification proceedings ieee conference computer vision pattern recognition ieee computer society socher perelygin chuang manning potts recursive deep models semantic compositionality sentiment treebank proceedings conference empirical methods natural language processing association computational linguistics october backus programming liberated von neumann style functional style algebra programs communications acm vol follett roth follett dammann white matter damage impairs adaptive recovery cortical damage silico model plasticity journal child neurology vol sejnowski time new neural code nature vol jul merolla arthur cassidy sawada akopyan jackson imam guo nakamura brezzo esser appuswamy taba amir flickner risk manohar modha million integrated circuit scalable communication network interface science august furber lester plana garside painkras temple brown overview spinnaker system architecture ieee transactions computers vol schemmel fieres meier integration analog neural networks ieee international joint conference neural networks june zhang jin choe digital liquid state machine biologically inspired learning application speech recognition ieee transactions neural networks learning systems vol maass markram computing without stable states new framework neural computation based perturbations neural computation vol verzi vineyard vugrin galiardi james aimone computation spiking neurons proceedings ieee international joint conference neural network accepted verzi rothganger parekh quach miner james aimone computing spikes advantage timing submitted dayan abbott theoretical neuroscience computational mathematical modeling neural systems ser computational neuroscience cambridge mass london mit press benjamin gao mcquinn choudhary chandrasekaran bussat arthur merolla boahen neurogrid multichip system neural simulations proceedings ieee vol schemmel briiderle griibl hock meier millner neuromorphic hardware system neural modeling proceedings ieee international symposium circuits systems may furber galluppi temple plana spinnaker project proceedings ieee vol severa carlson parekh vineyard aimone formal assessing strengths weaknesses neural architectures case study using spiking algorithm nips workshop computing spikes jaeger reservoir computing approaches recurrent neural network training computer science review vol august jaeger adaptive nonlinear system identification echo state networks advances neural information processing systems mit press huang zhu siew extreme learning machine theory applications neurocomputing vol norton ventura improving liquid state machines iterative refinement reservoir neurocomputing vol burgsteiner leopold steinbauer movement prediction images 
using liquid state machine applied intelligence vol buonomano maass computations spatiotemporal processing cortical networks nature reviews neuroscience vol roy banerjee basu liquid state machine dendritically enhanced readout neuromorphic vlsi implementations ieee transactions biomedical circuits systems vol schrauwen haene verstraeten stroobandt compact hardware liquid state machines fpga speech recognition neural networks hammami bedda improved tree model arabic speech recognition proceedings ieee international conference computer science information technology
| 9 |
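The readout comparison described in the LSM passage above (linear SVM, LDA, ridge and logistic regression trained on liquid state vectors) can be reproduced in outline with standard tools. The sketch below is not the authors' code: the liquid states are replaced by synthetic class-dependent spike-count vectors, scikit-learn is assumed for the four linear readouts, and all sizes and seeds are made up; only the train/score pattern mirrors the experiment in the paper's tables.

# minimal sketch, assuming numpy and scikit-learn; synthetic stand-ins for liquid states
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import RidgeClassifier, LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_liquid_neurons, n_classes = 600, 64, 3
# synthetic "liquid state" vectors: class-dependent mean spike counts plus noise
centers = rng.normal(0.0, 1.0, size=(n_classes, n_liquid_neurons))
y = rng.integers(0, n_classes, size=n_samples)
X = centers[y] + rng.normal(0.0, 1.0, size=(n_samples, n_liquid_neurons))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
readouts = {
    "linear SVM": LinearSVC(),
    "LDA": LinearDiscriminantAnalysis(),
    "ridge": RidgeClassifier(),
    "logistic": LogisticRegression(max_iter=1000),
}
for name, clf in readouts.items():
    clf.fit(X_tr, y_tr)                       # train the linear readout on liquid states
    print(f"{name:12s} test accuracy: {clf.score(X_te, y_te):.3f}")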
effect phasor measurement units accuracy network estimated variables abdollahzadeh sangrody ameli power water university technology tehran iran power water university technology tehran iran ameli power water university technology tehran iran meshkatoddini commonly used weighted least square state estimator power industry nonlinear formulated using conventional measurements line flow injection measurements pmus phasor measurement units gradually adding improve state estimation process paper way corporation pmu data conventional measurements linear formulation state estimation using pmu measured data investigated six cases tested gradually increasing number pmus added measurement set effect pmus accuracy variables illustrated compared applying ieee test systems state estimation hybrid estimation linear formulation phasor measurement unit state introduction state estimation key element online security analysis function modern power system energy control centers function state estimation process set redundant measurements obtain best estimate current state power system state estimation traditionally solved weighted least square algorithm conventional measurements voltage magnitude real reactive power injection real reactive power flow recently synchronized phasor measurement techniques based time signal gps global positioning system introduced field power systems pmu placed bus measure voltage phasor bus well current phasors lines incident bus samples voltage current waveforms synchronizing sampling instants gps clock computed values voltage current phasors timestamped transmitted pmus local remote receiver traditional state estimation nature nonlinear problem commonly used approach weighted least squares converts nonlinear equations normal equations using taylor series however state estimation equations pmu measurements inherently linear equations research conducted try formulate mixed set traditional pmu measurements natural approach treat pmu measurements additional measurements appended traditional measurements causes additional computation burden calculation another approach use distributed scheme mixed state estimation problem finding optimal pmu locations power system state estimation well investigated literature paper shows effect pmus accuracy estimated variables six cases tested gradually increasing pmu numbers applying ieee test systems first case state estimation without pmu sixth case linear formulation state estimation using pmu measured data discussed four cases hybrid state estimation different number added pmus conventional measurement set tested weighted least squared state estimation method shown method minimizes weighted sum squares residuals equation measurement vector state vector standard deviation nonlinear function relating measurement state vector measurement covariance matrix given diag minimum value objective function firstorder optimality conditions satisfied expressed compact form follows nonlinear function expanded taylor series around state vector neglecting higher order terms iterative solution scheme known gaussnewton method used solve iteration index solution vector iteration called gain matrix expressed matrix state variables real reactive power injection bus pij qij real reactive power flow bus bus condition estimation relation measurement data state variables nonlinear final solution depend iterative solution scheme expressed hybrid state estimation one pmu measure voltage current phasors equivalent model line connecting buses assumption pmu connected bus 
shown fig yij jbij defined series admittance shunt admittance current phasor measurements written rectangular coordinates shown fig expressions cij dij ysi iterations going maximum variable cij ysi cos system buses state vector components composed bus voltage magnitudes phase angles iii conventional state estimation three commonly used measurement types conventional state estimation bus power injections line power flows bus voltage magnitudes measurement equations expressed using state variables jacobian matrix rows measurement columns variable considering power injection power flow fig matrix components corresponding measurements partial derivation variable yij cos yij cos difference satisfies condition max consider dij ysi sin yij sin viyij sin iij cij jdij yij yij jfi ysi ysi figure transmission line model entries measurement jacobian corresponding real reactive parts current phasors ysi cos yij cos yij cos ysi sin yij sin yij sin measurement data expressed rectangular coordinate system shown fig pmu located bus measured voltage line current voltage measurement expressed jfi current measurement expressed cij jdij condition estimation measurement vector sate vector cij fig line current flow expressed linear function voltages cij jdij jbij jbsi jfi jbij ysi sin yij sin yij sin ysi cos yij cos yij cos measurement vector contains cij dij well power injections power flows voltage magnitude measurements pinj qinj tflow qtflow cijt dijt jacobian matrix components expressed linear formulation state estimation using pmus measurement set composed voltage current measured pmus state estimation formulated linear problem state vector generally measurements received pmus accurate small variances compared variances conventional measurements therefore including pmu measurements expected produce accurate estimates bsi bij bij bsi estimated value obtained solving linear equation simple fast need iteration addition covariance matrix smaller covariance matrix conventional measurement estimated variables accurate simulation results investigate effect pmus accuracy estimated variables several cases tested different number added pmus conventional measurement set two different ieee test systems ieee ieee bus system tested different cases shown table fig fig show network diagrams system arrow circle bus means pair real reactive power injection measurements point transmission line means pair real reactive power flow measurements table case case case case case case six ifferent cases adding pmus conventional measurements pmus conventional measurements pmus bus number conventional measurements pmus bus number conventional measurements pmus bus number conventional measurements pmus bus number minimum pmus figure bus system diagram conventional measurements network voltage magnitude measurement connected bus table detailed information conventional measurement set installed networks setting error standard deviations power injection power flow voltage magnitude respectively pmu much smaller error deviations conventional measurements pmus located buses ieee bus system buses ieee bus system case one ways representing level state estimation accuracy refer covariance estimated variables covariance estimated variable vector obtained inverse diagonal elements gain matrix accuracy two variables voltage magnitude voltage angle investigated separately fig fig show accuracy estimated voltage magnitudes system fig fig show accuracy estimated voltage angles system table variable numbers measurement type numbers measurements 
figure bus system diagram conventional measurements network ieee bus system ieee bus system variables power injection power flow voltage magnitude total pmu pmu pmu pmu pmu pmu pmu pmu pmu pmu pmu standard diviation standard diviation pmu bus number figure accuracy bus system pmus pmu pmu pmu pmu pmu pmu standard diviation bus number figure voltage angle accuracy bus system pmus figures show effect pmus accuracy estimated variable average valued standard deviation variable percentage values shown tables iii percentage values tables mean values cases decreased compared case forced set case pmus becomes nearly zero average valued voltage magnitude voltage angle shown fig fig respectively table iii cases bus number figure accuracy bus system pmus pmu pmu pmu pmu pmu pmu pmus pmus pmus pmus pmus pmus average error standard deviations voltage magnitude ieee bus ieee bus average error percentage average error percentage standar diviation table cases bus number figure voltage angle accuracy bus system pmus pmus pmus pmus pmus pmus pmus average error standard deviations voltage angle ieee bus ieee bus average error percentage average error percentage average valued standard diviation voltage angle ieee bus sys tem ieee bus sys tem pmus pmus pmus pmus pmus pmus figure average voltage magnitude standard deviation two systems ieee bus system ieee bus system average valued standard diviation voltage magnitude pmus pmus pmus pmus pmus pmus phadke thorp synchronized phasor measurements applications springer zivanovic cairns implementation pmu technology state estimation overview ieee africon approach state estimation electric power systems proc ifac symposium identification system parameter estimation tbilisi ussr clements denison ringlee approach state estimation power system networks roc ieee power eng society summer meeting san francisco van cutsem horward static state estimator electric power systems ieee trans power apparat vol zhao abur multiarea state estimation using synchronized phasor measurements ieee trans power vol may weiqing jiang vittal heydt distributed state estimator utilizing synchronized phasor measurements ieee tran power vol may abur optimal placement phasor measurement units state estimation pserc fin proj abur observability analysis measurement placement systems pmus proc ieee pes power systems conf rakpenthai premrudeepreechacharn uatrongjit watson optimal pmu placement method measurement loss branch outage ieee trans power vol chakrabarti kyriakides optimal placement phasor measurement units power system observability ieee trans power vol yoon study utilization benefits phasor measurement units large scale power system state estimation master science univ december figure average voltage angle standard deviation two systems abdollahzadeh sangrody vii concultion paper way incorporating pmu data conventional measurements set discussed expected pmu measured data improve measurement redundancy accuracy due small error standard deviations pmu linear formulation state estimation investigated using pmu measured data linear formulation pmu data produce estimation result single calculation requiring iteration six cases tested gradually increasing number pmus added measurement set applying ieee test systems help advanced accuracy pmu seen estimated accuracy also gradually increase one interesting thing accuracy estimated variables improves effectively number implemented pmus around system buses references abur exposito power system state estimation theory implementation maecel real time 
dynamics monitoring system online available http received degree electrical engineering zanjan university iran currently pursuing master science electrical engineering power water university technology research interests include power system observability state estimation power systems ameli received degree electrical engineering technical college osnabrueck germany technical university berlin since teaches researches associated professor electrical engineering dept power water university technology teheran areas research power system simulation operation planning control power system usage renewable energy power system meshkatoddini received degrees respectively electrical engineering tehran polytechnic university iran degree electrical engineering paul sabatier university toulouse france since faculty member pwut meshkatoddini years experience areas power transformers surge arresters electric materials power network transients
| 3 |
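The point made in the PMU paper above — that with PMU-only measurements the state estimation problem becomes linear and is solved in a single weighted least-squares step, with the diagonal of the inverse gain matrix giving the variances used as the accuracy measure — can be illustrated numerically. The following is a sketch under assumed data only: a random constant measurement matrix H and a made-up PMU error standard deviation stand in for the IEEE 14- and 30-bus systems of the experiments.

# minimal sketch, assuming numpy; synthetic linear measurement model z = H x + e
import numpy as np

rng = np.random.default_rng(1)
n_states, n_meas = 6, 14                    # hypothetical sizes, not a real test system
H = rng.normal(size=(n_meas, n_states))     # constant Jacobian of the linear PMU model
x_true = rng.normal(size=n_states)
sigma = 0.01                                # assumed PMU error std. dev. (small vs. SCADA)
z = H @ x_true + rng.normal(scale=sigma, size=n_meas)

W = np.eye(n_meas) / sigma**2               # weight matrix W = R^-1
G = H.T @ W @ H                             # gain matrix
x_hat = np.linalg.solve(G, H.T @ W @ z)     # single solve, no Gauss-Newton iteration
cov = np.linalg.inv(G)                      # covariance of the estimated state vector

print("max estimation error :", np.abs(x_hat - x_true).max())
print("state std. deviations:", np.sqrt(np.diag(cov)))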
quantification without adjustments work dirk aug august classification task predicting class labels objects based observation features contrast quantification defined task determining prevalences different sorts class labels target dataset simplest approach quantification classify count classifier optimised classification training set applied target dataset prediction class labels case binary quantification number predicted positive labels used estimate prevalence positive class target dataset since performance classify count quantification known inferior results typically subject adjustments however researchers recently suggested classify count might actually work without adjustments based classifier specifically trained quantification discuss theoretical foundation claim explore potential limitations numerical example based binormal model equal variances order identify optimal quantifier binormal setting introduce concept local bayes optimality side remark present complete proof theorem keywords classification quantification confusion matrix method bayes error introduction formal definition quantification machine learning task often credited forman wrote quantification task machine learning given limited training set class labels induce quantifier takes unlabeled test set input returns best estimate number cases class words quantification task accurately estimate test class distribution via machine learning without assuming large training set sampled random test distribution input quantifier batch cases whereas traditional classifier takes single case time predicts single class distribution classes reflecting uncertainty one case least since gart buck researchers practitioners aware need track changes prior probabilities prevalences classes different datasets machine learning community topic received renewed attention saerens suggested powerful alternative confusion matrix method considered standard approach time author currently works swiss financial market supervisory authority finma research paper done employee prudential regulation authority directorate bank england secondment bank research hub opinions expressed paper author necessarily reflect views finma bank england fawcett flach marked another milestone discussion deal changed prevalences noticed consequence different causalities different dataset shift regimes need tackled different ways since number papers published proposals categorise different types dataset shift storkey kull flach two types dataset shift training target dataset easily characterised covariate shift assumption posterior conditional class probabilities training target datasets however distribution covariates features may change change taken account already classifier learnt shimodaira sugiyama bickel easily characterised dataset shift type prior probability shift feature distributions conditional classes remain datasets switched typically case adjustments posterior class probabilities decision thresholds recommended literature elkan forman xue weiss hopkins king bella another approach direct estimation changed prior probabilities minimising distance feature distributions training target datasets saerens forman hofer krempl plessis sugiyama kawakubo types dataset shift less easy describe deal tasche defined invariant density ratio dataset shift generalises prior probability shift way ratios feature densities densities unchanged recent paper hofer outstanding dealing dataset shift weak assumptions structure shift esuli suggested specially trained classifiers 
called quantifiers used socalled classify count quantification forman without need adjustments estimates classify count means classifier optimised classification training set applied target dataset prediction class labels number predicted positive labels case binary quantification used estimate prevalence positive class target dataset practical implementation proposal binary classification quantification explored papers milli barranquero esuli sebastiani three papers emphasis put need quantifier properly calibrated training dataset sense number objects predicted positive equal true number positive objects experiments milli suggest classify count adjustments works better pure classify count contrast barranquero esuli sebastiani report classify count quantification performance specially trained quantifiers least comparable performance classifiers adjustments esuli sebastiani somewhat ambiguous respect clearly stating optimisation criteria different barranquero whose authors clearly say calibration criterion must supplemented condition enforcing good classification intuitive requirement particular complementarity two criteria calibration classification seems guarantee uniqueness optimal quantifier paper discuss theoretical foundations limitations quantification without adjustments illustrate insights classical example binormal model equal variances van trees focus analysis approach barranquero well intuitively documented first finding quantification without adjustments principle may work proper calibration training dataset ensured positive class prevalences training target datasets nearly however second finding approach deployed care quantifiers resulting may miscalibrated findings suggest potential applications quantification without adjustments rather limited paper organised follows section subsections theory binary classification revisited theoretically best quantifiers identified specific optimisation criteria particular introduce concept local bayes optimality discuss applications minimax tests two simple hypotheses optimisation context binary classification another application results section section experiment barranquero revisited reviewed fully controlled setting binormal model equal variances section concludes paper side remark present appendix complete proof theorem published key argument proof mention important steps classifying quantification order able appropriately assess merits limitations proposal barranquero adopt precise mathematical formalism based formalism characterise locally optimal binary classifiers closely related optimal bayes classifiers optimal classifiers van trees section concept local optimality allows characterise minimax tests two simple hypotheses way alternative scharf chapter provide alternative proof theorem classifiers section use results section explore barranquero proposal detail inspecting implementation binormal model equal variances locally optimal binary classifiers discuss binary classification properties classifiers probabilistic setting specified probability space done many authors see van trees probability space describes experiment choosing object object class label features features observed immediately depending whether probability space interpreted training sample target sample sometimes also called test sample label also observable observed delay interpret see billingsley section admissible events including events yet observed addition family events observed event reveals object class label occurs object got class label positive occurs 
object label negative assumption probability space space describes experiment selecting object population random observing features typically delay class label fixed event observed object class label otherwise observed object class label immediately observable events particular features binary classification problem setting typically random variables vector explanatory variables scores dependent class variable case finite uniform distribution provides setting often assumed machine learning papers setting assumption typically one wants predict object class label predict whether event occurred based observable information captured events events defines binary classifier following sense occurs object label predicted occurs object label predicted way binary classifiers identified elements therefore introduce extra notation classifiers note object class define classifier assumption define expected misclassification cost fixed according cost misclassification label correctly predicted event occurs predicted true label event false positive cost predicted true label event false negative cost misclassification cost expected cost classifier represented event section van trees section elkan know optimal choice bayes classifier minimising def arg min denotes conditional probability posterior probability given defined standard text books probability theory billingsley section following proposition shows sense classifiers shape def local minimisers proposition assumption let fixed define respectively let following two statements hold arg arg min min proof let given shown holds case implies case holds observation statement follows denotes indicator function set proposition locally optimal classifiers sense classifiers identical probability predicting compared state observation precisely item following remark remark defined hence case treated sition covered nonetheless worth noting proposition together imply holds arg min case proposition implies arg arg min min called false positive rate fpr iii case proposition implies arg max arg max called true positive rate tpr mentioned proposition may interpreted result characterisation optimal bayes classifiers optimal classifier test lemma following theorem gives precise statement observation theorem let measure space assume probability measures absolutely continuous respect assume furthermore densities respectively positive define likelihood ratio distribution continuous number min max proof define probability space projections well assumption satisfied chosen chosen construction follows probability conditional given implies fix let assumption continuity distribution implies min max proof therefore may assume without loss generality max hence assumption continuity distribution implies remark iii follows max max max max since holds max conclude min max min max intermediate value theorem implies since follows min max remark one interpretation theorem providing minimax test decision two simple hypotheses see chapter scharf test problem distinguish tests characterised observable sets means accept means reject favour however contrast setting lemma none two hypotheses considered important therefore expressed side optimal test meant minimise probabilities type errors time shows continuous setting optimal test criterion given side based likelihood ratio ratio densities tested probability measures hence structure optimal test optimal test bayes test minimax optimal bayes test see section van trees chapter scharf concept local bayes optimality proposition also applied 
question determine binary classifiers optimal respect criterion introducted van rijsbergen order avoid neglecting minority class learning binary classifiers given classifier defined recall precision classifier precision ratio number positively predicted true positive objects number positively predicted objects recall ratio number positively predicted true positive objects number true positive objects setting assumption precision recall denotes positive class stands classifier predicts positive note recall identical true positive rate defined remark rewrite definition fixed theorem observed classifiers optimal sense maximising constructed thresholding conditional class probability notation paper observation precisely stated follows sup sup max published important part appendix paper provide complete proof however case conditional class probability continuous distribution immediate consequence remark iii continuity distribution classifier number defined therefore remark iii implies application quantification prior probability shift contrast esuli sebastiani milli barranquero specify dataset shift problem going tackle proposal prior probability shift modify assumption accordingly assumption extend setting assumption assuming second probability measure evolves prior probability shift probabilities sets conditional probabilities sets conditional assumption describe classifier probability affine function practice true positive rate tpr false positive rate fpr estimated possibly large potential bias training set setting probability object classified positive prior probability shift estimated target set setting solved obtain estimate new prior probability positive class proof available appendix paper downloaded https approach called confusion matrix method saerens also described adjusted count approach forman deployed practitioners least since gart buck theory classifier provides adjustment needed obtain accurate estimate probability positive class potentially quite inaccurate estimate experiments research teams however cast doubt appropriateness approach good xue weiss hopkins king unsatisfactory performance saerens forman confusion matrix method reported papers report mixed findings bella hofer krempl plessis sugiyama valid prior probability shift assumption performance issues confusion matrix method surprise circumstances little evidence prior probability shift types dataset shift see taxonomy dataset shift types reports performance confusion matrix method refer controlled environments prior probability shift number reasons identified potentially negatively impact confusion matrix method performance among class imbalance training set forman issues accurate estimation tpr fpr training set esuli sebastiani although many approaches prior probability estimation proposed hofer krempl plessis sugiyama hofer kawakubo gold standard yet emerged approaches appear suffer numerical problems extent observation led authors suggest quantifiers classifiers specifically developed quantification might viable solution esuli milli barranquero esuli sebastiani notation paper classifiers quantifiers characterised observable events interpreted predict positive difference concepts classifier quantifier intended use explained quotation forman section classifiers deployed predicting class labels single objects therefore development classifier typically involves minimising expected loss decisions single objects see instance quantifiers deployed estimating prevalence class sample population barranquero argued different 
purpose reflected different objective function development quantifier suggest appropriate objective function adjustment like would needed paper barranquero suggest maximising criterion see experiments conducted prior probability shift setting see assumption similarly milli report experimental results prior probability shift setting different approach esuli suggest minimising distance observed class distribution predicted class distribution training set implicitly work support vector machines esuli sebastiani also apply classification optimisation criterion esuli sebastiani use natural datasets datashift environment characterised prior probability shift following focus analysis approach proposed barranquero deal prior probability shift analysis approach followed esuli sebastiani harder esuli sebastiani specify dataset shift assumption performance approach seems depend choice classifier development methodology support figure illustration prediction error function prediction error uniformly best locally best qbeta best test set prevalence positive class vector machines analysis potential criticism esuli sebastiani therefore undertaken paper results milli less controversial barranquero esuli sebastiani milli report superior quantification performance adjusted classifiers fixed unadjusted classifier specified set absolute prediction error prevalence positive class target dataset combination decreasing increasing straight line represented function true positive class prior probability otherwise see figure illustration absolute prediction error concept rationale three different curves explained section moment ignore question minimise absolute prediction error figure tells every classifier perfect predictor one positive class prevalence target dataset unfortunately helpful know perfection way broken clock perfectly right day hence worthwhile try find minimising error following immediately implies following result error bounds prediction positive class prevalence classifier proposition assumption following inequality holds max proposition shows classifier prediction error regard positive class prevalence controlled classifier false positive rate fpr false negative rate fnr theorem remark iii makes possible identify optimal case minimax classifier regard prediction positive class prevalence corollary assumption define classifiers assume addition distribution continuous number min max corollary nice telling classifier minimises time probabilities false negative false positive predictions target dataset whatever value positive class prevalence note similarity classifier classifier serving basis method forman also interesting see method max forman maximise tpr fpr special case follows prediction error zero prior positive class probability target dataset may seem unsatisfactory particular positive class prevalence training set different might appropriate prediction error zero positive class prevalence target dataset training set case following result applies corollary assumption define classifiers assume addition number holds min max max hrc easily checked corollary ehr defined approach barranquero define normalized absolute score nas measuring well classifier characterised set explained assumption predicts prior class probabilities binary classification setting described assumption def nas max definition nas otherwise range nas depends value implies nas implies nas implies nas implies nas dependence range nas value unsatisfactory makes comparison nas values computed different underlying values 
incommensurable potentially could entail bias nas used optimization criterion following alternative definition avoids issues max max def definition otherwise following use instead nas order make sure full potential approach barranquero realised barranquero suggest training reliable classifiers order predict prior unconditional class probabilities target dataset prior probability shift reliability mean classifiers question perform well terms good classification good quantification time order train optimal quantifier predicting class probability barranquero suggest maximise possible classifiers characterised sets whose outcomes trigger prediction class nas nas nas def definitions nas denominator side takes value representation implies limp therefore define weighted harmonic mean true positive rate normalized absolute score increasing nas barranquero suggest maximising fixed training set resulting classifier able provide good estimates target datasets possibly different prior class distributions using observation remark iii section demonstrate standard example binormal model optimal classifiers respect general best quantifiers first note remark iii implies following result proposition assumption define denote distribution continuous holds sup sup proof observe sup sup since fix continuity hence remark iii implies nas nas real random variable defined inf figure illustration proposition corollary qbeta beta beta predicted prevalence positive class implies sup sup however nas hence follows therefore see figure illustration proposition unfortunately practice time possible accurately estimate general posterior probabilities like led authors propose workarounds like one platt necessarily deliver good results however following corollary describes special setting side considerably simplified corollary assumption define make two additional assumptions real random variable continuous distribution continuous function function either strictly increasing strictly decreasing increasing holds sup sup otherwise decreasing holds sup sup proof follows proposition case increasing case decreasing corollary allows section replicate experiment barranquero fully controlled environment merits limitations approach carefully studied binormal case equal variances consider binormal model equal variances example fits setting assumption corollary denotes power set define projections let defined specifying marginal distribution defining conditional distribution given normal distributions equal variances assume implies distribution given mixture normal posterior probability setting given exp log replicate experiment barranquero setting look training set target dataset probability measures defined like possibly different values respectively denotes standard normal distribution function train classifier training set maximising given classifiers identified sets sense prediction positive class occurs negative class otherwise training binormal setting actually means make use corollary order identify optimal classifier following formulae used optimization max max quantile see footnote definition must numerically determined solving following equation optimization problem solution apply optimize development core team find optimal classifier evaluate classifier target dataset calculating compare value order check good classify count approach forman based compare performance two classifiers minimax classifier hmini corollary locally best classifier hloc corollary means need evaluate three classifiers purpose following formulae used 
hmini hmini hloc hloc calculations used following parameters results calculations shown figures figure presents graphs binormal setting section parameters chosen defined use instead nas solid curve equal weights put tpr dashed curve weight put four times weight tpr kinks graphs figure due fact mapping defined side differentiable function nas would differentiable either curves unique maximum hence graph maximum takes maximum value consequence case classifier identical locally best classifier according corollary contrast case slightly greater decline value rise value tpr maximum incurred consequence error prediction target dataset positive class prevalence displayed figure figure shows incident classifier close performance minimax classifier identified corollary nonetheless performance would deemed unsatisfactory true positive class prevalence target dataset nearly assumed positive class prevalence training dataset case clearly locally best classifier identified corollary would perform much better even perfectly training target prevalences conclusions investigated claim barranquero esuli sebastiani binary class prevalences target datasets estimated classifiers without adjustments classifiers developed quantifiers training datasets development quantifiers involves special optimisation criteria covering calibration number objects predicted positive equals true number positives classification power barranquero recommended optimisation criterion quantifier tested approach datasets fully clear however barranquero observations fundamental incidental paper therefore identified theoretically correct way determine best quantifiers according criterion replicated experiment barranquero fully controlled setting binormal model equal variances binary classification settings found quantification without adjustments principle may work proper calibration training dataset ensured positive class prevalences training target datasets nearly approach deployed care quantifiers resulting may miscalibrated findings suggest potential applications quantification without adjustments rather limited references barranquero del coz learning based reliable classifiers pattern recognition bella ferri quantification via probability estimators data mining icdm ieee international conference pages ieee bickel scheffer discriminative learning covariate shift journal machine learning research billingsley probability measure john wiley sons third edition plessis sugiyama learning class balance change distribution matching neural networks elkan foundations learning nebel editor proceedings seventeenth international joint conference artificial intelligence ijcai pages morgan kaufmann esuli sebastiani optimizing text quantifiers multivariate loss functions acm trans knowl discov data june issn doi url http esuli sebastiani abbasi sentiment quantification ieee intelligent systems fawcett flach response webb ting application roc analysis predict classification performance varying class distributions machine learning forman quantifying counts costs via classification data mining knowledge discovery gart buck comparison screening test reference test epidemiologic studies probabilistic model comparison diagnostic tests american journal epidemiology alegre class distribution estimation based hellinger distance information sciences hofer adapting classification rule local global shift unlabelled data available european journal operational research hofer krempl drift mining data framework addressing drift classification computational 
statistics data analysis hopkins king method automated nonparametric content analysis social science american journal political science kawakubo plessis sugiyama computationally efficient estimation class balance change using energy distance ieice transactions information systems kull flach patterns dataset shift working paper milli monreale rossetti giannotti pedreschi sebastiani quantification trees international conference data mining icdm pages ieee december doi raeder chawla herrera unifying view dataset shift classification pattern recognition platt probabilities machines bartlett schuurmans smola editors advances classifiers pages mit press cambridge development core team language environment statistical computing foundation statistical computing vienna austria url http saerens latinne decaestecker adjusting outputs classifier new priori probabilities simple procedure neural computation scharf statistical signal processing detection estimation time series analysis shimodaira improving predictive inference covariate shift weighting function journal statistical planning inference storkey training test sets different characterizing learning transfer sugiyama schwaighofer lawrence editors dataset shift machine learning pages cambridge mit press sugiyama krauledat covariate shift adaptation importance weighted cross validation journal machine learning research tasche exact fit simple finite mixture models journal risk financial management van rijsbergen foundation evaluation journal documentation van trees detection estimation modulation theory part john wiley sons xue weiss quantification classification methods handling changes class distribution proceedings acm sigkdd international conference knowledge discovery data mining pages new york chai lee chieu optimizing tale two approaches langford pineau editors proceedings international conference machine learning pages new york usa acm url http appendix optimal classifiers following give proof based idea fills gaps left proof proof makes use four following lemmata lemma let assumption probability space event following three properties measurable mapping satisfy assumption def iii exists proof lemma define let projection denotes stands uniform distribution addition let iii define lemma assumption fixed number proof choose lemma assumption fixed holds proof assumption implies lemma assumption holds max proof proof lemma essentially identical proof given theorem suggests proof incomplete since steps needed show implies fix distinguish two cases sake concise notation define case show note implies last row equation chain true proves case case show implies observe first secondly nothing left proved hence assume obtain implies otherwise would therefore follows hence true holds completes proof finishing proof fix need show number max since obvious case therefore may assume remainder proof notation hides fact depend classifier also class event probability measure matters next step proof replace provided lemma following using notation like implicitly assume ingredients calculation come probability space backdrop define denotes projection defined lemma easy see hence prove max immediately follows choose according lemma lemma iii exists event hence obtain lemma implies lemma follows max note construction got hence together imply
| 10 |
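The adjusted-count (confusion matrix) correction that the quantification paper contrasts with plain classify-and-count follows from the identity q = p·tpr + (1−p)·fpr under prior probability shift, so the positive prevalence can be recovered as p = (q − fpr)/(tpr − fpr). The snippet below is an illustrative implementation only, with a made-up prediction vector and made-up tpr/fpr values; numpy is assumed.

# minimal sketch of the adjusted count / confusion matrix estimator
import numpy as np

def adjusted_count(y_pred_target, tpr, fpr):
    """Estimate the positive-class prevalence of the target set under prior probability shift."""
    q = float(np.mean(y_pred_target))        # classify-and-count estimate on the target set
    denom = tpr - fpr
    if abs(denom) < 1e-12:
        raise ValueError("classifier is uninformative (tpr == fpr)")
    p_hat = (q - fpr) / denom                # invert q = p*tpr + (1-p)*fpr
    return float(np.clip(p_hat, 0.0, 1.0))   # clip to a valid prevalence

# toy usage: 40% predicted positive, training-set tpr = 0.8 and fpr = 0.1
print(adjusted_count(np.array([1] * 40 + [0] * 60), tpr=0.8, fpr=0.1))   # ~0.4286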
uniform symbolic topologies via multinomial expansions dec robert walker abstract noetherian commutative ring uniform symbolic topologies exist integer symbolic power prime ideals groundbreaking work extended hochster huneke schwede turn provides beautiful answer setting excellent regular rings natural sleuth analogues ring ideal containments improved using linear function whose growth rate slower manuscript falls overlap research directions working prescribed type prime ideal inside tensor products domains finite type algebraically closed field present multinomial expansion criteria containments type even better type final section consolidates remarks often utilize criteria presenting example introduction conventions paper given noetherian commutative ring integer depending symbolic power prime ideals positive integers short uniform symbolic topologies primes section theorem extended hochster huneke says regular ring containing field radical ideals max extent theme ring true rings mild stipulations local domain regular punctured spectrum uniform symbolic topologies primes cor paper makes first strides establishing affirmative cases questions rings singularities postpone sojourning wilderness section proven main revisit regular setting arbitrary field standard polynomial ring groundbreaking work implies symbolic power graded ideals integers particular holds graded ideals huneke asked whether improvement holds radical ideal defining finite set points building harbourne proposed dropping symbolic power bound conj graded ideal several scenarios improved containments hold instance hold monomial ideals field see also recent work mathematics subject classification keywords multinomial theorem singularities symbolic powers toric variety result extended excellent regular rings even mixed characteristic however dumnicki szemberg showed characteristic zero containment fail radical ideal defining point configuration harbourneseceleanu showed odd positive characteristic fail pairs ideals defining point configuration akesseh cooks many new counterexamples original constructions prime ideal counterexample found goal establish bounds growth symbolic powers certain primes noetherian rings containing field project first began theorems clarify normal affine semigroup rings domains generated laurent monomials arise coordinate rings normal affine toric varieties theorem thm let normal affine semigroup rings field built respectively strongly convex rational polyhedral cones rmi suppose integer monomial primes set max monomial prime normal affine semigroup ring prove theorem needed know first monomial primes expand monomial primes expressed sum monomial prime symbolic powers admit multinomial expansion terms symbolic powers ideas resurge general setup one drawback theorem covers finite collection prime ideals follows main result paper powerful variant theorem typically cover primes inside tensor product domains see remark details theorem let algebraically closed field let affine commutative domains suppose exists positive integer prime ideals either even stronger fix nnprime ideals consider expanded ideals affine domain along sum holds holds improves max proof theorem leverages multinomial formula symbolic powers prime ideal theorem nguyen trung trung recently announced binomial theorem symbolic powers ideal sums thm generalizing thm one takes two arbitrary ideals inside two noetherian commutative algebras common field whose tensor product noetherian see remark details however give proof multinomial theorem 
elementary conventions rings noetherian commutative identity indeed rings typically affine finite type fixed field arbitrary characteristic algebraic variety mean integral scheme finite type field acknowledgements thank thesis adviser karen smith sabbatical surrogate adviser mel hochster several patient fruitful discussions fall semester thank huy sharing preliminary draft section november thank grifo pires daniel jack jeffries luis felipe reading draft paper thank anonymous referee comments improving exposition paper acknowledge support nsf grf grant nsf rtg grant ford foundation dissertation fellowship multinomial theorem symbolic powers primes prime ideal noetherian ring symbolic power ideal component minimal primary decomposition smallest ideal containing separately set unit ideal note inclusion strict proceeding record handy asymptotic conversion lemma lemma lem given prime ideal noetherian ring torsion free modules noetherian domains module domain torsion free whenever either first record lemma torsion free modules used next subsection lemmas stacks project page torsion free modules lemma let noetherian domain let nonzero finitely generated following assertions equivalent torsion free submodule finitely generated free module associated prime assr working arbitrary field fix two affine domains tensor product affine domain algebraically closed milne prop note duly nice polynomial normal toric rings generally domain field record two additional lemmas lemma suppose three affine domains field finitely generated torsion free modules respectively finitely generated torsion free proof viewed vector spaces case torsion freeness vacuous assume three nonzero per lemma suppose embeddings apply functor first inclusion get turn contained tensoring inclusion thus isomorphism easily checked category spaces since direct sum commutes tensor product course inclusion holds category finitely generated since noetherian done invoking lemma lemma prime noetherian ring finitely generated module torsion free integers proof say killed means lifting localize means either otherwise ergo definition finally record consequence lemma important next subsection following proposition follows immediately lemmas proposition suppose three affine domains field fix two prime ideals respectively affine domain finitely generated torsion free pair nonnegative integers proving multinomial theorem working algebraically closed field fix two affine domains two prime ideals let tensor products affine domains algebraically closed extended ideals prime along sum relax assumption algebraically closed merely perfect instance perfect characteristic zero along ring extension zero ideal maximal extends radical ideal prime relative flat map noetherian rings define ideal ideal since two ideals share generating set define set prime ideals prime consist prime ideals extend along prime ideals record without proof handy proposition proposition prop suppose faithfully flat map noetherian rings prime ideal integer pairs polynomial ring finitely many variables inclusion spec possible spec proposition per example working field use proposition two affine affine domains algebraically closed domain spec spec ready prove binomial theorem symbolic powers theorem symbolic power proof drop notation assume nonzero justify effort set since note easy verify indeed generated elements form viewing elements need per proposition exist viewing elements overring indeed since prime either contradicting therefore means thus holds notably proper since contains 
smallest ideal containing opposite inclusion follow show set associated primes asst short exact sequences thus asst asst asst asst using fact given inclusion modules ass ass ass ass thus iterative unwinding using asst conclude asst asst taking direct sums tensor products series vector space isomorphisms prove first considering two chains symbolic powers ideal expressed direct sum spaces particular pairs pair prove killing common vector space first identifying repeated copies term since working vector subspaces ring straightforward check boxed sums equal thus canonical isomorphisms spaces therefore since natural surjective map hence map must injective per isomorphism thus asst asst asst turn identity implies one modules nonzero case asst asst explain equality pair proposition says finitely generated module thus asst lemma asst finally combining inclusion asst proper ideal conclude asst asst ideal shown thus indeed equality deduce multinomial theorem induction number tensor factors theorem let algebraically closed field let affine commutative domains fix prime ideals consider expanded ideals affine domain symbolic power proof induce number tensor factors base case theorem suppose assume result tensoring factors suppose expansion result form nonnegative integers primes sum prime along extensions given prime sum prime together extensions prime first equality holds theorem applying proposition extension second equality holds using fact whenever ideals commutative ring proves version hard inclusion proof theorem deducing opposite inclusion easy hence inclusion equality proving theorem use multinomial theorem deduce corollary note theorem version corollary tensor factors assumed satisfy uniform symbolic topologies primes corollary let algebraically closed field let affine commutative domains primes consider expanded ideals fix affine domain set suppose exists positive integer either pir even stronger pir holds holds improves max proof assume holds per theorem note indices must otherwise lie contradiction thus summand applying proposition hence also since arbitrary win holds pir equivalently per lemma proposition containments nonnegative integers per theorem since integer thus equivalently lemma remark get much stronger conclusion corollary holds give proof using lemma workaround less clear strongest conclusion shoot holds note holds corollary setting max one alternatively prove contradiction part proof simply adjust claim otherwise tuple satisfies contradiction remark hypotheses satisfied corollary typically applies infinite set prime ideals tensor product noetherian ring dimension least two noetherian ring dimension one infinitely many maximal ideals spec infinite see exercises suppose domains algebraically closed least one dimension one domain following set qed spec spec infinite remark let polynomial rings field original inspiration theorem following theorem thm bocci let squarefree monomial ideals let expansions symbolic power nguyen trung trung thm recently extended theorem case two nonzero ideals two noetherian commutative also noetherian general multinomial theorem follows adapting proof theorem one containment would require version lem combining multinomial expansion general versions lemma proposition proved prop lem prop one extend corollary form allowing instance proper ideals final note passing proof prop still works tweak multiplicative system opt define symbolic powers proper ideals using minimal associated primes rather using associated primes finale sample applications tensor power 
domains begin two results uniform linear bounds asymptotic growth symbolic powers equicharacteristic noetherian domains nice structure need regular first due huneke katz validashti second due ajinkya theorem cor let equicharacteristic noetherian local domain isolated singularity assume either essentially finite type field characteristic zero positive characteristic analytically irreducible exists prime ideals theorem thm cor see also thm suppose finite extension equicharacteristic normal domains regular ring generated elements invertible either essentially finite type excellent noetherian local ring characteristic exists prime ideals remark suppose coordinate ring affine variety perfect field whose singular locus zero dimensional tandem results theorem would yield uniform slope primes particular covers generally corresponds affine cone smooth projective variety indeed class rings theorems apply large applying theorem collection two rings remark remark says create infinite set vantage point data suggestive uniform symbolic topologies corresponding tensor product domain present since domain create singularities theorem literature affirming domain uniform symbolic topologies primes illustrate matters occur together example first fix algebraically closed field domain use tensor power notation denote domain obtained tensoring together copies presented quotients polynomial rings disjoint sets variables remark recall following set prime ideals qed spec spec example start fix algebraically closed field given integers least two consider affine hypersurface domain irreducible homogeneous polynomial degree isolated singularity origin consider varieties spec spec fan per remark theorem implies primes qed meanwhile terms cartesian products singular locus sing equidimensional dimension particular isolated singularity set qed infinite remark provides vantage point witnessing uniform linear bounds lurking asymptotic growth symbolic powers primes remark pointers results explicit value theorems given particular examples domains remark invite reader see recent survey paper thm cor along main results featured introductions papers section latter papers typically includes remarks designated value considered optimal closing remarks launching theorem introduction deduced powerful criterion proliferating uniform linear bounds growth symbolic powers prime ideals bounds setting domains finite type algebraically closed fields criterion contributes evidence huneke philosophy uniform bounds lurking throughout commutative algebra close goalpost question exceeds grasp present given role tensor products manuscript analogues criteria hold product constructions commutative algebra segre products rings fiber products toric rings references akesseh ideal containments flat extensions altman kleiman term commutative algebra worldwide center mathematics llc cambridge bauer rocco harbourne kapustka knutsen syzdek szemberg primer seshadri constants contemporary mathematics bocci cooper guardo harbourne janssen nagel seceleanu van tuyl waldschmidt constant squarefree monomial ideals algebr comb cox little schenck toric varieties graduate studies mathematics american mathematical society providence dao stefani grifo huneke symbolic powers ideals appear advances singularities foliations geometry topology applications springer proceedings mathematics statistics dumnicki szemberg counterexamples containment algebra ein lazarsfeld smith uniform bounds symbolic powers smooth varieties invent math fulton introduction toric varieties 
annals math studies princeton university press princeton grifo huneke symbolic powers ideals defining strongly rings international mathematics research notices https nguyen trung trung symbolic powers sums ideals harbourne seceleanu containment counterexamples ideals various configurations points pure appl algebra hochster huneke comparison ordinary symbolic powers ideals invent math huneke uniform bounds noetherian rings invent math huneke katz validashti uniform equivalence symbolic adic topologies illinois math huneke katz validashti uniform symbolic topologies finite extensions pure appl algebra schwede perfectoid ideals regular rings bounds symbolic powers milne algebraic geometry version http uniform bounds symbolic powers algebra stacks project authors stacks project walker rational singularities uniform symbolic topologies illinois math vol summer walker uniform bounds via flat extensions walker uniform symbolic topologies normal toric rings department mathematics university michigan ann arbor address robmarsw
| 0 |
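Editorial note on the paper above (symbolic powers of expanded sums of primes): its main result is a multinomial identity for symbolic powers in a tensor product of affine domains, together with a uniform-containment corollary. Because the stopword-stripped text is hard to scan, the display below is a hedged LaTeX rendering of how I read those two statements; the symbols k, A_i, P_i, R, N, E, E' are my own choices, and the precise multiplier E' in the corollary is not fully recoverable from the text above.

    % Hypotheses as read from the text: k algebraically closed, each A_i an affine
    % k-domain, P_i a prime of A_i, with all ideals expanded to
    % R = A_1 \otimes_k \cdots \otimes_k A_r.
    \[
      (P_1 + P_2 + \cdots + P_r)^{(N)}
        \;=\; \sum_{a_1 + \cdots + a_r = N} P_1^{(a_1)} P_2^{(a_2)} \cdots P_r^{(a_r)}
        \qquad \text{in } R .
    \]
    % Corollary-type consequence: if each factor satisfies a uniform bound
    % $P_i^{(E r)} \subseteq P_i^{\,r}$ for all $r \ge 1$, then the expanded sum
    % satisfies $(P_1 + \cdots + P_r)^{(E' N)} \subseteq (P_1 + \cdots + P_r)^{N}$
    % for all $N \ge 1$, with a multiplier $E'$ depending only on $E$ and the
    % number of tensor factors.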
multivariate integral perturbation techniques theory jan dash dash consultants submitted publication september abstract present perturbation expansion multivariate dimensional gaussian integrals perturbation expansion infinite series integrals simplest approximation perturbative idea also applied multivariate integrals evaluate perturbation expansion explicitly order discuss convergence including enhancement using approximants brief comments potential applications finance given including options models credit risk derivatives correlation sensitivities introduction evaluation multivariate integrals substantial interest various areas finance including options credit derivatives well science engineering techniques available special cases common procedure numerical integration via monte carlo simulation paper present technique believe new may prove useful based perturbation expansion perturbation expansion gives multivariate gaussian integral jan dash infinite series integrals simplest case integrals idea applicable also integrals probably others well possibly expansion clever choice initial term turn provide interesting viable numerical approach paper discusses theory subsequent papers report numerical aspects initial point expansion performed key always perturbation expansions work point constructed approximation original correlation matrix involving factorized expression one factor sums factorized expressions several factors paper focus approximation perturbation expansion found terms expectation values powers difference inverse original correlation matrix inverse approximate correlation matrix expectation values respect approximate probability density function readily obtained analytically give explicit expressions gaussian multivariate integrals arbitrary dimension using one factor second order expansion order single integral first order additional integrals second order additional integrals integrals grouped classes similar appearance integrals need programmed sect treat perturbation expansion multivariate gaussian integrals one factor sect outline procedure multiple factors sect contains perturbation expansion multivariate integrals sect details formalism gaussian perturbation theory sect deals perturbation results sect perturbation results sect presents cluster decomposition diagrammatic notation sect discusses approximate correlation matrix needed start perturbation analysis sections briefly discuss potential applications options credit risk derivatives sect discusses correlation sensitivity sect discusses convergence sect discusses enhanced convergence using approximants sect logical flow chart steps needed numerical investigations perturbation expansion multivariate gaussian integrals consider gaussian multivariate integral positivedefinite correlation matrix inverse matrix determinant max xnmax exp jan dash vector variables max upper limits assumed constant independent variables dxi matrix multiplication understood exponent consider approximation correlation matrix introduce numbers write matrix elements easy prove see curnow dunnett ref sect replace would become integral max exp xnmax exp max auxiliary variable one variable one factor corresponding single factorized term also standard cumulative normal imax ximax needs positive reproduce single integral done straightforwardly using good uniform algebraic approximations normal integral perturbation expansion follows trivial identity jan dash inserted original integral keep explicit dependence exponent ximax expand rest exponential 
integrand power series matrix defined get ximax ximax ximax xnmax exp include terms sum result must depend original correlation matrix dependence cancelling using standard functional techniques described sect evaluate given sum integrals get result normalization result ximax namely single integral ximax exp max normalization get integrals form jan dash ximax exp gij max various gij functions different according whether see sect perturbation matrix element note explicit minus sign two types integrals get integrals form see sect ximax exp ijkl class label corresponding seven different classes indices arise indices distinct one pair equal two pairs equal three values equal four values equal classes conveniently associated diagrams sort cluster decomposition iii various functions ijkl categorized using class label total integrals five different types indexing complicated products normal functions lumped ijkl multiple factors gaussian perturbation expansion using means take approximated form fij cluster decomposition cluster decomposition present paper similar spirit different detail cluster decomposition multivariate gaussian integrals envisioned late see ref iii jan dash leads integrals using auxiliary variables replace perturbation theory performed along lines tradeoff extra complexity multidimensional integrals versus better initial description correlation matrix principal components get explained sect restore original problem nothing accomplished shall however give results perturbations approximation paper perturbation expansion multivariate integrals next outline perturbation theory multivariate integrals produces integrals instead integrals approximation basic idea use integral representation order rewrite things exponential form use perturbation theory similarly gaussian case multivariate cumulative probability distribution dimensions degrees freedom isiv max xnmax use identity exp integral extra complication distribution make change variables yxi define uimax yximax obtain jan dash max max exp gaussian form thus use perturbation procedure sect expand powers perform integrals variables respect measure exp assuming one factor approximation yields integrals along integral gives perturbative result ximax terms sums dimensional integrals explicitly yximax ximax yximax gaussian multivariate changed upper limits fixed namely ximax yximax max max uimax exp use results previous analysis multivariate gaussian perturbation theory inserted get perturbation expansion multivariate integral including integral large well known approaches gaussian using wkb around eliminates integral yields result ximax wkb large ximax jan dash formalism gaussian perturbation theory section discusses details formalism gaussian perturbation theory first show extra variable arises start note identity exp exp exp identity introduces extra variable use change variables introducing eliminate integral becomes max exp exp max imax immediately yields consider average function respect approximation measure xnmax exp integrals handled using standard functional technique introduce currents functional derivatives making change variables get jan dash exp exp max upper limit parameters get restoring original ximax exp ximax exp problem hand set writing components matrices make replacement evaluate derivatives set sects contain results useful analytic inverse matrix isv jan dash given multiple factors inverse complicated unable find analytic inverse details order terms section give functions gij text first define exp imax also define 
imax equal indices gii unequal indices gij details order terms section discuss terms first define jan dash imax imax max max max max seven classes terms defined follows corresponding indices outer product two matrices elements two degenerate really five independent classes class two indices equal terms example corresponding function general gijkl class one pair indices equal one matrix others unequal unequal pair terms example corresponding function general giijk class one pair indices crosswise equal two matrices others unequal unequal pair jan dash terms example corresponding function degenerate general gijik giijk class pair indices matrix equal distinct terms examples corresponding function general giijj class two pairs indices crosswise equal distinct terms example corresponding function degenerate general gijij giijj class three indices equal different fourth example terms example corresponding function general giiij class indices equal terms example corresponding function general giiii total number terms classes clear discussion five types integrals different parameters jan dash cluster decomposition diagrammatic notation cluster decomposition useful device visual mnemonic various types terms ref iii index represented line going left right pair lines starting left going top bottom corresponds indices one matrices consider fig two indices equal make lines enter leave bubble like picture left two indices unequal bubble like picture right indices unequal indices equal fig diagrams seven classes order perturbation term two matrices four indices four lines drawn fig jan dash class class class class class class class fig diagrams choice parameters matrix choice parameters approximation matrix arbitrary course perturbation results given order depend choice one idea use principal component decomposition original correlation matrix keep first pcs thereby identify decomposition nxn matrix sum factorized terms identity reads jan dash eigenfunction eigenvalue already sum factorized terms matrix indices hence truncating sum term yields factor approximation exactly procedure used var calculations using svd get rid negative eigenvalues setting zero need renormalize eigenfunctions unit diagonal elements ref iii replace write fij identifying terms set positive definite positive eigenvalues nxn matrix positive positive eigenvalues zero eigenvalues one factor procedure produces fij useful however could really assume constant fij maintaining positive definiteness look best fit constant approximation matrix fij hence write set get abs abs means absolute value include expression always makes sense jan dash generalization nonconstant matrix done pulling sum index identifying given abs sgn sgn means sign included generality matrix elements signs procedure may seem arbitrary recall free choose approximate correlation matrix way want get final result one factor approximate matrix elements propose use abs abs sgn sgn visually amounts approximating original correlation matrix element geometric average signed averages matrix elements row column whose intersection contains note approximation either positive negative sign since obtained using average extent approximates depend internal differences matrix elements relatively constant matrix elements tractable approach getting may also useful equations nonlinear need solved numerically regardless requirement positive definite needs checked explicitly applications options perturbation expansion potential application options dependent several variables simplicity 
restrict discussion illustrative example jan dash consider expectation payoff function variables remain constants interlocked independent limits ximax variables use either gaussian multivariate distribution depending correlation matrix carrying perturbation procedure leads perturbation series expectation using approximate measure recall general nature expectation respect dependent approximate measure want use procedure similarly sect functions given approximation etc need evaluated including dependence payoff function application credit risk credit derivatives perturbation expansion developed paper potentially useful analysis credit risk credit derivativesvi portfolios determination credit risk obtained using historical correlations asset returns structural model appropriate analysis large correlation matrices could potentially numerically handled using perturbation methods paper trading portfolios models calibrate market cds cdo prices data appropriate model several factors could use perturbation expansion provide potentially useful evaluation tool credit derivatives options assumption true options options conditions interlock limits dependence handled changing variables integrals applying perturbation theory jan dash correlation sensitivity get straightforward approximations correlation sensitivity using perturbation expansion may prove useful consider function correlation matrix suppose correlation matrix changes resultant change function correlation sensitivity suppose choose approximate matrix unchanged approximate matrix subtracting two perturbation expansions get approximation correlation sensitivity first order dependence cancels except approximate measure convergence perturbation series section discuss convergence perturbation series although constructed formal proof two main points first function order exponential expansion contains factor eventually overwhelms powers function second rapid decrease integrand large values exists means integrals effectively finite interchange allowed intuition gained looking approximation max define average parameters cavg xavg details matter dependence integrand essentially form exp log max xavg cavg savg jan dash dimension normal integral differentiate eqn set point magnitude integrand largest equation determine ncavg savg max xavg cavg savg max xavg cavg savg derivative normal integral integrand pronounced bump falling rapidly either side similar remarks hold order integrands proportional although due cancellations terms may additional structure besides simple bump increases would appear increase factors decrease make relatively constant given bump structure intriguing speculate wkb approximation might useful rough guide without integrations also instructive look dominant order fixed dimension arguments similar yield rough estimate avg avg average matrix element matrix finally given order important dimension roughly given avg avg normal integral average parameters metrics convergence theory paper exact orders perturbation theory utilized however whole idea stop manageable point order hence error need characterize error understand approximation useful end define metrics numerical error function metrics jan dash since using approximation limited number variables correlation matrix elements approximately constant approximation good amount correlation matrix elements highly variable limit utility approximation hence anticipate internal correlation variance int useful metric int avg avg average matrix elements particular since approximate matrix 
obtained averaging elements internal variance int smaller int also anticipate int internal variance expansion matrix useful close singular one small eigenvalues large matrix elements therefore large elements convergence less rapid cutoffs minimum eigenvalue may applied cases correlation matrix may need regularized somewhat order apply method naturally correlations regularized matrix changed somewhat drawback hand known correlations practice highly unstable large uncertaintiesiii change due regularization may small compared uncertainties case procedure must consistent experimental uncertainties correlation matrix elements anticipate cutoff depend dimension useful metric correlation matrix distance matrix singular matrix call suggest defined jan dash eqn formally total resistance resistors parallel resistances taken matrix becomes singular example correlation matrix elements equal one one eigenvalue equal zero zero eigenvalue resistor circuit analogy corresponds short circuit maximum distance matrix singular case occurs eigenvalues equal since must get max note distance singular matrix decreases dimension increases enhancing convergence approximants section discuss enhancing convergence perturbation series using approximants common procedure scientific applications may familiar finance shall spend little time giving background shall also use idea somewhat extended fashion approximant defined rational function agrees perturbation expansion function given order approximantsvii enhance numerical convergence cases including applications theory approximants may numerically helpful may necessary small values matrix elements perturbation carried perturbation expansion converge rapidly problems anticipated however larger values convergence less rapid even appear diverge finite order calculated order claim perturbation series converges even perturbation series diverge example asymptotic series approximants provide method summing series consider simple case equal matrix elements perturbation expansion simplifying notation order perturbative term series expansion approximant pade matches sum perturbative jan dash terms order viz pade increases match perturbation series becomes exact expansion carried small values approximants analytically continued large values approximants effectively provide approximations arbitrarily perturbative terms without explicitly calculating results numerically better perturbation expansion approximants labeled orders numerator denominator define successive approximants pade order pade pade pade approximant type another approximant type defined pade often diagonal approximants used approximants useful note pade type use approximant formulae real case nonconstant matrix elements dropping restriction constant procedure goes beyond usual theory however assumption simple write tested numerically direct integration additional acceleration device may turn useful try parametrically approximately extrapolate approximants infinite order thereby obtaining final approximation original multivariate gaussian integral eqn consider ansatz jan dash pade parameter reasonable function pade might choose cos oscillating sequence known know pade pade see hence equations unknowns hence get final result determining numerical aspects logical flow chart numerical aspects theory presented paper currently investigated including error analysis function parameters reported separately viii preliminary results encouraging also indicate care required applying theory including approximants logical flow 
chart explicit steps order obtain gaussian ximax eqn using approximation obtain correlation matrix check positive definite may necessary impose eigenvalue cutoff make approximation accurate enough get inverse determinant get via least squares write get approximation eqn check calculate inverse matrix eqns determinant get subtract matrices element element specify upper limit values ximax may useful take maximum average two approximants jan dash integration sufficiently big number calculate imax eqn using normal integral approximationii eqns gij eqns order contribution ximax eqns eqns order contribution ijkl ximax calculate order contribution eqn calculate order contribution eqn obtain approximation order approx calculate order contribution eqn obtain approximation order approx calculate approximants pade eqns obtain extrapolation approximants using eqn suggested final result use internal correlation variance int distance singular matrix sect metrics application analysis acknowledgements thank kris kumar discussions regarding numerical aspects potential applications theory references jan dash curnow dunnett numerical evaluation certain multivariate normal integrals ann math references therein abramowitz stegun handbook mathematical functions national bureau standards applied mathematics series see sect iii dash quantitative finance risk management physicist approach world scientific printing isbn hodoshima effects nonnormality market model class elliptical distributions nagoya city preprint march chern shen geometry world scientific nankai tracts mathematics vol duffie singleton credit risk pricing measurement management princeton press smithson credit portfolio management wiley finance vii kleinert path integrals quantum mechanics statistics polymer physics edition world scientific honerkamp statistical physics advanced approach applications wikipedia http baker approximants encyclopedia mathematics applications cambridge press viii dash kumar multivariate integral perturbative techniques numerical aspects appear file dash multivariate integral perturbative techniques theory last accessed jan dash
| 5 |
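Editorial note on the paper above (perturbation expansion of multivariate Gaussian integrals): before expanding, it replaces the correlation matrix by a one-factor approximation rho0_ij = lambda_i * lambda_j built from signed averages of the off-diagonal elements of the corresponding rows and columns, and it requires the approximate matrix to remain positive definite. The numpy sketch below is one hypothetical reading of that averaging recipe, not the paper's exact prescription; the function name, the clipping step that keeps each |lambda_i| < 1, and the 3x3 test matrix are mine.

    import numpy as np

    def one_factor_approx(rho):
        # signed row-average fit rho_ij ~ lam_i * lam_j for i != j (a sketch)
        n = rho.shape[0]
        off = rho - np.diag(np.diag(rho))
        row_avg = off.sum(axis=1) / (n - 1)       # signed average of row i, off-diagonal
        overall = off.sum() / (n * (n - 1))       # signed average over all off-diagonal entries
        lam = np.sign(row_avg) * np.abs(row_avg) / np.sqrt(np.abs(overall))
        return np.clip(lam, -0.999, 0.999)        # keep 1 - lam_i**2 > 0

    rho = np.array([[1.0, 0.3, 0.4],
                    [0.3, 1.0, 0.5],
                    [0.4, 0.5, 1.0]])
    lam = one_factor_approx(rho)
    rho0 = np.outer(lam, lam)
    np.fill_diagonal(rho0, 1.0)                   # approximate matrix rho^(0)
    print(lam)
    print(np.linalg.eigvalsh(rho0).min())         # must be > 0 (positive definiteness check)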
sep error bounds approximations open quantum systems subspace truncation adiabatic onvaree techakesari hendra nurdin february abstract important class physical systems interest practice inputoutput open quantum systems described quantum stochastic differential equations defined underlying hilbert space commonly systems involve coupling quantum harmonic oscillator system component paper concerned error bounds finitedimensional approximations open quantum systems defined hilbert space develop framework developing error bounds time evolution state class quantum systems approximation subspace original initialized latter subspace framework applied two approaches obtaining approximations subspace truncation adiabatic elimination applications bounds physical examples drawn literature provided illustrate results keywords quantum stochastic differential equations open quantum systems approximations error bounds approximation errors introduction quantum stochastic differential equations qsdes developed independently hudson parthasarathy gardiner collett latter less general form former widely used describe models physical open markov quantum systems models describe evolution markovian quantum research supported australian research council techakesari nurdin school electrical engineering telecommunications unsw australia sydney nsw australia email systems interacting propagating quantum field quantum optical field frequently encountered quantum optics optomechanics related fields example quantum optics would cavity qed quantum electrodynamics system single atom trapped inside optical cavity interacts external coherent laser beam impinging optical cavity models subsequently played important role modern development quantum filtering quantum feedback control theory many types quantum feedback controllers proposed literature basis qsdes using measurementbased quantum feedback control coherent feedback control besides qsdes also applied various developments quantum information processing quantum computation technology see various physical systems interest one often deals systems include coupling quantum harmonic oscillator instance typical superconducting circuits interest quantum information processing consist artificial atoms coupled transmission line resonator former typically described using hilbert space latter quantum harmonic oscillator underlying hilbert space space functions real line another example proposed photonic realization classical logic based kerr nonlinear optical cavities built around quantum harmonic oscillator kerr nonlinear medium inside mathematical model quantum devices sufficiently simple often possible simulate dynamics system digital computer assess predicted performance actual device carried simulation carried typically stochastic master equation simulates stochastic dynamics quantum system one output observed via laboratory procedures homodyne detection photon counting see however since possible faithfully simulate quantum system hilbert space often simulations space truncated subspace two approaches often employed approximate quantum system defined space subspace truncation approximation adiabatic elimination also known singular perturbation subspace truncation approximation applied eliminate higher dimensions original hilbert space operator space approximated truncated operator form denotes orthogonal projection projector onto approximate subspace instance quantum harmonic oscillators commonly used space span finite number fock states hand adiabatic elimination often used 
simplify quantum systems comprising components evolve multiple approach faster variables eliminated mathematical model description systems despite ubiquity approximating hilbert spaces quantum systems subspaces simulations quantum systems best authors knowledge appear work tried obtain explicit bounds approximation error joint state system quantum field coupled work develops framework developing bounds error quantum state quantum system described qsde quantum state approximation described another qsde systems initialized state subspace central framework contractive semigroup associated unitary qsdes markov quantum systems error bounds developed adiabatic elimination subspace truncation approximations illustration results applied physical examples drawn literature prelimary results work announced conference paper results presented work significantly beyond particular treates subspace truncation approximation elements proofs omitted error bounds adiabatic elimination developed computability error bounds considered rest paper structured follows section present class open quantum systems associated qsdes describing markovian open quantum systems explicit error bounds subspace truncation approximation markovian open quantum system established section examples provided establish error bounds adiabatic elimination approximation section examples also provided finally concluding remarks close paper section preliminaries notation use let denote adjoint linear operator hilbert space well conjugate complex number denote matrix transposition denote kronecker delta function define linear operator write ker denote kernel ran range often write denote element hilbert space denote algebraic tensor product hilbert spaces subspace hilbert space write denote orthogonal projection operator onto hilbert space write denote linear operator denotes restriction use denote algebra bounded linear operators write notation used denote hilbert space norms operator norms denotes inner product hilbert space linear right slot antilinear left denotes outer product denotes indicator function finally denotes set positive integers open quantum systems consider separable hilbert space symmetric boson fock space multiplicity defined space see details use denote exponential vectors let loc admissible subspace sense contains least simple functions loc space locally bounded functions consider dense domain dense domain exponential vectors span consider open markov quantum system described set linear operators defined hilbert space hamiltonian operator vector coupling operators element iii unitary scattering matrix element sij moreover operators sij adjoints assumed common invariant dense domain description note corresponds number external bosonic input fields driving system bosonic input field described annihilation creation field operators bit bit respectively satisfy commutation relations bit bjs define annihilation process ait creation process ait gauge process bis bjs note processes adapted quantum stochastic processes vacuum sentation products forward differentials dait dait satisfy quantum table dakt dait dajt dakt dait dai bit dtt interpreted vacuum quantum white noise interpreted quantum realization poisson process zero intensity following time evolution markov open quantum system given adapted process satisfying left qsde dut dait sji sji dat quantum stochastic integrals defined relative domain left qsde evolution state vector given paper interested problem approximating system operator parameters open quantum system linear 
operator parameters defined closed subspace unitary consider dense domain operators adjoints assumed common invariant dense domain similar time evolution approximating system given adapted process satisfying left qsde sji dut sji dait quantum stochastic integrals equation defined relative domain similarly evolution state vector given associated semigroups let canonical shift also let denote second quantization denotes fock space note adapted process called contraction unitary cocycle contraction unitary strongly continuous let impose important condition open quantum systems consideration adopted condition contraction cocycle solutions qsde possesses unique solution extends unitary cocycle qsde possesses unique solution let define operator extends contraction cocycle via identity lemma condition operator strongly continuous contraction semigroup generator satisfies dom sji sji note dom dense likewise define operator replacing condition core generators core core condition ensures definition completely determines likewise completely determines sequel make use semigroups associated open quantum systems establishing model approximation error bound several sufficient conditions known guarantee qsde possesses unique solution extends unitary cocycle hilbert space infinitedimensional operator coefficients qsde unbounded see related discussion remark throughout paper assume conditions fulfilled error bounds subspace truncation approximations section consider problem space truncated subspace original operators approximated truncated operators form xph dimension increases moreover condition fact unitary hold immediately assumptions preliminary results assumption dom let ran supposing assumption holds also assume following assumption exists subspace ker hpk kpk kpk kuk htt kuk assumption exists lim moreover lim lim min let present useful lemmas lemma suppose assumption holds holds kuk proof first note definition strongly continuous semigroup dtd assumption ndk solving ode gives kpk kpk second step follows assumption lastp step follows kpk kuk contraction semigroup noticing kuk substituting side kuk taking square root sides equation get kuk repeat application steps establish lemma statement lemma suppose assumption holds holds kuk proof similar lemma using assumption htt solving ode kpm lemma statement established following similar arguments lemma error bounds approximations begin defining min establish error bound two semigroups associated open quantum systems lemma suppose assumptions hold holds kuk proof first note definition strongly continuous semigroup dom since properties assumption write note assumption bounded operator since due assumption generator semigroup hilbert space dom class continuously differentiable functions unique solution exists given thm assumption definition using bounds established lemmas respectively applying assumption kuk result follows substitution identity integration establishes lemma statement let denote dense set simple functions exists sequence constants let proceed derive error bounds approximations subspace truncation lemma suppose assumptions hold let sequence proof first recall admissible subspace contains hence quantum stochastic integrals well defined also recall using cocycle properties condition well definitions identity likewise respectively replaced immediately obtain note bound established lemma fact semigroups contractions kuk bound follows substituting establishes theorem statement corollary suppose assumptions hold sequence constants moreover fixed 
positive integer lim proof recall unitary contraction condition triangle inequality inequality note result follows bound established lemma show recall dense therefore exists kfj moreover exists suppose otherwise corollary statement becomes trivial may choose choosing first followed finally assumption bound established lemma find sufficiently large larger choices establishes corollary statement theorem suppose assumptions hold let consider also consider let positive integer sequence constants let kuk coherent state amplitude kut unitary following bound holds moreover fixed lim remark note stronger result strong convergence uniformly compact time established proposition intervals employing theorem however error bound finite value previously established proof first note since unitary contraction also result follows bound substituting kuk unitary bound following analogous calculations yields leading alternative bound show let fixed suppose may choose since finite sufficiently large choose finally corollary since finite find sufficiently large max choices taking square roots sides since theorem statement holds trivially completes proof discussion error bounds presented theorem order starting error bound right hand side sum three terms first term bound error committed approximating simple function second term bounds error approximating finite sum terms given finally last term gives upper bound magnitude inner product error term first final terms computable however second term difficult note last line use identity however difficult compute involves semigroup acts infinite dimensional space thus alleviate difficulty turn alternative bound first third terms however second term identity derived manner however unlike involves semigroup acts finite dimensional hilbert space thus quantity much easier compute remains construct suitable approximation one way choose locally minimize right hand side equivalently term square root unfortunately although principle possible general challenging computationally intensive optimization problem demonstrate optimization example follow subspace truncation examples optical cavity consider optical cavity coupled single external coherent field used construction photonic logic gates presented let space infinite sequences orthonormal fock state basis basis annihilation creation number operators cavity oscillator defined see satisfying respectively similar examples set span optical cavity described show conditions hold cavity consider span system approximation form hph lph conditions hold immediately finite dimensional note hence assumption holds recall ran see span consider span show assumptions hold optical cavity approximation assumption note thus assumption holds assumption assumption follows defined ker span assumption first note also identities fact hpk hpk kpk kpk notice also note kpk kpk kpk therefore assumption holds defined defined assumption hpm implies htt also lpm therefore htt similarly previous derivation using contraction see assump tion holds defined defined assumption defined see assumption holds finally assumptions hold lemma corollary theorem applied obtain error bounds approximations numerical example optical cavity consider parameters used also consider input field constant amplitude let different values compute error bound established theorem fact simple function table numerical computation error bounds nonlinear optical cavity error bound using find appropriate set cost function choose local minimizer computational simplicity let fix take take 
initialize guess using general purpose unconstrained tion function fminunc matlab local minimizer found cost consider corresponding various values shown table using error bounds recall dimension reduced subspace model consider atom coupled optical cavity coupled single external coherent field let use denote canonical basis vectors also consider denoting normalized fock state system basis vector previous example let span system described following parameters lemma condition holds model consider span system approximation form approximating dynamics harmonic oscillator recall conditions hold immediately finite dimension also see hence assumption holds similar previous example note consider using similar derivation previous example show assumptions hold model approximation assumption note thus assumption holds assumption assumption follows defined ker span assumption note hph also identities fact hpk hpk kpk kpk noticing also note kpk kuk kpk therefore assumption holds defined defined assumption note lpm also thus htt similar previous derivation using contraction see sumption holds defined assumption defined see assumption holds finally assumptions hold lemma corollary theorem applied obtain error bounds approximations error bounds adiabatic elimination approximations open quantum system comprises subsystems evolving two timescales system dynamics approximated eliminating fast variables model description method known adiabatic elimination physics literature singular perturbation applied mathematics literature section establish error bounds type finite dimensional approximation open quantum systems slow subsystem lives subspace let satisfying describe time evolution original markov open quantum system approximated set also let satisfying describe time evolution adiabatic elimination approximation defined subspace setting note conditions hold immediately finite dimensional assume following assumption singular scaling exists operators wij common invariant domain kfj sji wij assumption structural requirements subspace closed subspace exists common invariant domain assumption limit coefficients approximating system operators sji let important result lemma lemma suppose assumptions hold linear operators defined assumption common invariant domain operator unitary let define wij wij introduce additional assumption required obtaining error bound results assumption boundedness operators finite lemma suppose assumptions hold holds kuk moreover also norm continuous kuk kuk continuous nonnegative func tions kuk proof first note definition strongly continuous semigroup since dom properties assumption write since generator semigroup hilbert space dom due assumption class continuously differentiable functions unique solution exists given thm assumption note using assumptions shown identities assumption also note using integration parts similar manner used similarly substituting three identities used assumption kuk result follows fact contraction semi groups moreover also norm continuous satisfy kuk kuk stipulated lemma contractivity follows kuk establishes lemma statement recall set simple functions dense similar previous section let lemma suppose assumptions hold moreover also norm continuous kuk kuk continuous nonnegative functions proof proof follows similar arguments proof lemma using result established lemma corollary suppose assumptions hold kke kke sequence addition also norm continuous kuk kuk continuous nonnegative functions kke moreover holds lim proof proof follows similar arguments proof corollary 
using established lemma theorem suppose assumptions hold let consider also consider let positive integer sequence let kuk coherent state amplitude kut addition also norm continuous kuk kuk continuous nonneg ative functions moreover holds kut lim remark theorem stronger strong convergence uniformly result compact time intervals established theorem adiabatic elimination based theorem without error bounds finite values proof proof follows similar arguments proof theorem using established corollary adiabatic elimination examples elimination harmonic oscillator consider class open quantum systems comprises atomic system coupled harmonic oscillator driven external coherent fields originally presented let hilbert space space infinite sequences similar example let orthonormal fock state basis basis annihilation creation number operators defined see satisfying respectively following choose dense domain let define consider system operators defined sji kfj wij wij bounded operators space consider harmonic oscillator eliminated model forced ground state limit quantum optics process adiabatic elimination optical cavity strong damping limit consider approximation system operators defined sji stress unitary proposition shown conditions hold systems original system satisfies assumption suppose bounded inverse assumption satisfied defined operators also satisfy assumption remains show assumption holds note let let wij note bounded operators hilbert space since wij bounded operator wij identities see assumption holds bounded inverse wij bounded operators defined finitedimensional subspace also bounded operator thus conditions lemma lemma corollary theorem verified model example consider system consisting atom coupled optical cavity cavity uncoupled leg atom driven external coherent field let previous example consider orthonormal fock state basis use denote canonical basis vectors basis let define also define span rotating wave approximation rotating frame reference system described following operators amplitude external coherent field driving cavity uncoupled leg atom consider span cavity oscillator excited state atom eliminated model limit consider approximating system described operators defined easily verified unitary shown systems satisfy conditions see assumption let define span assumption holds defined respect basis also seen assumption holds defined thus remains show assumption holds note thus note relations fact defined subspace bounded operator assumption holds numerical example model consider let compute error bound different values established theorem fact simple function kut similar example kut bounded find appropriate let define cost plto find function local minimizer numerical optimization table numerical computation error bounds elimination approximation example error bound adiabatic computational simplicity fix set note simultaneously optimizing time intervals computationally intensive thus simplify computation optimizing sequentially blocks time intervals time thus optimization done blocks optimization first block initialized optimization result block used initialize optimization next block sequence local minimizer found using matlab given overall optimized cost using error bounds using various values shown table conclusion work developed framework developing error bounds finite dimensional approximations quantum stochastic models defined underlying hilbert spaces possibly unbounded coefficients qsdes framework exploits contractive semigroup associated qdess gives first time error bound 
expressions two types approximations often employed literature subspace truncation adiabatic elimination bounds principle computable vanish limit parameter representing dimension approximating subspace case subspace truncation approximation large scaling parameter case adiabatic elimination goes theory developed applied physical examples taken literature several directions investigation along theme initiated paper devising efficient method computing bounds term subspace truncation kut adiabatic elimination beyond computationally intensive optimization based approach considered herein important tensor network methods recently met lot success efficient simulation one dimensional systems could potentially important purpose also remains question conservatism error bounds could tighter bounds achieved using different set assumptions numerical example adiabatic elimination bound employed rather potentially less conservative latter requires determining whether norm continuous semigroup finding bounding function task general deserves investigation moreover would interesting see exactly solvable qsde models physical system system hilbert space initial states lie subspace original conservatism error bounds assessed authors currently unaware exactly solvable models acknowledgements authors grateful support australian research council discovery project references references hudson parthasarathy quantum ito formula stochastic evolution commun math vol gardiner collett input output damped quantum systems quantum stochastic differential equations master equation phys rev vol wiseman milburn quantum measurement control versity press cambridge belavkin edwards quantum filtering optimal control quantum stochastics information statistics filtering control university nottingham july belavkin guta eds singapore world scientific nurdin james doherty network synthesis linear dynamical quantum stochastic systems siam control vol bouten van handel james introduction quantum filtering siam control vol bouten van handel separation principle quantum control quantum stochastics information statistics filtering control university nottingham july belavkin guta eds singapore world scientific james nurdin petersen control linear quantum stochastic systems ieee trans autom control vol nurdin james petersen coherent quantum lqg control automatica vol kerckhoff nurdin pavlichin mabuchi designing quantum memories embedded control photonic circuits autonomous quantum error correction phys rev vol duan kimble scalable photonic quantum computation interaction phys rev vol mabuchi nonlinear interferometry approach photonic sequential logic appl phys vol gardiner zoller quantum noise handbook markovian nonmarkovian quantum stochastic methods applications quantum optics berlin new york techakesari nurdin error bounds approximations open quantum systems proceedings ieee conference decision control osaka ieee meyer quantum probability probabilists verlag bouten van handel silberfarb approximation limit theorems quantum stochastic models unbounded coefficients funct vol fagnola quantum stochastic differential equations unbounded coefficients probab rel fields vol curtain zwart introduction linear systems theory ser text applied mathematics new york lasiecka manitius differentiability convergence rates approximating semigroups retarded function differential equations siam numer vol ito kappel theorem approximation pdes mathematics computation vol
| 3 |
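Editorial note on the paper above (error bounds for subspace truncation and adiabatic elimination of open quantum systems): its bounds control the distance between the true and the approximated dynamics through the contraction semigroup generated by K = -iH - (1/2) sum_k L_k* L_k. The sketch below does not evaluate those bounds; it only measures numerically how fast the truncated semigroup of a simple driven, damped cavity converges as the Fock-space cutoff n grows, which is the kind of quantity the bounds are designed to dominate. Writing the drive directly into H, the parameter values, the vacuum initial state and the reference dimension 40 are my own illustrative simplifications, not the paper's example.

    import numpy as np
    from scipy.linalg import expm

    def destroy(n):
        # truncated annihilation operator a on span{|0>, ..., |n-1>}
        return np.diag(np.sqrt(np.arange(1, n)), k=1).astype(complex)

    def semigroup(n, t, kappa=1.0, delta=0.5, eps=1.0):
        a = destroy(n)
        H = delta * a.conj().T @ a + eps * (a + a.conj().T)   # illustrative driven-cavity H
        L = np.sqrt(kappa) * a                                # truncated coupling operator
        K = -1j * H - 0.5 * L.conj().T @ L                    # generator of the contraction semigroup
        return expm(t * K)

    t, dim_ref = 2.0, 40
    psi = np.zeros(dim_ref, dtype=complex)
    psi[0] = 1.0                                              # vacuum initial state
    ref = semigroup(dim_ref, t) @ psi
    for n in (5, 10, 20):
        v = np.zeros(dim_ref, dtype=complex)
        v[:n] = semigroup(n, t) @ psi[:n]
        print(n, np.linalg.norm(v - ref))                     # empirical truncation error at time t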
multipair massive mimo relay systems hardware impairments aug ying liu xipeng xue jiayi zhang linglong dai shi jin impairments phase noise quantization errors noise amplification baneful effects wireless communications paper investigate effect hardware impairments multipair massive mimo fullduplex relay systems scheme specifically novel approximate expressions spectral efficiency derived obtain important insights practical design considered system number relay antennas increases without bound propose hardware scaling law reveals level hardware impairments tolerated roughly proportional new result inspires design practical multipair massive mimo relay systems moreover optimal number relay antennas derived maximize energy efficiency finally simulation results provided validate analytical results ntroduction relay system ideally achieve almost twice spectral efficiency achieved traditional scheme since relay transmit receive signals simultaneously however practical implementation relay challenging due severe caused recently massive mimo proposed efficient approach suppress relay systems spatial domain different existing works consider systems deploying ideal hardware components paper consider multipair massive mimo relay system hardware suffers hardware impairments practical systems cost power consumption increase number radio frequency chains order achieve higher energy efficiency lower hardware cost chain use cheap hardware components however hardware particularly prone impairments transceivers quantization errors work supported part national natural science foundation china grant fundamental research funds central universities grant nos corresponding author jiayizhang liu xue zhang school electronic information engineering beijing jiaotong university beijing china dai department electronic engineering tsinghua university beijing china jin national mobile communications research laboratory southeast university nanjing china analog digital converters adcs phase noise although influence hardware impairments mitigated compensation algorithms residual impairments still exist due random hardware characteristics effect hardware impairments massive mimo relay system recently studied focused scheme relay however signal processing complexity scheme much higher scheme implementation massive mimo relay systems therefore scheme attractive practical system design best authors knowledge performance based multipair massive mimo relay systems hardware impairments investigated literature partially due difficulty manipulating products hardware impairments vectors motivated aforementioned consideration natural question whether hardware deployed based multipair massive mimo relay system without sacrificing expected performance gains paper try answer question following contributions analytical approximation multipair massive mimo relay systems hardware impairments derived effect number relay antennas level transceiver hardware impairments investigated hardware scaling law presented show one tolerate larger level hardware impairments number antennas increases analytic proof considered system deployed hardware components sum ses systems compared different levels hardware impairments interesting find system hardware impairments achieve system larger loop interferences finally derive optimal number relay antennas maximize ystem odel consider massive mimo relay system pairs devices two sides communicate single relay devices denoted tai tbi respectively devices could sensors exchange small amount information small 
cell base stations need high throughput links relay equipped antennas antennas used transmission antennas used reception device equipped one receive one transmit antenna addition relay devices assumed work mode transmit receive signals time assume direct communication link pair devices due heavy shadowing large path loss device interfered devices side channel model block fading considered paper means ergodic process static channel realization coherence block realizations blocks independent define guk huk gui hui denote uplink channels tai tbi respectively addition downlink channels tai tbi given gdk hdk gdi hdi respectively furthermore assumed follow independent identically distributed rayleigh fading elements guk huk gdk hdk random variables furthermore expressed sgu dgu shu dhu sgd dgd shd dhd respectively sgu shu sgd shd stand fading elements random variables hand dgu dhu dgd dhd diagonal matrices representing fading kth diagonal elements denoted respectively furthermore let grr denote matrix transmit receive arrays relay due mode row grr grri denotes channel ith receive antenna transmit antennas relay interference channel coefficient ith device kth device note denotes kth device elements grr random variables following complex gaussian distribution respectively hardware impairments shown residual hardware impairments transmitter receiver modeled additive distortion noises proportional signal power thus additive distortion term describes residual impairments receiver relay proportional instantaneous power received signals relay antenna diag wii matrix pcovariance ith diagonalh elementh pthe grrj grrj huj huj guj power constraint device transmit power relay furthermore proportionality coefficient describes level hardware impairments related received error vector magnitude evm note evm common quality indicator signal distortion magnitude defined ratio signal distortion signal magnitude example evm relay defined kxk denotes set channel realizations furthermore lte suggests evm smaller signal transmission time instant devices tai tbi transmit signals xai xbi relay respectively broadcasts processed previously received signal devices first assume xai xbi gaussian distributed signals due mode also receives signal broadcasted devices thus time instant received signal given grr xta xtb xak denotes xbk additive white gaussian noise awgn vector analyze received signal devices time instant relay using simple protocol amplifies previously received signal broadcasts devices therefore transmit signal vector relay given precoding matrix amplification factor broadcasts devices however due hardware impairments chains transmitter actually broadcasts devices pnr proportionality parameters characterizing level hardware impairment transmitter assume antenna power obtain perfect channel state information csi according uplink pilots devices devices obtain csi channel reciprocity due power constraint relay normalized instantaneous received signal power kfak pnr kfgrr relay adopt scheme suitable massive mimo deployment therefore precoding matrix written gdk hdk best knowledge challenging analyze residual loop interference power substituting iteratively however residual loop interference modeled additional gaussian noise due fact loop interference significantly degraded residual loop interference weak applying loop interference mitigation schemes following similar steps approximated gaussian noise source pnr furthermore tai tbi receive combined signal xak nai zai gdi zbi htdi xbk nbi noise nai nbi awgn nai nbi 
respeci tively following discuss analytical result tai corresponding result tbi obtained replacing tai tbi note relay receive signal transmission part keeps silent first time slot received signals relay devices respectively given zai gdi nai simplicity time label omitted following substituting combined received signal zai expressed fhui xbi zai fhuj xbj gdi fguj xaj desired signal interference fgui xai fgrr xak loop interference interference mode gdi fnr nai hardware impairments compound noise use set notation represent devices bothsides relay note one set devices exchange information set directly find zai composed seven terms signal tai desires receive interference due devices signal device loop interference relay interference caused devices due mode distortion noise induced hardware impairments relay compound noise power constraint relay perfect csi relay take advantage massive antennas simple cancellation sic schemes eliminate furthermore interference noise power obtained taking expectation respect interference noise within one coherence block channel fading result tai given rai sinrai sinrai denotes plus noise ratio sinr expressed sinrai gdi fhui gdi gdi git git fgj git fhj git fgrr respectively iii erformance nalysis best authors knowledge exact derivation really difficult herein consider asymptotic scenario large system limit utilizing convexity jensen inequality lower bound rai written rai sinrai based considering devices sides obtain sum multipair massive mimo relay system rsum note remainder paper show analytical results rai since formula rbi symmetric rai following present lemma lemma hardware impairments processing relay approximated hui hui hui gdi gdi hui identical scaling finish proof fulfilling max corollary reveals large level hardware impair ments compensated increasing number antennas relay multipair massive mimo relaying tems furthermore evm relay defined evm considering condition corollary proof please refer appendix easy lemma clear see rai increases means evm increased proportionally number antennas insights thus negligible loss replace gained investigating terms antennas evm antennas respectively first focus evm encouraging result enable reducing interference term caused broadcasting signal power consumption cost multipair massive mimo relay rai increases enlarge values relay system indicates reducing channel following evaluate multipair fading ith device pair however rai decrease massive mimo relay system number enlarge means reducing relay antennas becomes large defined ratio channel fading device pairs except ith device sum total power consumption system pair finding consistent result considering classical architecture antenna furthermore lemma reveals consists connected one chain total power consumption transmit power tbi channel fading tbi system modeled therefore increase transmit power tbi ptotal decrease increase rai find rai increases transmit power transmit power devices increase decreases power chains transmitter becomes large moreover clear see receiver respectively moreover denotes power detrimental effect hardware impairments static circuits term total power rai finally loop interference due power amplifiers devices relay denotes mode also reduce efficiency power amplifier chain thus order show fast hardware impairments considered system given rsum increase maintaining constant rate establish important hardware scaling law following umerical esults corollary section derived results multipair massive corollary suppose hardware impairment parameters mimo 
relay system hardware impairments replaced initial schemes validated simulavalue given scaling exponent tions averaging independent channel samples processing converges similar previous works set normalize limit furthermore without loss generality simply set values loop interferences gdi hui log respectively simulated analytical asymptotic sum plotted function half number antennas relay fig simulation results validate tightness derived approximations moreover fig validates hardware scaling law established corollary grows low levels hardware impairments however curve asymptotically bend toward zero scaling law satisfied note analytical curves plotted fig proof substituting always simulated curves due reason tend utilize large number law derive zero moreover behaves behaves relatively small low order term make numerator denominator omitted thus analytical result little sum approximation approximation approximation simulation number antennas approximation approximation approximation approximation asymptotic limits sum approximation approximation approximation approximation number antennas fig hardware scaling law multipair massive mimo relay systems different number antennas relay level loop interference fig sum multipair massive mimo relay systems hardware impairments different levels loop interference larger corresponding simulation result however even smaller curves approximation simulations close compared scheme signal processing achieve smaller sum however complexity based system relay much lower based system fig shows approximation asymptotic limit levels loop interferences system model also plotted baseline comparison since system utilizes two phases transmit receive signal inherent loop interference exist therefore systems constant fig fig multipair massive mimo relay systems hardware impairments different number antennas relay first observation fig asymptotic limit systems outperforms one systems small moderate levels interference ideal hardware hardware explained half time required mode compared mode interestingly multipair massive mimo relay system hardware impairments larger one ideal hardware small moderate levels loop interference however large value loop interference decreases sum systems moreover gap curves systems increases level hardware impairments approximation function number antennas relay plotted fig similar set clear see decreases due distortion noise caused hardware impairments moreover exists optimal number antennas nopt reach corresponding maximum nopt improved increasing however nopt increasing reduce since addition power consumption chains static circuits dominate performance onclusions paper investigate multipair massive mimo relay systems hardware impairments effect investigated deriving approximate expression addition optimal number relay antennas derived maximize also find massive mimo system hardware impairments outperforms system level loop interference small moderate finally useful hardware scaling law established prove hardware deployed relay due huge brought massive antennas ppendix rewrite sinrai gdi fgrr sin rai gdi gdi gdi gdi gdi gdi gdi gdi fhuj gdi fguj gdi gdi gdi fhui moreover following approximations kfak kfgrr kfk substituting complete proof simplifications eferences wong schober wang key technologies wireless systems cambridge university press ngo suraweera matthaiou larsson multipair relaying massive arrays linear processing ieee sel areas vol xie peng wang poor channel estimation relay networks presence synchronization errors ieee trans 
signal vol according law large numbers zhang chen shen xia spectral energy efficiency relay systems massive mimo ieee sel areas vol may gdi fhui kgdi khui zhang dai jin performance analysis massive mimo systems rician fading channels ieee kgdi hui huj gdi gdj gdi fhuj sel areas vol jun zhang dai sun wang spectral efficiency massive mimo systems adcs ieee commun kgdi hui guj gdi hdj gdi fguj vol zhang dai zhang wang achievable rate rician mimo channels transceiver hardware khdi kgui impairments ieee trans veh vol hoydis kountouris debbah massive mimo systems hardware energy efficiency estimation hui huj khuj gdi gdj capacity limits ieee trans inf theory vol khui khui kgdi zhang dai matthaiou masouros jin spectral efficiency massive mimo linear guj receivers proc ieee icc xia zhang hardware impairments khui khui aware transceiver massive mimo relaying ieee trans signal vol chen lei zhang yuen mimo relaying khui techniques physical layer security ieee trans wireless vol gdi matthaiou debbah massive mimo arbitrary arrays hardware scaling laws hui gdi design ieee trans wireless vol ngo larsson marzetta energy spectral effit gdi ciency large multiuser mimo systems ieee trans vol apr gdi fhui jin liang wong gao zhu ergodic rate analysis multipair massive mimo relay networks ieee trans gdi fgrr wireless vol mar zhang letaief throughput energy efficiency gdi analysis small cell networks base stations ieee trans wireless vol may gdi
| 7 |
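The rate analysis summarized in the entry above rests on a Jensen-type bound: because the map t -> log2(1 + 1/t) is convex, an ergodic rate of the form E[log2(1 + SINR)] can be lower-bounded by moving the expectation onto the interference-plus-noise-to-signal ratio. The short Python check below is not taken from the paper; the exponential model for that ratio and every variable name are illustrative assumptions, and the snippet only verifies the direction of the inequality numerically.

```python
import numpy as np

rng = np.random.default_rng(0)
# Monte-Carlo check of the Jensen-type bound used in rate analyses of this kind:
# E[log2(1 + 1/X)] >= log2(1 + 1/E[X]), since t -> log2(1 + 1/t) is convex on (0, inf).
# X is an assumed stand-in for the interference-plus-noise-to-signal ratio.
X = rng.exponential(scale=1.0, size=200_000)
exact_rate  = np.mean(np.log2(1.0 + 1.0 / X))
jensen_bound = np.log2(1.0 + 1.0 / np.mean(X))
print(f"E[log2(1+1/X)] = {exact_rate:.4f} >= log2(1+1/E[X]) = {jensen_bound:.4f}")
```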
new compiler curry sprite sergio antoy andy jost aug computer science portland state university oregon antoy ajost abstract introduce new native code compiler curry codenamed sprite sprite based fair scheme compilation strategy provides instructions transforming declarative programs certain class imperative deterministic code outline salient features sprite discuss implementation curry programs present benchmarking results sprite operationally complete implementation curry preliminary results show ensuring property incur significant penalty keywords functional logic programming compiler implementation operational completeness introduction language curry syntactically small extension popular functional language haskell seamless combination functional logic programming concepts gives rise hybrid features encourage expressive abstract declarative programs one example feature functional pattern functions invoked sides rules intuitive way construct patterns features puts patterns even footing expressions curry patterns composed refactored like code encapsulation used hide details illustrate function get defined finds values associated key list pairs get key key value value operation generates lists containing anonymous variables indicated place holders expressions used function listappending operator used side rule get operation produces pattern matches list containing thus second argument get list list containing pair key value repeated variable material based upon work partially supported national science foundation grant key implies constraint case ensures values associated given key selected similar means may identify keys key key returns key given list example one many features make curry appealing choice particularly desired properties program result easy describe set instructions obtain result difficult come paper describes work towards new curry compiler call sprite sprite aims first operationally complete curry compiler meaning produce values source program within time space constraints compiler based compilation strategy named fair scheme sets rules compiling program form graph rewriting system abstract deterministic procedures easily map instructions programming language section introduces sprite high level describes transformations performs section describes implementation curry programs imperative code section contains benchmark results section describes curry compilers section addresses future work section contains concluding remarks sprite curry compiler sprite native code compiler curry like compilers sprite subjects source programs series transformations begin external program used convert curry source code desugared representation called flatcurry sprite transforms custom intermediate representation call icurry following steps laid fair scheme sprite converts icurry graph rewriting system implements program system realized lowlevel language provided compiler infrastructure library llvm code optimized lowered native assembly ultimately producing executable program sprite provides convenience program scc coordinate whole procedure icurry icurry stands imperative form curry programs suitable translation imperative code icurry inspired flatcurry popular representation curry programs successful variety tasks including implementations prolog flatcurry provides expressions resemble functional program may include local declarations form let blocks conditionals form case constructs possibly nested although strategy made explicit case expressions flatcurry declarative icurry purpose 
represent program convenient imperative form convenient since sprite ultimately implement imperative language imperative languages local declarations conditionals take form statements expressions limited constants calls subroutines possibly nested icurry provides statements local declarations conditionals provides expressions avoid constructs directly translated expressions imperative language icurry including implicit highlevel features functional patterns expressed choices choice archetypal function indicated symbol defined following rules use choices made possible part duality choices free variables language feature expressed choices implemented free variables vice versa algorithms exists convert one meaning free choose convenient representation sprite finally flatcurry strategy icurry made explicit guided definitional tree structure made stepwise case distinctions combines rules function illustrate zip function defined zip zip zip zip corresponding definitional tree shown might appear icurry zip case case zip evaluating icurry understood evaluate side efficiently spineless tagless stg instance task properties programs complicate matters evaluate zip first argument must reduced form purely functional language root node form always data constructor symbol assuming partial application implemented object else computation fails programs two additional possibilities must considered leading extended case distinction zip case implied implied case infrastructure executing kind pattern matching efficiently means dispatch tables described shortly note two things first need icurry spell extra cases generated compiler second presence calls expanded notion computation allows additional node states sprite hosts computations graph whose nodes taken four classes constructors functions choices failures constructors functions provided source program choices failures denoted arise incompletely defined operations head function returns head list example head rewrites simple replacement therefore propagates failure needed arguments roots choices execute special step called steps lift needed positions prevent completion pattern matches result choice two expressions step shown zip zip zip pattern match proceed first argument zip matching rule function definition one exist choice symbol disallowed sides want choose choice would reconsidered avoid losing potential results transformation pulls choice outermore position producing two new subexpressions zip zip evaluated fact shared result illustrates desirable property node duplication minimal localized involves technicalities address later complete details due extra cases additional node types especially unusual mechanics steps chose develop sprite new evaluation machine scratch rather augment existing one stg property evaluation nested expressions fundamentally changes computation existing functional strategies difficult apply sprite implemented novo evaluation mechanism runtime system based fair scheme topic next section implementation section describe implementation curry programs imperative code sprite generates llvm code assume readers familiar rather presenting generated code describe implemented programs terms familiar concepts appear directly llvm way reader think terms unspecified target language one similar assembly implements concepts facilitate following description indicate parentheses similar feature exists programming language target language values strongly typed types include integers pointers arrays structures functions programs arranged 
compilation units called modules contain symbols symbols visible modules control access one marked internal static external extern control flow within functions carried branch instructions include unconditional branches goto conditional branches indirect branches goto target every branch instruction address label call stack provided manipulated call return instructions enter exit functions respectively calls normally executed fresh stack frame target language also supports explicit tail recursion sprite puts good use expression representation expressions evaluated program graphs consisting labeled nodes zero successors node belongs one four classes discussed previous section constructors functions node labels equivalent symbols defined source program failures choices labeled reserved symbols successors references nodes number successors equals arity corresponding symbol fixed compile time partial applications written form sprite implements graph nodes heap objects layout heap object shown fig label implemented pointer static info table described later sprite emits exactly one table symbol curry program successors implemented pointers heap objects evaluation evaluation sprite repeated execution rewriting steps implemented two interleaved activities replacement replacement produces new graph previous one replacing subexpression matching side rule corresponding side instance might replaced replacement implemented overwriting heap object root subexpression replaced key advantage destructive update pointer redirection def required rewrite step reusing heap object also advantage saving one memory allocation deallocation per replacement requires every heap object capable storing node whatever arity sprite meets requirement providing heap heap object info pointer payload info table step function fig heap object layout objects fixed amount space capable holding small number successors nodes successors would fit space payload instead contains pointer larger array approach simplifies memory management heap objects since size single memory pool suffices arities known compile time runtime checks needed determine whether successor pointers reside heap object consists cascading case distinctions root symbol expression matched culminate either replacement patter match subexpression fair scheme implements according strategy guided definitional trees encoded icurry case distinction exemplified assumes expression matched rooted function symbol thus node needed complete match labeled function symbol expression rooted node evaluated labeled symbol node evaluated target function called step function performs pattern match replacement curry function gives rise one target function pointer stored associated info table see fig operationally amounts evaluating nested case expressions similar one shown sprite implements mechanism call tagged dispatch approach compiler assigns symbol tag compile time tags sequential integers indicating four classes discussed earlier node belongs three lowest tags reserved functions choices failures functions tag constructors tag additionally indicates constructor type symbol represents see works consider following type definition data abc abc comprises three constructors order fixed order would distinguish sprite tags sequential numbers starting integer follows reserved tags tag one less tag one less tag values unique within type throughout program first constructor type instance always tag following rules easy see every case discriminator node tagged one consecutive integers number 
constructors type compile case expression sprite emits jump table transfers control code block appropriate handling discriminator tag example block handles failure rewrites failure block handles choices executes shown schematically fig general impossible know compile time constructors may encountered program runs jump table must complete functional logic program define branch constructor function completely defined branch constructor rewrite failure implement tagged dispatch sprite creates code blocks labels constructs static jump table containing addresses executes indirect branch instructions based discriminator tag table figure shows fragment code approximates case distinction occurs variable list type two constructors nil cons five labeled code blocks handle five tags may appear case discriminator static array label address implements jump table example assumes function choice failure nil cons tags take values zero four respectively jump table contains one extra case depicted discriminator function step function discriminator root label applied many times necessary discriminator class longer function discriminator range function choice failure nil cons fig schematic representation sprite tagged dispatching mechanism distinction list type completeness consistency sprite aims first complete curry compiler informally complete means program produces intended results source program precisely especially infinite computations arbitrary value eventually produced given enough resources difficult problem computation obtaining one result could block progress computation would obtain another result example following program result obtained couple steps existing curry compilers fail produce loop loop main loop loop fair scheme defines complete evaluation strategy creates work queue containing expressions might produce result times expression head queue active meaning evaluated initially work queue contains goal expression whenever places choice root expression expression forks removed queue two alternatives added whenever expression produces value removed queue avoid endlessly working infinite computation program rotates active computation end work queue every often sprite guarantees expression ignored forever hence potential result lost proof correctness compiled programs provided abstract formulation compiler fair scheme domain correctness property executable program produces values intended corresponding source program delicate point raised step may duplicate clone choice following example shows cloned choices static void entry goto discriminator goto execute pull tab rewrite failure rewrite process nested case expression fig illustrative implementation case expression shown code fragment would appear body step function zip variable discriminator refers case discriminator label entry indicates entry point case expression seen single choice thus computation reduces choice right alternative also reduce clone choice right alternative likewise left alternative computations obeying condition called consistent xor xor xor example step applied choice leads duplication evaluating either alternative topmost choice consistent strategy must recognize remaining choice already made instance evaluating xor value left alternative left alternative already selected obtain xor keep track clones fair scheme annotates choices identifiers two choice nodes identical identifiers represent choice fresh identifiers assigned new choices arise replacement steps copy existing identifiers every expression work queue owns 
fingerprint mapping choice identifiers values set left right either fingerprint used detect remove inconsistent computations work queue possible syntactically steps case statement one could implement defining appropriate righthand side rule choice branch fact major competing implementation curry exactly disadvantage approach choice identifers must appear citizens program propagated steps using additional rules encoded source program believe efficient embed choice identifiers choice nodes implementation detail process steps dynamically section compares two approaches greater detail performance section present set benchmark results programs previous used compare three implementations curry mcc pakcs shall use perform direct comparisons since compares favorably others mention relative performance others compiles curry haskell uses glasgow haskell compiler ghc produce executables ghc shown produce efficient code like sprite uses evaluation strategy unlike sprite form work queue hence incomplete faced programs instead builds tree containing values program executes lazily interleaved steps search algorithm major highlight purely functional programs compile straight haskell thus incurring overhead due presence unused logic capabilities available https program palifunpats lastfunpats last permsortpeano permsort expvarfunpats half reverse reverseuser reversebuiltin reverseho primes sharenondet primesbuiltin primespeano queensuser queens takpeano tak type sprite fig execution times set functional programs taken benchmark suite times seconds final column reports negative positive factor sprite relative system configuration intel cpu ubuntu linux sprite enjoys property little room improve upon ghc functional programs beneficiary exponentially effort goal functional programs therefore simply measure minimize penalty running sprite programs utilize logic features emits haskell code simulates cases room improvement since example sprite avoid simulation overhead directly implementing logic features functional programs execution times set programs taken benchmark shown fig results arranged order greatest improvement greatest degradation execution time striking feature clear division functional deterministic subsets consistent expectations average sprite produces relatively slower code functional programs relatively faster code ones calculate averages geometric mean since method strongly influenced extreme results either direction functional subset runs average slower sprite compared figures published downloaded https indicate pakcs mcc run slower respectively programs take results indication functional parts sprite parts responsible rewriting memory management optimization although ghc counterparts still compare favorably mainstream curry compilers note sprite currently perform optimizations deforestation unboxing optimizations icurry could potentially impact benchmark results inspecting output ghc reveals tak program incidentally sprite optimized ghc fullyunboxed computation see llvm stacks rewrote program converted llvm using clang language llvm compiled native code measured execution time found ghc time therefore see fundamental barrier reducing sprite penalty zero program perhaps others reason optimistic implementing optimizations source icurry levels without fundamentally changing core sprite yield substantive improvements sprite programs subset fig shows sprite produces relatively faster code faster average published comparisons indicate compared pakcs slower mcc faster programs first thought seeing 
result sprite might enjoy better algorithmic complexity completed work reduce sprite complexity processing choices perhaps thought work surpassed set test selecting program dominated choice generation running different input sizes without recent modifications sprite results shown fig contrary expectation sprite exhibit strikingly similar complexity fit exponential curve excess slope coefficients differ less better explanation difference constant factor exists steps sprite faster could account factor believe best explanation overhead simulating haskell alluded end sect see need look detail uses helper functions sect generate choice identifiers thisid idsupply leftsupply idsupply idsupply rightsupply idsupply idsupply purpose functions ensure choice identifiers never reused type choice identifier idsupply opaque purposes function might produce choice implicitly extended accept supply function example program using linux time command whose resolution seconds fig complexity analysis permsort execution times shown range problem sizes horizontal axis indicates number integers sort method bool main xor false true compiled main let leftsupply rightsupply leftsupply rightsupply xor choice thisid false true clearly conversion haskell introduces overhead point simply see compiled code involves five calls helper functions present source program reflect cost simulating purelyfunctional language sprite fresh choice identifiers created reading incrementing static integer compared approach fewer parameters passed fewer functions called similar approach could used haskell implementation curry would rely impure features adding another layer complexity perhaps interfering optimizations contrast sprite approach extreme simplicity executes machine instructions remote possibility computation could exhaust supply identifiers since type integer finite uses list structure choice identifiers suffer potential shortcoming certainly choice identifiers could made arbitrarily large increases memory usage overhead better approach believe would compact set identifiers garbage collection idea whenever full collection occurs sprite would renumber choice identifiers service time fall contiguous range potential optimization illustrates benefits total control implementation since case makes modifying garbage collector viable option related work several curry compilers easily accessible notably pakcs mcc compilers implement lazy evaluation strategy based definitional trees executes needed steps differ control strategy selects order alternatives choice executed pakcs mcc use backtracking attempt evaluate values left alternative choice turning right alternative backtracking simple relatively efficient incomplete hence benchmark compilers may interesting understand differences backtracking assess efficiency sprite contrast control strategy uses hence computations executed much closer sprite compiler translates curry source code haskell source code processed ghc mainstream haskell compiler compiled code benefits variety optimizations available ghc section contains detailed comparison sprite exist functional logic languages whose operational semantics abstracted needed narrowing steps graph rewriting system ideas could applied almost changes implementation languages comparison graph machines functional languages problematic best despite remarkable syntactic similarities curry syntax extends haskell single construct free variable declaration semantic differences profound purely functional programs whose execution produces result 
curry terminate haskell sect furthermore functional logic computations must prepared encounter free variables hence situations goals significantly differ future work compilers among complex software artifacts often bundled extensions additions optimizers profilers tracers debuggers external libraries application domains databases graphical user interfaces given reality countless opportunities future work plans time choose one extensions additions listed optimizations mentioned earlier unboxing integers appealing would improve benchmark thus overall perceived performance compiler may contribute marginally efficiency realistic programs extensions additions aids tracing debugging execution external libraries may better contribute acceptance work conclusion presented sprite new native code compiler curry sprite combines best features existing curry compilers similar sprite strategy based hence inherent loss completeness compilers based backtracking pakcs mcc similar mcc sprite compiles imperative target language hence amenable machine optimization differently existing compilers sprite designed ensure operational values expression eventually produced given enough computational resources sprite main intermediate language icurry represents programs graph rewriting systems described implementation curry programs imperative code using concepts target language graph nodes represented memory heap objects efficient mechanism called tagged dispatch used perform pattern matches finally discussed mechanisms used sprite ensure completeness consistency presented empirical data set benchmarking programs benchmarks reveal sprite competitive leading implementation curry references antoy definitional trees kirchner levi editors proceedings third international conference algebraic logic programming pages volterra italy september springer lncs antoy correctness tplp antoy hanus declarative programming function patterns int symp program synthesis transformation lopstr pages london september springer lncs antoy hanus overlapping rules logic variables functional logic programs twenty second international conference logic programming pages seattle august springer lncs antoy hanus functional logic programming comm acm april antoy johannsen libby needed computations shortcutting needed steps middeldorp van raamsdonk editors proceedings international workshop computing terms graphs vienna austria july volume electronic proceedings theoretical computer science pages open publishing association antoy jost compiling functional logic language fair scheme int symp program synthesis transformation lopstr pages madrid spain dpto systems informaticos computation universidad complutense madrid hanus reck new compiler curry haskell proc international workshop functional constraint logic programming wflp pages springer lncs brassel huch tighter integration functional logic programming aplas proceedings asian conference programming languages systems pages berlin heidelberg caballero editors toy multiparadigm declarative language version available http clang language family frontend llvm available http echahed janodet graph rewriting systems technical report imag available ftp glasgow haskell compiler available http gill launchbury peyton jones short cut deforestation proceedings conference functional programming languages computer architecture pages acm glauert kennaway papadopoulos sleep dactl experimental graph rewriting language prog hanus editor curry integrated functional logic language vers available http hanus flatcurry 
intermediate representation curry programs available http hanus functional logic programming theory curry programming logics essays memory harald ganzinger pages springer lncs hanus editor pakcs portland aachen kiel curry system available http peyton jones compiling haskell program transformation report trenches programming languages pages springer peyton jones santos compilation transformation glasgow haskell compiler functional programming glasgow pages springer lattner adve llvm compilation framework lifelong program analysis transformation proceedings international symposium code generation optimization runtime optimization cgo pages san jose usa mar extra variables eliminated functional logic programs electron notes theor comput toy multiparadigm declarative system proceedings tenth international conference rewriting techniques applications rta pages springer lncs lux editor muenster curry compiler available http marlow peyton jones making fast curry higherorder languages proceedings ninth acm sigplan international conference functional programming icfp pages new york usa acm partain nofib benchmark suite haskell programs functional programming glasgow pages springer peyton jones salkild spineless tagless proceedings fourth international conference functional programming languages computer architecture pages acm
| 6 |
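The pull-tab step described in the Sprite entry above, lifting a choice out of a needed argument position so that pattern matching can continue on both alternatives, can be modelled in a few lines. The Python sketch below is a toy model under stated assumptions (Sprite itself emits LLVM code and overwrites heap objects in place); the Node class and the pull_tab name are hypothetical, but the rewrite it performs is the one quoted in the text: zip(x ? y, z) becomes zip(x, z) ? zip(y, z), with the node for z shared between the two alternatives rather than copied.

```python
class Node:
    """A labeled graph node with a list of successor nodes."""
    def __init__(self, label, *succ):
        self.label, self.succ = label, list(succ)
    def __repr__(self):
        return self.label if not self.succ else f"{self.label}({', '.join(map(repr, self.succ))})"

CHOICE = "?"

def pull_tab(root, pos):
    """Lift the choice found at argument position `pos` of `root` one level up."""
    choice = root.succ[pos]
    assert choice.label == CHOICE
    left_args  = [choice.succ[0] if i == pos else s for i, s in enumerate(root.succ)]
    right_args = [choice.succ[1] if i == pos else s for i, s in enumerate(root.succ)]
    # Overwrite the root node in place, mirroring the destructive update of heap objects.
    root.label, root.succ = CHOICE, [Node(root.label, *left_args), Node(root.label, *right_args)]

expr = Node("zip", Node(CHOICE, Node("x"), Node("y")), Node("z"))
pull_tab(expr, 0)
print(expr)  # -> ?(zip(x, z), zip(y, z)); the z node is shared, not duplicated
```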
feb stability cohomology vanishing groups marcus chiffre lev glebsky alexander lubotzky andreas thom abstract several open questions groups common form groups approximated asymptotic homomorphisms symmetric groups sym case dimensional unitary groups hyperlinear case case question asked respect metrics norms paper answers time one versions showing exist presented groups approximated respect frobenius norm tij strategy show higher dimensional cohomology vanishing phenomena implies stability every homomorphism unitary groups close actual homomorphism combined existence results certain central extensions lattices simple lie groups groups act high rank bruhattits buildings satisfy needed vanishing cohomology phenomenon thus stable introduction since beginning study groups groups studied looking orthogonal unitary representations natural relax notion representation require group multiplication preserved little mistakes suitable metric first variations topic appeared already work turing later ulam chapter theme knows many variations ranging approximations introduced gromov approximations appeared theory operator algebras questions related connes embedding problem see details case approximation properties groups studied relative particular class metric groups let countable group let sequence metric groups metrics say exists separating sequence chiffre glebsky lubotzky thom asymptotic homomorphisms sequence maps becomes multiplicative sense lim also separating bounded away zero see section precise several examples situation studied literature see survey sym symmetric group set normalized hamming distance case approximated groups called sofic see arbitrary group equipped metric case approximated groups called weakly sofic depending particular restricted family groups interesting connection group theory recent advances found iii unitary group hilbert space metric induced normalized hilbertschmidt norm case approximated groups sometimes called hyperlinear metric induced operator norm case groups groups called see metric induced unnormalized norm also called frobenius norm speak groups context note approximation properties local sense many group elements relations considered stark contrast uniform situation starting work kazhdan much better understood see longstanding problems albeit mathematics ask group exists approximated either settings setting gromov question whether groups similar question context iii closely related connes embedding problem indeed existence group whould answer connes embedding problem negative kirchberg asked whether stably embeddable matrix algebras implying positive answer approximation asymptotic representations problem sense group recent breakthrough results imply amenable group approximated sense see paper want introduce conceptually new technique allows provide groups approximated sense show presented groups approximated unitary groups frobenius norm techniques apply directly context iii say anything conclusive connes embedding problem since norms iii related normalization constant believe provide promising new angle attack start explaining strategy notation let state main results article theorem exist finitely presented groups groups construct central extensions cocompact lattices simple lie groups take certain central extensions large enough prime prove theorem use notion stability group called every asymptotic homomorphism necessarily separating one close true homomorphism see definition one easily deduces must residually basic observation suggests way groups group 
stable residually method failed far two reasons prove stability directly even case stability proven see references therein well proven way completely asymptotic homomorphism shown close genuine homomorphisms thus groups already approximated shown stable far main technical novelty paper following theorem provides condition group without assuming priori group approximated theorem let finitely presented group every unitary representation asymptotic homomorphism frobenius norm asymptotically close sequence homomorphisms appearance vanishing second cohomology groups may look surprising sight inf fact one translate question chiffre glebsky lubotzky thom approximating asymptotic homomorphism true homomorphism question splitting exact sequence norm submultiplicative case frobenius norm normalized norm kernel splitting problem abelian see section vanishing second cohomology abelian means splitting suitable exact sequences hence relevant question stability also interesting observe second cohomology already appeared work kazhdan context uniform compact amenable groups concept related asymptotic representations abeit essentially recall classical kazhdan property equivalent statement unitary representations say group every unitary representation theorem simply says every group frobenius stable thus prove theorem groups residually seminal work garland extended others see section details shows every large enough cocompact arithmetic lattices simple lie groups rank every fact variant used give examples groups property groups linear potentially also residually using exotic buildings rank see want prove existence groups catch least result tits asserts exotic buildings dimension standard ones coming lie groups provide lattices residually work around point imitate result method proof deligne deligne showed lattices simple lie groups central extensions residually raghunathan extended also cocompact lattices spin examples became famous toledo used provide examples fundamental groups algebraic varieties residually last section explain deligne method applied also cocompact lattices certain lie groups along way use solution congruence subgroup problem lattices provided rapinchuk tomanov way get central extensions certain cocompact lattices residually anymore finally easy spectral sequence argument shows central extension group also thus central extensions asymptotic representations abovementioned lattices provide group promised theorem along way section also provide examples residually groups generated groups currently unclear maybe amenable even solvable groups moreover open problem decide class groups closed central quotients crossed products compare results article part phd project named author notation given set let denote free group let denote normal subgroup generated let group generators relations use convention let denote complex group unitary matrices identity matrix denoted recall collection subsets implies implies iii holds say existence ensured axiom choice view nonprincipal additive probability measure subsets taking values giving value subsets throughout whole paper given statement use wording holds given bounded sequence real numbers denote limit along formally limit unique real number unbounded sequences limit takes value extended real line adopt landau notation given two sequences real numbers write exists cyn exists third sequence real numbers unitarily invariant norms recall norm called unitarily invariant chiffre glebsky lubotzky thom important examples norms operator norm frobenius norm also 
known unnormalized norm normalized norm given denotes adjoint matrix called selfadjoint matrix called unitary recall basic facts unitarily invariant norms set matrices write positive eigenvalues proposition let unitarily invariant norm holds positive matrices proposition let unitary unitarily invariant norms proof unitary invariance may assume diagonal matrix denote diag let one readily sees thus diag unitary proposition second property important submultiplicativity property turns banach algebra operator norm frobenius norm enjoy property normalized norm ultraproducts need ultraproduct banach spaces metric groups respectively first let sequence banach spaces consider product banach space bounded sequences closed subspace nullsequences lim asymptotic representations ultraproduct banach space name suggests ultraproduct banach space banach space norm induced moreover banach algebras hilbert spaces ultraproduct let family groups equipped metrics case subgroup lim direct product normal metric ultraproduct note contrast banach space require sequences bounded worth noting albeit relevant purposes metric lim min induces metric relevant following setting let sequence natural numbers consider family matrix algebras mkn equipped unitarily invariant submultiplicative norms usually omit index denote norms let equipped metrics induced norms consider ultraproduct banach space mkn metric ultraproduct submultiplicativity norms see lim lim bounded sequences mkn thus left multiplication induces left action unitary invariance norms see action isometric similarly right action right multiplication another left action conjugation isometric chiffre glebsky lubotzky thom asymptotic homomorphisms section let presented group let class groups equipped metrics map uniquely determines homomorphism also denote definition let let maps defect def max distance dist max homomorphism distance homdist inf dist definition sequence maps called asymptotic homomorphism def mainly concerned dimensional asympotic representations asymptotic homomorphisms respect class unitary groups dimensional hilbert spaces equipped metrics coming family unitarily invariant norms class unitary groups metrics coming denoted uop ufrob uhs might also need quantify definition let homomorphism map def literature many inequivalent notions almost asymptotic homomorphisms one would precise notion asymptotic homomorphism could called local discrete asymptotic homomorphism local since interested behaviour set relations compare uniform situation discrete family homomorphisms indexed natural numbers definition let two sequences called asymptotically equivalent dist asymptotic homomorphism equivalent sequence genuine representations call trivial liftable come two central notions study paper notion stability approximability class metric groups asymptotic representations definition group called asymptotic homomorphisms equivalent sequence homomorphisms lim homdist def definition presented group called capproximated exists asymptotic homomorphism lim mainly concerned ufrob ufrob stability paper convenience often speak context definition group called residually homomorphism following proposition see evident nevertheless central observation work proposition let finitely presented group must residually particular class consists unitary groups finitely presented group residually finite section basic lemma important part statement lemma depend lemma constant groups metric maps holds def proof determine note def chiffre glebsky lubotzky thom thus using get def letting 
done group cohomology convenience recall one construction group cohomology primarily need second cohomology group unitary representation completeness give general let group let abelian group together left action consider chain complex set functions together coboundary operator also let thus ker one checks cohomology recall given extension groups abelian action induced conjugation action fixing section quotient map map solution straightforward check exactly extension splits homomorphism asymptotic representations assume countable banach space norm separating family max easy see respect family space one even take separating family acts isometries map bounded examples non groups part aim provide large class groups let start giving examples groups stable show group giving concrete examples asymptotic representations equivalent genuine representations also exploit latter example provide example group see section voiculescu proved matrices exp representation precisely def see also inequalities conclude def also representation particular neither uop worth noting actually uhs see quantitative proof chiffre glebsky lubotzky thom turn attention group see original reference generators satisfy equation also well known hard check generators satisfy aba indeed follows easily description hnnextension hand recall following proposition let residually finite group satisfy also satisfy proof indeed order conjugate order even thus power conclude commute mal cev theorem immediately obtain following consequence corollary let unitary matrices satisfy also satisfy last corollary also proven directly linear algebra methods see quantitative aspects aproximability studied corollary order show sequence pairs unitary matrices satisfy equation far satisfying equation study approximation properties goes back focus approximation normalized norm going prove following result theorem group theorem direct consequence following lemma lemma exist proof omit index write let exp consider hilbert space orthonormal basis previous example plan compose direct sum two ways restriction well restriction act approximately asymptotic representations multiplication letter stands square stands cube construct let start detailed construction span span use ordered base resp appears let resp restriction resp observe diag diag exp exp exp exp let unitary form unitary claim indeed obtain entails claim consider unitary given matrix let diag diag exp exp exp exp hard check chiffre glebsky lubotzky thom direct calculations show since lemma follows example finitely generated finite group note example provides homomorphism ultraproduct frob image clearly clearly residually since construction elements satisfy sense artefact every group group quotient seems quite likely construction enough show indeed even though proof assertion spelled full detail appears construction shows note follows work kropholler residually solvable hence see diminishing defect asymptotic representations section contains key technical novelty article associate element mkn asymptotic representation prove defect diminished sense equivalent asymptotic representation better defect precisely def def assumptions section section following presented group sequence natural numbers family submultiplicative unitarily invariant norms denoted asymptotic representation respect metrics associated recall ultraproduct notation introduced section mkn recall since submultiplicative acts multiplication asymptotic representation induces homomorphism asymptotic representations level group thus acts 
mind also want following section natural surjection particular sequence every def particular sequence lift note given section sequence lift exists section case lemma holds def proposition unitaries def letting get desired map cohomology class asymptotic tion want element associated end mkn def def otherwise next proposition collection basic properties maps proposition let maps satisfy following equations furthermore every chiffre glebsky lubotzky thom proof def ghk ghk ghk ghk proves equation second line equations immediate fact last assertion note since follows lemma def thus follows using equation def def follows every bounded sequence sequence map map cocycle sense explained section next corollary states map map cocycle equivalent picture hochschild cohomology turns calculations natural also work map even though suppress notation keep mind depend lift def corollary map respect isometric action proof given ghk ghk ghk used homomorphism proposition call cocycle associated sequence asymptotic representations proposition assume represents trivial cohomology class exists map satisfying furthermore choose proof equation immediate equation follows proposition follows proposition last claim possibly need alter little note also indeed proves whence two follow thus replacing see equations still correction asymptotic representation let let mkn lift exp def unitary every sequence maps exp def note since proofs proposition lemma make use two basic inequalities hold exp exp exp exp simple consequences exp triangle inequality submultiplicativity norm chiffre glebsky lubotzky thom proposition notation every def precisely def proof let unitary invariance submultiplicativity get exp def def exp def since bounded sequence def exp def result follows follows asymptotic representation def def prove defect actually def lemma def proof let def let let whence follows def submultiplicativity follows def show def amounts following calculations def def def def asymptotic representations equation fact submultiplicativity norm implies bounded proof last asymptotic representation reach desired conclusion def def let reference sake formulate result properly theorem let finitely presented group let asymptotic representation respect family submultiplicative unitarily invariant norms assume associated trivial exists asymptotic representation dist def def def proof adopt notation assertion follows proposition let written reduced word iteration lemma using takes unitary values unitarily invariant see def def since done converse theorem also valid following sense proposition let finitely presented group let asymptotic representations respect family submultiplicative unitarily invariant norms suppose dist def def def associated trivial particular sufficiently close homomorphism trivial proof def nothing prove let assume case let induced maps get section explained beginning section note sequences induce map limit def def otherwise bullet assumptions essentially bounded element chiffre glebsky lubotzky thom prove follow easily satisfy first note follows second bullet assumptions every thus def def def def result follows dividing def possible taking limit clear need large classes groups general vanishing results second cohomology banach hilbert space proven subject next section let mention alternative approach used prove theorem asymptotic representations extensions mentioned section second cohomology characterizes extensions abelian kernel picture coboundaries correspond splitting extensions thus theorem corollary show 
improved equivalent splitting certain extension connection asymptotic representations extensions seen directly without going computations idea actually used prove theorem since approach illustrative shows instance clearly submultiplicativity plays sketch proof retain assumptions section introduce notation letting def similarly saw totic representation induces homomorphism asymptotic representations lemma actually implies existence induced homomorphism observe existence dist def def def theorem equivalent existence lift also see map following commutative diagram pullback combining two observations easily follows improved bottom row latter diagram splits since submultiplicative group actually abelian indeed whence claim follows hence explained section extension corresponds element conclude improved exactly theorem little one prove real banach space real hilbert space case isometric existence equivariant homomorphism chiffre glebsky lubotzky thom remark note approach also works part submultiplicative case however group abelian second cohomology much less tractable general alternative approach problem hand rather conceptual elegant also proof chose present detail merits cocycle computed directly cases associated computed explicitly gives explicit expression cohomology vanishing examples groups recall generally compactly generated group kazhdan property cohomology every unitary representation hilbert space see proof background information consider groups higher cohomology groups vanish higher dimensional vanishing phenomena studied various articles see example propose following terminology definition let group called vanishes unitary representations call strongly kazhdan classical property see discussions related higher dimensional analogues property central proof application open mapping theorem vanishing cohomology hilbert space implies cocycles coboundaries control norms explained following proposition corollary use terminology introduced equation proposition let let countable group let unitary representation assume every finite set exist finite set constant every cocycle element proof topology basic open sets given since map linear bounded surjective open mapping theorem applies see asymptotic representations words proves claim need fact vanishes universally set bound chosen universally unitary representations consequence easy diagonalisation argument corollary let countable group every finite set finite set constant unitary representations cocycles element also observe following extension proposition proposition consider short exact sequence groups strongly also nkazhdan particular applies finite proof spectral sequence enough show vanishes vanishes fix set vectors hilbert space induced action unitary representation conclude vanishes view previous section natural ask exists group vanishes algebras equipped action automorphisms able answer questions however one show vanish group makes positive answer somewhat unlikely view respect right translation action indeed let proper metric cocycle boundary element higher rank lattices finally section provides examples every groups nkazhdan results essentially known recall detail convenience let local residue class ring integers unique maximal ideal let simple group assume group acts associated bruhattits building information theory buildings see chiffre glebsky lubotzky thom latter contractible pure simplicial complex dimension acts transitively chambers topdimensional simplices let uniform lattice discrete cocompact subgroup also torsion free always 
achieved replacing index subgroup quotient simplicial complex particular group presented use following theorem essentially appears work ballmann building previous work garland theorem every natural number exists following holds strongly particular recall equivalent kazhdan property well known property every quite plausible also true context preceding theorem note contains torsion free group proposition implies prove one assume torsion free theorem dimensional hilbert spaces theorem seminal paper garland general case stated last paragraph section page work deduced theorem theorem asserts garland desired cohomology vanishing follows sharp estimates spectral gap local laplacians laplacians proper links complex estimates called also curvature given lemma lemma altogether theorem proven method estimates garland used also recently let give reader notational warning say rank following common practice nowadays mean group denoted follows dimension associated building equal garland refers rank tits system notation denotes hence equal natural wonder happens analogous real case worth noting already sln large enough thus sln fails large enough similarly note natural generalization higher rank lattices real lie groups formulated carefully maybe excluding explicit list unitary representations question sln least large asymptotic representations proofs main results order proofs theorem theorem need show presented groups frobeniusstable residually main result follows corollary constructions section groups consider groups asymptotic representations respect frobenius norm mkn hilbert space techniques section applied defect every asymptotic representation diminished start completing proof theorem theorem let finitely presented group proof let mentioned ultraproduct mkn hilbert space acts space invertible isometries unitaries frob vanishes corollary together bounds equation constant asymptotic representations respect choose associated quantity homdist def map note asymptotic representation equality holds equivalent sequence homomorphisms sequence strictly positive real numbers let sequence natural numbers need prove sequences representations quantity tends set homomorphisms compact since continuous def maximizes evidently asymptotic representation thus proposition theorem asymptotic representation dist def def def chiffre glebsky lubotzky thom particular also representation follows homdist homdist def furthermore maximality homdist def homdist def putting estimates together get homdist def homdist def def case def really representation homdist conclude since chosen maximal conclude representations def def remark note proof still valid one replaces submultiplicative norm assumption suitable cohomology vanishing assumption instance gives condition stability respect operator norm one could assume vanishing second cohomology seems prove existence group properties task already occupy remaining sections hilbert space case remark note theorem together proposition imply virtually free groups fact seems cumbersome establish directly sake reference formulate following dichotomy immediate corollary theorem explicitly corollary let finitely presented group either residually finite techniques section rely submultiplicativity norm thus directly applied normalized norm worth noting though since get following immediate corollary theorem corollary let finitely presented group let sequence maps def asymptotic representations defect measured respect either equivalent sequence homomorphisms proof let norm question def def words 
asymptotic representation respect theorem representations preceding corollary provides quantitative information connes embedding problem indeed presented nonresidually group uhs upper bound quality approximation terms dimension unitary group needless say would interesting decide groups uhs finite groups section present examples presented groups hence proof theorem note examples presented section residually section show central extensions cyclic group residually strongly every holds proposition hence may combine results section results previous section obtain examples groups residually construction imitate construction deligne central extensions arithmetic lattices real lie groups see also work raghunathan central extensions constructed uniform lattices spin examples later used toledo famous work showing existence algebraic varieties fundamental groups short readable exposition deligne argument given examples analogues deligne examples original proof actually works assumed algebraic group isotropic hence got lattices time congruence subgroup property known cases nowadays argue general lattices along lines chiffre glebsky lubotzky thom let standard quaternion algebra set arbitrary unital commutative ring hamiltonian division algebra whereas dqp second isomorphism basically consequence fact congruence solved modulo consider also standard involution let canonical hermitian form consider note simply group formed entries associated map preserves form functor absolutely almost simple simply connected group hence type see embedding one show isomorphic simply connected compact lie group type namely quaternionic unitary group let rational prime since group becomes split group isomorphic group sits diagonally lattice however since compact yields also lattice standard fact lattices cocompact basically since admits basis neighborhoods identity consists torsion free subgroups concrete case identify group proved rapinchuk tomanov group congruence subgroup property let explain means adelic language group subgroup two topologies follows arithmetic topology arithmetic subgroups subgroups commensurable serve fundamental system neighborhoods identity second congruence topology take basis neighborhoods identity arithmetic groups contain natural number one principal congruence subgroups ker completion respect arithmetic denote topology completion respect congruence asymptotic representations topology canonical surjective homomorphism result rapinchuk tomanov combined work says case isomorphism topological groups strong approximation theorem isomorphic denotes restricted product usual subring restricted product primes particular get result prasad see also deodhar deligne says every universal central extension denotes group roots unity cyclic group inverse images order denote quotient map extension residually claim group contains proposition every finite index subgroup unique subgroup index particular residually finite proof prove lift arithmetic topology follows arithg central extension topology subgroups commetic topology serve fundamental system neighborhoods mensurable completion clear identity denote exists central extension topological groups quotient say quotient homomorphism ker exactly intersection ultimate goal show index subgroups ker would show residually chiffre glebsky lubotzky thom observe maps onto kernel finally set group central extension kernel isomorphic moreover also see natural sends diagonal map hence factors homomorphism shows central extension splits subgroup note since perfect applies 
result going back moore split groups general case asserts universal central extension splits case kernel order basically since groups roots unity hence conclude proves part shows kernel map completion realised subgroup completion realised subgroup order hence every contains index subgroup center index subgroup residually proves theorem proposiin conclusion since tion corollary proof theorem acknowledgments last named authors supported erc consolidator grant second named author supported erc visiting hebrew university jerusalem third author supported erc nsf bsf last author wants thank david fisher fruitful discussions back march idea cohomological obstruction stability asymptotic representations found later independently second third author grateful pierre pansu especially andrei rapinchuk useful remarks references thank isaac newton institute cambridge hospitality workshop approximation deformation quasification supported epsrc grant asymptotic representations part program curvature group actions cohomology references peter abramenko kenneth brown buildings graduate texts mathematics vol springer new york theory applications goulnara arzhantseva asymptotic approximations finitely generated groups extended abstracts fall free groups trends math res perspect crm vol springer cham goulnara arzhantseva liviu almost commuting permutations near commuting permutations funct anal uri bader piotr nowak cohomology deformations topol anal werner ballmann jacek property automorphism groups polyhedral cell complexes geom funct anal gilbert baumslag donald solitar nonhopfian groups bull amer math soc oren becker alexander lubotzky andreas thom stability invariant random subgroups available bachir bekka pierre harpe alain valette kazhdan property new mathematical monographs vol cambridge university press cambridge bruce blackadar eberhard kirchberg generalized inductive limits math ann armand borel stable real cohomology arithmetic groups ann sci norm sup armand borel nolan wallach continuous cohomology discrete subgroups representations reductive groups mathematical surveys monographs vol american mathematical society providence kenneth brown cohomology groups graduate texts mathematics vol new york corrected reprint original marc burger narutaka ozawa andreas thom ulam stability israel math john carrion marius dadarlat caleb eckhardt groups quasidiagonal funct anal alain connes classification injective factors cases ann math marcus narutaka ozawa andreas thom operator algebraic approach inverse stability theorems amenable groups available https pierre deligne extensions centrales non finies groupes acad sci paris french english summary vinay deodhar central extensions rational points algebraic groups bull amer math soc jan dymara tadeusz januszkiewicz new kazhdan groups geom dedicata chiffre glebsky lubotzky thom cohomology buildings automorphism groups invent math ruy exel terry loring almost commuting unitary matrices proc amer math soc tobias fritz state spaces math phys howard garland curvature cohomology discrete subgroups groups ann math lev glebsky almost commuting matrices respect normalized hilbertschmidt norm available https lev glebsky luis manuel rivera sofic groups profinite topology free groups journal algebra mikhael gromov endomorphisms symbolic algebraic varieties journal european mathematical society karsten grove hermann karcher ernst ruh jacobi fields finsler metrics compact lie groups application differentiable pinching problems math ann anna gundert uli wagner eigenvalues random 
complexes israel math david kazhdan israel math peter kropholler groups groups cohomological dimension two comment math helv calvin moore group extensions adelic linear groups inst hautes sci publ math nikolay nikolov jakob schneider andreas thom remarks finitarily approximated groups available https izhar oppenheim vanishing cohomology property groups acting weighted simplicial complexes groups geom dyn narutaka ozawa mikael yasuhiko sato elementary amenable groups quasidiagonal geom funct anal pierre pansu formules matsushima garland pour des groupes agissant sur des espaces des immeubles bull soc math france french english french summaries vladimir pestov hyperlinear sofic groups brief guide bull symbolic logic vladimir platonov andrei rapinchuk algebraic groups number theory pure applied mathematics vol academic press boston gopal prasad deligne topological central extension universal adv math gopal prasad andrei rapinchuk computation metaplectic kernel inst hautes sci publ math florin von neumann algebra finite baumslag group embeds hot topics operator theory theta ser adv vol theta bucharest madabusi raghunathan torsion cocompact lattices coverings spin math ann asymptotic representations corrigendum torsion cocompact lattices coverings spin math ann andrei rapinchuk congruence subgroup problem algebraic groups dokl akad nauk sssr russian english soviet math dokl walter rudin functional analysis international series pure applied mathematics new york aaron tikuisis stuart white wilhelm winter quasidiagonality nuclear ann math andreas thom examples hyperlinear groups without factorization property groups geom dyn domingo toledo projective varieties finite fundamental group inst hautes sci publ math george tomanov problem anisotropic algebraic groups number fields reine angew math alan turing finite approximations lie groups annals mathematics ulam collection mathematical problems interscience tracts pure applied mathematics interscience publishers new dan voiculescu asymptotically commuting finite rank unitary operators without commuting approximants acta sci math szeged dave witte morris lattice torsion free subgroup finite index deligne june informal discussion university chicago andrzej property kazhdan constants discrete groups geom funct anal dresden germany address universidad san luis address glebsky hebrew university israel address alexlub dresden germany address
| 4 |
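To make the defect notion from the entry above concrete: for Z^2 presented by a single commutator relation, the Frobenius-norm defect of a pair of unitaries (A, B) is the unnormalised Hilbert-Schmidt norm of A B A^{-1} B^{-1} - I. The numpy sketch below is illustrative only; it evaluates this defect for a Voiculescu-type shift/clock pair (the function name and the chosen sizes are assumptions of this example) and shows that the defect decays roughly like 2*pi/sqrt(n).

```python
import numpy as np

def commutator_defect(n):
    """Frobenius-norm defect of the relator a b a^{-1} b^{-1} for the shift/clock pair of size n."""
    omega = np.exp(2j * np.pi / n)
    S = np.roll(np.eye(n), 1, axis=0)            # cyclic shift: S e_k = e_{k+1 mod n}
    O = np.diag(omega ** np.arange(n))           # clock matrix: O e_k = omega^k e_k
    r = S @ O @ S.conj().T @ O.conj().T          # value of the relator under (a, b) -> (S, O)
    return np.linalg.norm(r - np.eye(n), 'fro')  # unnormalised Hilbert-Schmidt (Frobenius) norm

for n in (10, 100, 1000):
    print(n, commutator_defect(n))               # decays roughly like 2*pi/sqrt(n)
```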