| text (string, lengths 16–1.15M) | label (int64, 0–10) |
|---|---|
submitted annals statistics prediction error model search xiaoying tian harris feb department statistics stanford university estimation prediction error linear estimation rule difficult data analyst also use data select set variables construct estimation rule using selected variables work propose asymptotically unbiased estimator prediction error model search additional mild assumptions show estimator converges true prediction error rate number data points estimator applies general selection procedures requiring analytical forms selection number variables select grow exponential factor allowing applications data also allows model misspecifications requiring linear underlying models one application method provides estimator degrees freedom many discontinuous estimation rules like best subset selection relaxed lasso connection stein unbiased risk estimator discussed consider prediction errors work extension errors low dimensional linear models examples best subset selection relaxed lasso considered simulations estimator outperforms cross validation various settings introduction paper consider homoscedastic model gaussian errors particular feature matrix considered fixed response noise level considered known fixed note mean function necessarily linear prediction problems involve finding estimator fits data well naturally interested performance predicting future response vector generated mechanism mallows provided unbiased estimator prediction error estimator linear msc subject classifications primary secondary keywords phrases prediction error model search degrees freedom sure xiaoying tian harris matrix independent data often referred hat matrix recent context unrealistic data analyst use data build linear estimation rule words depends case still hope get unbiased estimator prediction error article seek address problem examples theory apply following context model selection data analyst might use techniques select subset predictors build linear estimation rules 
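The select-then-fit rules discussed here — choose a subset M of predictors using the data, then take the projection onto the column space of the selected submatrix X_M as the (now data-dependent) hat matrix — can be sketched as follows. This is a minimal illustration, not the paper's procedure: the marginal-correlation selection rule, threshold, and dimensions are all made up for the example.

```python
import numpy as np

def selected_hat_matrix(X, support):
    """Projection onto the column space of the selected submatrix X_M."""
    XM = X[:, support]
    # H_M = X_M (X_M^T X_M)^+ X_M^T
    return XM @ np.linalg.pinv(XM.T @ XM) @ XM.T

rng = np.random.default_rng(0)
n, p = 50, 10
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] + rng.standard_normal(n)

# toy selection rule: keep variables whose marginal correlation with y is large
support = np.flatnonzero(np.abs(X.T @ y) / n > 0.5)
H = selected_hat_matrix(X, support)
y_hat = H @ y          # the linear estimation rule built from the selected model

assert np.allclose(H @ H, H)   # H is a projection ...
assert np.allclose(H, H.T)     # ... and symmetric, as assumed in the paper
```

Because the support depends on y, the map y ↦ H_{M(y)} y is only piecewise linear (and in general discontinuous in y), which is exactly what makes naive prediction-error formulas break down.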
techniques include principled methods like lasso tibshirani best subset selection forward stepwise regression least angle regression efron heuristics even combination selection step simply project data onto column space submatrix consists columns use estimation rule specifically selection rule projection matrix onto column space case selected lasso known relaxed lasso solution meinshausen general range collection subsets assume hat matrix depends data sense abstraction part paper study prediction error paper want estimate prediction error err kynew ynew several major methods estimating efron penalty methods akaike information criterion aic add penalty term loss training data penalty usually twice degrees freedom times stein unbiased risk estimator stein provides unbiased estimator estimator smooth data estimation rules use perturbation techniques approximate covariance term general estimators prediction error model search nonparametric methods like cross validation related bootstrap techniques provide risk estimators without model assumption methods like assume fixed model specifically degrees freedom defined fixed stein unbiased risk estimator sure allows risk estimation almost differentiable estimators addition computing sure estimate usually involves calculating divergence difficult explicit form special cases considered works zou tibshirani taylor computed degrees freedom lasso estimator lipschitz general estimators form depends might even continuous data thus analytical forms prediction error difficult derive tibshirani mikkelsen hansen nonparametric methods like cross validation probably ubiquitous practice cross validation advantage assuming almost model assumptions however klement shows cross validation inconsistent estimating prediction error high dimensional scenarios moreover cross validation also includes extra variation different validation set different fixed setup work efron also points methods like aic sure offer substantially better accuracy 
compared cross validation given model believable work introduce method estimating prediction errors applicable general model selection procedures examples include best subset selection prediction errors difficult estimate beyond orthogonal matrices tibshirani general require analytical forms main approach apply selection algorithm slightly randomized response vector similar holding validation set cross validation distinction split feature matrix construct unbiased estimator prediction error using holdout information analogous validation set cross validation note since would select different model estimator unbiased prediction error however perturbation small repeat process multiple times randomization averages get asymptotically unbiased consistent estimator prediction error original target estimation moreover since estimator model based also enjoys efficiency cross validation fact prove mild conditions selection procedure xiaoying tian harris estimator converges true prediction error rate automatically implies consistency estimator estimator hand converges rate fixed hat matrix compared estimator pays small price protection manipulations choosing hat matrix linear estimation rules organization rest paper organized follows section introduce procedure unbiased estimation slightly different target achieved first randomizing data constructing unbiased estimator prediction error slightly different estimation rule address question randomization affects accuracy estimator estimating true prediction error clear respect amount randomization derive upper bounds bias variance section propose optimal scale randomization would make estimator converge true prediction error since unbiased estimator constructed section uses one instance randomization reduce variance estimator averaging different randomizations section propose simple algorithm compute estimator averaging different randomizations also discuss condition estimator equal sure estimator sure difficult compute terms 
analytical formula simulation estimator easy compute using relationship prediction error degrees freedom also discuss compute search degrees freedom term used tibshirani refer degrees freedom estimators model search finally include simulation results section conclude discussions section method estimation first assume homoscedastic gaussian model model selection algorithm assume fixed often use shorthand assume finite collection models potentially interested definition models quite general refer information extract data common model described introduction prediction error model search subset predictors particular interest value observation maps set also inverse image induces partition discuss partition section however instead using original response use randomized version case takes selected variables note space variable selection fixed using select model define prediction errors analogous defined ynew kynew subscript denotes amount randomization added note although randomization noise added selection integrates randomization thus random prediction error err defined corresponds case set section show get unbiased estimator introduce unbiased estimator first introduce background randomization randomized selection might seem unusual use model selection actually using randomization model selection fitting quite common common practice splitting data training set test set form randomization although stressed split usually random thus using random subset data instead data model selection training idea randomization model selection new field differential privacy uses randomized data database queries preserve information dwork particular additive randomization scheme discussed tian taylor work discover additive randomization allows construct vector independent model selection independent vector analogous validation set data splitting address question effect randomization prove err close small mild conditions selection procedures words since unbiased estimator goes bias err 
diminish well details see section addition section also provides evidence simulations xiaoying tian harris unbiased estimation construct unbiased estimator first construct following vector independent note construction also mentioned tian taylor using property gaussian distributions calculating covariance easy see independent thus independent selection event state first result constructs unbiased estimator theorem unbiased estimator suppose homoscedastic gaussian model err unbiased proof first notice let note first define following estimator err claim err unbiased prediction error conditional formally prove err kynew see first rewrite kynew prediction error model search consider conditional expectation err note equalities use decomposition well fact comparing easy see moreover marginalizing unbiased easy see err fact using proof theorem even stronger result unbiasedness err unbiased prediction error marginally remark err also unbiased conditional selected event err prediction error formally kynew err easy see err err resemblance usual simple form err formula prediction error estimation usual correction term degrees freedom estimator additional term helps offset larger variance randomization investigate effect randomization section particular interested difference err small variance function estimator err simple intuition effects randomization estimation prediction error since randomized response vector use model selection close randomization scale small intuitively closer err decreases hand independent vector xiaoying tian harris use construct estimator prediction error variant small thus clear choice seek find optimal scale unbiased estimation introduced section place assumptions hat matrix selection procedure however section restrict hat matrices constructed selected columns design matrix much restriction since hat matrices probably interest formally restrict selection procedure selects important subset variables without loss generality assume surjective thus 
number potential models choose finite moreover map induces partition space particular assume different models choose easy see assume int measure lebesgue measure assume hat matrix constant matrix partition particular hmi common matrix probably projection matrix onto column space spanned subset variables formally assume assumption assume submatrix selected columns easy see symmetric moreover also assume select many variables include model rank matrix rank less number variables specifically prediction error model search assumption rank furthermore assume grows following rate log number columns assumption requires none models large however size grow polynomial rate penalized regression problems choices penalty parameter solution sparse studied negahban also assumption allows grow exponential factor number data points thus allows applications high dimensional setting also assume model selection procedure reasonable accuracy identifying underlying model assumption suppose satisfies small constant assumption assumes subspace spanned good representation underlying mean require subspace namely allow model misspecifications remark context sparse linear models assume support assuming normalized column length log log xiaoying tian harris thus assumption close placing condition like analogous equation log conditions show following bias theorem variance theorem clear bias variance regard choice discuss detail section proofs theorems uses well known results extreme value theories introduced fact selection bias bias err performed randomized version fact source bias however small perturbations resulting bias small well formally following theorem theorem suppose assumptions satisfied bounded bias err err universal constant small constant defined assumption essential proof theorem performance estimation rule resistant small perturbations true assumptions introduced beginning section formally lemma suppose hat matrix form assumptions satisfied universal constant small constant 
defined assumption first expectation taken second expectation taken lemma easy prove theorem prediction error model search proof first notice thus proof lemma relies following lemma also section proofs used bound variance err lemma lemma deferred section lemma value suppose hat matrix takes hmi total number potential models choose value may depend assume satisfies assumptions log variance section discuss variance estimator previously discussed beginning section intuitive increase decreases establish variance err respect first quantitative results variances err state result variances estimators provide baseline comparison increase variance due model selection procedure formally suppose hat matrix constant independent data estimator unbiased prediction error moreover xiaoying tian harris lemma jection matrix suppose constant var furthermore assume var proof lemma uses following lemma whose proof defer section lemma let fixed matrix var prove lemma proof var var var var last equality per assumption easy see also projection matrix finally since easy deduce second conclusion lemma states variation order compared following seek establish inflated variance err theorem gives explicit upper bound variance err respect fact simply order formally lemma inspired talk given professor lawrence brown although author able find formal proof reference prediction error model search theorem suppose assumptions satisfied err var compared variance pay price allows hat matrix dependent data particularly choose estimator prediction error err diminishing variance consistent prove theorem use lemma stated section proof first notice thus therefore var var err first using decomposition conditional variance var var var var note last inequality used lemma well independence relationships furthermore assumption var xiaoying tian harris moreover per lemma var log number potential models finally since would choose models size less assumption rate specified assumption conclusion theorem choice 
establishing theorem theorem combine results summarize following corollary corollary suppose assumptions satisfied bias var furthermore choose err proof first part corollary straightforward combination theorem theorem moreover choose err easy see optimal rate strikes balance bias variance exactly offer guidance choice practice properties applications section show diminishing variances chosen properly however err computed using one instance randomization since err variance reduced aggregate different randomizations furthermore following section show uniform minimum variance unbiased marginalization err umvu estimators prediction error conditions prediction error model search variance reduction techniques umvu estimators first reduced troduce following lemma shows variance err assumption lemma following estimator unbiased err furthermore smaller variance var var err lemma easily proved using basic properties conditional expectation practice approximate integration repeatedly sampling taking averages specifically algorithm provides algorithm computing err algorithm algorithm computing err input initialize err draw compute compute use equation compute err err err return err expectation err smaller variances since err easy deduce corollary err also converges err rate least proper scaling furthermore show estimators umvu estimators parameter space contains ball lemma parameter space contains ball err umvu estimators proof without loss generality assume density respect lebesgue measure density respect xiaoying tian harris lebesgue measure proportional exp however since exponential family sufficient statistics moreover parameter space contains ball sufficient complete thus taking unbiased estimator integrating conditional complete sufficient err statistics umvu estimators relation sure estimator section reveal equal sure estimator prediction error estimator err parameter space contains ball first notice prediction error although might discontinuous actually smooth data see 
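A simplified sketch of the randomize-then-estimate idea and of the averaging algorithm described above: draw Gaussian randomization ω with variance ασ², run selection on y* = y + ω, and use the vector y − ω/α, which is independent of y* in the Gaussian model, as a built-in holdout for unbiased error estimation; averaging over several draws of ω reduces variance. The selection rule, the exact bias-correction term, and all parameter values below are illustrative simplifications, not the paper's estimator verbatim.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, sigma, alpha = 100, 8, 1.0, 0.5
X = rng.standard_normal((n, p))
mu = 3.0 * X[:, 0]                        # true mean (linear here, not required)

def mu_hat(y_star):
    """Toy select-then-project rule; selection and fitting both use y_star."""
    support = np.flatnonzero(np.abs(X.T @ y_star) / n > 0.8)
    XM = X[:, support]
    H = XM @ np.linalg.pinv(XM.T @ XM) @ XM.T
    return H @ y_star

def err_hat(y, n_rand=20):
    """Average the randomized unbiased estimator over n_rand draws of omega."""
    ests = []
    for _ in range(n_rand):
        omega = rng.normal(scale=np.sqrt(alpha) * sigma, size=n)
        y_star = y + omega             # randomized response, drives selection
        y_perp = y - omega / alpha     # independent of y_star (Gaussian case)
        # E||y_perp - mu_hat||^2 = E||mu - mu_hat||^2 + n*sigma^2*(1 + 1/alpha),
        # so subtracting n*sigma^2/alpha estimates E||y_new - mu_hat||^2
        ests.append(np.sum((y_perp - mu_hat(y_star)) ** 2) - n * sigma**2 / alpha)
    return float(np.mean(ests))

y = mu + rng.normal(scale=sigma, size=n)
print(err_hat(y))    # estimates the prediction error of the randomized rule
```

The independence of y + ω and y − ω/α follows from Cov(ε + ω, ε − ω/α) = σ² − ασ²/α = 0 for jointly Gaussian noise, which is the key property the construction exploits.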
note due smoothness summation finite sum smooth therefore theory use stein formula compute estimate prediction error note estimator would depend complete sufficient statistics exponential family parameter space contains ball thus also umvu estimator lemma uniqueness umvu estimators sure estimator conclude err however sure estimator quite difficult compute regions may complex geometry explicit formulas hard derive mikkelsen hansen moreover difficult even use samplers approximate integrals since sets might hard describe integrals evaluate making computationally expensive provides unbiased estimator much lower contrast err computational cost need sample time average major computation compute err prediction error model search involved model practice choose number samples less number data points computation involved even less cross validation prediction error model selection one key message work estimate prediction error estimation rule even used model selection procedure construct hat matrix practice however need priori information compute several methods consistent estimation low err dimensional setting simply use residual sum squares divided degrees freedom estimate high dimensional setting problem challenging various methods derived including reid sun zhang tian also want stress prediction error defined work prediction error assumes fixed setup mallows sure stein prediction errors discussed efron good estimator prediction error allow evaluate compare predictive power different estimation rules however cases might interested prediction errors prediction errors measured new dataset xnew ynew xnew ynew xnew case assuming observe new feature matrix xnew interested prediction error errout xnew xnew model selection procedure depends data analogous define errout xnew xnew want point place assumption feature matrix sampled specifically need assume xnew sampled distribution rather condition newly observed matrix xnew distinction cross validation assumes xiaoying tian harris 
rows feature matrix samples distribution assumption may satisfied practice low dimensional setting able construct unbiased estimator errout lemma suppose rank assume linear model underlying coefficients assuming homoscedastic model dout err unbiased errout xnew xnew proof lemma analogous theorem noticing xnew xnew xnew lemma provides unbiased estimator errout linear function bound difference errout errout might need assume conditions similar introduced beginning section case might still hope matrices close projection matrices almost satisfy assumptions thus intuitively errout errout close estimator dout good estimator errout simulations err see setting performance errout comparable cross validation however setting estimation errors remains challenging problem seek address scope work search degrees freedom close relationship insample prediction error degrees freedom estimator fact consistent estimator prediction error err get consistent estimator degrees freedom prediction error model search framework stein unbiased risk estimator estimation rule err cov coordinate almost differentiable stein showed covariance term equal cov sum covariance terms properly scaled also called degrees freedom cov however many cases analytical forms hard compute none cases computation divergence feasible special zou moreover discontinuous consideration work mikkelsen hansen showed correction terms account discontinuities general correction terms analytical forms hard compute intuitively due search involved constructing larger degrees freedom treats hat matrix fixed adopt name used tibshirani call search degrees freedom circumvent difficulty computing providing asymptotically unbiased estimator err formally defined using discussion section err choose notice approach specific particular model search procedures involved constructing thus offers unified approach compute degrees freedom satisfying appropriate assumptions section illustrate flexibility computing search degrees freedom best 
subset selection explicitly computable formula xiaoying tian harris prediction error estimates may also used tuning parameters example model selection procedure associated regularization parameter find optimal minimizes prediction error min ynew expectation taken ynew shen shows model tuning criterion yield adaptively optimal model achieves optimal prediction error tuning parameter given advance using relationship easily see type criterion equivalent aic criterion using definition degrees freedom analogously also propose bic criterion log bic yang points compared aic criterion bic tends recover true underlying sparse model recommends sparsity major concern simulations work propose method risk estimation class select estimate estimators one remarkable feature method provides consistent estimator prediction error large class selection procedures general mild conditions demonstrate strength provide simulations two selection procedure various setups datasets two estimators ols estimator best subset selection relaxed lasso denote particular selected best subset selection lasso fixed respectively using original data lagrangian forms best subset selection lasso fixed written min best subset selection lasso thus showing good performances simulation estimator prediction error model search believe good performance would persist nonconvex optimization problems simulation always marginalize different randomizations reduce variance specifically use comparisons use algorithm compute err following simulations compare bias variances estimator cross validation well estimator err parametric bootstrap method proposed efron particular ensure fairness comparison use cross validation simulations simulations prediction errors exceptions comparing estimator err section cross validation estimating prediction errors establish known truth compare use mostly synthetic data synthetic datasets generated diabetes dataset following simulations call estimator additive due additive randomization 
used estimation cross validation abbreviated true prediction error evaluated sampling since access true underlying distribution assume variance unknown estimate ols residuals setting use methods reid estimate relaxed lasso estimator perform simulation studies prediction error degrees freedom estimation relaxed lasso estimator unless stated otherwise target prediction error estimation prediction error err kynew ynew according framework sure stein degrees freedom estimator defined cov target estimation first study performance prediction error estimation prediction error estimation following describe data generating distribution well parameters used simulation xiaoying tian harris feature matrix simulated covariance matrix normal entries correlation generated sparse linear model snr snr snr ratio sparsity fit lasso problem level noise noise starts enter lasso path negahban choose parameter defined taken approximately compare performances estimators different settings take sparsity since dense signal situation take setting setting randomization parameter see figure settings err provides unbiased estimator small variance remarkably notice variance estimator comparable dotted black lines standard error true prediction error estimated sampling probably best one hope clearly outperforms cross validation performance err comparable parametric bootstrap estimator sparse scenario although parametric bootstrap seems extreme values estimator also performs slightly better dense scenario panel figure dense signal situation model selected lasso often misspecified suspect situation parametric bootstrap overfits data situation causing slight bias downwards estimator always biased take account degrees freedom used model search hand cross validation upward bias prediction error however bias two fold first extra randomness new feature matrix cause prediction error higher however comparing panel figure see signal dense panel cross validation much larger bias dimension prediction error 
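The simulation setup described above — a correlated Gaussian feature matrix, a sparse linear model, and noise scaled to hit a target signal-to-noise ratio — can be sketched as follows. The specific values of n, p, the sparsity k, the correlation ρ, and the SNR are illustrative stand-ins, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, k, snr, rho = 200, 100, 5, 2.0, 0.3   # illustrative values

# equicorrelated Gaussian design: off-diagonal correlation rho
cov = rho * np.ones((p, p)) + (1 - rho) * np.eye(p)
X = rng.multivariate_normal(np.zeros(p), cov, size=n)

# k-sparse coefficient vector; noise level chosen to achieve the target SNR
beta = np.zeros(p)
beta[:k] = 1.0
signal = X @ beta
sigma = np.sqrt(signal.var() / snr)
y = signal + rng.normal(scale=sigma, size=n)

print(signal.var() / sigma**2)   # equals snr by construction
```

With data generated this way, the true prediction error of any fitted rule can be evaluated by fresh sampling of the noise, which is how the "truth" lines in the figures are obtained.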
model search additive bootstrap additive bootstrap additive bootstrap additive bootstrap fig comparison different estimators different snr red horizontal line true prediction error estimated simulation dashed black lines denoting standard deviation higher panel suggests cross validation might susceptible model misspecifications well less sparse signals model selected lasso stable consistent causing cross validation behave wildly even leave one observation time provides unbiased contrast four settings estimator err estimator small variance phenomenon persists vary penalty parameter grid varying see figure cross validation error always overestimates prediction error moreover amount estimation highly depends data generating distribution panels figure snr difference sparsity figure figure using dimensions seek control extra randomness using different validation set however change sparsity level alone huge impact cross validation estimates prediction error curve cross validation also xiaoying tian harris truth addtive prediction error prediction error truth addtive fig estimation prediction errors different cross validation always biased upwards however bias depends data generating distribution hugs kinky due bigger variance however scenarios err true prediction error degrees freedom section carry simulation study estimate degrees freedom relaxed lasso estimator take predictors diabetes dataset efron feature matrix include interaction terms original ten predictors positive cone condition violated predictors efron zou use response vectors compute ols estimator synthetic data generated choose different ratios figure shows estimates degrees freedoms method naive compared truth computed naive estimate sampling naive estimator always underestimate degrees freedom taking account inflation degrees freedom model search however estimator defined provides unbiased estimation true degrees freedom relaxed lasso estimator prediction errors finally test unbiasedness proposed 
estimator section prediction error compare cross validation setting prediction error model search truth addtive degrees freedom fig comparison estimates degrees freedom cross validation naive different xiaoying tian harris additive additive cross validation refig sample prediction error err spectively respectively section target prediction error errout xnew xnew relaxed lasso estimator nonzero set lasso solution still abbreviate estimator additive compare prediction error cross validation see figure estimator proposed section roughly unbiased prediction error performance comparable cross validation settings slightly larger variance however pointed section estimator assume assumptions underlying distribution feature matrix best subset selection estimator originally proposed picking model size best subset selection one aspect often gets neglected number features choose one models size choose best subset size already includes selection procedure needs prediction error model search prediction error truth addtive subset size fig comparison different estimates prediction errors adjusted illustrate problem generate feature matrix dimension standard normal entries generated linear model subset size estimate prediction error best subset size using err true prediction error evaluated using sampling figure see indeed estimate prediction error best subset selection bias bigger potential submodels select contrast err hugs true prediction error every subset size discussion work propose method estimating prediction error data snooping selecting model remarkably estimation specific particular model selection procedures long select many variables include model picks signals data different examples considered xiaoying tian harris following propose two aspects problem deserve attention seek address work mainly focus prediction errors exception section pointed section although provide consistent estimator prediction error high dimensions true errors klement points difficulty exists 
cross validation well assumptions provide good estimator prediction error high dimensions remains interesting question throughout work assume data comes homoscedastic normal model simulations show performance estimator persists noise data subgaussian authors tian taylor pointed important tail randomization noise heavier data since add gaussian noise randomization suspect normal assumption data replaced subgaussian assumption alternatively may investigate randomization noise may add data heaviertailed data proof lemmas following lemmas essential proving main theorems lemmas introduce first lemma suppose necessarily independently distributed let log proof integer wnk max prediction error model search thus positive integer sterling formula exp exp choose minimize bound right hand side let log log log log take derivatives respect log log easy see maximum attained log minimum log log log log log since holds integer easy see log log based lemma prove lemma xiaoying tian harris proof first since take values hmi max khmi use short hand denote hmi since eigen decomposition dvit diag vit rank thus easy see khi dvit note assume independence structure fact likely independent combining max therefore max thus using lemma easy get conclusion lemma finally prove lemma follows proof first notice hat matrix form khi short hmi density let defined assumption define prediction error model search note differentiable respect moreover khi exp khi using assumptions assume universal constant log thus assuming max last inequality uses lemma fact moreover note distribution mean thus var xiaoying tian harris combining inequalities max therefore take max universal constant assuming fixed prove lemma proof using singular value decomposition easy see reduce problem case diagonal matrix thus without loss generality assume diag see thus deduce var acknowledgement author wants thank professor jonathan taylor professor robert tibshirani frederik mikkelsen professor ryan tibshirani useful discussions 
project prediction error model search references dwork differential privacy survey results international conference theory applications models computation springer efron estimation prediction error journal american statistical association efron hastie johnstone tibshirani least angle regression annals statistics klement mamlouk martinetz reliability svms low sample size scenarios international conference artificial neural networks springer mallows comments technometrics meinshausen relaxed lasso computational statistics data analysis mikkelsen hansen degrees freedom piecewise lipschitz estimators arxiv preprint negahban wainwright ravikumar unified framework analysis decomposable regularizers advances neural information processing systems reid tibshirani friedman study error variance estimation lasso regression arxiv preprint shen adaptive model selection journal american statistical association stein estimation mean multivariate normal distribution annals statistics sun zhang scaled sparse linear regression biometrika tian loftus taylor selective inference unknown variance via lasso arxiv preprint tian taylor selective inference randomized response arxiv preprint tibshirani regression shrinkage selection via lasso journal royal statistical society series tibshirani degrees freedom model search arxiv preprint tibshirani taylor degrees freedom lasso problems annals statistics yang strengths aic bic shared conflict model indentification regression estimation biometrika measuring correcting effects data mining model selection journal american statistical association zou hastie tibshirani degrees freedom lasso annals statistics serra mall stanford california usa xtian
| 10 |
Explicit correlation amplifiers for finding outlier correlations in deterministic subquadratic time

Matti Karppa, Petteri Kaski, Jukka Kohonen, Padraig Ó Catháin

Abstract. We derandomize G. Valiant's [J. ACM] subquadratic-time algorithm for finding outlier correlations in binary data. The derandomized algorithm gives deterministic subquadratic scaling essentially for the same parameter range as Valiant's randomized algorithm, but the precise constants we save over quadratic scaling are more modest. Our main technical tool for derandomization is an explicit family of correlation amplifiers built via a family of zigzag-product expanders of Reingold, Vadhan, and Wigderson [Ann. of Math.]. We say that a function f : {-1,1}^d -> {-1,1}^D is a correlation amplifier with threshold 0 < tau <= 1, error gamma >= 1, and strength p (an even positive integer) if for all pairs of vectors x, y in {-1,1}^d it holds that (i) |<x,y>| < tau*d implies |<f(x),f(y)>| <= (gamma*tau)^p * D, and (ii) |<x,y>| >= tau*d implies (<x,y>/(gamma*d))^p * D <= <f(x),f(y)> <= (gamma*<x,y>/d)^p * D.

Keywords: correlation, derandomization, outlier, similarity search, expander graph. AMS classification.

1. Introduction. We consider the task of identifying pairs of weakly correlated vectors from within large collections of binary vectors. In precise terms, we are interested in the following computational problem.

Problem 1 (Outlier correlations). Given as input two sets X, Y of n vectors each in {-1,1}^d, and two thresholds, the outlier threshold rho and the background threshold tau with rho > tau, the task is to output all outlier pairs (x, y) in X x Y with |<x,y>| >= rho*d, subject to the assumption that all but at most q of the pairs satisfy |<x,y>| <= tau*d.

Remark. Our setting of binary vectors and Pearson correlation is directly motivated, among others, by its connection with Hamming distance: indeed, for two vectors x, y in {-1,1}^d, the Hamming distance equals (d - <x,y>)/2.

One way to solve the problem is to compute all n^2 inner products and filter out everything but the outliers. Our interest is in algorithms that scale subquadratically in n, for bounded or slowly growing functions of the remaining parameters; that is, we seek running times of the form n^{2-epsilon} for a constant epsilon > 0.(*) Furthermore, we seek such running times without a priori knowledge of q. Running times of this form with epsilon depending on rho are immediately obtainable using techniques such as the seminal locality-sensitive hashing of Indyk and Motwani and its variants (see the related work below); however, such algorithms converge to quadratic running time unless rho is bounded from below by a positive constant. Our interest is in algorithms that avoid this "curse of weak outliers" and run in subquadratic time essentially independently of the magnitude of rho, provided that rho and tau are sufficiently separated. The ability to identify weak outliers in large amounts of data is useful, among others, in machine learning from noisy data.

(*) Affiliations: Helsinki Institute for Information Technology HIIT, Department of Computer Science, Aalto University, Helsinki, Finland; the last author is now at a different institution. This research was funded by the European Research Council under the European Union's Seventh Framework Programme (ERC grant agreement, Theory and Practice of Advanced Search and Enumeration) and by Academy of Finland grants; part of this work was done while the second author was visiting the Simons Institute for the Theory of Computing.

One strategy to circumvent the curse of weak outliers is to pursue the following intuition: (i) partition the input vectors into buckets, (ii) aggregate the vectors of each bucket into a single vector by taking the vector sum, and (iii) compute the inner products between all pairs of aggregate vectors. With sufficient separation between rho and tau, the inner products of aggregates remain large for every pair of buckets that contains an outlier pair, so every outlier pair is discoverable among the input pairs that correspond to large aggregate inner products. Furthermore, this strategy is oblivious to q until we actually start searching inside the buckets, which enables adjusting the bucket size based on the number of large aggregate inner products.

Randomized amplification. Such bucketing strategies have been studied with the help of randomization. Valiant presented a breakthrough algorithm in which bucketing replaces each input vector with a randomly subsampled version of its p-th Kronecker power: the ratio between the outlier and background correlations then gets amplified essentially to its p-th power, assuming the sample is large enough for the relevant concentration bounds to hold with high probability. The amplification makes the outliers stand out from the background even after bucketing, which enables detection in subquadratic time using fast matrix multiplication. A subset of the present authors improved on Valiant's algorithm with a modified sampling scheme that simultaneously amplifies and aggregates the input and makes use of fast rectangular matrix multiplication; with this improvement, the problem can be solved in subquadratic time whenever the logarithmic ratio log(1/rho)/log(1/tau) is bounded by a constant less than one.

Explicit amplification. Both of these algorithms rely on randomization. In this paper we seek deterministic subquadratic algorithms. As in the earlier randomized algorithms, we seek to map the d-dimensional input vectors to a higher dimension D so that inner products are sufficiently amplified in the process. Towards this end, we are interested in explicit functions that approximate the p-th power of the normalized inner product.

Definition 1 (Correlation amplifier). Let d, D, p be positive integers with p even, and let 0 < tau <= 1 and gamma >= 1. A function f : {-1,1}^d -> {-1,1}^D is a correlation amplifier with parameters (d, D, p, tau, gamma) if for all pairs of vectors x, y in {-1,1}^d we have: (i) if |<x,y>| < tau*d, then |<f(x),f(y)>| <= (gamma*tau)^p * D; and (ii) if |<x,y>| >= tau*d, then (<x,y>/(gamma*d))^p * D <= <f(x),f(y)> <= (gamma*<x,y>/d)^p * D.
While the dimension could in principle be reduced by subsampling the full Kronecker power, a large sample must be manipulated explicitly to yield subquadratic running times.

Remark. A correlation amplifier guarantees that correlations below the threshold tau in absolute value stay appropriately bounded, while correlations of absolute value at least tau become positively amplified, up to multiplicative error governed by the approximation parameter gamma. In particular, condition (i) implies that correlations below the threshold cannot mask outliers after bucketing, and condition (ii) implies that amplified correlations above the threshold get a positive sign.

That correlation amplifiers exist at all is immediate: for example, taking f(x) to be the p-th Kronecker power of x with itself for even p, we obtain a correlation amplifier with D = d^p and gamma = 1. For our present purposes, however, we seek correlation amplifiers with substantially smaller output dimension D. Furthermore, we seek constructions of explicit form: there exists a deterministic algorithm that computes an individual coordinate of f(x) in time poly(log D) by accessing poly(p) coordinates of a given x. In what follows, explicitness always refers to this strong form. The main result of this paper is that sufficiently powerful explicit amplifiers exist to find outlier correlations in deterministic subquadratic time.

Theorem 1 (Explicit amplifier family). There exists an explicit correlation amplifier with parameters (d, D, p, tau, gamma) whenever d and p are positive integers, p is a power of 2, and the output dimension D is sufficiently large as a function of p, 1/tau, and gamma.

From Theorem 1 we obtain a deterministic algorithm for finding outlier correlations in subquadratic time, using bucketing and fast rectangular matrix multiplication. Let us write alpha for the limiting exponent of rectangular integer matrix multiplication: for all constants 0 < alpha' < alpha, there exists an algorithm that multiplies an N x floor(N^{alpha'}) integer matrix with a floor(N^{alpha'}) x N integer matrix in N^{2+o(1)} arithmetic operations; in particular, a positive constant lower bound on alpha is known [Le Gall].

Theorem 2 (Deterministic subquadratic algorithm for outlier correlations). For all suitable constants, there exists a deterministic algorithm that solves a given instance of Problem 1 in time n^{2-epsilon} for a constant epsilon > 0, assuming the parameters n, d, rho, tau satisfy three constraints coupling the separation of rho and tau, the dimension d, and n.

Remarks. Observe in particular that the running time is subquadratic regardless of the magnitude of rho, provided rho and tau are separated via the constraints; the constants in the constraints have not been optimized. For comparison, a weaker form of explicitness could require, for example, only that there exists a deterministic algorithm that computes the entire vector f(x), given x, in time D log^{O(1)} D. The technical constraint on the dimension affects only inputs whose dimension d grows exponentially in a root function of log n; since our desired goal is obtaining a deterministic subquadratic running time for d bounded or slowly growing, this is a mild restriction. In particular, our derandomization gives substantially worse subquadratic running times compared with the existing randomized strategies; on the other hand, the algorithm in Theorem 2 needs no a priori knowledge of q and is oblivious to q until it starts searching inside the buckets.

Overview and discussion of techniques. A straightforward application of the probabilistic method establishes (Lemma 6) that correlation amplifiers can be obtained by subsampling uniformly at random the dimensions of the p-th tensor power, as long as the sample size D is large enough. In essence, Theorem 1 thus amounts to derandomizing this subsampling strategy by presenting an explicit sample whose error bounds are indistinguishable from a perfect amplifier when taking inner products.

The construction underlying Theorem 1 amounts to a composition of explicit squaring amplifiers, with increasingly strong control on the error interval of the amplification at each successive composition. Towards this end, we require a flexible explicit construction of squaring amplifiers with strong control on the error interval, which we obtain from an explicit family of expander graphs (Lemma 2), in turn obtainable from the explicit zigzag constructions of Reingold, Vadhan, and Wigderson. In particular, the key to controlling the error interval is that the expander family gives concentration of the normalized second eigenvalue with increasing degree. In essence, since we are working with {-1,1}-valued vectors, increasing the degree lets us use the expander mixing lemma (Lemma 1) for concentration control (Lemma 4), so that a restriction to the edges of an expander graph approximates the full tensor square when taking inner products.

The construction is motivated by the paradigm of gradually increasing independence in the design of pseudorandom generators. Indeed, we obtain the final amplifier gradually, by successive squarings, taking care that the degree of the expander with which we apply each squaring increases with a similar squaring schedule; this lets us simultaneously control the error interval and bound the output dimension to be roughly the square of the degree of the last expander in the sequence. The term "gradual" is perhaps not particularly descriptive here, since growth by successive squaring amounts to doubly exponential growth in the number of squarings; yet the growth can be seen as gradual and controlled in the following sense: we obtain strong amplification compared with the final output dimension precisely because the first squarings come essentially for free, their multiplicative error terms telescoping into a sum of powers in the exponent. The analogy with pseudorandom generators can in fact be pushed somewhat further.
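As noted above, the p-th Kronecker power is itself a (trivial) correlation amplifier with gamma = 1 and D = d^p, because inner products multiply under Kronecker products. A minimal numerical check of this powering identity, the identity that all the constructions in this paper approximate (Python with numpy; the sizes are arbitrary illustrations, not parameters from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d, p = 8, 4  # input dimension and (even) amplification strength

x = rng.choice([-1, 1], size=d)
y = rng.choice([-1, 1], size=d)

def kron_power(v, p):
    # p-th Kronecker power of a vector with itself: output dimension d**p
    out = v
    for _ in range(p - 1):
        out = np.kron(out, v)
    return out

fx, fy = kron_power(x, p), kron_power(y, p)

# Inner products multiply under Kronecker products, hence
# <f(x), f(y)> = <x, y>**p exactly (gamma = 1, D = d**p).
assert fx @ fy == (x @ y) ** p
```

With p even, the amplified inner product is always nonnegative, which is why condition (ii) of Definition 1 can demand a positive amplified value above the threshold.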
Namely, a correlation amplifier can roughly be seen as a pseudorandom generator that seeks to fool a truncated family of uniform combinatorial rectangles, with control over the requested truncation threshold. To see the rough analogy, let x o y denote the Hadamard (coordinatewise) product of vectors x, y in {-1,1}^d, and observe that f seeks to approximate, with multiplicative error, the expectation of a uniform random entry of the length-d^p Kronecker power of x o y; instead of taking the expectation over the full Kronecker power, we take the expectation over an explicit sample of its entries. Using special Ramanujan graphs [see Lubotzky, Phillips, and Sarnak] would give somewhat stronger concentration and hence improved constants; however, we are not aware of a sufficiently fine-grained family of explicit Ramanujan graphs to comfortably support successive squaring. In our case the combinatorial rectangle is formed by a Kronecker product, and truncation means that we seek a two-sided approximation only in the cases where the correlation is at least tau in absolute value. Accordingly, constructions that do not take the truncation into account and seek to fool all combinatorial rectangles want stronger control, and hence a larger seed length log D; for a review of the state of the art in pseudorandom generators of this type, we refer to Gopalan, Kane, and Meka and to Kothari and Meka.

Our goal is to obtain small output dimension D, which roughly corresponds to optimizing the seed length log D of a pseudorandom generator. Our explicit construction does not reach the exact output dimension obtainable by nonconstructive means (Lemma 6): we observe that in the parameter range of interest, where gamma is a constant, the two output dimensions have the same overall form, but the constants hidden by the asymptotic notation differ between the explicit and nonconstructive bounds. Moreover, using results of Alon, we show a lower bound (Lemma 11) on the output dimension of any correlation amplifier; thus, viewed as a pseudorandom generator, the seed length log D of the construction in Theorem 1 essentially does not admit improvement, except possibly for multiplicative constants.

Related work and applications. Problem 1 is a basic problem in data analysis and machine learning, admitting many extensions, restrictions, and variants. A large body of work exists studying approximate near neighbour search via techniques such as locality-sensitive hashing; for recent work aimed at derandomization, see Pagh and Pham and Pagh, and for resource tradeoffs, see Kapralov. In particular, however, these techniques enable subquadratic scaling only when rho is bounded from below by a positive constant, whereas the algorithm in Theorem 2 remains subquadratic even in the case of weak outliers, with rho tending to zero as n increases, as long as rho and tau remain separated. Ahle, Pagh, Razenshteyn, and Silvestri show that subquadratic scaling is not possible when d is of a larger order than log n and the separation vanishes, unless both the Orthogonal Vectors Conjecture and the Strong Exponential Time Hypothesis fail. For small dimensions, Alman and Williams present a randomized algorithm that finds exact nearest neighbours in a setting analogous to Problem 1 in subquadratic time when the dimension is constrained to the order of log n. Recently, Chan and Williams showed how to derandomize related algorithm designs; also, Alman, Chan, and Williams derandomize the probabilistic polynomials for symmetric Boolean functions used in these designs, achieving deterministic subquadratic batch queries for small dimensions.

One special case of Problem 1 is the problem of learning a constant-weight parity function in the presence of noise, whose weight-two case is the light bulb problem.

Problem 2 (Light bulb problem, L. Valiant). Suppose we are given as input a parameter 0 < rho < 1 and a set of n vectors in {-1,1}^d, with the promise that there is one planted pair of vectors whose inner product is at least rho*d in absolute value, while all other vectors are chosen independently and uniformly at random. The task is to find the planted pair among the n vectors.

Remark. By the Hoeffding bound it follows that there exists a constant c such that whenever d >= c*(log n)/rho^2, the planted pair is, with high probability as n increases, the unique pair in the input with maximum absolute correlation.

For a problem whose instances are drawn from a random ensemble, we say that an algorithm solves almost all instances of the problem if the probability of drawing an instance on which the algorithm fails tends to zero as n increases. Paturi, Rajasekaran, and Reif, Dubiner, and May and Ozerov present randomized algorithms that can be used to solve almost all instances of the light bulb problem in subquadratic time if we assume that rho is bounded from below by a positive constant; as rho tends to zero, these algorithms converge to quadratic running time. Valiant showed that a randomized algorithm can identify the planted correlation in subquadratic time on almost all inputs even when rho tends to zero as n increases. As a corollary of Theorem 2, we can derandomize Valiant's design and still retain subquadratic running time, with a worse constant, on almost all inputs, except for extremely weak planted correlations, which our amplifier is not in general able to amplify with sufficiently low output dimension to enable an overall subquadratic running time.

Corollary 1 (Deterministic subquadratic algorithm for the light bulb problem). For all suitable constants, there exists a deterministic algorithm that solves almost all instances of Problem 2 in subquadratic time, assuming the parameters n, d, rho satisfy two constraints coupling rho, d, and log n.
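To make Problem 2 concrete, here is a small synthetic instance with a planted pair, together with the quadratic-time baseline against which the subquadratic algorithms are measured (Python with numpy; the instance sizes are illustrative only and satisfy the dimension requirement of the remark above by a wide margin):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, rho = 64, 4096, 0.25  # illustrative sizes; d >> log(n) / rho**2

X = rng.choice([-1, 1], size=(n, d))
# Plant a correlated pair: copy vector 0 into vector 1 and flip each
# coordinate independently with probability (1 - rho) / 2, so that the
# expected inner product of the pair is rho * d.
flips = rng.random(d) < (1 - rho) / 2
X[1] = np.where(flips, -X[0], X[0])

# Quadratic baseline: with d large enough, the planted pair is the
# unique maximizer of the absolute inner product.
G = np.abs(X @ X.T)
np.fill_diagonal(G, -1)
i, j = np.unravel_index(np.argmax(G), G.shape)
assert {int(i), int(j)} == {0, 1}
```

The point of the paper is to beat this n^2 scan deterministically; the sketch only fixes the object being searched for.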
Corollary 1 extends to parity functions of larger constant weight in the presence of noise. A generalized version of the problem is as follows.

Problem 3 (Learning parity with noise). Let S, a subset of {1, ..., n} of size k, be the support of a parity function, and let 0 <= eta < 1/2 be the noise level. The task is to determine the set S by drawing independent random examples (x, y) such that x in {0,1}^n is chosen uniformly at random and the label y is the parity of the bits of x indexed by S, XORed with an independent noise bit that takes value 1 with probability eta.

With no further information on k, the trivial solution is to enumerate all subsets of {1, ..., n} to locate the support S. Blum, Kalai, and Wasserman provide a solution that runs in time and sample complexity exponential in n/log n. For k a constant independent of n, the trivial complexity drops to an exponential in k with base n, and one seeks to lower the coefficient of k in the exponent. Randomized solutions for constant k include Valiant's breakthrough algorithm and a subsequent randomized improvement by a subset of the present authors. Our present contribution is a deterministic algorithm for learning constant-weight parity functions in the presence of noise. Of interest is the case where the noise level eta approaches 1/2; accordingly, we assume that eta is bounded by a constant less than 1/2. We say that a deterministic algorithm solves almost all instances of Problem 3 if the probability of drawing an instance on which the algorithm fails tends to zero as n increases; from the perspective of the algorithm, failure is the event that the drawn examples do not uniquely identify S.

Corollary 2 (Deterministic algorithm for learning parity with noise). For all constants k and eta bounded below 1/2, there exist a constant C and a deterministic algorithm that draws C log n examples and finds the support S on almost all instances of Problem 3 in time polynomial in n with the degree of the polynomial proportional to k, assuming the parameters satisfy constraints coupling n, k, eta, and log n.

Algorithms for learning parity functions enable extensions to further classes of Boolean functions, such as sparse juntas and DNFs [Feldman et al.; Grigorescu, Reyzin, and Vempala; Mossel, O'Donnell, and Servedio].

2. Preliminaries. All vectors in this paper are integer-valued. For a vector x, we denote its i-th entry by x_i; for two vectors x, y of the same length, we write <x,y> for their inner product. We write log for the logarithm of base 2 and ln for the natural logarithm, and exp(z) for e^z. Our proofs need the following bound due to Hoeffding, which provides an exponentially small upper bound on the deviation of a sum of bounded independent random variables from its expectation.

Theorem 3 (Hoeffding). Let Z_1, ..., Z_D be independent random variables satisfying a_i <= Z_i <= b_i for each i, and let Z be their sum. Then, for all t > 0, the probability that Z deviates from its expectation by at least t is at most 2 exp(-2t^2 / sum_i (b_i - a_i)^2).

3. Explicit amplifiers by approximate squaring. This section proves Theorem 1. We start with preliminaries on expanders and show how to approximately square the identity using expander mixing; we then rely on repeated approximate squaring for the main construction; the proof is completed by routine preprocessing of the input dimension.
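Before moving on to the construction, a brief illustration of the correlation structure behind Problem 3 that Corollary 2 exploits: in plus/minus-one encoding, the hidden parity is the unique parity whose correlation with the labels is 1 - 2*eta, while every other parity is uncorrelated. A quick simulation (the parameter values and the support S below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
n, eta, m = 32, 0.1, 20000  # illustrative parameters
S = [2, 7, 19]              # hypothetical hidden support, k = 3

X = rng.integers(0, 2, size=(m, n))
noise = rng.random(m) < eta
y = (X[:, S].sum(axis=1) % 2) ^ noise  # noisy parity labels

chi = (-1) ** X    # +/-1 encoding of the example bits
signs = (-1) ** y  # +/-1 encoding of the labels

# Correlation with the true parity concentrates around 1 - 2*eta ...
corr_S = (chi[:, S].prod(axis=1) * signs).mean()
# ... while a wrong parity of the same weight is uncorrelated.
corr_wrong = (chi[:, [0, 1, 3]].prod(axis=1) * signs).mean()

assert abs(corr_S - (1 - 2 * eta)) < 0.05
assert abs(corr_wrong) < 0.05
```

This gap between a biased and an unbiased distribution is exactly what the proof of Corollary 2 later amplifies and detects.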
3.1. Preliminaries on expansion and mixing. We work with undirected graphs, possibly with multiple edges, that are Delta-regular: every vertex is incident to exactly Delta edges, with a self-loop at a vertex counting as one edge. Suppose G has n vertices, and suppose the edges incident to each vertex are labeled with unique labels from {1, ..., Delta}. The rotation map Rot_G is the bijection on (vertex, label) pairs such that Rot_G(u, i) = (v, j) whenever the edge incident to vertex u with label i at u leads to vertex v and has label j at v. For vertex sets S and T, let E(S, T) denote the set of edges with one end in S and the other end in T. Let lambda denote the second largest absolute value of an eigenvalue of the adjacency matrix of G; we say G is an (n, Delta, lambda)-graph. For an excellent survey of expansion and expander graphs, we refer to Hoory, Linial, and Wigderson.

Lemma 1 (Expander mixing lemma). For an (n, Delta, lambda)-graph G and all vertex sets S and T, the count |E(S,T)| differs from Delta*|S|*|T|/n by at most lambda*sqrt(|S|*|T|).

We work with the following family of graphs, obtained from the zigzag product of Reingold, Vadhan, and Wigderson; in particular, the family gives concentration of the normalized second eigenvalue with increasing degree, which enables control of relative inner products.

Lemma 2. For an infinite family of degrees Delta and all suitable vertex counts n, there exists an (n, Delta, lambda)-graph with lambda/Delta tending to zero as Delta grows, whose rotation map can be evaluated in time poly(log n). For the proof, see the Appendix.

3.2. Main construction. The main objective of this section is to prove the following lemma, which we then augment to Theorem 1 by routine preprocessing of the input dimension.

Lemma 3 (Repeated approximate squaring). There exists an explicit correlation amplifier with parameters (d, D, p, tau, gamma) whenever d and p are positive integer powers of 2 and the parameters satisfy the constraints inherited by Theorem 1.

Approximate squaring via expanders. For a vector x in {-1,1}^n, let us write x (x) x in {-1,1}^{n^2} for the Kronecker product of x with itself. Our construction of correlation amplifiers relies on approximating the squaring identity <x (x) x, y (x) y> = <x,y>^2. In precise terms, let G be an (n, Delta, lambda)-graph, and for x in {-1,1}^n let y_G(x) in {-1,1}^{n*Delta} be the vector that contains one coordinate for each (vertex, label) pair (u, i): with Rot_G(u, i) = (v, j), define the coordinate of y_G(x) at (u, i) to be x_u * x_v. In particular, each edge of G contributes exactly two coordinates, one for each of its ends.

Lemma 4 (Approximate squaring). The inner product <y_G(x), y_G(y)> differs from (Delta/n) * <x,y>^2 by at most 4*lambda*n.

Proof. Let z be the coordinatewise product of x and y, so that the product of the (u, i)-coordinates of y_G(x) and y_G(y) equals z_u * z_v with Rot_G(u, i) = (v, j), and the coordinates of z sum to <x,y>. For each sign s, let S_s be the set of vertices u with z_u = s. Since <y_G(x), y_G(y)> decomposes over the four sign pairs (s, t) into signed counts of the (u, i) pairs with u in S_s and v in S_t, applying Lemma 1 four times, once for each sign pair, yields the claim.

The amplifier function. We construct the amplifier f using approximate squarings with graphs drawn from the graph family of Lemma 2. Accordingly, assume the input dimension d is a positive integer power of 2, and let the strength p, also a power of 2, determine the number of squarings as log p. The construction proceeds as follows: first, take copies of the input x to obtain a vector whose length equals the vertex count of the first graph (note in particular that the vertex count divides the relevant lengths, since everything is a power of 2); then apply log p successive approximate squarings, each time using a graph from the family of Lemma 2 whose vertex count matches the current length and whose degree is fixed later; the amplifier outputs the final vector. Since the graph family of Lemma 2 admits rotation maps computable in time poly(log n), we observe that f is explicit: to compute a single coordinate of f(x), it suffices to perform p - 1 evaluations of rotation maps and access p coordinates of x, in time poly(log D, p).

Parameterization and analysis. It remains to fix the degrees of the graphs used at the successive squarings. Let us track how a pair of vectors proceeds through the approximate squarings. We start by observing that copying preserves relative inner products. Next, an easy manipulation of Lemma 4 gives additive control of each approximate squaring at the level of relative inner products, with additive error 4*lambda/Delta for a squaring by an (n, Delta, lambda)-graph. We turn this additive control into multiplicative control via a threshold: we insist that multiplicative control, with a per-squaring error factor, holds whenever the current relative inner product is at least a running threshold parameter, which we enforce by choosing the degree at each level large enough that 4*lambda/Delta is small compared with the squared threshold. Two complementary observations, proved by induction over the squarings, then complete the analysis: relative inner products below the running threshold remain small at the next approximate squaring, and relative inner products at least the threshold are retained within the multiplicative error interval. Composing the per-squaring error factors telescopes into the overall error gamma, so that f meets the required amplification constraints. Let us complete the parameterization by deriving an upper bound for the output dimension: taking the smallest degrees satisfying the constraints, the output dimension is bounded, roughly, by the square of the degree of the last expander in the sequence; and by repeatedly taking two copies of the output if necessary, we obtain a correlation amplifier with exactly the parameters (d, D, p, tau, gamma). This completes the proof of Lemma 3.

3.3. Preprocessing the input dimension.
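The approximate-squaring step can be illustrated numerically. The sketch below stands in a random regular multigraph (built from random permutations) for the explicit expanders of Lemma 2; the concentration it happens to exhibit is what the expander mixing lemma guarantees deterministically for the real construction. All sizes are illustrative, and this is only the shape of the construction, not the construction itself:

```python
import numpy as np

rng = np.random.default_rng(2)
n, delta = 1024, 64  # vertices and degree (illustrative sizes)

# Stand-in graph: delta random permutations, so each vertex u has
# out-neighbours pi_k(u); the paper instead uses explicit zigzag expanders.
perms = [rng.permutation(n) for _ in range(delta)]

def approx_square(x):
    # One output coordinate x_u * x_{pi_k(u)} per (vertex, permutation)
    # pair: a size n*delta restriction of the full tensor square.
    return np.concatenate([x * x[p] for p in perms])

x = rng.choice([-1, 1], size=n)
y = np.where(rng.random(n) < 0.25, -x, x)  # correlation about 0.5

lhs = approx_square(x) @ approx_square(y)
rhs = (delta / n) * (x @ y) ** 2  # exact squaring, rescaled to dimension n*delta

# Mixing-style concentration: the restriction tracks the true square.
assert abs(lhs - rhs) <= 0.1 * rhs
```

Below the threshold the relative error of such a restriction can be large, which is exactly why Definition 1 only demands one-sided control there.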
We still want to remove the assumption of Lemma 3 that the input dimension d is a positive integer power of 2. The following preprocessing is sufficient. Towards this end, let d' be a positive integer power of 2 with d' >= d, and define the vector x' in {-1,1}^{d'} by concatenating copies of x one after another and truncating the result to the first d' coordinates. Let us study how this map operates on a pair of vectors; for notational compactness, we work with relative inner products.

Lemma 5. The relative inner product of x' and y' tracks that of x and y up to constant multiplicative factors: a relative inner product of absolute value at least tau is preserved within constant factors, and a relative inner product below tau in absolute value stays appropriately small.

Proof. Write d' as an integer number of full copies of d plus a remainder of fewer than d coordinates, so that x' consists of full copies of x followed by a truncated copy; splitting the inner product of x' and y' accordingly, and bounding the contribution of the truncated copy in absolute value by the remainder length, yields the claimed bounds in both cases. This completes the proof.

Proof of Theorem 1. Let (d, D, p, tau, gamma) be parameters meeting the constraints of Theorem 1. We construct the required amplifier by first preprocessing the input vector as above, obtaining a vector of length d', a power of 2, and then applying an amplifier given by Lemma 3; in symbols, f is the composition of the two maps, and it is immediate from Lemma 3 and Lemma 5 that the resulting composition is explicit. We begin by relating the given parameters to those of Lemma 3: we select the minimal value of d' for which the constraint of Lemma 3 is satisfied; substituting this upper bound into the dimension bound of Lemma 3 gives the required bound on D. (We have not attempted to optimise the construction; we prefer a statement of Theorem 1 that is reasonably clean and sufficient for our purposes.) Finally, f satisfies the two constraints of Definition 1: in the below-threshold case this follows by combining Lemma 5 with the below-threshold guarantee of Lemma 3, and in the above-threshold case by multiplying the inequalities of Lemma 5 with the above-threshold guarantees of Lemma 3; it is convenient to split the analysis according to whether the inner product is positive or negative, and the negative case essentially follows from the positive case by multiplying the inequalities, since p is even. This completes the proof of Theorem 1.

4. A deterministic algorithm for outlier correlations. This section proves Theorem 2. We start by describing the algorithm, then parameterize it and establish its correctness, and finally proceed to analyze its running time.

4.1. The algorithm. Fix the constants in the statement of Theorem 2 and, based on these constants, further internal constants whose precise values are fixed later in the analysis; we stress that these values do not depend on the given input. Suppose we are given as input an instance of Problem 1 satisfying the requirements of Theorem 2.
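Before the formal description: the bucketing-and-detection pipeline of the algorithm below (aggregate buckets by summation, compute all bucket-pair inner products with one matrix product, then search only above-threshold bucket pairs) can be sketched as follows. The amplified vectors are replaced by random placeholders with one planted outlier pair; all sizes and thresholds are illustrative, and numpy's dense product stands in for fast rectangular matrix multiplication:

```python
import numpy as np

rng = np.random.default_rng(3)
n, D, b = 256, 4096, 8  # vectors, amplified dimension, bucket size

# Placeholders for the amplified vectors f(x), f(y), with one planted
# outlier pair (101, 37) of maximal inner product D.
fX = rng.choice([-1, 1], size=(n, D))
fY = rng.choice([-1, 1], size=(n, D))
fY[37] = fX[101]

# Step 2: aggregate each bucket of b consecutive vectors by summation.
AX = fX.reshape(n // b, b, D).sum(axis=1)
AY = fY.reshape(n // b, b, D).sum(axis=1)

# Step 3: all bucket-pair inner products via one matrix multiplication.
Z = AX @ AY.T

# Step 4: scan only the bucket pairs where a detection inequality holds,
# then verify candidates against an outlier threshold.
found = []
for I, J in np.argwhere(np.abs(Z) >= 0.5 * D):
    block = fX[I * b:(I + 1) * b] @ fY[J * b:(J + 1) * b].T
    for i, j in np.argwhere(np.abs(block) >= 0.9 * D):
        found.append((int(I * b + i), int(J * b + j)))

assert (101, 37) in found
```

The whole point of amplification is that it makes thresholds such as the 0.5*D used here separate outlier-bearing buckets from background buckets; the analysis below makes this quantitative.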
We work with a correlation amplifier f with parameters (d, D, p, tau, gamma) originating from Theorem 1, with the precise values of the parameters fixed later in the analysis. The algorithm proceeds as follows. First, apply f to each input vector, obtaining the sets f(X) and f(Y). Second, partition the vectors of f(X) and f(Y) into buckets of size b each, and take the vector sum of the vectors in each bucket, obtaining two sets of n/b aggregate vectors. Third, using fast rectangular matrix multiplication, compute the matrix Z whose entries are the inner products of all pairs of aggregate vectors. Fourth, iterate over the entries of Z; whenever the detection inequality (the entry is at least a detection threshold in absolute value) holds, search for outliers among the b^2 inner products of the original input vectors in the corresponding pair of buckets, and output all outliers found.

4.2. Parameterization and correctness. Let us parameterize the algorithm and establish its correctness. Since the relevant exponents are constants, assuming n is large enough we can select the bucket size b to be an integer power of 2, with b a small constant power of n. Recall that alpha is the exponent of rectangular matrix multiplication; to apply fast rectangular matrix multiplication in the third step as intended, it suffices to require that D be at most (n/b)^{alpha'} for a constant alpha' below alpha. Let us assume for the time being that this holds (we justify the assumption later), and choose the value of the strength p as the unique positive integer power of 2 inside an interval determined by log b, log(1/tau), and log(1/rho); later, when fixing the constants, we make sure that the right-hand side of this interval is at least twice its left-hand side, so that such a p exists.

Let us consider a single entry of Z and analyze the corresponding b^2 inner products between the two buckets of input vectors. To relate these to the detection inequality, we make two claims.

Claim 1 (background case). If all b^2 inner products are at most tau*d in absolute value, the detection inequality does not hold, and the algorithm does not search inside the pair of buckets. (This claim is used to control the running time.) The claim follows directly from condition (i) of Definition 1, since the aggregate inner product is the sum of the b^2 amplified inner products, each of absolute value at most (gamma*tau)^p * D.

Claim 2 (outlier case). If at least one of the b^2 inner products is at least rho*d in absolute value, the detection inequality holds, and the algorithm searches inside the pair of buckets. (This claim guarantees that all outliers are detected.) Note that in the third case, namely when none of the inner products is an outlier but some exceed tau*d, we make no claim about whether the detection inequality holds and the algorithm is required to search inside the pair of buckets; since there are at most q such pairs, the algorithm may search them without hindering the overall running time bound.

Let us proceed to parameterize the algorithm so that Claim 2 holds. In the outlier case, at least one amplified inner product is, by condition (ii) of Definition 1, positive and large, while the remaining amplified inner products can in the worst case each subtract a below-threshold contribution; thus, for Claim 2 we need the detection inequality to hold whenever the one large term dominates the b^2 small terms. Towards this end it suffices to require that the amplified outlier contribution exceed a constant multiple of b^2 * (gamma*tau)^p * D; rearranging and solving for p, it suffices that p * log(rho/(gamma^2 * tau)) be at least of the order log b. Let us derive a lower bound for the right-hand side of the interval from which p is drawn: fixing the constants and using the assumptions of Theorem 2 on the separation of rho and tau, a chain of elementary bounds on the logarithmic quantities involved shows that, for n large enough, the required power-of-2 choice of p exists; with p so fixed, we also observe that the assumed bound on D in terms of (n/b)^{alpha'} holds for our choice of constants, which was the assumption in the statement that remained to be justified. This completes the parameterization of the algorithm.

4.3. Running time. Let us analyze the running time of the algorithm. The first and second steps run in time n^{1+o(1)} * D, since f originates from Theorem 1 and hence is explicit. Since the bound on D holds, the third step runs in time (n/b)^{2+o(1)}; since b is a positive constant power of n and we are free to choose the constants, for n large enough the first, second, and third steps together run in subquadratic time. The fourth step also runs within the claimed time: indeed, observe that by Claim 1 the detection inequality can hold only for entries of Z whose bucket pair contains a pair above the background threshold, of which there are at most q, so the searches cost at most of the order q*b^2*d in addition to the scan of Z. This completes the running time analysis and the proof of Theorem 2.

5. Applications. This section proves Corollaries 1 and 2. It will be useful to work with a variant of Problem 1 that asks for outlier pairs of distinct vectors drawn from a single set of n vectors rather than from two sets; observe that this variant reduces to a logarithmic number of instances of the two-set variant by numbering the vectors with binary numbers and splitting into two sets based on the value of the i-th bit, for each bit position i in turn.

Proof of Corollary 1. We reduce the single-set version of Problem 2 to Problem 1 and apply Theorem 2. Towards this end, instantiate Theorem 2 by setting the background threshold tau to a suitable power of the outlier threshold rho, and observe that the constraints of Theorem 2 are satisfied since (i) the separation constraint holds by this choice, (ii) the dimension constraint holds by assumption, and (iii) the constants match; in particular, the constraint relating d to (log n)/rho^2 implies the dimension bound required by Theorem 2. We claim that almost all instances of Problem 2 whose parameters satisfy the constraints of Corollary 1 are solved: indeed, by the Hoeffding bound and the union bound, the probability that some pair other than the planted pair of an instance has inner product exceeding tau*d in absolute value is at most n^2 * 2*exp(-tau^2*d/2), which by the choice of constants is exp(-Omega(log n)) and hence tends to zero with high probability as n increases. The claimed running time follows by substituting the chosen constants.

5.1. Learning parities with noise. We now generalize this result to parity functions of larger constant weight and prove Corollary 2.

Proof of Corollary 2. Fix the constants k and the bound on eta, and fix the value of a further constant C later. The algorithm first draws examples from the given instance of Problem 3 and transforms them into two collections of vectors to feed to the algorithm of Theorem 2; we then proceed to mimic the proof of Corollary 1. Let us first set up notation: for index sets a and b, let their symmetric difference be denoted by a triangle b; note that the symmetric difference of a set with itself is empty. For a plus/minus-one encoded example x and an index set a, let x_a denote the product of the entries of x indexed by a. Suppose we are given as input an instance of Problem 3 whose noise level eta satisfies the assumed bound.
log log log log thus holds large enough require since holds set also observe equivalently holds choice fixed observe terms assumption thus assumption statement theorem guarantees side least required existence completes parameterization algorithm explicit amplifiers outlier correlations running time let analyze running time algorithm first second steps run time since log originates theorem hence explicit since holds third step algorithm runs time constant free choose since large enough choose thus first second third steps together run time fourth step runs time indeed observe claim detection inequality holds entries completes running time analysis proof theorem applications section proves corollaries light bulb problem useful variant problem asks outlier pairs distinct vectors drawn single set rather two sets observe variant reduces instances variant numbering vectors binary numbers splitting two sets based value ith bit proof corollary reduce version problem apply theorem towards end theorem set suppose given instance problem whose parameters satisfy constraints set observe constraints theorem satisfied since holds assumption holds since iii constants match theorem constraint log log implies log log claim almost instances problem whose parameters satisfy constraints corollary indeed hoeffding bound union bound probability pair planted pair instance inner product exceeds absolute value exp exp log high probability increases claimed running time follows substituting chosen constants learning parities noise generalize result parity functions larger constant weight prove corollary proof corollary fix constants fix value constant later let constant algorithm first draws examples given instance problem transforms two collections vectors feed algorithm theorem proceed mimic proof corollary let first set notation let dev note symmetric difference let boolean let product elements indexed observe let write set suppose given input instance problem noise level satisfies 
furthermore assume part input case cost increasing time complexity search matti karppa petteri kaski jukka kohonen padraig using geometric progression limit objective eventually applying theorem set particular since let least positive integer satisfies log constant whose value fix later draw given instance pairs use define two collections vectors sizes examples respectively immediate particular assume set consists vectors aji xji vectors set consists bji xji let study distribution inner products vectors write random variable sum independent random variables takes value probability value otherwise observe expectation let support parity function unknown recall xsi getting value probability xji xji xji xsi xji observe two distinct cases xji hence task finding support reduces locating inner products distribution among argue choices suffice algorithm theorem distinguish two cases almost draws examples stress algorithm deterministic randomness draw examples perspective algorithm theorem suffices pair exceeds inner product least one explicit amplifiers outlier correlations pairs inner product least control observe exp log exp since pairs observe union bound holds high probability increases since control select fixed pair exp log exp thus holds high probability increases remains verify constraints parameters theorem suppressing constants choice log theorem apply must bounded holds holds assumption sufficiently large select constraint holds choose assumption required since also assumption required constants match theorem furthermore choice log log log log required constraints theorem satisfied brevity let take thus claimed running time follows observing subsumes time takes construct collections together time takes search pairs buckets inside algorithm theorem inserting choices approximating wards yields log matti karppa petteri kaski jukka kohonen padraig nonconstructive existence lower bound section shows nontrivial correlation amplifiers exist establishes lower bound 
The former is done by a routine application of the Hoeffding bound, the latter by applying results of Alon.

6.1. Amplifiers exist. Combining the Hoeffding bound with the union bound, we observe that correlation amplifiers exist.

Lemma 6 (Existence). A correlation amplifier with parameters (d, D, p, tau, gamma) exists whenever d, D, p are positive integers satisfying a bound that makes D sufficiently large as a function of d, tau^{-2p}, and gamma.

Proof. Let f map each x in {-1,1}^d onto D entries of the p-th Kronecker power of x chosen independently and uniformly at random; that is, each entry of the vector f(x) is a product of p entries of x at positions chosen independently and uniformly at random, the same positions for every input. Let x, y be a fixed pair of vectors and set c to be their relative inner product. Suppose the following inequality holds: the deviation of <f(x), f(y)> from c^p * D is within the tolerance dictated by gamma at the threshold tau. We observe that this implies the conditions of Definition 1: the final inequalities in condition (ii) hold because they are logically equivalent to the multiplicative control, and similarly the deviation bound implies the required upper bound in condition (i); in fact, the argument also yields the matching lower bound. Hence, in this event, the function f satisfies the conditions of the definition of a correlation amplifier for the pair. We use Theorem 3 to bound the probability that the inequality fails: define the random variable given by the inner product of the images; since f restricts onto entries chosen uniformly at random, and the product of the i-th entries of f(x) and f(y) has expectation c^p, in particular the expectation of the inner product is c^p * D; summing over the D bounded independent terms, the probability that the inequality fails to hold is bounded exponentially in D, and taking the union bound over the range of all pairs of inputs shows that a correlation amplifier with the stated parameters exists whenever D is sufficiently large; solving for D, simplifying the expression, and approximating completes the proof.

6.2. A lower bound on the output dimension. We next show a lower bound for the output dimension of a correlation amplifier with given parameters. The proof is based on taking a collection of vectors all of whose pairs are below the background threshold, and bounding the number of images, whose absolute pairwise correlations are required to be small by Definition 1.

Lemma 7. There exists a collection of exp(Omega(tau^2 * d)) vectors in {-1,1}^d all of whose pairwise inner products are below tau*d in absolute value.

Proof. We show this by a probabilistic argument. Call a pair of vectors bad if their inner product is at least tau*d in absolute value. Let the collection consist of vectors chosen independently and uniformly at random, and consider a pair of distinct vectors x_i, x_j: the inner product Z_ij is a sum of d independent plus/minus-one random variables, so applying the Hoeffding bound we observe that the pair is bad with probability at most 2*exp(-tau^2*d/2). Since a collection of fewer than exp(tau^2*d/4) vectors has fewer than exp(tau^2*d/2) pairs of vectors, the expected number of bad pairs is less than one; thus at least one collection has no bad pairs.

To bound the number of image vectors, we use combinatorial results of Alon that bound the rank of correlation matrices.

Lemma 8 (Alon). Let A = (a_ij) be a real symmetric matrix with a_ii = 1 and |a_ij| <= epsilon for all i distinct from j. Then the rank of A is at least n/(1 + n*epsilon^2). Proof: apply Alon's lemma.

Lemma 9 (Alon). Let B = (b_ij) be a matrix of rank r, and for a positive integer k let B^(k) = (b_ij^k) be its entrywise k-th power. Then the rank of B^(k) is at most the binomial coefficient of r + k - 1 choose k. Proof: apply Alon's lemma on polynomial maps of low-rank matrices.

The next lemma is in essence
reingold vadhan wigderson following bounds hold max let study following sequence graphs let let let lemma easily seen defined max matti karppa petteri kaski jukka kohonen padraig lemma reingold vadhan wigderson theorem rotation map rotgt computed time poly log making poly evaluations roth lemma proof conclusion immediate suppose conclusion holds need show conclusion holds induction suffices show observing holds yields desired conclusion proof identical finally construct expanders require manuscript proper lemma lemma stated normalized eigenvalue notation integers exists whose rotation map evaluated time poly proof take proposition reingold vadhan wigderson obtain whose rotation map computed time poly indeed observe irreducible polynomial perform required arithmetic finite field order constructed deterministic time poly algorithm shoup let study sequence given time complexity rotation map follows immediately lemma since lemma gives take observe since thus references thomas ahle rasmus pagh ilya razenshteyn francesco silvestri complexity inner product similarity join arxiv josh alman timothy chan ryan williams polynomial representations threshold functions algorithmic applications arxiv josh alman ryan williams probabilistic polynomials hamming nearest neighbors proc annual ieee symposium foundations computer science focs pages los alamitos usa ieee computer society noga alon problems results extremal combinatorics discrete alexandr andoni piotr indyk huy nguyen ilya razenshteyn beyond localitysensitive hashing proc annual symposium discrete algorithms soda pages philadelphia usa society industrial applied mathematics alexandr andoni thijs laarhoven ilya razenshteyn erik waingarten optimal approximate near neighbors arxiv alexandr andoni ilya razenshteyn optimal hashing approximate near neighbors proc acm annual symposium theory computing stoc pages new york usa association computing machinery avrim blum adam kalai hal wasserman learning parity problem statistical 
query model acm elisa celis omer reingold gil segev udi wieder balls bins smaller hash families faster evaluation siam timothy chan ryan williams deterministic apsp orthogonal vectors quickly derandomizing robert krauthgamer editor proc annual symposium discrete algorithms soda pages arlington usa society industrial applied mathematics moshe dubiner bucketing coding information theory statistical problem ieee trans inf theory explicit amplifiers outlier correlations vitaly feldman parikshit gopalan subhash khot ashok kumar ponnuswami agnostic learning parities monomials halfspaces siam aristides gionis piotr indyk rajeev motwani similarity search high dimensions via hashing malcolm atkinson maria orlowska patrick valduriez stanley zdonik michael brodie editors proc international conference large data bases vldb pages edinburgh scotland morgan kaufmann parikshit gopalan daniek kane raghu meka pseudorandomness via discrete fourier transform proc ieee annual symposium foundations computer science focs pages berkeley usa ieee computer society parikshit gopalan raghu meka omer reingold david zuckerman pseudorandom generators combinatorial shapes siam elena grigorescu lev reyzin santosh vempala learning sparse parities related problems proc international conference algorithmic learning theory alt pages berlin germany springer wassily hoeffding probability inequalities sums bounded random variables amer statist shlomo hoory nathan linial avi wigderson expander graphs applications bull amer math russell impagliazzo ramamohan paturi complexity comput syst piotr indyk rajeev motwani approximate nearest neighbors towards removing curse dimensionality proc annual acm symposium theory computing stoc pages new york usa association computing machinery daniel kane raghu meka jelani nelson almost optimal explicit johnsonlindenstrauss families proc international workshop approximation randomization combinatorial optimization random international workshop algorithms techniques approx 
pages princeton usa michael kapralov smooth tradeoffs insert query complexity nearest neighbor search proc acm symposium principles database systems pods pages new york usa association computing machinery matti karppa petteri kaski jukka kohonen faster subquadratic algorithm finding outlier correlations proc annual symposium discrete algorithms soda pages arlington usa society industrial applied mathematics pravesh kothari raghu meka almost optimal pseudorandom generators spherical caps proc annual acm symposium theory computing stoc pages portland usa gall faster algorithms rectangular matrix multiplication proc annual ieee symposium foundations computer science focs pages los alamitos usa ieee computer society lubotzky phillips sarnak ramanujan graphs combinatorica alexander may ilya ozerov computing nearest neighbors applications decoding binary linear codes proc eurocrypt annual international conference theory applications cryptographic techniques pages berlin germany springer elchanan mossel ryan donnell rocco servedio learning functions relevant variables comput syst rajeev motwani assaf naor rina panigrahy lower bounds locality sensitive hashing siam discrete ryan donnell yuan zhou optimal lower bounds hashing except tiny acm trans comput theory article rasmus pagh hashing without false negatives proc annual acmsiam symposium discrete algorithms soda pages philadelphia usa society industrial applied mathematics ramamohan paturi sanguthevar rajasekaran john reif light bulb problem proc annual workshop computational learning theory colt pages new york usa association computing machinery ninh pham rasmus pagh scalability total recall fast coveringlsh arxiv omer reingold salil vadhan avi wigderson entropy waves graph product new expanders ann matti karppa petteri kaski jukka kohonen padraig victor shoup new algorithms finding irreducible polynomials finite fields math gregory valiant finding correlations subquadratic time applications learning parities closest 
pair problem acm article leslie valiant functionality neural nets proc annual workshop computational learning theory colt pages new york usa association computing machinery
| 8 |
Thermophysical phenomena in metal additive manufacturing by selective laser melting: fundamentals, modeling, simulation, and experimentation

Christoph Meier, Ryan Penny, Zou, Jonathan Gibbs, John Hart
Mechanosynthesis Group, Department of Mechanical Engineering, Massachusetts Institute of Technology, Massachusetts Avenue, Cambridge, USA

Abstract: Among the many additive manufacturing processes for metallic materials, selective laser melting (SLM) is arguably the most versatile in terms of its potential to realize complex geometries along with tailored microstructure. However, the complexity of the SLM process, and the need for predictive relations between powder and process parameters and part properties, demand the development of computational and experimental methods. This review addresses the fundamental physical phenomena of SLM, with a special emphasis on the associated thermal behavior. Simulation and experimental methods are discussed according to three primary categories. First, macroscopic approaches aim to answer questions at the component level and consider, for example, the determination of residual stresses and the dimensional distortion effects prevalent in SLM. Second, mesoscopic approaches focus on the detection of defects such as excessive surface roughness, residual porosity, and inclusions that occur at the mesoscopic length scale of individual powder particles. Third, microscopic approaches investigate the metallurgical microstructure evolution resulting from the high temperature gradients and extreme heating and cooling rates induced by the SLM process. Consideration of the physical phenomena on all three length scales is mandatory in order to establish the understanding needed to realize high part quality in many applications and to fully exploit the potential of SLM and related metal processes.

Introduction

Additive manufacturing offers the opportunity to produce parts of high geometric complexity without the requirement of dedicated tooling. While such processes are quite well established for polymer parts, processes for metals still exhibit severe practical challenges, many of them resulting from the high melting temperatures of metals and the relatively low viscosities of their melts. Selective laser melting (SLM) of metals is arguably the most prominent representative of the powder bed methods. The manufacturing task is digitally segmented into thin layers of the solid part, which are simply
formed by selectively melting the contours of successive layers of powder using a focused laser beam. Once one layer of powder has been scanned and its designated regions melted by the laser to form the final part, the underlying build platform is lowered and a new layer of powder is deposited by means of a powder coater mechanism. This procedure is repeated successively until the final geometry is completed, and the remaining unfused powder is removed (see the figure of the experimental setup). In this context, the build direction denotes the direction normal to the powder bed; the orientation of the part geometry with respect to the vertical build direction has a crucial influence on the resulting part properties.

SLM of metals offers significant advantages, including production without the need for expensive molds or time-consuming tooling, a high material utilization rate, and the highest production flexibility, essentially enabling on-demand production. This leads to nearly unlimited freedom of design and enables the generation of highly complex geometries that cannot be obtained by conventional manufacturing processes. This paradigm shift in mechanical design allows the integration of complex substructures and geometries, enabling lightweight yet sufficiently stiff components. Possible fields of application include aerospace and medical engineering, and basically all industries requiring highly complex, individualized parts (see the figure of exemplary parts).

Corresponding authors: Christoph Meier, John Hart. Preprint submitted to Annual Review of Heat Transfer, September.

Figure: Experimental setup of the SLM process and schematic visualization of the macroscale, mesoscale, and microscale views. Labeled features include the roller, powder reservoir, laser beam, gas flow and atmospheric gas pressure, melt pool, cyclic heating, surface tension, heat flux concentration, spatter and ejected particles, vapor recoil, melting, wetting, solidification, segregation, columnar grains, heterogeneous nucleation, SLM part, support structure, and build platform.

Figure: Exemplary metal parts fabricated by SLM: an aircraft bracket (Airbus), an acetabular cup for a human hip implant (Arcam), and an injection mold tool with internal cooling passages (Sodick). All parts are shown after support removal, machining, and polishing.

Beyond the basic process and construction method,
the overall SLM process is highly complex and governed by a variety of competing physical mechanisms. The most important effects and physical phenomena occurring in the powder bed, the melt pool, and the solidified phase of typical SLM systems are summarized in the following three subsections and in the schematic figure above. A typical parametrization of the powder layer and laser beam, as well as a typical local scan pattern, is shown in the figure of process parameters below.

Physical phenomena within the powder bed

The incident laser beam is a collimated, polarized, monochromatic wave with a wavelength in the range typical of such systems. The spatial power density distribution of the incident radiation on the powder bed is commonly assumed to follow a Gaussian distribution, with the associated laser beam spot size, nominal laser power, and laser beam velocity each taking values within characteristic ranges. The effective laser beam absorption within the powder bed is governed by multiple reflections of the incident laser rays within the pore system of the powder bed and partial absorption of the incident radiation. The laser beam can penetrate to considerable depths, even reaching the range of the powder layer thickness; thus, the net absorptivity of powder beds is considerably higher than the value known for flat surfaces. Moreover, the laser beam energy source must be thought of as a volumetric heat source distributed across the powder bed thickness, as opposed to a surface heat source. The factors influencing the overall absorption and the local energy distribution are numerous, including the laser beam power, wavelength, polarization, and angle of incidence; the powder temperature, surface roughness, and surface chemistry (oxidation and contamination issues); and the powder bed morphology, determined by the particle shape and size distribution and by the packing density, which is also central to radiative transfer.

Figure: Typical geometrical dimensions and process parameters (left), with labels for the contour scan, current melt track, subsequent melt track, infill hatching, powder layer, and substrate, as well as a typical scan pattern (right) characterizing the SLM process.

An important factor under these conditions is heat transfer within the powder bed. This heat transfer is typically governed by the gas in the powder bed pores, with commonly negligible conductivity contributions from the inter-particle contact points, as long as loose rather than mechanically compressed powder layers are considered.
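The Gaussian irradiance assumption mentioned above is commonly written in the following standard form from the laser processing literature (the symbols P, w, and r are introduced here for illustration and are not the review's own notation):

```latex
% Gaussian power density distribution of the laser spot:
%   P  total laser power
%   w  effective beam radius (a spot size measure)
%   r  radial distance from the beam axis
I(r) = \frac{2P}{\pi w^{2}} \, \exp\!\left(-\frac{2 r^{2}}{w^{2}}\right),
\qquad
\int_{0}^{\infty} I(r)\, 2\pi r \,\mathrm{d}r = P .
```

The normalization on the right states that integrating the irradiance over the beam cross section recovers the total laser power.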
Consequently, the thermal conductivity of loose powder is comparable to the conductivity of the gas and orders of magnitude smaller than the conductivity of the solidified phase. Considering this weak heat transfer, it is observed that the time scales governing conduction are typically larger than the time scales governing particle melting. In other words, under typical SLM process conditions there is not enough time for a conductive homogenization of the energy and temperature distributions across the powder bed, or even across individual particles. As a consequence, partially molten particles may cause defects such as pores and inclusions, which are characteristic of the SLM process.

The powder bed, even when considered as a homogenized continuum, shows mesoscopic heterogeneities in the form of individual particles whose size lies within an order of magnitude of the relevant macroscopic process length scales, i.e., the powder layer thickness and the laser beam spot size: powder particle sizes, layer thicknesses, and laser beam spot sizes each lie within overlapping characteristic ranges. For the standard linear scan patterns typically employed, the distance between two successive laser tracks, denoted as hatch spacing, is an important process parameter; it is typically chosen such that sufficient overlap and remelting of two subsequent tracks is guaranteed (see the scan pattern figure). A good overview of the typically applied range of process parameters is given in the literature. The large size of individual powder grains compared to the powder layer thickness and laser beam spot size typically leads to strongly varying energy distributions across the entire powder bed as well as across individual particles, which may have a considerable influence on the resulting melting behavior and melt pool hydrodynamics. Furthermore, these comparatively large heterogeneities cause differences in the resulting temperature fields and melt track shapes between different samples of stochastically equivalent powder layers, i.e., layers with identical particle shape and size distribution, powder layer density, and thickness. Consequently, the variance of process results due to the stochastic nature of the powder layer is considerably greater than in comparable processes such as laser beam welding (LBW). Besides the energy input of the laser beam, possible energy losses in the form of thermal radiation emission, thermal convection, and heat conduction into the solidified material and the underlying build platform also play an important role in the overall SLM process. For more information on powder bed radiation heat transfer in the context of SLM, the interested reader is
referred to the literature.

Physical phenomena within the melt pool

As soon as the melting temperature is reached at local positions of a powder grain surface, the phase transition from solid to liquid, as well as the formation of a melt pool and, ideally, a continuous melt track, is induced. Driven by surface tension and capillary forces tending to minimize surface energy, the coalescence of individual melt drops and the reshaping of the resulting melt pool is initiated. In addition, the wetting behavior of the melt on the underlying substrate (solidified material formed in previous layers) and on the surrounding powder grains influences the resulting melt pool shape, its continuity, and its adhesion to the previous layer. The wetting behavior crucially depends on the material, temperature, surface roughness, and surface chemistry, i.e., oxidation, of the powder grain and substrate surfaces. Oxidation, arising either from contaminated primary powder material or from thermally induced oxidation during the process, is known to considerably decrease the wetting behavior of the melt, which might result in unstable, balled melt pools, rough surfaces, pores, and delamination due to insufficient adhesion.

The prevalent length and time scales essentially determine which physical effects govern the process and which are negligible. Typically, viscous and gravity forces can be considered secondary effects, while surface tension and capillary forces, the wetting behavior, and also inertia effects are the primary driving forces that influence the melt pool dynamics and shape as well as the surrounding powder morphology, by attracting or rejecting individual grains. Heat transfer within the melt pool is governed by convection rather than heat conduction, with Marangoni convection, i.e., melt flow from hot to cool regions induced by surface tension gradients, playing a prominent role in the process. Depending on the amount of absorbed energy density and the surrounding atmospheric pressure, the peak temperature within the melt pool might exceed the boiling temperature, and considerable material evaporation may take place. The evaporation, as well as the gas flow induced by it, may influence the melt pool and the overall process as a consequence of evaporative mass loss, additional cooling, and a recoil pressure that considerably distorts the melt pool surface and represents a means of transport for potential pollutants: melt drops are spattered out of the pool, and even powder particles are ejected away from the direct vicinity of the laser beam. The evaporation-driven keyholing
mechanism, together with the cavitation resulting from evaporated material, might considerably contribute to the overall material porosity and can even burst the surrounding solidified material through the thermal expansion of trapped gas. As soon as the melt pool has solidified, the evolution of possible defects on the mesoscopic scale is virtually established. A discussion of the physical phenomena governing the melt pool can be found in the literature.

Physical phenomena within the solidified phase

With the solidification of the melt pool, the development of the metallurgical microstructure, which crucially determines the macroscopic properties of the final part, begins. The evolution of the microstructure, characterized by grain size, grain shape (morphology), and grain orientation (texture), is governed by the prevalent spatial temperature gradients and cooling rates, as well as by the velocity of the solidification front. In SLM processes, two regimes can be distinguished. The first regime is given by the temperature field in the direct vicinity of the laser beam, the heat affected zone (HAZ), which is controlled by the highly complex mechanisms of radiation absorption and heat conduction in the powder bed as well as convective heat transfer within the melt pool; the individual physical phenomena and the influence of the process parameters have been discussed in the two previous subsections. The material in this region is subject to rapid heating above the melting temperature due to the absorption of laser energy by the powder grains, to a high velocity of the melt pool front induced by the laser beam velocity, and to rapid solidification of the molten material once the heat source has moved on, a direct consequence of the large ratio of solid material to hot molten material. These pronounced conditions lead to microstructures and phase compositions, as well as smaller grain sizes, that typically result in higher material strengths compared to traditional manufacturing processes such as casting. The second regime concerns the thermal evolution prevalent in previously deposited material layers, located below the current layer and away from the heat source. After solidification, each location experiences repeated heating and cooling cycles of decreasing amplitude as the laser processes adjacent scan tracks and, later, consecutive new layers. The heat transfer in this regime is determined rather by global part properties, i.e., the global laser beam scanning strategy, the build direction, the fixation of the part on the build platform, the temperature of the build platform, and also the part porosity
and the distribution of the metallurgical microstructure, both of which influence the effective thermal conductivity. Of course, the microstructure also depends on the specific part geometry, e.g., due to heat flux concentration in the transition region between bulk material and slender columns or thin walls (see the schematic figure), which are surrounded by unfused powder of low conductivity. Also typical of SLM processes is the evolution of columnar grain structures oriented in the direction of the main temperature gradients, usually the build direction, which often yields strongly anisotropic macroscopic material behavior, e.g., higher material strength in the build direction. With increasing distance from the top powder layer, the maximal temperature values and gradients experienced by a material layer during the repeated thermal cycles decrease. Similar to heat treatment cycles, this might lead to a coarsening of the microstructure and a reduction of brittle phases, and consequently to more ductile material characteristics. As a consequence of these effects, final parts typically exhibit a change of microstructure along the build direction. On the one hand, the initial creation of fine grain structures and phases is most pronounced in the first material layers, deposited in the direct vicinity of the build platform, where a higher thermal conductivity and faster cooling rates are prevalent. On the other hand, the initially deposited powder layers are exposed to the heat treatment of repeated heating and cooling cycles for longer times, which might lead to longer evolution times for solid phase transformations and to grain coarsening.

Apart from the microstructure evolution considered so far, the high temperature gradients in the direct vicinity of the melt pool, and also at locations of heat flux concentration, give rise to considerable thermal strains induced by successive thermal expansion and shrinkage of the material. These thermal strains result in thermal stresses within the kinematically constrained SLM part. The magnitude of these stresses is essentially determined by the underlying solid microstructure and the resulting macroscopic material behavior: a ductile material can compensate thermal strain variations by means of local plastic flow; on the contrary, brittle material behavior, fostered by small grain sizes, by the existence of certain phases, or by local segregation of alloying elements, might result in cracks at locations of stress concentration such as residual pores and inclusions. The complexity of this coupling is increased by the fact that
the microstructure not only influences the amount of residual stresses, but the prevalence of residual stresses in turn affects the microstructure: for example, residual stresses are known to mechanically stabilize the metastable austenitic phase in steels and thereby influence the evolution of the microstructure. In order to reduce residual stresses on large surface areas, such surfaces are typically subdivided into smaller islands that are completed successively; the right-hand side of the scan pattern figure illustrates the processing of a single island consisting of a contour scan and infill hatching. During the SLM process, the amplitude of residual stresses might also decrease, since stress relaxation is likely to occur during the repeated heating and cooling cycles at lower temperature levels. Furthermore, annealing is typically applied to the final part before removing the support structures in order to relieve residual stresses; a neglect of the support structures would result in reduced residual stresses, but this advantage is paid for with the dimensional warping often observed in parts made by SLM. For more information on the material aspects of microstructure evolution in the context of SLM, the interested reader is referred to exemplary references focusing on the investigation of residual stresses resulting from the SLM process.

It can be concluded that the entire thermal history during solidification and cooling to ambient temperature, governed by many heating and cooling cycles at different temperature levels and time scales, considerably determines the resulting metallurgical microstructure as well as the macroscopically observable material properties, such as ductility, micro-hardness, yield strength, and tensile strength, and their spatial distribution, in a possibly inhomogeneous and anisotropic manner. On the one hand, a restriction of the overall process parameters is required in order to avoid undesirable material characteristics and possible part defects such as excessive residual stresses, dimensional warping, crack propagation, or delamination of layers, effects that might destroy the SLM part already during the build process or at least considerably reduce the mechanical resilience of the final part. In order to fully exploit the efficiency potential of SLM, this result should be achieved without the additional treatment steps required by alternative processes such as selective laser sintering (SLS). On the other hand, the flexibility of the SLM process offers the unique opportunity to optimize process parameters in order to manufacture parts with prescribed, possibly inhomogeneous and anisotropic, microstructures and macroscopic
material properties in a controlled manner, contributing to the paradigm shift in design enabled by SLM.

Differentiation from related additive manufacturing processes for metals

Since references to and comparisons with other metal processes such as electron beam melting (EBM) and selective laser sintering (SLS) are useful from time to time in the foregoing discussion, these processes are briefly characterized here. Similar to SLM, the EBM process represents a powder bed additive manufacturing process in which contours are selectively melted in successively deposited powder layers; but while SLM applies a laser beam as energy source, EBM is based on an electron beam. EBM is only applicable to electrically conductive materials and is performed in a vacuum environment in order to avoid interaction of the electron beam with surrounding gas molecules. SLM, on the other hand, is also suitable for dielectric materials and is performed in an inert gas atmosphere in order to prevent surface oxidation of the metallic powder and substrate. Also the way in which energy is transferred to the powder bed is different. Considering one individual powder grain, laser beam radiation is, to a good approximation, absorbed at the powder grain surface, whereas in EBM the electrons penetrate the powder grain surface; in this process, the kinetic energy of the electrons is dissipated, leading to melting of the grain. However, the energy of the electron beam is typically deposited almost completely in the powder grain of first incidence. On the contrary, only part of the laser beam energy is directly absorbed at the powder grain of first incidence, while the remaining part is reflected, leading to considerably higher powder bed penetration depths due to multiple reflections in the open pore system provided by the powder bed. Eventually, the laser beam of SLM is suitable for smaller focused spot sizes as well as smaller powder grain sizes, which enables finer geometrical resolutions and improved surface quality; due to the higher concentration of incident energy, SLM might also yield higher local cooling rates and thus smaller grain sizes in the resulting metallurgical microstructure.

Similar to SLM, SLS is based on a laser beam energy source; in contrast to SLM and EBM, however, the particles within the powder bed are not fully molten. In this context, solid state and liquid state sintering can be distinguished. Solid state sintering is governed by the thermally activated diffusion of atoms at temperatures below the melting point, leading to slowly growing necks between adjacent powder particles. However, the slow time scales governing this process and the resulting
low output rates often make it infeasible from an economic point of view. Liquid state sintering, on the contrary, aims at a partial melting of the powder: the molten liquid, typically spreading almost instantaneously between the unmolten particles, acts as a binder. In order to achieve defined ratios of molten and unmolten material, either a second material species with a lower melting point is employed as binder in powder mixtures, or only one material is used, whose smaller grains melt earlier than the larger ones, or the incident energy density is adjusted such that only the top surface of the powder grains melts while the core remains solid. Such sintering processes often result in green parts of high porosity, which requires subsequent heat treatment and post-processing. Related processes such as directed energy deposition (DED) by laser or electron beam share similarities with SLM and EBM, yet generally operate on larger length scales, e.g., on the scale of the molten trajectory. Also processes such as laser beam welding (LBW) and electron beam welding (EBW) are considered at some points of this work, since the underlying physical mechanisms are similar to those of SLM and EBM; however, while SLM and EBM aim at additively generating solid material structures out of layers of loose powder, LBW and EBW intend the connection of individual solid parts by partially melting their contact surfaces.

Organization of this article

The remainder of this article is structured as follows. First, the governing physical mechanisms, their detailed modeling, and numerical as well as analytical solution procedures in the context of SLM processes are reviewed, and the findings derived from these approaches are discussed. This part begins with the modeling of radiation and heat transfer in powder beds, followed by a comprehensive overview of existing approaches, classified by means of the three main categories of SLM modeling approaches found in the literature, namely macroscopic, mesoscopic, and microscopic models. A subsequent part focuses on experimental studies that are especially relevant to understanding the thermophysical mechanisms; following a similar structure, it first deals with experimental approaches to powder bed characterization and then with experimental investigations of effects visible on the macroscopic, mesoscopic, and microscopic levels, thus representing the counterpart of the modeling and simulation sections. Finally, the closing section provides recommendations concerning the practical
implementation of the SLM process and discusses open questions and potentials for future process improvement.

Modeling and simulation approaches

Approaches to modeling SLM can be classified into macroscopic, mesoscopic, and microscopic models. Macroscopic simulation models typically treat the powder phase as a homogenized continuum, resulting in efficient numerical tools capable of simulating the manufacturing of entire parts by SLM; these models commonly aim at determining the spatial distributions of temperature, residual stresses, and dimensional warping within SLM parts. Mesoscopic models typically resolve individual powder grains and the melt pool in order to determine part properties such as the adhesion between subsequent layers and the surface quality, as well as the creation mechanisms of defects such as pores and inclusions. Microscopic models consider the evolution of the metallurgical microstructure, involving the resulting grain sizes, shapes, and orientations as well as the creation of thermodynamically stable and unstable phases. The computational effort required by mesoscopic models currently limits their application to single-track simulations; however, the insights gained from mesoscopic models might serve as a basis to improve the continuum models of the powder and melt phase employed in macroscopic models. Similarly, microscopic models can readily address only small areas of the part, in the range of the powder layer thickness, and existing approaches are commonly limited in this respect; nevertheless, they may serve as a basis for the development of improved inhomogeneous and anisotropic continuum constitutive models, whose quality is essential for the quality of the simulation results derived from macroscopic simulation models. Macroscopic as well as mesoscopic models commonly require submodels for the radiation transfer and heat transfer within the powder phase; modeling approaches for powder bed radiation and heat transfer are therefore discussed in a dedicated subsection below.

A challenge no less severe than the actual derivation of physical models and their application is the development of powerful discretization techniques and numerical solution schemes, in order to enable a robust and efficient computational solution, two key factors for the characterization of SLM processes. The finite element method (FEM) and the finite difference method (FDM), as well as the finite volume method (FVM), represent the spatial discretization schemes
typically employed in the considered models, while the temporal discretization is almost exclusively based on explicit or implicit finite difference time integration schemes. For implicit schemes, the fully discretized problem is typically represented by a system of equations that is nonlinear in the unknown discrete primary variables, which requires the application of a nonlinear solver; depending on the characteristics of the nonlinear system of equations to be solved, the convergence of the nonlinear solution scheme might not always be guaranteed. Explicit schemes, on the contrary, allow a direct extrapolation from the known configuration at one time step to the unknown configuration at the next, with the resulting system of equations being linear in the discrete unknowns, so that no iterative nonlinear solution process is required. However, at least in the geometrically linear regime, implicit time integrators can be proven to be unconditionally stable, thus typically allowing considerably larger time step sizes than explicit schemes, which require small steps in order to preserve stability and to keep the total system energy bounded during the simulation. Consequently, implicit schemes are favorable for problems dominated by low-frequency response, where the large time step sizes possible with these schemes are sufficient to resolve the system answer. Explicit schemes, on the contrary, are rather suited to model high-frequency responses, e.g., phenomena such as high-velocity impacts: there, small time step sizes are required by explicit and implicit schemes alike in order to accurately resolve the system dynamics, and since the computational effort per time step is lower for explicit schemes and no nonlinear convergence has to be accounted for, these schemes are preferable in scenarios where time step sizes of the order dictated by explicit stability limits are required anyway. In the context of mesoscopic SLM models, explicit stability limits typically dictate time step sizes considerably smaller than those required to capture the relevant physical phenomena; consequently, implicit schemes would have the potential to achieve substantial computational savings. However, the complexities arising from the coupling of multiple fields and domains, as well as the geometrical characteristics prevalent in SLM, make the implicit treatment of fully resolved mesoscopic models a challenging task. An important question from a numerical point of view is in which way the different physical fields (e.g., thermal and mechanical fields) and domains (powder, melt, and solid phase) are coupled.
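The stability tradeoff described above can be illustrated with a minimal sketch, not taken from the review: one-dimensional heat conduction advanced with a forward (explicit) and a backward (implicit) Euler step, using arbitrary, made-up discretization parameters. With a diffusion number r above the explicit stability limit of 0.5, the explicit solution blows up while the implicit one stays bounded.

```python
import numpy as np

def step_explicit(T, r):
    """One forward Euler step of the 1D heat equation on a uniform grid.
    Stable only for the diffusion number r = alpha*dt/dx**2 <= 0.5."""
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    return Tn  # boundary values (Dirichlet T = 0) are left untouched

def step_implicit(T, r):
    """One backward Euler step: unconditionally stable, but requires
    the solution of a (here tridiagonal) linear system per step."""
    n = len(T)
    A = np.eye(n)
    for i in range(1, n - 1):
        A[i, i - 1] = -r
        A[i, i] = 1.0 + 2.0 * r
        A[i, i + 1] = -r
    return np.linalg.solve(A, T)

n = 21
T0 = np.zeros(n)
T0[n // 2] = 1.0          # initial hot spot in the middle of the domain

r = 0.6                   # deliberately above the explicit stability limit
Te, Ti = T0.copy(), T0.copy()
for _ in range(50):
    Te = step_explicit(Te, r)
    Ti = step_implicit(Ti, r)

explicit_max = float(np.max(np.abs(Te)))   # grows without bound
implicit_max = float(np.max(np.abs(Ti)))   # decays, bounded by the initial maximum
```

The same instability mechanism is what forces explicit mesoscopic SLM solvers to time step sizes far below those needed to resolve the physics.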
Three concepts for treating the phase boundaries shall be distinguished. A first category of models represents a sharp boundary between the phases, resulting in a jump of material properties and physical fields, under consideration of displacement and velocity continuity conditions and of mechanical equilibrium at the interface. Numerical realizations of such models are typically based on an explicit interface tracking, e.g., via level set schemes, and on discrete solution spaces allowing for a discontinuity of the primary variables, e.g., the extended finite element method (XFEM). A second category of schemes still identifies the interface in an explicit manner by means of an additional phase variable taking a distinct value in each phase; however, the interface is not represented as a sharp boundary but rather as a transition range of finite thickness, characterized by intermediate values of the phase variable. An example for such a scheme is the volume of fluid method (VOF). A third category is given by methods that introduce no additional variables in order to distinguish the phases; these schemes typically treat the phase boundary implicitly by means of specific values of the existing primary fields defining the phase boundary, e.g., the liquid-solid interface as the isothermal contour representing the melt temperature. The rapid change of physical properties across the phase boundary is considered in such schemes in terms of high gradients of the applied material parameters; thus the interface is again extended to a finite thickness, and a small transition region might lead to excessive gradients of the material parameters and, as a consequence, to discrete problems that are numerically challenging to solve. The SLM modeling approaches reviewed in this work almost exclusively rely on approaches of the second and third category.

Numerical schemes can also be distinguished concerning the succession in which the different physical fields and domains are solved: monolithic schemes solve the entire problem at once, while partitioned schemes solve the different physical fields subsequently. Partitioned schemes can further be subdivided into iterative, strongly coupled partitioned schemes, which iterate over the different fields several times within each time step until the solution of the monolithic problem statement is achieved, and staggered, weakly coupled partitioned schemes, which solve each field only once per time step without additional iterations, eventually leading to a solution that differs from the monolithic one.
Monolithic and strongly coupled partitioned schemes are typically combined with implicit time integrators, since these coupling schemes preserve the desirable stability properties of implicit integrators, i.e., large time step sizes. Weakly coupled schemes, on the other hand, are often combined with explicit time integrators, since both also lead to low computational costs per time step. For weakly coupled partitioned schemes, however, even in combination with implicit time integrators for the individual fields, the stability requirement often becomes restrictive, typically dictating small time step sizes and in some cases even prohibiting stability altogether (see, e.g., the corresponding discussion in the context of fluid-structure interaction (FSI) simulations). Since the time step size correlates with the computational costs, these considerations are of high practical interest.

Modeling of optical and thermal properties of the powder phase

The optical and thermal properties of the powder bed, in combination with the laser characteristics, crucially determine the heat distribution in the powder bed and the subsequent melt pool dynamics. This section addresses approaches to modeling the optical and thermal properties of the powder bed, i.e., the energy transfer from the laser beam source to the powder bed and the heat transfer within the powder bed. The considered approaches are typically employed as powder bed submodels within the three main categories of macroscopic, mesoscopic, and microscopic models. Often, the laser beam can be described to a good approximation by means of a Gaussian power distribution. As a first step, the question of how the laser energy is transferred into the SLM powder bed during the process can be described via the principles of thermal radiation, where geometrical optics rather than full wave propagation can be considered. For powder beds that are spread rather than mechanically compressed, with the high porosity of freely poured powder in the range typical of SLM powders, it has been shown that laser radiation penetrates through the powder pores to a depth of several particle diameters by multiple reflections, a depth comparable to the powder layer thickness; thus, laser energy is deposited not only on the surface but also in the bulk of the powder layer. The incident laser energy consists of portions that are reflected, absorbed, and transmitted upon impinging on the surface of powder particles; the emission of radiation by the powder particles themselves is often neglected. In terms of radiation reflection, specular and diffuse reflection can be distinguished, the latter exhibiting a reflection intensity equally distributed over all possible directions. Concerning radiation transmission,
a classification into opaque and transparent particles is typically made. The multiple reflection and scattering processes of light in the powder bed are additionally influenced by powder characteristics such as the mixture ratio, the mean particle size, the particle shape and size distribution, the packing density, and the powder bed depth, but also by the laser beam spot size and the polarization of the laser beam relative to the powder particle surfaces, which can lead to varying energy absorption even across individual powder particles. Since thermal conduction in the powder bed is typically weak due to the prevalent porosity, and since melting time scales are typically smaller than the heat conduction time scales of individual grains, the energy absorption varying across the powder bed as well as across individual powder particles has, in general, a considerable influence on the melt pool shape and the properties of the solidified track. A further factor of influence is given by possible surface oxidation and contamination effects. An extensive review of radiation transfer within such media can be found in the literature; the following mainly focuses on approaches in the context of SLM. According to their modeling approach, models of radiation transfer in heterogeneous media can be classified into models based on a continuum formulation, i.e., the radiation transfer equation (RTE) of a homogeneous participating optical medium, and models based on a discrete formulation of the RTE, which typically leads to ray tracing schemes. Both types of approaches are based on the simplifying assumptions of geometrical optics, which considers light rays that propagate on rectilinear paths as long as they travel through a homogeneous medium, bend under particular circumstances, may split in two at the interface of two dissimilar media, follow curved paths in a medium of changing refractive index, and may be absorbed or reflected.

Continuum model of powder bed radiation transfer

The general radiation transfer equation (RTE) known for homogeneous media underlies the homogenized continuum models of powder bed radiation transfer. The RTE describes the rate of change of the radiation energy flux density based on an energy balance considering radiation absorption (first term on the right-hand side), radiation emission (second term on the right-hand side), and scattering (first and third terms on the right-hand side). In this context, a directional unit vector and an infinitesimal solid angle increment are considered, the magnitude of the latter being, per definition, identical to the corresponding infinitesimal surface element on the unit sphere.
The product of the radiation intensity and the solid angle increment represents the energy flux density transferred by photons in the direction of the unit vector within the solid angle increment at the considered position. The constants entering the equation are the scattering and absorption coefficients; however, they are often replaced by the alternative constants of the extinction coefficient and the albedo, i.e., the portion of reflected light. A further ingredient is the normalized scattering phase function, stating the probability that radiation is scattered from one direction into another. Finally, the Planck blackbody function represents the radiation emission at a certain position; it depends on the powder temperature, the temperature of the ambient gas atmosphere, and the Stefan-Boltzmann constant. The radiation transfer equation was originally derived for homogeneous media, but it applies also to heterogeneous systems if the continuous quantities and parameters introduced so far are replaced by effective counterparts, determined via spatial averaging over a reference volume much greater than the length scales of the prevalent heterogeneities, i.e., the individual particles of the powder bed. For radiation heat transfer in powder beds, the extinction coefficient is typically determined by the structure of the powder, i.e., the size and shape of the particles and their arrangement, independent of the optical properties of the particle material such as its reflectance; the scattering characteristics, i.e., albedo and phase function, on the contrary, commonly depend on the reflective properties of the particle material. In the following, a coordinate system is chosen such that the positive z-axis points along the powder layer thickness direction, with z = 0 representing the upper surface of the powder bed and the powder layer thickness marking the boundary to the underlying solidified phase or substrate. The net radiation heat flux density q_r, following the typical definition of a heat flux per unit area, results from the energy flux density per unit solid angle via an integration over all directions of incident radiation, which can be expressed by means of the solid angle increments as an integration over the surface of the unit sphere. Assuming that the heat flux density components transverse to z are negligible, the incident power density deposited per unit volume can be determined from the variation of the resulting net flux q_r(z) with depth. Gusarov and co-workers considered the general problem of normal incidence of collimated radiation on a thin powder layer, consisting of opaque particles of arbitrary shapes and dimensions and placed on a reflecting substrate, in order to derive the resulting net absorptivities and deposited energy profiles.
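In conventional notation, the balance described above can be written as follows (a reconstruction in standard symbols, not necessarily the authors' exact notation: I is the radiation intensity along direction ŝ, κ the absorption and σ_s the scattering coefficient, Φ the scattering phase function, and I_eb the blackbody intensity):

```latex
% Radiative transfer equation for the homogenized powder bed:
% first RHS term: extinction (absorption + out-scattering),
% second term: emission, third term: in-scattering.
\hat{s} \cdot \nabla I(\mathbf{r},\hat{s})
   = -(\kappa + \sigma_{s})\, I(\mathbf{r},\hat{s})
   + \kappa\, I_{eb}(T)
   + \frac{\sigma_{s}}{4\pi} \int_{4\pi} \Phi(\hat{s}',\hat{s})\,
         I(\mathbf{r},\hat{s}')\, \mathrm{d}\Omega' ,
% with the extinction coefficient and albedo defined as
\beta = \kappa + \sigma_{s}, \qquad \omega = \sigma_{s}/\beta ,
% and the net radiative heat flux through planes of constant z:
q_{r}(z) = \int_{4\pi} I(\mathbf{r},\hat{s})\,(\hat{s}\cdot\hat{e}_{z})\, \mathrm{d}\Omega .
```

The locally deposited power density then follows from the depth derivative of q_r(z), consistent with the one-dimensional treatment adopted in the text.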
It was assumed that the particle size is sufficiently small compared to the laser spot size and the powder bed thickness to allow for the employed homogenization procedure. In SLM, the particle size is typically much greater than the wavelength of the radiation, thereby justifying the applicability of geometrical optics theory. The contribution representing the emission of individual powder grains was neglected due to the comparatively high energy density of the incident laser radiation. Gusarov et al. proposed a statistical model, based on the powder porosity and the specific powder surface area, in order to determine the required model parameters. This statistical model accounts for dense powder beds, where particles touch and radiation transfer can consequently not be treated as scattering by independent particles, an effect referred to as dependent scattering. Moreover, an analytical solution of the radiation transfer equation was derived, with boundary conditions of a normally collimated incident flux and specular reflection, among others. Under the additional assumption that the laser spot size is much larger than the absorption depth, the radiation transfer problem could be considered one-dimensional in the direction of the incident flux.

Figure: comparison of deposited energy profiles resulting from the RTE (full lines) and from ray tracing simulation (broken lines) for powders with different particle sizes and laser beam wavelengths.

In the figure, the resulting energy deposition profiles derived from the proposed RTE solution and by means of ray tracing simulation are compared for three examples, considering mixed powders, different particle sizes, as well as different laser beam wavelengths. Accordingly, the two modeling approaches are in good agreement, since a sufficiently deep powder bed and a sufficiently large laser beam size were chosen. While the laser energy absorbed in a thin powder layer on a reflective substrate increases with the layer thickness, the deposited energy density per unit volume decreases. The absorption depths in the figure seem comparatively high compared to typical values known for SLM powder layers, which might be an indication of the rather low packing densities considered in that study. Subsequently, the model was applied in independent works and was extended and refined. It was shown that the application of the model is reasonable if either one of the two phases, powder particles and pores, is opaque, if one phase has a much larger volume fraction, or if the radiation properties of the two phases are at least similar. On the contrary, the application of a model that simply averages the radiation intensities across the phases leads to problems when the two phases capture comparable volume fractions and show considerably different optical properties at the same time, e.g. for mixed powders, and thus seems unjustified for mixed materials. For such cases, a model based on partial homogenized radiation intensities, denoted as vector radiation transfer equation (VRTE), was proposed, in which the partial values are obtained by averaging over the individual phases. It consists of two transport equations for the partial homogenized radiation intensities; these equations are similar to the conventional RTE but contain additional terms that take into account the exchange of radiation between the individual phases. The structure of the medium is specified by the volume fractions of the phases and the specific surfaces of the phase boundaries, while the optical properties are described by the refractive indices and absorption coefficients of the phases. It was verified that the vector RTE model reduces to the conventional RTE model in case one of the two phases is opaque or one phase prevails in volume. Compared to the RTE, an analytical solution is no longer achievable for the VRTE, and a numerical solution method was exploited instead.

Ray tracing model of powder bed radiation transfer

As pointed out by Boley et al., the assumption underlying the RTE continuum model is questionable for thin metal powder layers with a layer thickness in the range of the powder particles and a laser spot size comparable to the size of the powder particles, scenarios that are often prevalent in SLM processes. The averaging process underlying the RTE continuum models can lead to considerable model errors compared to exact solutions, which typically yield an energy absorption that is highly non-uniform with respect to the spatial position of the laser beam. Ray tracing modeling is one possible approach to resolve these heterogeneities. However, it shall be emphasized that, depending on the spatial resolution required by the physical question to be answered, e.g. global residual stress distributions versus the specific locations of individual pores, also the application of RTE continuum models might be reasonable, since they typically require considerably lower computational effort compared to ray tracing simulations. In ray tracing simulations, the total energy emitted by the laser beam in a certain time interval is represented by a discrete ensemble of rays defined by their spatial position, orientation and energy. The position and energy associated with the individual rays are typically chosen such that the overall energy emission as well as the spatial energy distribution resulting from the entire ensemble equal the corresponding characteristics of the laser beam, e.g. a Gaussian energy distribution. After defining the individual rays, each ray's path is traced until it strikes an obstacle, i.e. a powder particle. Based on the optical properties of the obstacle surface, part of the ray energy is absorbed, whereas the remaining part is represented by a reflected ray with defined energy and orientation, which is further traced through the powder bed (see figure). Commonly, several reflections are considered until the ray's remaining energy drops below a predefined threshold. The individual ray energy contributions absorbed by a particle are accumulated during the simulation. Also this model is based on the principles of geometrical optics, and thus the corresponding requirements mentioned above must be fulfilled, in particular a particle radius considerably larger than the laser wavelength.

Figure: modeling of laser energy absorption on the powder grain scale based on a ray tracing approach; left: global view of the ray tracing scheme; right: absorptivity across a spherical particle.

The ray tracing approach has been employed for studying radiation problems in countless scientific disciplines, and several contributions applied this approach in order to study radiation transfer through porous media and packed beds. In one of the first approaches in this context, Wang et al. studied a powder mixture consisting of two species of spherical particles differing in particle size and material. The dimensions of the heated zone were estimated by determining the accumulated energy absorbed per particle; however, mechanisms of heat transfer such as heat conduction and convection besides radiation were not considered. Based on a powder bed consisting of tungsten carbide and cobalt particles and typical SLS system parameters, the ray tracing simulations as well as the accompanying experiments came to the consistent result that the absorbed energy is concentrated within a shallow depth below the powder bed surface. The resulting spatial distribution of absorbed energy, normalized by the total incident energy E_total, is plotted over the powder bed depth coordinate in the figure. The highest amount of primary laser beam radiation obviously arrives directly at the top surface of the powder bed, yet the maximum of the absorbed energy distribution occurs slightly beneath the top surface, since at this location secondary radiation contributions stemming from multiple reflections within the powder bed are also prevalent.

Figure: absorbed fraction of the incident energy for a mixture of tungsten carbide and cobalt particles.
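The tracing loop described above (intersect, deposit a fraction of the ray energy, reflect, repeat until a threshold) can be sketched as follows. This is a minimal illustration, not the scheme of any particular cited work: a constant absorptivity per hit and purely specular reflection are assumed, and rays originating inside a sphere are not handled.

```python
import numpy as np

def ray_sphere_hit(origin, direction, center, radius):
    """Smallest positive ray parameter t with |origin + t*direction - center| = radius,
    or None if the sphere is missed (direction assumed to be a unit vector)."""
    oc = origin - center
    b = np.dot(oc, direction)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - c
    if disc < 0.0:
        return None
    t = -b - np.sqrt(disc)
    return t if t > 1e-9 else None

def trace_ray(origin, direction, spheres, absorptivity=0.35,
              energy=1.0, threshold=1e-3, max_bounces=20):
    """Follow one ray through a bed of spheres [(center, radius), ...].
    At each hit, a fraction `absorptivity` of the remaining ray energy is
    deposited in the struck particle; the rest continues as a specularly
    reflected ray until it leaves the bed or falls below `threshold`."""
    deposited = np.zeros(len(spheres))
    for _ in range(max_bounces):
        hits = [(ray_sphere_hit(origin, direction, c, r), i)
                for i, (c, r) in enumerate(spheres)]
        hits = [(t, i) for t, i in hits if t is not None]
        if not hits:
            break  # ray leaves the powder bed
        t, i = min(hits)
        center, radius = spheres[i]
        point = origin + t * direction
        deposited[i] += absorptivity * energy   # accumulate per-particle energy
        energy *= (1.0 - absorptivity)
        if energy < threshold:
            break
        normal = (point - center) / radius
        direction = direction - 2.0 * np.dot(direction, normal) * normal
        origin = point
    return deposited
```

Summing `deposited` over many rays sampled from the beam's spatial and energy distribution yields the per-particle absorbed energy discussed in the text.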
This model was later refined by additionally considering the polarization dependence of laser absorption. To this end, the absorptivities of the laser beam components with perpendicular (index s) and parallel (index p) polarization states are distinguished according to the Fresnel formulas, which, for light traveling from vacuum into a medium with complex refractive index $\hat{n}$ at angle of incidence $\theta$, can be written as

$\alpha_\perp = 1 - \left|\dfrac{\cos\theta - \hat{n}\cos\theta_t}{\cos\theta + \hat{n}\cos\theta_t}\right|^2, \qquad \alpha_\parallel = 1 - \left|\dfrac{\hat{n}\cos\theta - \cos\theta_t}{\hat{n}\cos\theta + \cos\theta_t}\right|^2, \qquad \cos\theta_t = \sqrt{1 - \sin^2\theta/\hat{n}^2}.$

This description takes into account a model of the laser beam radiation as an electromagnetic wave with a defined speed and direction of propagation. The wave is assumed to be linearly polarized, with a constant unit vector representing the direction of the electric field. The complex refractive index $\hat{n}$ describes the speed of electromagnetic wave propagation via its real part and the absorption of the electromagnetic wave via its imaginary part, while $\theta$ is the angle between the direction of incidence and the normal vector onto the powder particle surface. The perpendicular and parallel components of the polarized electromagnetic wave are typically determined by means of a projection of the polarization vector onto the reference plane spanned by the propagation direction and the surface normal. In the figure, the resulting spatial absorptivity distribution across one isolated sphere of stainless steel, irradiated by a horizontally polarized laser beam, is depicted. Correspondingly, the distribution of energy absorption can vary considerably across one individual particle. The high practical relevance of these fluctuations shall be illustrated by the following time scale estimate. The time scale governing the homogenization of energy across a sphere of radius R due to thermal conduction can be estimated as t_c = R^2/alpha, with the thermal diffusivity alpha of the metal. The time scale governing the melting of a spherical particle can be approximated as t_m = R H_m/(A I), with the melting enthalpy per unit volume H_m, the laser irradiance I and the absorptivity A. For parameter values typical of SLM processes, the melting time is often considerably shorter than the diffusion time. Consequently, the inhomogeneous energy absorption results in partial melting of the particle, which might in turn considerably influence the melt pool dynamics and the creation mechanisms of pores. However, in cases where melting takes place on larger time scales than heat conduction, the initial energy and temperature distributions might homogenize before melting occurs, and consequently approaches considering a constant energy distribution across the particle (see above) might yield reasonable results. In fact, the thermal conduction through loose powder is typically
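The polarization-resolved Fresnel absorptivities discussed above can be evaluated directly with the complex refractive index. This sketch assumes light incident from vacuum and uses complex arithmetic for the transmitted-angle cosine; the function name and interface are illustrative, not taken from the cited works:

```python
import numpy as np

def fresnel_absorptivities(theta, n_complex):
    """Absorptivities A = 1 - |r|^2 for perpendicular (s) and parallel (p)
    polarization at angle of incidence theta (radians), for light traveling
    from vacuum into a medium with complex refractive index n_complex = n + i*k."""
    cos_i = np.cos(theta)
    # Snell's law with a complex index: cos(theta_t) = sqrt(1 - sin^2(theta)/n^2)
    cos_t = np.sqrt(1.0 - (np.sin(theta) / n_complex) ** 2 + 0j)
    r_s = (cos_i - n_complex * cos_t) / (cos_i + n_complex * cos_t)
    r_p = (n_complex * cos_i - cos_t) / (n_complex * cos_i + cos_t)
    return 1.0 - abs(r_s) ** 2, 1.0 - abs(r_p) ** 2
```

At normal incidence both polarizations coincide, A = 1 - |(1 - n)/(1 + n)|^2, which serves as a quick sanity check; at grazing incidence both absorptivities tend to zero, reproducing the strong angular variation across a spherical particle surface described in the text.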
governed by the gas filling the powder bed pores, which yields even larger time scales for heat conduction through the bed.

Figure: laser beam path across powders with a Gaussian size distribution (top) and with a bimodal size distribution at a given particle size ratio (bottom); variation of the absorptivity along the laser path for two different laser beam radii.

The dependence of the absorptivity on the laser beam size and the particle size distribution was investigated for a stainless steel alloy. Accordingly, two general factors of influence lead to absorption variations: one is related to the absorption within a single particle, considered above, and the other is related to the powder bed arrangement. In order to investigate the second factor of influence, ray tracing simulations of laser tracks across powder beds of a given thickness, resting on a solid substrate and consisting of two different powder mixtures, were conducted: a Gaussian distribution of a given average size (figure, top) and a bimodal distribution with maximal packing density at a given particle size ratio (figure, bottom). For both powder mixtures, typical distributions of the total absorbed energy along the scan track length are illustrated in the figure. It was observed that large laser beam sizes (see the corresponding curve in the figure) lead to a comparatively homogeneous spatial energy absorption, while laser beam sizes in the range of the particle diameter (see the other curve) lead to strongly fluctuating spatial absorption values. Furthermore, the overall absorptivity turned out to be higher for the bimodal distribution, although the degree of fluctuation is comparable for both mixtures. These fluctuations cannot be considered by homogenized continuum models, and thus their applicability requires sufficiently small particle sizes compared to the laser spot size and the powder layer thickness. The results derived there were later extended to a considerable range of different materials and verified experimentally. Concretely, the simulated powder bed absorptivity and the flat surface absorptivity were compared for all considered materials: plotting the powder bed absorptivity over the flat surface absorptivity revealed a smooth functional relation between the two quantities, as already observed before. This interesting observation allows one to extract the assumed powder bed absorptivity of a material from its given flat surface absorptivity, a procedure considered of highest practical relevance. A comparable ray tracing simulation model was proposed earlier in the context of SLS.

Figure: dependence of the powder layer absorptivity on the flat surface absorptivity for different materials.
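Before moving on, the competition between melting and in-particle heat conduction invoked in the time scale estimate above can be made concrete numerically. All parameter values below are illustrative assumptions of plausible SLM magnitudes, not values taken from the reviewed studies:

```python
# Rough comparison of the two time scales for a single powder grain of radius R:
# conduction (homogenization) time t_cond ~ R^2 / alpha versus melting time
# t_melt ~ R * H_m / (A * I).  All values are assumed, order-of-magnitude only.
R = 15e-6       # particle radius [m]                     (assumption)
alpha = 4e-6    # thermal diffusivity of the metal [m^2/s] (assumption)
H_m = 2.2e9     # melting enthalpy per unit volume [J/m^3] (assumption)
A = 0.35        # absorptivity [-]                         (assumption)
I = 5e10        # laser irradiance [W/m^2]                 (assumption)

t_cond = R ** 2 / alpha        # time to homogenize absorbed energy by conduction
t_melt = R * H_m / (A * I)     # time to deposit the melting enthalpy

print(f"t_cond = {t_cond:.2e} s, t_melt = {t_melt:.2e} s")
```

With these assumed values the melting time comes out roughly an order of magnitude shorter than the conduction time, consistent with the statement in the text that inhomogeneous absorption can translate into partial melting of a particle before its temperature field homogenizes.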
Continuum models of powder bed heat conduction

The macroscopic simulation models discussed later in this review regard the powder bed as a homogenized continuum, and consequently these approaches typically rely on a model for the effective, homogenized powder bed conductivity. On the contrary, the mesoscopic simulation models considered below resolve individual powder grains by spatial discretization; consequently, each powder particle can be considered as a separate solid body, which simplifies the powder bed heat conduction problem into two subproblems: heat conduction within the particles and heat conduction across the contact interfaces. The study of thermal transport in two-phase systems has long been a topic of scientific inquiry, with Maxwell and Rayleigh publishing models for this problem. These early contributions were based on comparatively strong model assumptions; for example, Rayleigh's treatment idealizes the geometry as regularly spaced spherical particles within the dispersed phase. An extension from spherical particles to a larger spectrum of geometric primitives was proposed in the contribution by Bruggeman. The general aim of more recent work has been to relax these assumptions and to match the models to the expected behavior in limit conditions, often in the context of interpreting experimental results. One example is the often-cited model published by Meredith and Tobias to explain the conductivity of emulsions, an approach assumed to yield more realistic predictions than the Maxwell and Bruggeman models at high volume fractions of the dispersed phase. In the review by Tsotsas and Martin, models are divided into those that treat primary parameters, i.e. the bulk conductivities and the porosity, and those that additionally treat secondary parameters, including radiative transfer, pressure dependence, contact between particles, deformation, as well as convective effects. The effective thermal conductivity of loose metallic powders with negligible contact surfaces is typically controlled by the gas in the pores. On the contrary, models have been derived for heat conduction in powder beds in the case of small yet finite contact areas between the particles, a scenario that typically arises during the sintering process of SLS or at the beginning of the powder particle melting process of SLM, when the powder bed is no longer loose but still in the solid state. The effective thermal conductivity within the powder bed in a direction given by a unit vector is defined on the basis of the heat flux density and the temperature gradient. As an illustration of the effective thermal conductivity, two conceptually different heat transfer models for heterogeneous systems shall briefly be presented: the Maxwell model mentioned above and a contact-based model of the second type. The Maxwell model describes the effective thermal conductivity of systems consisting of randomly packed spheres of one conductivity embedded in a continuous medium of another conductivity, and it is applicable for sufficiently small volume fractions of the spheres; it is easily verified that the model yields the expected results in the limit conditions. On the contrary, the contact-based model considers the effective thermal conductivity of two isolated spheres of bulk material conductivity and given radius exhibiting a circular contact area of small radius. The packing densities prevalent in powder beds make a direct application of the Maxwell model to these processes virtually impossible. For this reason, Gusarov et al. derived formulations for the thermal conductivity of regular and random packings of spherical particles based on models of the contact type. Comparable to that model, the effective conductivity of ordered powder beds is determined by summing the conductivity contributions of the individual contacting particle pairs. In the case of random packings, models of this type are homogenized by spatial integration, and the geometrical quantities are replaced by effective quantities: the particle volume fraction, the mean coordination number describing the average number of closest neighbors, and the average dimensionless contact size ratio, leading to an effective thermal conductivity of the Gusarov type. Gusarov et al. found a strong dependence on the contact size, which, at least qualitatively, was already known from experiments. Experimental characterization approaches typically consider the powder density or the coordination number; based on these derivations, however, it is recommended that future experiments treat the two parameters independently, since randomly packed powder beds with different coordination numbers have, according to the model, considerably differing effective conductivities at one and the same powder bed density. Besides such modeling approaches for the determination of effective powder bed conductivities, often also experimentally determined effective conductivities are employed in the macroscopic SLM models discussed later. An extensive comparison of experimental and modeling approaches for effective powder
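The Maxwell-type estimate referred to above can be written down explicitly. The closed form below is the standard Maxwell(-Garnett) result for dilute dispersed spheres; since the equation itself was lost in extraction, this is a reconstruction from the textbook form rather than a formula quoted from the review:

```python
def k_eff_maxwell(k_f, k_p, phi):
    """Maxwell(-Garnett) effective conductivity of spheres with conductivity k_p
    and volume fraction phi dispersed in a continuous medium of conductivity k_f.
    Strictly valid only for small phi (dilute, non-touching spheres)."""
    num = k_p + 2.0 * k_f + 2.0 * phi * (k_p - k_f)
    den = k_p + 2.0 * k_f - phi * (k_p - k_f)
    return k_f * num / den
```

The limit conditions mentioned in the text are easy to check: phi = 0 returns the matrix conductivity k_f and phi = 1 returns the particle conductivity k_p. Evaluating it with an assumed gas conductivity of 0.03 W/(m K), a metal conductivity of 20 W/(m K) and phi = 0.6 gives an effective value well below 1 W/(m K), illustrating why the conductivity of loose powder with negligible contacts is controlled by the pore gas, and also why the model becomes questionable at such high, touching packing fractions.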
bed conductivities can also be found in the classical work of Yagi and co-workers.

In conclusion and summary, ray tracing models are based on comparatively simple theory and yield a high degree of detail in the obtained solutions, but they also require considerable computational resources when applied to scenarios of practically relevant size. On the other hand, heat transfer continuum models are based on more complex theory and are in general not capable of accurately resolving details on length scales below the prevalent heterogeneities, i.e. the length scale of individual powder particles in the case of SLM, but they are considerably less expensive in computation and in simple cases even allow the derivation of analytical solutions, for example in the work of Gusarov et al. Importantly, ray tracing models require the specification of the powder bed structure, i.e. particle shapes, dimensions and coordinates, a problem that often requires additional assumptions. In principle, the distinction between the two model classes is comparable to the distinction between atomistic and continuum models in other physical disciplines such as electricity or mechanics. Unfortunately, for the characterization of SLM processes one is often interested in length scales comparable to the powder layer thickness, while the powder bed heterogeneities in the form of individual particles are on this very length scale, so the application of continuum models can often only be considered an estimate of the underlying radiation transfer processes. Moreover, the assumption of a laser beam penetrating the powder bed by means of multiple reflections actually applies to sintering or to SLM processes based on modulated scanning strategies, as long as no extended melt pool is prevalent. For SLM performed with continuous scanning, however, the resulting melt pool front typically proceeds with the laser beam position, and thus the laser beam impacts directly on the melt pool, not on undisturbed powder bed. In order to accurately resolve the impact of the laser beam on the melt pool, whose actual surface contour is typically considerably distorted, as part of the solution ray tracing techniques seem indispensable. Similar conclusions can be drawn when comparing mesoscopic and homogenized macroscopic models of powder bed heat conduction. If the local energy and temperature distribution within the powder bed, or even within individual powder grains, crucially influences the melting behavior on the mesoscopic scale, and if the creation of defects such as pores and inclusions shall be sufficiently resolved, then mesoscopic models with laser beam radiation transfer via ray tracing, and with heat conduction resolving the particles by a spatial discretization of the thermal problem, are mandatory for the questions relevant on this scale. However, when applying macroscopic SLM models, i.e. continuum approaches suffering from a powder homogenization error anyway, heat transfer continuum models appear a reasonable choice.

Macroscopic simulation models

Macroscopic simulation models in the context of SLM processes typically treat the powder phase as a homogenized continuum, described by means of effective, spatially averaged thermal and mechanical properties, without resolving individual powder grains. This homogenization procedure yields efficient numerical tools capable of simulating entire SLM builds of practically relevant size across practically relevant time scales. The thermal problem typically lies in the focus of interest of these models. In some of the applied works, an additional structural mechanics problem statement is considered, aiming at the assessment of global residual stress distributions and dimensional warping effects based on fully coupled thermomechanical interactions. The thermal problem is commonly given by a set of balance equations together with boundary and initial conditions, posed on the problem domain within the considered time interval (0, t_end). The first of these equations represents a variant of the energy equation typically employed in macroscopic and mesoscopic SLM models, with the density, the specific heat at constant pressure, the temperature, a velocity field, the thermal conductivity tensor and a heat source term. The two terms on its left-hand side represent the material time derivative of the thermal energy density, constituted by the local and the convective time derivative. As mentioned, a thermodynamically consistent statement of problems involving compressible materials would require additional coupling terms; however, the models discussed in the following typically assume either exact or approximate incompressibility and resign from these additional terms. Typically, an isotropic conductivity is assumed, which reduces the conductivity tensor to a scalar. Often, the source term is modeled based on the solution of the RTE, e.g. according to the one proposed by Gusarov et al. One boundary equation accounts for the essential boundary conditions, i.e. a prescribed temperature on part of the boundary, whereas another represents the natural boundary conditions, i.e. prescribed heat fluxes on the remaining boundary, typically accounting for thermal convection and radiation emission at
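Since the equation set itself was lost in extraction, the following is a plausible reconstruction of the thermal problem from the symbol definitions given above (density rho, heat capacity c_p, velocity v, conductivity k, source Q, convection coefficient h, emissivity epsilon, Stefan-Boltzmann constant sigma, ambient temperature T_a, melting temperature T_m and latent heat H_m); the exact form in the reviewed works may differ in detail:

```latex
\begin{aligned}
\rho c_p\left(\frac{\partial T}{\partial t} + \mathbf{v}\cdot\nabla T\right)
  &= \nabla\cdot\left(k\,\nabla T\right) + Q
  &&\text{in } \Omega\times(0,\,t_{\mathrm{end}}),\\
T &= \hat{T} &&\text{on } \Gamma_T,\\
-\,k\,\nabla T\cdot\mathbf{n} &= h\,(T - T_a) + \varepsilon\sigma\left(T^4 - T_a^4\right)
  &&\text{on } \Gamma_q,\\
k_s\,\nabla T_s\cdot\mathbf{n} - k_l\,\nabla T_l\cdot\mathbf{n}
  &= \rho H_m\, v_n, \quad T = T_m
  &&\text{on the melt interface},\\
T(\mathbf{x},0) &= T_0(\mathbf{x}) &&\text{in } \Omega.
\end{aligned}
```

The third line is the convection-radiation flux condition described in the text, and the fourth is the Stefan condition defining melting as a pure interface phenomenon, with $v_n$ the normal velocity of the interface.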
the powder surface, with the outward surface normal, the temperature of the ambient gas atmosphere, a convection coefficient and an emissivity. A further equation describes the phase change due to melting at the interface between the powder phase and the melt pool; its indices refer to the temperature gradients and thermal conductivities of the solid and liquid phase, respectively. Moreover, with the melting temperature, the latent heat of melting and the normal vector on the interface, it defines the melting process as a pure interface phenomenon. Eventually, an initial condition prescribes the initial temperature. If a solid mechanics problem is solved as well, the following system is considered in addition, again on the problem domain within the time interval (0, t_end). The first equation represents mechanical equilibrium, i.e. the balance of linear momentum, with the Cauchy stress tensor related to the primary displacement field and the temperature field via a constitutive law with contributions of elastic, plastic and thermal strains. The balance of angular momentum is fulfilled by definition due to the symmetry of the Cauchy stress tensor. The total time derivative of the velocity represents the material acceleration vector, and a further vector collects the volume forces acting on the physical domain. Similar to the thermal problem, essential and natural boundary conditions are given by equations prescribing displacements and tractions. An additional equation represents the requirement of displacement continuity and mechanical equilibrium at the relevant interfaces, with each interface characterized by a normal vector field and with superscripts denoting the quantities on the two different sides of the interface. However, the considered macroscopic models typically do not resolve these interfaces in the sense of a sharp interface with discontinuous material parameters, but are rather based on a smooth, homogenized transition between the different phases, with the latter variant being easier to realize numerically. Finally, the initial position and velocity fields are given by the initial conditions. The spatial discretization of the heat equation and the momentum equation is typically based on the finite element method (FEM), which requires a transfer of the equations into an equivalent weak form. For time integration, explicit as well as implicit approaches can be found, and the coupling of the thermal and mechanical problem is often realized in a staggered, partitioned manner. In the following, important representatives of the macroscopic models available in the literature are briefly discussed. As long as macroscopic models are considered, melt pool dynamics is not explicitly resolved and effective material properties are employed instead. The principal strategies for deriving macroscopic models are comparable for SLS, SLM and EBM processes in many aspects, and thus scientific contributions related to all of these processes are considered. In recent years, considerable research effort on the development of macroscopic, mesoscopic and microscopic models has been conducted at Lawrence Livermore National Laboratory (LLNL), including the macroscopic homogenized continuum model of SLM processes described by Hodge et al. In this model, the laser beam heat source term is taken from the radiation transfer model of Gusarov et al.; heat losses occurring, for example, due to thermal emission, evaporation and mass ejection are taken into account by reducing the nominal total laser power to an effective total laser power. In order to describe the phase transition between the two phases, a spatial phase function field is introduced, representing pure powder and pure consolidated material at its extreme values. Importantly, the consolidated phase considered in this model is assumed to have vanishing porosity and captures the melt as well as the solidified phase. In other words, the phase transition from powder to liquid, i.e. melting, is explicitly considered by means of the phase function, while the phase boundary between melt pool and solidified material is only accounted for by means of high gradients of the material properties at these spatial locations. All relevant thermal material parameters, e.g. conductivity and heat capacity, are considered as functions of temperature and phase, and thus of porosity. The spatial discretization of the process is based on the finite element method (FEM). In the subsequent numerical implementation, the phase boundary between powder and melt pool is not resolved in a sharp manner; instead, there typically exists a small band of finite elements in which both phases are prevalent, a fact reflected by intermediate values of the phase function. Correspondingly, the interface equation is not evaluated in its original form on the interface, but rather in an equivalent form within the volume elements: the phase boundary between powder and melt pool is tracked by means of the phase variables, and a spatially averaged version of the equation is considered for the phase transition, i.e. melting, instead. Moreover, a high value of the heat capacity is chosen in the range of the melting temperature in order to account for the latent heat released during the solidification process. Thus, the typical temperature plateau observable in enthalpy-temperature diagrams of elementally pure materials at the melting point is replaced by an interval of slowly increasing temperature. From a physical point of view, this might rather be comparable to the melting behavior of alloys with distinct liquidus and solidus temperatures; from a numerical point of view, it can be interpreted as a commonly employed regularization of the corresponding constraint equation for elementally pure materials in order to simplify the numerical solution process. Hodge et al. also considered the thermomechanical problem, coupling the thermal and mechanical simulations iteratively in a partitioned manner. A material law additionally accounting for thermal expansion and consolidation shrinkage was employed. The boundaries between the powder, melt and solidified phase are considered implicitly via strong gradients of the associated mechanical material parameters with respect to temperature and porosity, which means that the interfaces given by the corresponding equation are not explicitly resolved in the mechanical simulation either. Furthermore, one material model is applied to all three phases, i.e. the powder phase, the solidified phase, but also the liquid phase; thus, the detailed modeling of melt pool fluid dynamics as prevalent in the mesoscopic models discussed below is missing, a statement that applies in a similar manner to all macroscopic SLM models considered in this section.

Figure: comparison of melt pool geometries derived via macroscopic SLM models at different (low and high) scan velocities.

The thermal simulations confirmed the results of Gusarov et al. for stainless steel concerning the maximal temperature at the top surface as well as the shape of the melt pool. It was observed that the melt pool shape gets narrower and longer with increasing scan speed, and at some point a concave region at the tail of the melt pool boundary occurs, an effect explained by the higher thermal conductivity of the solidified material behind the melt pool compared to the powder material at its sides (see figure). Nevertheless, considering the similarities of the two models, it must be emphasized that both approaches are based on the same radiation transfer model for the incident laser energy, and the chosen heat source submodel might considerably influence the overall results, since the thermal conductivity within the powder phase is strongly limited by the low conductivity of the atmospheric gas filling the powder pores. The model explicitly considered the phase change including its thermal inertia.

Figure: laser beam scanning across an overhang; the beam first moves to the left in steps and then steps back to the right; blue color represents loose powder and red color consolidated, i.e. molten or solidified, material; the melt pool shape is indicated by black contour lines.
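The apparent-heat-capacity regularization described above, i.e. smearing the latent heat over a finite temperature interval instead of enforcing a sharp Stefan condition, can be sketched as follows. The function name and all numeric values are illustrative assumptions:

```python
import numpy as np

def apparent_cp(T, cp, L, T_m, dT):
    """Apparent heat capacity: the latent heat L [J/kg] is smeared uniformly
    over the interval [T_m - dT, T_m + dT] around the melting temperature T_m,
    replacing the sharp enthalpy jump of an elementally pure material."""
    T = np.asarray(T, dtype=float)
    c = np.full_like(T, cp)
    # inside the mushy interval, add L / (2*dT) so that integrating c over
    # the interval recovers exactly cp * 2*dT + L (energy conservation)
    return np.where(np.abs(T - T_m) <= dT, cp + L / (2.0 * dT), c)
```

Integrating this apparent capacity across the interval recovers the full latent heat, which is the property that makes the regularized model energetically consistent with the plateau it replaces; the price is the artificially gradual temperature rise through melting noted in the text.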
At high scan velocities, the melting process could be visualized by means of the melt pool boundary lagging behind the isothermal of the actual melting temperature range. Furthermore, the melt pool behavior when scanning across overhangs supported by loose, poorly conductive powder was investigated (see figure). The experimental results could be confirmed in the sense that typically larger melt pool sizes and excessive overheating, up to evaporation, occur when applying unchanged scan parameters in domains of low thermal conductivity such as overhang features. Commonly, this problem is addressed by adapting the laser parameters or by increasing the net thermal conductivity in these regions by means of additional support columns.

Figure: comparison of predicted and actual displacements arising from dimensional warping in SLM of triangular prisms; experimental measurements versus simulations with and without plastic hardening.

A validation of the simulation of Hodge et al. via experiments is given in a subsequent work, which also investigates the effects of build orientation, infill scan strategy and preheating on residual stresses and dimensional warping. Digital image correlation (DIC) was used to measure the displacements of triangular prisms fabricated via SLM as test artifacts; moreover, neutron diffraction experiments were used to measure the stress distribution. The figure illustrates typical results for a triangular prism printed vertically, with the shortest edge of the triangle forming one edge of the rectangular contact area with the build platform; the prisms carry a taller, thicker part along the bottom edge to enable a comparison with a previous study. In order to measure the dimensional warping, the prism was removed from the build platform, leading to a considerable distortion of the part geometry due to the previously present residual stresses (see figure). The experimentally measured displacements are illustrated in comparison with the corresponding simulation results, based on a material law either with or without plastic hardening. Besides the qualitative agreement of numerical and experimental results, the relative error of the simulation without plastic hardening lies in the range of the maximal displacement, while the variant with plastic hardening overestimates the maximal displacement by approximately a factor of two. For the displacements illustrated in the other direction, the behavior turned out to be almost opposite: rough agreement for the simulation based on the material law with plastic hardening, and large deviations for the variant without plastic hardening. This behavior might indicate that the comparatively simple isotropic constitutive laws applied in many macroscopic SLM models are still not accurate enough. Additionally, the simulation results show the Cauchy stresses resulting for two chosen preheating temperatures; the qualitative results are captured quite well, confirming that preheating reduces the peak values of the residual stresses.

Figure: simulated Cauchy stress for the cases of a preheated build platform.

A slightly different macroscopic SLM model was proposed by supplementing the radiation transfer model derived in the authors' earlier work with a thermal simulation framework based on the equations above. In contrast to the model of Hodge et al., no additional phase function field is introduced in order to explicitly resolve the interface between powder and melt pool; thus, the interface is given implicitly by an isothermal line. Since the effective thermal conductivity of loose metallic powders at low temperatures is controlled by the gas in the pores, the effective thermal conductivity of the considered powder is chosen by employing the gas conductivity for temperatures below the melting point. As surface contacts between the particles are formed once the powder melts, the thermal conductivity of the powder rapidly approaches the value of the solid material, which is chosen for the investigated stainless steel type; in the model, a sharp change at the melting point is assumed. Based on the simulated temperature fields and the derived isothermals, the melt pool geometry was analyzed for different laser beam scan velocities (see the figure and the corresponding discussion above). The pure thermal simulation framework was later extended by incorporating a simple plane strain version of the thermomechanical problem, with elastic constitutive parameters jumping across the phase boundaries. In another contribution, the pure thermal problem was solved for the EBM process based on the finite element method for spatial discretization and an implicit method for temporal discretization; additionally, adaptive mesh refinement strategies were applied in the direct vicinity of the electron beam. The penetration of the electron beam into the material is described by means of a dedicated model, and thermal losses due to emission are considered. The different phases of the investigated alloy are considered in the thermal model via the thermal material parameters: the heat capacity is prescribed as a linear function of temperature with high gradients around the melting point in order to account for the latent heat of melting (see figure), and the heat conductivity is described by a linear law that differs between the three phases powder, melt and solidified material (see figure). Further authors considered the fully coupled thermomechanical problem, wherein the energy equation is derived in a thermodynamically consistent manner from a free energy density functional, leading to a thermomechanical coupling term for compressible materials in the momentum as well as in the energy equation. The focus of that work lay on comparing the overall computational efficiency of two different numerical coupling schemes: a monolithic coupling scheme, solving the thermal and structural mechanics problem simultaneously, as well as a staggered, partitioned scheme. The proposed staggered scheme allowed for lower computational costs per time step due to the smaller system sizes resulting from the separate treatment of the thermal and structural mechanics problem; for stability reasons, however, the partitioned scheme required a considerably smaller time step size, resulting in a higher overall computational effort compared to the monolithic scheme. Consequently, the authors recommend the latter category of coupling schemes for future application to problem classes such as SLM.

Figure: dependence of the thermal material parameters, specific heat and thermal conductivity, on the current temperature and the considered phase.

Much of the work of the group of Stucker has focused on the development of simulation tools for SLM processes, commercialized in a software package. The thermal problem is solved by modeling the laser beam as a surface heat flux with Gaussian distribution and by accounting for heat losses at the powder bed surface due to thermal convection. In this work, an implicit Crank-Nicolson scheme for time integration and a spatial finite element discretization, combined with a dynamic adaptive mesh refinement and derefinement algorithm, are employed. While the physical model is comparatively simple, as it treats the pure thermal problem, the dynamic mesh adaption algorithm might allow for considerable efficiency gains compared to uniform meshing strategies. Further steps toward improving the aforementioned simulation models were conducted by developing homogenization strategies based on representative volume elements (RVE) of specific part geometries, e.g. support structures, among others. These tools were applied for predicting dimensional warping and for suggesting strategies of geometrical compensation. For the homogenized macroscopic continuum models considered in this section, it can generally be expected that high spatial gradients of the solution of SLM problems predominantly occur in the direct vicinity of the laser beam and at spatial locations where the individual part geometry is characterized by fine length scales, e.g. transitions from high to low wall thicknesses. Similarly, high temporal gradients are primarily expected in the direct neighborhood of the laser beam position. Consequently, simulation approaches based on adaption schemes for the spatial (see the algorithms discussed above) and temporal discretization are promising to save computational costs. Commercial FEM software for SLM simulation was also offered by Pan Computing, later acquired by Autodesk. For example, one work extended the contributions discussed so far by allowing for residual stress relaxation: specifically, the stress and plastic strain of the underlying constitutive model are reset to zero once the temperature exceeds a prescribed annealing temperature. Based on experimental verifications of the simulations, it is claimed that this temperature-induced relaxation plays a crucial role for EBM processes: the error in the maximal residual stresses and the geometrical distortion could reach considerable values when compared to a model without annealing effects. The computational efficiency of the model was increased by employing an inactive element activation strategy as well as a dynamic mesh coarsening algorithm. In the context of SLM process simulations, typically two different strategies of material deposition are distinguished. The notion of quiet elements refers to a scheme in which the elements representing the final part are already present at the beginning of the SLM process, whereat the elements of powder layers not deposited yet are characterized by artificial material properties, e.g. low thermal conductivity, specific heat and Young's modulus, intended to barely influence the results within the already deposited layers. On the other hand, the notion of inactive elements refers to a scheme that adds new finite elements with every newly deposited layer, while the elements associated with subsequent layers are not yet present. Of course, quiet element schemes result in a higher computational effort, since a large system of equations containing all degrees of freedom of the final part has to be solved in every time step, and the resulting system matrices might be ill-conditioned as a consequence of the artificial material parameters; support for inactive element schemes, on the other hand, varies among commercial FEM codes. There is an abundant number of scientific contributions solving the thermal problem prevalent in SLM processes based on macroscopic models and FEM discretization schemes in a manner similar to the examples already given. One exception explicitly tracks the interface between the powder and the melting phase: while the SLM models discussed so far typically assume the thermal properties of the powder based on the initial powder bed porosity, this approach solves the energy equation as well as an evolution equation for a temperature-dependent powder bed porosity field, describing the sintering process, simultaneously; the two equations are coupled by means of a model for the powder bed conductivity. In contrast to this SLS model with its explicit solution of the powder sintering state, i.e. the porosity, in order to accurately determine the current powder conductivity, such approaches are rather untypical for SLM. This may be explained by the fact that the time interval separating the states of loose powder and fully molten liquid is rather short, so the modeling error of assuming a simpler relation between powder temperature, porosity and conductivity seems inessential compared to the basic assumptions underlying typical macroscopic SLM models. In the works of Dai et al., a model for SLS and SLM is proposed to solve the thermomechanical problem defined on a small number of subsequent powder layers, with a special emphasis on the produced parts. The thermal problem accounts for heat losses due to radiation and convection and employs a Gaussian laser energy distribution; the structural mechanics problem is defined via a constitutive law accounting for elastic, plastic and thermal strains. The effect of volume shrinkage during the melting of loose powder is considered by means of a rather heuristic model that moves the finite element nodes in the direction of gravity by a distance defined via the initial powder porosity and mass conservation. In another work, a simple model considering one powder layer is proposed; the mechanical model assumes a state of plane stress and accounts for elastic and thermal strains, and the thermal and mechanical problems are solved in a partitioned manner based on the FEM. A further contribution proposes a finite element model of the thermal problem of SLS and SLM processes.
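The quiet versus inactive element bookkeeping distinguished above can be illustrated with a toy example. All names and values here are hypothetical; a real FEM code would of course manage elements, connectivity and system matrices rather than a flat list of conductivities:

```python
# Toy illustration of the two material-deposition strategies for a 1D stack
# of layers: quiet elements exist from the start with artificial properties,
# inactive elements are only created once their layer is deposited.
N_LAYERS = 5
K_REAL, K_QUIET = 20.0, 1e-6   # deposited vs. artificial "quiet" conductivity

def quiet_scheme(deposited):
    """All N_LAYERS elements exist from the start; layers not yet deposited
    carry artificial (near-zero) material parameters."""
    return [K_REAL if i < deposited else K_QUIET for i in range(N_LAYERS)]

def inactive_scheme(deposited):
    """Only already-deposited layers contribute elements (and unknowns)."""
    return [K_REAL] * deposited

print(quiet_scheme(2))     # 5 unknowns per step, 3 with artificial parameters
print(inactive_scheme(2))  # only 2 unknowns; the system grows layer by layer
```

The toy directly exhibits the trade-off stated in the text: the quiet scheme always solves the full-size system and carries near-zero parameters that can harm matrix conditioning, while the inactive scheme keeps the system small but requires the code to support adding elements during the simulation.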
In this model, two different empiric laws are proposed for the powder porosity during the sintering process; moreover, a law for the powder conductivity observed experimentally is considered. The laser beam is modeled as a volumetric heat source with an intensity decreasing linearly over the penetration depth. A computational model for determining the temperature history of a practical SLM process involving the scanning of multiple layers has also been proposed; this model builds on previously developed concepts and extends these approaches to the simulation of multiple layers. Furthermore, a comparatively rough yet efficient model was proposed to estimate residual stresses and dimensional warping effects occurring in the cooling phase of SLM processes. The principal idea is to combine several layers of the SLM process, to apply an equivalent thermal load to this integrated virtual layer within a heating cycle, and to determine the residual stresses and dimensional warping effects in the subsequent cooling cycle by means of a staggered scheme. In recent thermal SLM models similar to the ones discussed so far, an increased thermal conductivity of the melt pool is considered to account for convective melt pool heat transfer, however, without modeling the hydrodynamics. In contrast to the finite element models considered so far, one of these models employs the finite volume method (FVM) for spatial discretization in combination with an explicit Euler time integration scheme. The underlying model considers a pure thermal problem including the phases powder, melt, solid, and evaporated gas in order to determine melt pool shapes. Furthermore, the radiation transfer model originally proposed in ... is employed; based on the resulting temperature field, the volume fractions of the phases powder, melt, solid, and gas are determined in each computational cell defined by the FVM. Based on comparisons with experimental data, it was found that the consideration of evaporative heat loss is essential for realistic temperature field results. An interesting approach that crucially differs from the ones discussed so far proposes a model that combines procedures typically applied in macroscopic and mesoscopic SLM models: the powder phase is modeled as a homogenized continuum with effective properties as in typical macroscopic models, whereas the fluid dynamics within the melt pool is explicitly modeled, as common for mesoscopic SLM models. The melt pool fluid flow is modeled as a compressible, laminar, Newtonian liquid flow
governed navierstokes equations accounting surface tension buoyancy forces well frictional dissipation mushy zone typical alloys considered material interface thermal problem described basis laser beam energy source proposed well heat losses due thermal radiation solving problem based commercial computational fluid dynamics cfd solver resulting temperature fields used input data subsequent structural mechanics simulation sense partitioned field coupling employing commercial fem code structural mechanics problem combination material law solved order determine residual stress distributions solidified phase thermal mechanical properties relevant model assumed employed laser beam model valid ebm processes remaining model constituents seem directly transferable related processes slm although applied field domain coupling schemes appear comparatively rudimentary general idea employing efficient homogenized macroscale powder models simultaneously considering heat transfer melt pool basis cfd simulations regarded appealing basic physical phenomena typically resolved mesoscopic models discussed next section accessible models considered increased melt pool widths decreased melt pool peak temperatures consequence marangoni flow could already captured model although considerably lower resolution numerical cost compared mesoscopic models considerable number similar approaches might also relevant slm found fields laser electron beam welding see future slm simulation models could strongly benefit works mesoscopic simulation models macroscopic models consider physical phenomena length scale individual powder particles mesoscopic models explicitly resolve scales order study melt pool dynamics melt pool heat transport well wetting melt substrate powder particles typically models aim prediction part properties adhesion surface quality defects mesoscale pores inclusions models initial powder grain distribution either determined basis contact mechanics simulations employing discrete 
element method (DEM) (see ...) or by means of generic packing algorithms such as rain models. Subsequently, simulations are performed considering the heat transfer within the powder bed and the melting process as well as the heat transfer and hydrodynamics within the melt pool. Often, the unmelted powder grains are assumed to be spatially fixed, and a pure thermal problem is solved in the powder phase based on a proper laser beam model (see Section ...). The melt pool fluid dynamics are commonly modeled by means of the energy equation, considering the latent heat of melting, as well as the following set of balance equations accounting for conservation of mass and momentum of the incompressible flow:

\begin{align}
\nabla \cdot \mathbf{u} &= 0 && \text{in } \Omega,\\
\rho\left(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\,\mathbf{u}\right) &= -\nabla p + \rho\,\nu\,\Delta \mathbf{u} + \rho\,\mathbf{b} && \text{in } \Omega,\\
\mathbf{u} = \bar{\mathbf{u}} \;\; \text{on } \Gamma_u, &\qquad \boldsymbol{\sigma}\,\mathbf{n} = \bar{\mathbf{t}} \;\; \text{on } \Gamma_\sigma,\\
[\![\boldsymbol{\sigma}\,\mathbf{n}]\!] &= \gamma\,\kappa\,\mathbf{n} + \nabla_{\!s}\,\gamma && \text{on the free surface},\\
\mathbf{u}(\cdot,0) &= \mathbf{u}_0 && \text{in } \Omega.
\end{align}

In this system of equations, the continuity equation requires conservation of mass, while the Navier–Stokes momentum equation represents conservation of momentum; essential and natural boundary conditions are prescribed on the boundaries, and an initial velocity field is prescribed. The flow is typically assumed to be laminar and incompressible with a Newtonian stress tensor, where p represents the pressure and \nu the kinematic viscosity. The body force term \mathbf{b} typically accounts for gravity, but might contain additional contributions related to the phase change from metal powder to liquid, e.g., buoyancy forces as discussed in .... The jump condition accounts for surface tension effects between melt pool and ambient gas: while the velocity field is assumed to be continuous across the interface, the surface stress exhibits a jump, with the jump direction of the first term in the normal direction and of the second term in the tangential direction. The first term (with surface tension \gamma and curvature \kappa) is already present in the case of spatially constant surface tension, leading to the curvature and capillary effects of fluid surfaces observed already in hydrostatic problems. The second term occurs for spatially varying surface tension values, here solely considered due to spatial temperature gradients. Since in Newtonian fluids shear stresses are proportional to velocity gradients and thus always induce flow, this Marangoni effect resulting from surface tension variations crucially influences the convective heat transfer within the melt pool; together, surface tension significantly determines the melt pool shape and the resulting solidified surfaces (see ...). For the numerical solution of the mathematical problem defined above, often a staggered solution scheme relying on the sequential solution of the thermal problem in the powder bed and the fluid dynamics problem in the melt pool is applied. In this partitioned solution
procedure, melting powder particle zones are typically identified, and mass increments per time step are transferred from the solid to the fluid phase due to melting; the resulting contour lines define the boundary conditions of the subsequent fluid dynamics solution step. Many existing approaches consider a spatial discretization via the finite volume method (FVM) in combination with the volume of fluid (VOF) method in order to track the interface. The temporal discretization of the fluid dynamics problem is predominantly based on explicit time integration schemes. Given the required resolution of the spatial discretization, only small time step sizes are admissible for explicit time stepping schemes, which typically leads to high computational costs, so far only allowing the application of mesoscopic models to small-scale simulations. Arguably one of the first mesoscopic models of this kind was presented by Khairallah et al., where the continuum radiation transfer model discussed in Section ... was employed as laser beam source term, while the individual particles were resolved on the mesoscale in order to explicitly consider the conductive heat transfer within the powder bed, which is limited by the atmospheric gas in the pores and by the point contacts between powder grains (see ...). For the spatial discretization, the finite element method based on a uniform Cartesian mesh with fixed element size was applied, and the numerical solution process was based on a staggered solution of the thermal and hydrodynamics problems. The applied explicit time stepping scheme allowed for a robust treatment of the complex simulation problem in case proper stability requirements were fulfilled; however, the latter restriction limited the achievable time step sizes, and consequently the observable spatial and temporal problem dimensions, considerably, leading to considerable computational effort (CPU hours) in order to simulate a single laser track of limited length over short time spans. The model includes the surface tension effects prevalent in the melt pool flow as well as fluid viscosity and gravity effects, although the latter effect turned out to be negligible given the short time scales, the dominating surface tension effects, and the considered length scales; the Marangoni effect, i.e., temperature-dependent melt flow due to the surface tension characteristics, was not yet considered. Comparing the simulation with and without surface tension effects, the balling effect was attributed to instabilities of the liquid cylinder at high scanning velocities.
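The time step restriction mentioned above can be quantified with the standard stability limit of explicit Euler for the heat equation, Δt ≤ Δx²/(2·d·α) in d dimensions; the mesh size and thermal diffusivity below are hypothetical round numbers chosen only to illustrate the orders of magnitude involved.

```python
def explicit_diffusion_dt(dx, alpha, dim=3, safety=0.9):
    """Largest stable time step for an explicit Euler discretization of the
    heat equation: dt <= dx**2 / (2 * dim * alpha), times a safety factor."""
    return safety * dx**2 / (2 * dim * alpha)

# Hypothetical numbers: 5 micron mesh, thermal diffusivity 5e-6 m^2/s.
dt = explicit_diffusion_dt(dx=5e-6, alpha=5e-6)   # 7.5e-7 s
steps = 1e-3 / dt                                  # > 1000 steps per simulated ms
```

The quadratic dependence on the mesh size is what makes highly resolved explicit mesoscopic simulations so expensive: halving Δx quarters the admissible time step.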
Figure ...: Mesoscopic simulation of a prevalent SLM process with stainless steel powder. In order to validate the thermal building block of the simulation framework, effective powder conductivities calculated with the developed model were compared with analytical solutions and experimental observations with good agreement; the thermal conductivity of stainless steel powder at typical packing density turned out to be about a factor of three higher than the conductivity of the air filling the powder pores. The coupled thermo-hydrodynamics simulations revealed the importance of considering surface tension: it results in a smoother, less granular melt pool geometry, which eventually leads to higher effective thermal conductivities within the pool and also between pool and underlying substrate. This effect in turn led to an increased heat transfer, faster cooling rates, and consequently smaller melt pool sizes (Figure ...). The rapid thermal expansion directly under the laser beam turned out to induce high melt flow velocities, especially in the backward direction. Also, the balling effect, attributed to the instability of a long cylindrical fluid jet breaking into individual droplets, was observed at high scanning velocities (see Figure ... for a theoretical analysis). The model proposed in this work was later extended to also implicitly account for the effects of recoil pressure due to evaporation in the zone of the laser beam on the subjacent melt pool flow, for Marangoni convection, as well as for evaporative and radiative surface cooling (see also ... for details of the model). Moreover, a detailed resolution of the laser energy absorption is achieved based on the ray tracing model discussed in Section .... The surface tension is assumed to decrease linearly with temperature, which accounts for the Marangoni effects driving the melt flow from the hot laser spot toward the cold rear. As already observed in ..., the surface temperatures in the laser spot easily reach boiling values; the resulting vapor recoil pressure adds extra forces on the melt pool surface that create a depression under the laser (Figure ..., bottom right). These observations motivated the differentiation of three different melt pool regions, namely the depression region governed by recoil forces, the tail end region governed by surface tension forces, as well as a transition region in between (see Figure ...). The approach does not explicitly resolve the vapor flow but employs a traction boundary condition based on the recoil pressure model proposed in .... Accordingly, the
recoil pressure depends exponentially on the temperature,

\[ p_{\mathrm{recoil}}(T) = 0.54\, p_0 \exp\!\left( \frac{\Delta E_v\,(T - T_v)}{k_B\, T\, T_v} \right), \]

where in this model p_0 is the ambient pressure, \Delta E_v the evaporation energy per particle, k_B the Boltzmann constant, T the surface temperature, and T_v the boiling temperature. In order to accurately model the heat losses at the melt pool surface, an evaporation heat flux of the form

\[ \dot q_{\mathrm{evap}} = \beta_s\, \Delta h_v\, p_{\mathrm{recoil}}(T)\, \sqrt{ \frac{M}{2\pi R\, T} } \]

as well as a heat flux due to thermal radiation emission (see ...) are considered. Here, \beta_s represents the sticking coefficient, p_{\mathrm{recoil}} the recoil pressure according to the equation above, M the molar mass, R the gas constant, and T the surface temperature. However, the mass loss due to evaporation is not considered in the continuity equation underlying the model. The effect of considering or neglecting the different model components is illustrated in Figure .... Compared to the continuum radiation transfer model employed in ..., this model resolves the radiation absorption across the individual powder particles, leading to narrow connections to the underlying substrate and, in turn, to higher heat accumulations within the particles as a consequence of the reduced thermal conductivity. The model also captures scenarios where particles are only partially melted, which might contribute to surface and pore defects, more accurately. The principal surface tension effect, fostering strongly curved melt pool surfaces with minimized surface energy, is observable in Figure ...; additional melt flow is generated by buoyancy forces. The consideration of temperature-dependent surface tension allows for Marangoni convection, which induces a melt flow from the hot laser spot toward the cold rear (see Figure ...). This effect in turn increases the melt pool depth and contributes to melt flow recirculation and melt spattering, where colder liquid metal of lower viscosity is ejected from the pool. The additional consideration of the recoil pressure due to evaporation leads to a considerably deeper melt pool and increased convective heat transfer within the melt pool; in combination with the additional heat sinks due to evaporative and radiative surface cooling, the overall cooling rate is increased, and consequently this case exhibits the lowest amount of stored heat. Figure ...: Influence of considering the individual modeling components laser ray tracing, constant surface tension, temperature-dependent surface tension, and recoil pressure (top left to bottom right) on the resulting melt pool dynamics, visualized by the temperature field (ranging from blue to red) as well as the fluid velocity vector field; the red contour line represents the melt pool front.
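The exponential temperature dependence of the recoil pressure can be illustrated with a minimal sketch of an Anisimov-type model matching the structure described above; the numerical values of the evaporation energy and the boiling temperature are hypothetical placeholders, not material data from the cited works.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K

def recoil_pressure(T, p0=101325.0, dE_v=7.0e-19, T_v=3100.0):
    """Anisimov-type recoil pressure:
    p(T) = 0.54 * p0 * exp(dE_v * (T - T_v) / (k_B * T * T_v)),
    with ambient pressure p0, evaporation energy per particle dE_v
    (hypothetical value) and boiling temperature T_v (hypothetical value)."""
    return 0.54 * p0 * math.exp(dE_v * (T - T_v) / (K_B * T * T_v))

# At T = T_v the exponent vanishes, so p = 0.54 * p0; above T_v the
# pressure grows rapidly, which drives the surface depression.
```

A few hundred kelvin of superheat above the boiling temperature already multiplies the recoil pressure by a large factor, which is why the depression region is so sensitive to the peak surface temperature.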
stated overall melt pool dynamics direct vicinity laser beam strongly influenced recoil pressure whose magnitude increases exponentially temperature leading aforementioned depression directly laser depth depression due recoil pressure considered increase increasing laser power effect closely related keyhole mechanism observed laser electron beam welding see hand fluid dynamics cooler back end melt pool governed surface tension fostering balling tail cooling via analysis enabled also theoretical considerations concerning creation mechanism denudation zones low powder ditches alongside melt pool result lateral particles dragged melt pool surface tension responsible elevated edge effect typically observed first laser track powder layer accordingly effects denudation stronger first track layer since powder particles attracted two lateral sides melt pool simulation model combined experimental approaches order investigate physical phenomena mechanisms responsible occurrence denudation zone accordingly ambient gas flow induced vapor rising melt pool may eject surrounding particles powder layer contribute denudation figure fluid velocity magnitudes vectors three different melt pool regions depression region tail end region transition region laser beam scanning left right currently located top depression region alternative model proposed discrete element method dem applied order study spreading recoating inconel powder accounting frictional contact interaction well gravity driving forces subsequently problem melt pool free surface flow solved means commercial computational fluid dynamics cfd solver based finite difference method fdm resolves phase boundary melt pool surrounding atmosphere basis volume fluid vof method also model surface tension effects phenomena considered laser beam energy absorption within powder bed modeled simplified manner based prescribed heat flux boundary conditions top surface powder layer following gaussian distribution laser beam similar 
to the observations above, a depression of the melt pool directly underneath the laser beam could be observed; since the recoil pressure was not considered here, this observation is essentially attributed to the backward flow of molten metal due to Marangoni convection, comparable to Figure .... Also, the balling effect was observed in the numerical simulations and attributed to instabilities occurring in dependence of the melt pool dimensions and the powder particle arrangement. Accordingly, a higher packing density was found to reduce the likeliness of the balling effect and to increase the overall surface smoothness, while faster travel speed and lower laser power were identified as possible sources fostering the effect. On the one hand, it was claimed that simulation results could be obtained on the basis of a comparable spatial resolution with considerably lower computational effort compared to alternative approaches; on the other hand, the reference does not discuss the employed time integrator and the chosen time step size, which have a crucial influence on the computational performance. In their review article, Megahed et al. discuss several mesoscopic simulation results taken from previous contributions of the authors. With the model of Megahed et al., which is comparable to the mesoscopic models discussed in this section so far, it was among others shown that lower energy densities yielded smoother melt pool surfaces within the considered range of processing parameters. This effect was observed in a similar manner in another reference, where it was argued that a melt pool associated with a high energy density reaches higher peak temperature values, such that surface tension can in turn act over longer time spans, leading to a more pronounced balling effect. On the contrary, an increasing balling effect with decreasing energy density was reported elsewhere. There, the authors argued that the total absorbed energy density decreases with decreasing laser power at constant velocity and, similarly, that the total absorbed energy density decreases with increasing laser velocity at constant laser power, the latter scenario being known to foster the balling effect and the resulting surface roughness. However, the difference between the two cases is that the latter scenario results in a longer and more slender melt pool shape, since less time is available for the melt pool to cool while the laser beam moves a certain distance along the track, which fosters hydrodynamic instabilities according to the theory of .... The observations made in these references might thus not necessarily be contradictory; instead, the observations could be associated with the upper and lower bounds of an energy density interval that allows for stable processing, consistent with experimental findings. In contrast to the continuous laser scan mode considered so far, some SLM machines use modulation melting strategies. Megahed et al. also investigated the melt pool characteristics resulting from a modulated laser (see Figure ... for the resulting melt pool shapes at different points in time). As can be seen, laser modulation with the corresponding exposure time leads to singular deep melt pools; the melt pools grow in diameter with exposure time and join into one continuous melt track at the upper surface of the powder bed, with a depth typical of a continuous laser beam. However, it was argued that the deeper melt pool peaks might imply an additional anchorage of the new layer to the previous ones. In addition to this discussion, modulated strategies might also allow for a higher energy absorption, as the laser beam can penetrate deeply into the pores of the powder layer via multiple reflections instead of directly interacting with the melt pool surface, as is typically the case in the continuous scan mode; potential benefits could arise in terms of lower melt pool surface temperatures and decreased evaporative mass loss and recoil pressure. Figure ...: Melt pool shapes for a modulated laser, with the temperature at three different points in time ranging from blue to yellow. In this model, the melt flow fluid dynamics are solved by means of the finite volume method (FVM) employing the volume of fluid (VOF) method in order to resolve the interface flow. Besides the recoil pressure, surface tension, and buoyancy forces already considered in ..., the momentum equations are additionally supplemented by drag force contributions due to the transition through the mushy zone of the melting alloy, based on Darcy's term for porous media; in the energy equation, additional heat losses due to evaporation and convection as well as radiation emission are considered. A laser beam heat source model proposed in the context of laser beam welding is applied; since the radiation transfer mechanisms prevalent in SLM might strongly depend on the mode of operation and application, also the continuum radiation transfer laser models (see Section ...) can be questioned critically in this respect. Also here, a strong influence of the recoil pressure on the melt pool dynamics in the direct vicinity of the laser beam, as well as thereby induced material spattering, could be observed, even though significantly decreased laser track sizes as well as structured, instead of random, powder packings were considered in this reference.
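The energy density arguments above can be made concrete with the commonly used volumetric energy density E = P/(v·h·t), with laser power P, scan velocity v, hatch spacing h, and layer thickness t; this definition is a standard one in the SLM literature, and the parameter values below are hypothetical.

```python
def volumetric_energy_density(P, v, h, t):
    """Volumetric energy density in J/mm^3 for laser power P [W], scan
    speed v [mm/s], hatch spacing h [mm] and layer thickness t [mm]."""
    return P / (v * h * t)

# Constant power, doubled scan velocity -> halved energy density, the
# scenario reported above to foster balling and surface roughness:
e_slow = volumetric_energy_density(P=200.0, v=800.0, h=0.1, t=0.03)
e_fast = volumetric_energy_density(P=200.0, v=1600.0, h=0.1, t=0.03)
# e_fast == e_slow / 2
```

Note that, as discussed above, two parameter sets with identical E can still behave differently (e.g., through the melt pool aspect ratio), so E alone does not fully determine process stability.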
powder packings considered reference similar mesoscopic modeling approaches example given figure simulation results based model discretization cells size illustrated initial powder grains melt pool shapes resulting two different prescribed wetting angles wetting left worse wetting left mesoscopic models considered far discretized means either fem fdm fvm entirely different approach proposed based contributions method combination explicit time integration scheme applied order discretize mesoscopic incompressible melt flow model incorporating gravity surface tension wetting effects well melting solidification process prevalent slm initial powder bed generation based random packing algorithm employing rain model scheme general simulation framework might applicable slm well ebm processes actual simulations experiments conducted reference focused ebm process consequently energy absorption considered predominantly take place powder particle surfaces first incidence fact reflected chosen exponential law light attenuation dense matter applicable slm process free surface melt pool ambient gas numerically described basis averaged fluid volume material properties prevalent discretization cells boundary manner similar vom surface tension thus marangoni convection neglected figure shows melt pool shape resulting spatially fixed laser beam position different wetting angles determined considered material combination triple point otherwise identical system parameters according figure wetting behavior expressed via wetting angle considerably influences resulting melt pool shape detailed investigations revealed however resulting melt pool shape considerably influenced powder packing density least range lower packing densities also individual stochastic realization different powder beds identical packing density consequently local powder topology well wetting behavior resulting prevalent material combination might considerable influence balling solidifying tail addition single point 
simulations also simulations performed melt pool shapes resulting different scan velocities fixed laser beam power fixed spatial energy density illustrated figure observed melt pool stability constant laser power increases decreasing laser beam velocity turn yields higher spatial energy density small melt pool sizes range powder grain diameter results low energy input limits wetting behavior fosters round melt pool shapes minimal surface energy increasing size relative contribution surface tension wetting gravity effects changes melt pool able interconnect powder particles visible form pronounced wetting behavior hand comparable melt track topologies found constant spatial energy density advocating quantity relevant factor influence given process conditions concluded packing density powder bed significant effect melt pool characteristics constant laser beam power constant energy density figure mesoscopic simulation results influence laser beam velocity wetting behavior melt track shape model applied study buildup process powder material model simplifies actual slm process investigating plane spanned build scan direction furthermore dimensionless factors characterizing different physical phenomena time scales prevalent slm analyzed reference based comparison laser beam interaction time thermal diffusion time recommendation optimal laser beam velocity made order avoid overheating excessive evaporation one hand well unnecessarily high energy losses hand argued surface tension dominates gravity terms driving forces whereas inertia effects dominate viscous damping terms resistance forces furthermore rayleigh time scale characterizing relaxation interface perturbation action inertia surface tension forces turned one order magnitude smaller thermal diffusion time scales surface coalescence powder particles follows melting process nearly instantaneously model employed order illustrate formation pores simulation framework proposed supplemented means alternative models 
for electron energy dissipation in EBM processes. In a further reference, the method was extended with special emphasis on code parallelization strategies to increase the computational efficiency. Microscopic simulation models. The mechanical and physical properties of metals, and therefore of parts made by SLM, are influenced by the local composition and microstructure, which is characterized by grain size, grain morphology (shape), and grain orientation (texture). In general, the microstructure evolution in solidification processes is governed by the spatial temperature gradients, the cooling rates, and the solidification front velocities. As visualized in Figure ..., a qualitative solidification map can approximately be divided into areas of high and low cooling rates as well as into regimes of columnar and equiaxed structures. Typically, high cooling rates lead to rapid nucleation of grains and to microstructures characterized by small grain sizes and small dendrite arm spacings. The competition between thermal gradients and solidification front velocities can lead to columnar, elongated dendritic grain morphologies when high thermal gradients are dominating since, in this scenario, the interface becomes unstable and preferred orientations grow faster than others and finally form dendritic morphologies. On the contrary, equiaxed, rather spherical grain morphologies are expected when high solidification front velocities are dominating, as a consequence of a stable interface. In contrast to conventional metallurgy, e.g., casting, SLM is based on a highly localized energy supply yielding extremely high heating rates. As a consequence of the localized energy supply, the solidified material of previous melt tracks and layers remains at comparatively low temperatures, which in turn yields extreme cooling rates once the laser has passed. Figure ...: Microstructure map versus thermal gradients and melt front velocity. The magnitude of the heating and cooling rates typically lies in the range of .... The succession of extreme heating and cooling rates eventually also results in high spatial temperature gradients and solidification front velocities. This large range of cooling rates and thermal gradients induces a large variety of microstructures in the processed materials, differing in grain size, morphology, and orientation. Since high cooling rates often lead to phases with comparatively small grain sizes, as a trend, grain sizes obtained in SLM are typically smaller.
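The qualitative solidification map described above relates the thermal gradient G and the solidification front velocity R: the product G·R sets the cooling rate (and thus the fineness of the structure), while the ratio G/R controls the morphology (columnar for high G/R, equiaxed for low G/R). A minimal sketch of this read-off, with hypothetical threshold values chosen only for illustration:

```python
def classify_solidification(G, R, gr_columnar=1e7, cooling_fine=1e5):
    """Qualitative read-off from a solidification map: the ratio G/R of
    thermal gradient G [K/m] to front velocity R [m/s] controls the grain
    morphology (columnar for high G/R, equiaxed for low G/R), while the
    product G*R [K/s] is the cooling rate controlling the structure size.
    The threshold values are hypothetical, material-dependent quantities."""
    morphology = "columnar" if G / R > gr_columnar else "equiaxed"
    scale = "fine" if G * R > cooling_fine else "coarse"
    return morphology, scale

# High gradient, moderate front velocity: columnar and fine.
result = classify_solidification(G=5e6, R=0.1)   # ("columnar", "fine")
```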
Correspondingly, the material strength is often higher compared to alternative manufacturing processes such as casting. With the build direction commonly representing the direction of the highest temperature gradients and depending on the employed material, often elongated, dendritic grain morphologies, accompanied by a higher material strength in the build direction, are observed. Additionally, solid state transformations and grain growth contribute crucial microstructure changes during the repeated thermal cycles, with temperatures remaining below the melting point, experienced by previously deposited material after the initial solidification process (see, e.g., the transformation between martensite and alpha/beta phases discussed in Section ...). Knowledge of the resulting microstructure characteristics might provide important details for the formulation of continuum constitutive laws, which are crucial for the accurate residual stress predictions intended by macroscopic simulation models. On the other hand, the global temperature distributions provided by macroscopic continuum models represent essential input variables for studying the solidification process by means of microstructure models. Phase field models, based on various spatial and temporal discretization strategies, are considered one of the most established modeling approaches to study solidification processes and also diffusionless phase transformations occurring, for example, in the formation of martensitic phases. Phase field models are based on the definition of a functional, typically of the following form:

\[ \mathcal{F}[\phi, c, T, \mathbf{q}] \;=\; \int_\Omega \Big[ f(\phi, c, T) \;+\; \frac{\epsilon^2}{2}\, \lvert \nabla \phi \rvert^2 \;+\; s\, g(\phi)\, \lvert \nabla \mathbf{q} \rvert \Big] \, \mathrm{d}V. \]

Here, \phi represents the phase field variable (order parameter), taking a constant value in the solid phase (e.g., 1) and another in the liquid phase (e.g., 0), with intermediate values in a finitely thick interface region providing a smooth phase transition. Furthermore, c represents the volume or mass fraction of a certain chemical species prevalent in the problem of interest; for binary systems, the fraction of the second species is given by 1 - c, while general systems consisting of N species require N - 1 such variables. Besides the absolute temperature T, also the grain orientation can be considered in phase field models; in this context, \mathbf{q} represents a parametrization of the grain orientation given, e.g., by rotation vectors, quaternions, or Euler parameters. The first term represents the Gibbs free energy, either associated with the liquid or the solid phase at a certain location of the liquid or solid domain, with a proper interpolation of the values at locations in the phase boundary region. The second term represents energy contributions stemming
from the phase boundaries, which are characterized by a non-vanishing gradient of the phase field. The choice of the parameter \epsilon determines the resulting phase boundary thickness and requires a compromise between an accurate resolution of the small boundary layer thickness typically prevalent in physical systems (low value of \epsilon) and a certain degree of artificially increased thickness for reasons of computational efficiency and robustness (high value of \epsilon). Eventually, the last term yields energy contributions due to crystallographic orientation gradients. This contribution fosters uniform growth within individual grains and penalizes misorientations at grain boundaries, making larger grain sizes with reduced overall grain boundary surface the favorable configurations in thermodynamical equilibrium. Variation of the functional yields the Euler–Lagrange equations of the variational problem, determining the equilibrium solution, i.e., the variables rendering the functional an extremum. Starting from a system of undercooled melt, the equilibrium solution defined by the functional is found in practical simulations by transforming the problem into a transient one and searching for the associated steady state solution; thereto, the approach applies additional rate terms proportional to the left-hand sides of the equations. In SLM, the temperature field required as input variable of the phase field model might be provided by a macroscopic model based on .... Figure ...: Phase field simulation of a solidification and grain growth process assuming isotropic boundary energy and mobility. The spatial discretization resolution required for a sufficiently accurate phase boundary representation typically results in a considerable numerical effort; for this reason, many of the available phase field models are applied to simplified scenarios only. As an illustration of the general capability of phase field models, two existing approaches, even though not applied in the context of SLM, shall briefly be presented. The first approach is based on a finite difference discretization scheme in space and time. In this work, only one chemical species variable is required, and a discrete set of possible grain orientations is considered; consequently, the variable representing a continuous spectrum of possible orientations is replaced by additional phase field variables, one per possible orientation. Figure ... shows different time steps of a solidification problem allowing for a discrete set of grain orientations.
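The relaxation toward the steady state described above corresponds to a gradient flow of the functional, e.g. a rate equation of the form φ_t = −M δF/δφ for a non-conserved order parameter. A minimal 1D sketch for a single phase field with a double-well bulk energy f = W φ²(1−φ)²; all parameters are hypothetical and chosen only for illustration:

```python
import numpy as np

def allen_cahn_relax(phi, eps=0.05, W=1.0, M=1.0, dx=0.01, dt=1e-4, steps=2000):
    """Explicit gradient-flow relaxation of a 1D phase field (periodic grid):
    phi_t = -M * (f'(phi) - eps**2 * phi_xx),  f = W * phi**2 * (1 - phi)**2.
    A sharp solid/liquid interface relaxes toward a smooth profile of finite
    thickness controlled by eps."""
    phi = phi.astype(float).copy()
    for _ in range(steps):
        lap = (np.roll(phi, -1) - 2.0 * phi + np.roll(phi, 1)) / dx**2
        fprime = 2.0 * W * phi * (1.0 - phi) * (1.0 - 2.0 * phi)
        phi -= dt * M * (fprime - eps**2 * lap)
    return phi

x = np.linspace(0.0, 1.0, 100, endpoint=False)
phi0 = (np.abs(x - 0.5) < 0.25).astype(float)   # solid slab in liquid
phi = allen_cahn_relax(phi0)
# the unit jump of the initial profile is smeared into a smooth interface
```

The explicit step size obeys the same diffusion-type stability limit discussed for the mesoscopic models, here with the effective diffusivity M·eps².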
employed grid time steps computational effort seems large visualized figure initial number grains reduces approximately end simulation yielding decreased overall grain boundary energy thermodynamical equilibrium pfm based spatial fvm discretization implicit backward difference time stepping scheme given two different chemical species reflected one concentration variable well continuous grain orientation field given employed figure shows results simulation considering grain growth within region solid phases binary alloy chemical species phase diagram conditions starting randomly distributed nuclei see figure left phase begins growing within matrix see figure middle phase distribution end simulation due lower diffusion constant high cooling rates chosen example coring effect thermodynamical state characterized inhomogeneous concentration distribution see figure right observed representability configurations relevant modeling slm given high cooling rates present numerical simulation based finite volume grid time steps order simulate physical time leading computation time processors figure phase field simulation considering grain growth process within solid phase region binary alloy phase diagram initial distribution nuclei left final distribution middle well concentration second species right approaches microstructure modeling found restricted gong solidification growth primary grains ebm processing studied thereto phase field model similar employed representing concentration concentration solute treated combination thus ternary alloy simplified binary system influence grain orientation means parameter considered required temperature field provided via finite element solution energy equation figure shows simulated growth columnar structures different time steps solidification process initially number nuclei placed substrate molten powder layer number nuclei determined via regression equation accounting increasing nucleation density increasing undercooling initial 
dendrite growth occurred primarily in the scan direction (horizontal); after neighboring dendrites came into contact, subsequently vertical growth in the build direction was observed, with the resulting grain orientation parallel to the build direction as typically observed in experiments. In Figure ..., the final configurations resulting from different scan speeds are depicted. In expected agreement with the experiments considered, higher cooling rates induced by higher scan velocities result in a higher nucleation density and eventually a finer grain structure, still oriented in the build direction. A second category of approaches often applied to study solidification processes is given by cellular automaton schemes. Like PFMs, these schemes require the solution of the thermal field as input; however, the schemes do not rely on an additional phase field variable that explicitly describes the location of individual phases and phase boundaries. Instead, the evolution of grain growth is typically constructed in a rather geometrical manner by tracking the interface based on cell capturing schemes, under consideration of the given initial nucleus orientation and the current interface geometry. A comprehensive overview of methodologies for modeling microstructure evolution, among others considering PFM and cellular automaton schemes, is also given in .... Figure ...: Phase field simulation of growth in EBM with nucleation at the substrate, at different time steps. Figure ...: Phase field modeling of dendrite growth for different scan velocities (low, medium, high). Columnar grain growth during the solidification of an Inconel alloy was studied based on a realization of the cellular automaton approach, visualized in Figure ...: at every time step, the temperature field as well as the phase distribution are exchanged in a weakly coupled manner, with the required temperature field derived by means of the mesoscopic lattice Boltzmann simulation model already discussed in Section .... The study predicts columnar grain growth in the build direction; in contrast to ..., different grain orientations are distinguished in the model. The effect of stray grain formation due to partially molten powder particles at the left and right boundaries of the computational domain could be taken into account thanks to the powder bed resolution on the mesoscopic scale. Also captured is epitaxial grain growth in the build direction, i.e., grain growth in a newly deposited layer continuing the crystallographic structures of the grains.
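Cellular automaton schemes of the kind described above track grain growth geometrically: an undercooled liquid cell captures the grain identity (orientation) of an already solidified neighbor. A deliberately simplified sketch without a nucleation model or capture-geometry corrections, purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def ca_growth_step(grain, undercooled):
    """One cellular-automaton step on a periodic grid: each liquid cell
    (grain id 0) that is undercooled and has a solid von Neumann neighbor
    adopts the grain id (orientation) of one such neighbor."""
    new = grain.copy()
    nx, ny = grain.shape
    for i in range(nx):
        for j in range(ny):
            if grain[i, j] == 0 and undercooled[i, j]:
                neighbors = [grain[(i + di) % nx, (j + dj) % ny]
                             for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]
                solid = [g for g in neighbors if g > 0]
                if solid:
                    new[i, j] = rng.choice(solid)
    return new

grain = np.zeros((20, 20), dtype=int)
grain[0, 5], grain[0, 15] = 1, 2            # two seed nuclei at the substrate
undercooled = np.ones((20, 20), dtype=bool)  # uniformly undercooled domain
for _ in range(25):
    grain = ca_growth_step(grain, undercooled)
# after enough steps every cell belongs to grain 1 or grain 2
```

In a real scheme the capture rule would depend on the local temperature field and on the crystallographic orientation of the growing grain, rather than on a uniform undercooling flag.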
Grain structures prevalent in the previous layer are predicted to persist, leading to grain sizes that considerably exceed the powder layer height. The effect of grain coarsening in the solid phase, i.e. the competition of solid grains of different orientation leading to the growth of certain grains, is addressed by this scheme (Figure: exchange of temperature field and phase information between the mesoscopic and microscopic simulation models). Also, in a recent contribution, first steps were made towards a simulation framework for predicting the microstructure evolution resulting from solidification during SLM processing: the thermal field is simulated based on an FEM discretization, while the microstructure evolution is described by a coupled model. A particular problem in SLM processing of Inconel is the segregation and formation of the Laves phase in the interdendritic region between grains, which may considerably degrade the mechanical properties. Nie employed a numerical scheme based on a stochastic approach for studying the nucleation and growth of dendrites as well as the segregation of niobium and the formation of Laves phase particles during solidification; additionally, a discretization of the heat conduction equation was employed for solving the thermal problem, whose solution is the required input to the stochastic microstructure evolution model. Based on numerical simulations, Nie demonstrated that low cooling rates as well as a high ratio of temperature gradient to solidification front velocity yield microstructures tending towards large columnar dendrite arm spacings and, consequently, continuously distributed coarse Laves phase particles. It was thus suggested to increase cooling rates and decrease temperature gradients in order to obtain small equiaxed dendrite arm spacings and discrete Laves phase particles (Figure: formation of Laves phase particles under different solidification conditions).

Summary of existing microscale models and potential future developments: Computational efficiency represents one key requirement for SLM simulation tools capable of capturing practically relevant time and length scales. Potential improvements of the existing simulation frameworks comprise temporally and spatially adaptive time and space discretization schemes, the use of implicit time integrators, efficient code design suitable for parallel computing, as well as the development of efficient and consistent strategies for material deposition and dynamic adaption of the computational domain. Besides computational performance, the existing simulation tools require improvements in terms of model and discretization accuracy. (Figure: potential future improvements and coupling of the submodels. Macroscopic model: outputs are the temperature field, residual stresses, and dimensional warping; needed are improved material laws, models for heat absorption and transport in powder and melt, and effective properties. Mesoscopic model: provides thermal boundary conditions and the temperature field; outputs are surface quality, layer adhesion, and porosity; needed are a reduction of computational costs via mesh adaptivity and implicit schemes, improved material laws, as well as powder dynamics, particle collisions, and vapor and gas flow. Microscopic model: outputs are grain size, morphology, texture, and phase distribution; needed are models of solid-state transformations and of multiple chemical species.)

Macroscopic models: Macroscopic SLM models typically consider entire parts in order to predict global temperature fields, residual stresses, and dimensional warping. The melt pool shape and the resulting temperature distribution crucially depend on an accurate model of laser radiation transfer and of heat conduction and transfer within the powder phase. In this context, macroscopic models that consider the powder phase as spatially homogenized could be improved by transferring information from mesoscopic models. Also, model information related to the melt pool hydrodynamics that determine the heat transfer within the molten phase could either be gained from mesoscopic models or be explicitly considered in macroscopic models following existing approaches from the fields of laser and electron beam welding. Such measures might considerably improve the accuracy of the predicted temperature field in the direct vicinity of the melt pool. Since macroscopic models tend to strongly abstract the complex physics prevalent in this regime, they can be expected to provide good predictions of the temperature field mainly in regions away from the melt pool, where the heat flow is predominantly governed by the global part geometry, the thermal boundary conditions, and the solid material characteristics. However, one of the ultimate goals of macroscopic models is the prediction of residual stress distributions and the associated thermal strains.
While thermal strains have been identified as the kinematic origin of residual stresses, the actual magnitude of these stresses, as well as the magnitude of the maximally admissible stresses given by the yield strength, is essentially determined by the solid material properties and thus by the underlying metallurgical microstructure, which is, however, itself part of the unknown solution variables of the SLM process. Consequently, even when derived on the basis of exact temperature solutions, the significance of residual stress predictions remains strongly limited as long as comparatively simple material laws are employed. Supplying macroscopic constitutive models with details concerning material inhomogeneity and anisotropy by means of microscale SLM models could therefore drastically improve the quality of predictive macroscopic SLM models.

Mesoscopic models: Mesoscopic SLM models resolve the length scales of individual powder grains in order to accurately describe radiation and heat transfer within the powder bed as well as heat and mass transfer within the melt pool, governed by surface tension, wetting effects, and recoil pressure. The ultimate goal pursued with these highly resolved models is to gain an essential understanding of the underlying physical phenomena responsible for melt track stability, the resulting adhesion of subsequent layers and surface quality, as well as the creation mechanisms of pores and other types of volume and surface defects. The high spatial resolution naturally goes along with considerable computational effort. So far, the computational challenge of solving the resulting numerical problems, which involve complex field and domain couplings, has been addressed with explicit time integration, limiting the currently observable length scales to single tracks and the observable time scales to short intervals, despite the massive usage of high performance computing resources. Concerning model accuracy, recent experimental investigations suggest that the gas flow above the melt pool might considerably affect melt pool and powder bed dynamics. While first contributions have already considered evaporation in an implicit manner via recoil pressure and heat transfer models, an explicit modeling of the ambient gas flow, at least in the direct vicinity of the melt pool, could allow considerable insights into the governing physical mechanisms and suggest strategies for influencing the typically undesirable gas dynamics. Also, effects such as powder particle ejection and denudation are not sufficiently understood and require further investigation. Existing mesoscopic models typically consider spatially fixed powder particles; models that account for powder particle dynamics and collisions could allow these effects to be studied and understood in detail. Eventually, mesoscopic models could benefit from improved thermal and mechanical boundary conditions at the interface between the representative mesoscopic volume and the global SLM part; in this context, macroscopic SLM models might provide useful data.

Microscopic models: Microscopic SLM models investigate the metallurgical microstructure evolution resulting from the high temperature gradients and extreme heating and cooling rates of the SLM process, and they aim at the prediction of the resulting grain sizes, morphologies, and textures as well as phase distributions. While mesoscopic models intend rather universal statements concerning the optimal adjustment of parameters such as laser beam velocity and power, powder layer thickness and packing density, as well as grain shape and size distribution for a given powder material, the global temperature distributions, residual stress fields, and microstructure evolutions derived from macroscopic and microscopic SLM models strongly depend on the specific part geometry. Consequently, efficient numerical tools are required in order to enable the simulation of entire parts within acceptable simulation times. A possible exchange of information between the macroscopic and the microscopic scale is that macroscopic models typically provide the temperature field solution as input to microscopic SLM models. Compared to the variety of existing macroscopic and mesoscopic SLM models, microscopic modeling approaches that take into account the specific thermal conditions prevalent in SLM processes may be regarded as less elaborated, and existing approaches rely on a simplified representation of the problem. Although many of the physical mechanisms governing the metallurgical microstructure evolution are multi-component in nature, current approaches consider at most two individual chemical species; an explicit modeling of all relevant alloying elements might be desirable in order to describe experimentally observed effects such as the segregation of individual alloying components. Also, the mechanisms of solid-state phase transformation and grain growth observed in experiments are likely to change the resulting material considerably.
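The stochastic nucleation-and-growth modeling referenced in this section (for example the dendrite growth scheme of Nie) can be illustrated with a deliberately minimal cellular-automaton sketch. Everything here is illustrative: the grid, the neighborhood, and the constant growth-probability rule are assumptions chosen for exposition, not the scheme used in the cited works.

```python
import random

def grow_grains(nx, ny, seeds, steps, p_growth=1.0, rng=None):
    """Minimal 2D cellular-automaton grain growth sketch.

    grid[i][j] == 0 means liquid; positive integers are grain IDs.
    Each step, every liquid cell with at least one solid 4-neighbor
    adopts the ID of a randomly chosen solid neighbor with probability
    p_growth (a crude stand-in for local growth kinetics)."""
    rng = rng or random.Random(0)
    grid = [[0] * ny for _ in range(nx)]
    for gid, (i, j) in enumerate(seeds, start=1):
        grid[i][j] = gid  # nucleation sites, one grain ID per seed
    for _ in range(steps):
        updates = []
        for i in range(nx):
            for j in range(ny):
                if grid[i][j] != 0:
                    continue
                solid = [grid[a][b]
                         for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                         if 0 <= a < nx and 0 <= b < ny and grid[a][b] != 0]
                if solid and rng.random() < p_growth:
                    updates.append((i, j, rng.choice(solid)))
        for i, j, gid in updates:  # synchronous update of the whole grid
            grid[i][j] = gid
    return grid
```

With `p_growth` tied to a local undercooling computed from a thermal field instead of a constant, the same skeleton reproduces, qualitatively, the competition between differently oriented grains that the microscopic models discussed above resolve.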
Since material properties change during the repeated thermal cycles that follow the initial solidification process, models of these solid-state mechanisms might be desirable supplementations of future frameworks.

Experimental studies of thermophysical mechanisms in SLM: In the following, the modeling approaches discussed above shall be supplemented with representative methods of experimental characterization. The next section focuses on powder bed radiation and heat transfer, which were discussed from a rather theoretical point of view earlier. The subsequent sections address the experimental characterization of melt track stability, surface quality and defects, residual stresses and dimensional warping effects, as well as metallurgical microstructures and grain morphology. These sections represent the experimental counterpart to the modeling sections: exactly those aspects prevalent on the different length scales of a part that were described by means of macroscopic, mesoscopic, and microscopic models are considered here.

Optical and thermal characterization of powders: Initial research on laser-powder interaction began with surface modification techniques and cladding processes in mind. Haag et al. describe an archetypal experiment for measuring the absorption coefficients of metal powder, wherein a laser of known power is used to irradiate a powder bed and the absorption coefficient is calculated from the powder bed temperature. In this study, lasers were used to heat aluminum, iron, titanium aluminide, and copper powders of selected particle sizes, validated via SEM; the key results are summarized in the corresponding figure (effects of powder grain size and material as well as laser power on the relative powder layer absorption). In the range of low laser power, neither the absorption coefficients nor the consideration of different powder sizes cause more than slight variations of the overall powder bed absorption. An increase of the absorption of iron and copper at higher laser powers, and correspondingly higher temperatures, was attributed to surface oxidation of the material. In a practically identical experiment, Tolochko et al. catalog nominal absorption coefficients of materials relevant to SLS and SLM processing at the two common laser wavelengths. The results indicate that the shorter wavelength represents the superior choice for processing metals, with absorption coefficients generally lying above the values observed at the longer wavelength; conversely, the absorption of the longer-wavelength radiation is greater for the cataloged polymer materials, while ceramic materials are shown to vary widely.

(Figure: optical depth versus powder layer thickness for iron powder of different grain sizes.) McVey et al. presented an approach to measure the optical power delivered to, reflected by, and transmitted through a layer of powder using integrating spheres above and below the powder bed. The effective optical depth, i.e. the logarithm of the ratio of measured to incident light, is plotted versus the powder bed thickness for the applied laser beam wavelength. Accordingly, the absorption of light shows a strong dependence on particle size, with finer particles yielding stronger absorption. More recently, Rombouts et al. studied the thermal conductivity of powdered stainless steel, iron, and copper. Their apparatus relies on a modulated laser beam heating an optically opaque sample container, which subsequently conducts heat into the powder bed within it; under these circumstances, the heat flowing through the powder bed is read by detectors using amplifiers indexed to the laser modulation. The experimental results (Figure: thermal conductivities of metallic powder beds for different materials, grain sizes, and distributions) show powder conductivities two to three orders of magnitude lower than the bulk conductivities of fused stainless steel, iron, and copper. It is argued that thermal transport in powder beds of fine particles occurs primarily through the gaseous phase via Knudsen diffusion, with direct conduction becoming the governing effect for larger particle sizes on the millimeter scale.

Measurement of residual stresses and dimensional warping: As already stated earlier, basically two thermal regimes responsible for the creation of residual stresses can be distinguished in SLM. First, stresses might be induced in the solid substrate underneath the melt pool due to the high thermal gradients and cooling rates in this region; these residual stresses are influenced by the powder structure and the melt pool and might result in delamination of the current material layer. Secondly, residual stresses might also result as a consequence of the repeated thermal cycles prevalent in material layers at a larger distance below the top layer. The main factors of influence in this second region are given by the specific geometry of the part, such as slender columns, thin walls, and overhanging structures, as well as by the thermal boundary conditions in the form of fixations to the build platform, which are relevant even for primitive geometries such as cubes.
The second category of thermal stresses might occur due to the temperature gradient in build direction, which typically results in compressive stresses in the lower layers, induced as the layers of higher temperature above them cool. In all cases, thermal stresses need to be reduced or, better, avoided: in order to fully exploit the mechanical strength of a part exposed to external loads, in order to avoid dimensional warping when removing parts from the build platform, and eventually in order to avoid crack propagation, which is typically initiated at locations of residual stress peaks. On the one hand, residual stresses can be relieved by annealing after the SLM process and before build plate removal; on the other hand, such a heat treatment may compromise dimensional accuracy and impose grain growth, demanding a tradeoff between these diverse requirements.

One study examined the effect of the laser scan pattern on the properties of Inconel. In the laser beam scanning strategy depicted in the corresponding figure, the part infill is fused by sequentially scanning square regions known as islands; several island dimensions were studied at fixed laser scan speed, layer thickness, and spot size. The study begins with a discussion of part density: while some specimens approached the theoretical density, the specimens built with smaller islands featured lower densities (see also the next section for a discussion of mesoscale defects and pores). These findings are congruent with surface micrographs, which show increased porosity within the smaller island sizes and even cracking within one specimen. The data also show comparatively little change of the offset yield strength and the ultimate tensile strength, yet the elongation at break increases with increasing island size, in agreement with the surface micrographs: an increased density of defects serves as sites at which cracks propagate and fracture initiates. Finally, residual stresses were evaluated. Counterintuitively, the cracked specimen featured the lowest residual stress, an observation believed to result from stress relaxation due to the cracking; as expected, the remaining specimens showed greater residual stresses. Greater thermal gradients in the smaller-island specimens, arising from a greater degree of heat remaining from the previous pass, explain their greater residual stresses compared to the larger-island specimens (Figure: determination of optimal island sizes for the SLM scanning strategy).

Hodge et al. combined a simulation and experimental approach in order to access the residual stresses as well as the amount of dimensional warping observed in components, finding the stresses to be dependent upon the build orientation and the infill scan strategy described earlier. Digital image correlation (DIC) was used to measure the dimensional changes of triangular test components. This experimental technique is based upon successive images from which strain measurements are extracted while the object undergoes a physical change such as a temperature variation or external loading. Hodge et al. use this technique to monitor the surface of the triangular test components as they are removed from the build platform: this process changes the residual stress distribution within the component, thereby inducing a corresponding change in the strain measurements. The previously discussed simulation results were further augmented using neutron diffraction experiments to directly measure the residual stress distribution in the test pieces. Like DIC, this experimental method is well established in solid mechanics for mapping stress distributions in materials; the full strain tensor may be determined by probing a volume from a plurality of directions, where, of course, each measurement represents an average strain tensor over the volume probed. Hodge et al. mainly employed the described experimental techniques for the verification of their simulation model (see the earlier section for a discussion of the results).

Mercelis and Kruth evaluated residual stresses as dependent on scan strategy, build height, and the treatment of the base plate. They relied on sectioning of the parts followed by diffraction studies to determine the residual stresses. X-ray diffraction determines residual stresses by principles identical to those of neutron diffraction; however, the comparatively low penetration of this technique makes it useful only as a surface technique, hence the need for sectioning. The authors clearly showed that residual stresses increase with component height as long as the parts stay connected to the baseplate, and high stress levels, typically in the range of the material's yield strength, could be observed. Also, the exposure strategy used to fuse the powder layers has a large influence on the residual stress levels developed: stresses were found to be larger perpendicular to the scan direction than in the scan direction, and a scanning strategy subdividing the surface into small islands resulted in a lower maximum stress value. Preheating the baseplate was shown to reduce the induced residual stresses. Finally, the authors showed that removing the part from the baseplate relaxes a large portion of the residual stresses in the part, but negatively impacts the dimensional accuracy of the removed component.

Havermann et al. presented a unique method based on fiber optics embedded within SLM parts. The authors fabricated rectangular test pieces in which, at the location where a strain measurement was desired, a fiber optic strain sensor (a Bragg grating) was placed in the part. Strains induced by residual stresses in the component thereby also deform the optical fiber and change the optical transmission of the embedded Bragg grating, enabling a quantification of the strain. The small fiber diameter employed enabled measurements within the first several layers of the SLM parts, where the interaction with the baseplate is central to the development of stresses. The authors observed increasing compressive residual stresses as the first layers above the fiber were fused and as further layers built above the fiber allowed the component to cool; considerable residual stresses were measured in this regime.

Kempen et al. studied the effect of the build platform preheating temperature in order to lower thermal gradients and finally yield parts of high density. The corresponding figure shows three samples built using three different preheating temperatures (left, middle, right; blocks of high speed steel, HSS, manufactured by SLM, showing the critical influence of the preheating temperature on cracking). As can be seen, a higher preheating temperature clearly results in lower residual stresses and a lower degree of crack formation. In practical applications, high preheating temperatures and long preheating times might, however, lead to undesirable sintering of the loose powder, which complicates the removal of the final part.

Characterization of melt track quality and defect detection: In this section, exemplary experimental approaches for characterizing melt track stability and quality as well as possible defects are discussed. In recent years, much effort has been invested in the monitoring of SLM and EBM processes. In-process quality control and defect detection center on identifying pores, balling, unfused powder, and cracking as a result of the process parameters. In parallel, high-speed monitoring is researched with the aim of real-time process control. The results of these efforts have manifested in several commercially available systems for the monitoring of metal processes, shown in the corresponding figure; in the following, we discuss several of the key works behind these products.
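The embedded Bragg-grating readout described in the residual stress measurements above reduces, at first order, to a linear conversion from wavelength shift to strain. A minimal sketch follows; the relation and the photoelastic coefficient value (about 0.22, a typical value for silica fiber) are standard fiber-optics assumptions, not numbers taken from this review, and temperature cross-sensitivity is ignored.

```python
def fbg_strain(wavelength_shift_nm, bragg_wavelength_nm, photoelastic_coeff=0.22):
    """Convert a measured Bragg wavelength shift to axial strain.

    Uses the standard first-order relation
        d_lambda / lambda = (1 - p_e) * strain,
    i.e. strain = (d_lambda / lambda) / (1 - p_e).
    p_e ~= 0.22 is an assumed typical effective photoelastic
    coefficient for silica fiber; temperature effects are ignored."""
    return (wavelength_shift_nm / bragg_wavelength_nm) / (1.0 - photoelastic_coeff)

# Example: a 1.2 nm shift at a 1550 nm grating
eps = fbg_strain(1.2, 1550.0)  # about 1e-3 strain
```

A shift of 1.2 nm at a 1550 nm grating thus corresponds to roughly 1000 microstrain, which is the order of magnitude at which residual stresses near the yield strength of metals become plausible.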
For a complete treatment, we refer to the review by Everton et al. (Figure: summary of different commercially available monitoring systems for SLM, EBM, and DED processing.)

Craeghs et al. studied melt pool heat transfer using a radiometer and a high-speed camera arranged coaxially with the laser. In this work, they describe an edge effect that occurs when a track is scanned without an adjacent contour. As already discussed above, heat from scan segments adjacent to neighboring contours is shown to conduct into the underlying layers and into the previously solidified adjacent track and layer; however, there is no such heat sink from an initial contour for the second path, meaning that the bulk of the heat flux must be directed into the underlying layer. As a result, the melt pool is larger, reaches higher peak temperatures, stays hot longer, and incorporates material from subsequent contours still to be melted. Overhanging features were also studied, and it was found that scan parameters suitable for standard processing conditions delivered much too much power when the initial layers of overhanging features were fabricated over areas of unfused powder; the extra heat enhances Rayleigh instabilities, leading to balling, poor surface finish, and undesirable mechanical properties. Finally, acute corners were shown to present many challenges for temperature control; specifically, the case of a scan pattern incorporating an acute corner angle was investigated, where, due to the reduced area for heat flow out of the melt pool, which lies partly over the already scanned path, the extra heat results in unwanted formations at the apex. The experimental hardware was also used to investigate the influence of support strategies on the process temperatures during the fabrication of overhanging features. The authors printed structures whose overhung regions were supported by uniformly spaced and sized column-like additions; the spatial density of these columns was shown to greatly influence the maximum temperatures achieved in the overhanging layer, and therefore porosity and surface finish, since the support structure serves as a heat sink.

Krauss et al. performed long-wave infrared (LWIR) imaging to determine component porosity in test specimens. The study begins by deriving an approximation for the thermal diffusivity in terms of the laser power, scan velocity, and hatch spacing, together with an efficiency of energy absorption into the powder layer (taken as an assumed constant), the specific heat capacity and density of the material as usual, the ambient temperature of the build environment, and the radiometrically determined temperature per pixel as a function of time. The authors then assign an imperfection level based upon the observed porosity and the degree of binding error; this imperfection level is determined to be inversely proportional to the diffusivity for cubic specimens of fixed layer height, with a good correlation coefficient. Based on a proper calibration of this relation, the proposed procedure enables the determination of part imperfection from measured temperature profiles. The method is shown to be sensitive to delamination and to porosity arising from the process parameters; this sensitivity was demonstrated over a range from nearly fully dense material to a substantial void fraction, and the ultimate sensitivity of the method was calculated. The authors note, however, that the spatial and temporal resolution of the imaging sensors represents the greatest challenge. In the study, detailed information on laser power and scan rate is provided together with the resulting effects on thermal diffusivity and maximal temperature, and an observation of melt sputter is presented. (Figure: detection of porosity via melt pool size and temperature; comparison of predicted and actual defect locations, shown as black areas.)

Simultaneously, Clijsters et al. studied monitoring and quality control, working towards feedback process control, using an InGaAs NIR camera and a pair of photodiodes with differing band-pass filters. The authors extract signals representing the melt pool radiance, i.e. the intensity of the emitted light, as well as the melt pool area, length, and width. They report success in the detection of overheating of the melt pool, specifically at the extremes of fill scan vectors, along with increased porosity in these regions. Using these techniques, the authors were able to predict locations of porosity within parts, which was verified with cross-sectional images (see the corresponding figure for an illustration).

Experimentally determined process maps are constituted by gathering data on part density and properties in relation to scan rate, hatch spacing, laser power, and so on. Ultimate control of SLM will arguably require the determination of local scan parameters according to the local part geometry and the thermal boundary conditions present; the approaches discussed here instead determine a generalized process window that is applied to approximately achieve full density. The literature in this field is abundant, considering the wide variety of materials and processes. Thomas et al. propose a methodology for normalizing the process parameters of SLM and EBM. The methodology builds on previous work in which a dimensionless laser beam power and a dimensionless velocity were defined.
Originally, these dimensionless quantities were identified from analytical solutions derived for the temperature profiles resulting from a moving heat source acting on a body, a model applied, for example, to study heat transport in laser beam welding. The effective laser power, defined as the laser power times the absorption coefficient, is related to the temperature difference between the initial powder bed temperature and the melting temperature, the thermal conductivity, and the laser beam radius; a second relation normalizes the velocity of the laser beam heat source by means of the thermal diffusivity. According to the choice of these parameters, the peak temperature and the heating rate are controlled. Thomas et al. supplemented these parameters with the following dimensionless quantities: a dimensionless thickness of the powder layer, defined via the physical powder bed depth and the laser beam radius, and the hatch spacing normalized by the laser beam radius. Assuming a given powder bed porosity, the product of effective density, heat capacity, and the temperature difference between melt temperature and initial temperature represents the energy density required to raise the temperature of the powder, while normalizing the effectively absorbed laser power by the product of velocity, layer thickness, and hatch spacing yields the absorbed energy per unit volume. Assuming latent heats of melting in the range typically observed for metals, as well as typical preheating temperatures, Thomas et al. postulated an energy density required to heat the powder to the melting temperature and subsequently melt it. Consequently, the last dimensionless parameter, the ratio E/E_min, represents the ratio of the incident energy density to the energy density required for heating and melting the powder. Parameter combinations found in different experimental contributions in the literature are plotted in a diagram of normalized hatch spacing versus E/E_min, as illustrated in the corresponding figure (normalized processing diagram showing different realizations from the literature; the inverse dimensionless hatch spacing is plotted over the dimensionless energy ratio E/E_min, ranging from full overlap to vanishing overlap of two successive scan tracks).

In order to gain an understanding of the specific choice of the diagram axes, a slightly different version of the normalized hatch spacing shall be considered in the following. Consider a powder bed surface of given dimensions in scan and hatch direction: a certain number of scan tracks is required in order to scan the entire surface, and a certain time is required for scanning one track. On this basis, the total amount of energy absorbed by one powder layer is given on the one hand and, based on the assumptions above, the amount of energy required to heat and melt the entire powder bed on the other hand. It is easily verified that the ratio E/E_min exactly represents the ratio of these two energies, i.e. the energy absorbed by the entire powder bed over the energy required for heating and melting the entire powder bed. The statements made on the basis of the alternative dimensionless hatch spacing are equivalent when considering the relation between the two definitions; consequently, the isopleths of constant E/E_min plotted in the figure, multiplied by a constant factor, yield the practically relevant ratio of absorbed energy to the energy required for heating and melting the entire powder bed. For completeness, the experimental raw data from the literature underlying the data points of the figure are summarized in the corresponding table (literature survey of SLM and EBM process parameters required for the normalization and the creation of process maps: heat source type, processing platform, alloy system, beam power, velocity, layer height, hatch spacing, and beam radius for the studies of Thomas, Juechter, Vranken, Qui, Kamath, Ziolkowski, Cooper, Boswell, Carter, and Brif, among others, on Arcam, Concept Laser, SLM/Realizer, and EOS machines, with thermophysical properties taken from ASM International handbooks and further literature sources).

Based on the failure mechanisms reported in the literature and included in the figure, excessively high energy deposition may result in overheating, cracking, and part swelling, whereas insufficient energy deposition may yield incomplete melting and incorporation of the powder bed and, consequently, undesired porosity and void formation. From the graphical results, Thomas et al. deduce that, across different processes and alloys, the amount of effectively absorbed energy per powder layer is typically chosen at least a factor of four higher than the minimally required energy for heating and melting the powder of one layer. Such a factor might become reasonable when considering that substantial parts of the neighboring tracks and underlying layers are remelted in order to provide good adhesion, as well as additional thermal losses due to emission, convection, and evaporation at the powder bed surface and heat conduction across the build platform. (Figure: porosity in SLM versus energy density for different laser powers.)

Thomas et al. went on to conduct experiments using EBM to develop a material-specific processing map. They found that high quality, i.e. the absence of defect voids, was obtained at the highest energy input of the trade space studied. This region sacrifices production speed: slow scanning speeds and many scans per given part planar area at low hatch spacing. At higher speeds and wider hatch spacings, the Vickers hardness number (VHN) was observed to increase as a consequence of the higher cooling rates and the possible creation of harder phases, while the structural integrity was compromised by the presence of undesirable microstructural features such as lack-of-fusion defects. While isopleth lines of constant energy ratio might provide a rough process window as well as an evaluation of the process energy efficiency, this approach does not allow a precise assessment of the stability and surface quality of individual melt tracks. For example, it is obvious from practice that an excessively high energy deposition per melt track cannot be compensated by a larger hatch spacing; however, exactly this would be suggested by the layer-based definition. A similar discussion holds for the dimensionless energy E/E_min itself, represented by vertical lines in the figure, which can be shown to represent the ratio of the absorbed energy per melt track to the energy required for heating and melting the track. However, even the comparison of results related to constant values of E/E_min exhibits difficulties: considering, for example, the mechanisms of powder bed radiation transfer discussed earlier, it becomes obvious that an excessively high beam power cannot typically be compensated by a high powder layer thickness; due to the typical radiation transfer depth determined by the powder layer structure and the low thermal conductivity of the powder, it is rather likely that such an intended compensation results in excessive evaporation at the powder bed surface in combination with unmolten powder at the bottom of the thick powder layer. For specific materials and process parameter sets, a direct optimization of scan parameters in relation to porosity can readily be performed: Gong et al. varied the energy density via the laser beam power and velocity while keeping the powder bed depth constant.
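The layer-energy reasoning above, i.e. the absorbed volumetric energy density compared against the energy density needed to heat and melt the powder, can be made concrete with a small calculation. The formulas and all material values below are assumptions chosen for illustration; they follow a common E = A P / (v h l) normalization from the SLM literature rather than the exact beam-radius-based definitions of Thomas et al.

```python
def energy_ratio(power_W, absorptivity, speed_m_s, hatch_m, layer_m,
                 rho_kg_m3, cp_J_kgK, T_melt_K, T0_K, latent_J_kg=0.0):
    """Ratio of the absorbed volumetric energy density to the energy
    density needed to heat powder from T0 to the melting point
    (optionally including latent heat). The assumed forms are
        E     = A * P / (v * h * l)
        E_min = rho * (cp * (T_melt - T0) + L_f),
    which are common normalizations, stated here as assumptions."""
    e_abs = absorptivity * power_W / (speed_m_s * hatch_m * layer_m)
    e_min = rho_kg_m3 * (cp_J_kgK * (T_melt_K - T0_K) + latent_J_kg)
    return e_abs / e_min

# Illustrative (assumed) values loosely inspired by a titanium alloy:
r = energy_ratio(power_W=200.0, absorptivity=0.5, speed_m_s=1.0,
                 hatch_m=100e-6, layer_m=50e-6,
                 rho_kg_m3=4200.0, cp_J_kgK=600.0,
                 T_melt_K=1923.0, T0_K=293.0)
```

With these assumed values the ratio comes out near five, i.e. in the vicinity of the "at least a factor of four" margin quoted above, which is consistent with the remelting and thermal-loss arguments given there.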
At constant powder density, the subplot of the corresponding figure representing constant laser power shows the relation between the incident energy density and part quality, using porosity as the quality metric. Accordingly, Gong et al. identified four broad regimes of process parameters by comparing the porosity resulting from differing combinations of beam speed and beam power. The regimes are shown in the figure (processing regimes of SLM depending on laser power and scanning velocity: I, processing window; II, incomplete melting; III, overheating; IV, severe overheating). Region I is described as the processing window producing fully dense SLM parts. Process parameters in regions II and III produce porosity defects due to underheating, i.e. incomplete melting of powder particles, and due to overheating, i.e. evaporation and gas cavitation, respectively. Region IV, referring to a severe overheating regime, induces severely accumulated thermal strains and part distortions that inhibit the operation of the SLM recoater. A similar approach was taken for stainless steel. (Figure: visualization of the physical mechanisms associated with melt pool dynamics and keyhole formation. Figure: results of micro computed tomography of samples processed by SLM, ordered by increasing effective track penetration depth, marked with red arrows; the scans show the transition from the large, numerous pores generated by keyhole formation, to highly dense parts, to highly porous samples generated by lack of fusion; an inhomogeneity observed in the scans stems from the use of a different set of machine processing parameters for the top skin to achieve high part surface quality.)

Commonly, low porosities are desired in SLM processes, and a high residual porosity may be regarded as a significant defect of parts in many applications, since pores facilitate crack formation and propagation and are particularly deleterious to the cyclic fatigue life. In this context, the formation of pores typically occurring at high energy densities is explained by the keyhole mechanism illustrated in the figure. It is generally believed that the recoil pressure resulting from excessive heating and the attendant evaporation is able to form a deep depression in the melt pool directly under the laser beam (see also the discussion of evaporation above). With increasing depth of the depression, the incident laser radiation is reflected at the sides of the depression, leading to a concentrated energy input and higher temperatures at the bottom of the depression, to further evaporation at this location, and consequently to the creation of even deeper and narrower corridors. Typically, the keyhole mechanism leads to considerable penetration depths of the beam into the underlying material; due to the more focused energy source, SLM commonly results in deeper keyholes than EBM, whereas EBM yields larger melt pool sizes. As the beam moves on, the recoil pressure as the driving force diminishes to the point of keyhole collapse under the action of surface tension (gravity, however, is typically negligible in the relevant range of length scales). An incomplete collapse of the vapor cavity at the bottom of the keyhole typically leaves behind voids of characteristic spherical shape.

Recent micro computed tomography studies have clearly shown this relationship between the SLM processing energy density and the resulting porosity of the SLM part. Cunningham et al. performed such measurements using the Advanced Photon Source at Argonne National Laboratory to image samples fabricated by SLM, shown in the figure. The samples studied were processed on an EOS machine with varying laser powers, scan speeds, and hatch spacings in order to achieve widely varying penetration and overlap depths, which were calculated from an idealized geometry of two hemispherical melt tracks separated by the hatch spacing. As expected, defects were effectively eliminated at sufficiently small layer thicknesses, and the results correlate well with the density and porosity results of Gong et al. discussed above.

Microstructure evolution in the SLM process: In this section, we summarize representative studies on the titanium alloy Ti-6Al-4V and Inconel as well as stainless steels, which on the one hand are materials commonly studied in the context of SLM and on the other hand exhibit importantly different thermomechanical behavior. (Figure: temperature distribution profile in a single powder layer on top of the substrate, derived by means of a macroscopic SLM simulation model; SEM micrograph of the side view of a reference sample processed with a scan pattern from right to left, indicating grain growth along the direction of the highest temperature gradients.) Thijs et al. studied the microstructural evolution of Ti-6Al-4V during selective laser melting. Due to the high cooling rates and gradients of the SLM process, an acicular martensitic phase of hexagonal structure was revealed in the layers. It was also found that grains grew epitaxially, i.e. the crystalline overlayer of newly solidified material adopts an orientation with respect to the crystal structure of the underlying substrate, and elongated grains were observed in the side and front views of the samples.
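Returning briefly to the idealized melt-track geometry used in the CT study above: for two semicircular track cross sections of radius r whose centers sit one hatch spacing h apart on the surface, the depth down to which the tracks still overlap follows from elementary circle geometry. This is a sketch of that idealization; the exact construction used by Cunningham et al. is not reproduced here.

```python
import math

def overlap_depth(melt_radius, hatch_spacing):
    """Depth to which two idealized semicircular melt-track cross
    sections of radius r, centered one hatch spacing h apart on the
    powder-bed surface, still overlap. The two circles intersect down
    to a depth of sqrt(r^2 - (h/2)^2) below the surface, and not at
    all once h >= 2r (the tracks no longer touch)."""
    half = hatch_spacing / 2.0
    if half >= melt_radius:
        return 0.0
    return math.sqrt(melt_radius ** 2 - half ** 2)
```

For an assumed 60 micrometer melt-pool radius and 100 micrometer hatch spacing, the tracks overlap down to about 33 micrometers, so layer thicknesses below that value keep successive tracks fused, consistent with the qualitative observation above that defects are eliminated at sufficiently small layer thicknesses.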
It is interesting to note that the direction of the elongated grains depends on the local heat transfer conditions: the grains are oriented towards the melt pool, as shown in the figure. Based on a simple thermal model, it was indicated that the grain direction is parallel to the direction of local conductive heat transfer, which is given by the highest thermal gradient. Since the thermal field is controlled by the scan strategy, the resulting grain orientation can be influenced as well. Another interesting phenomenon occurring due to the fast solidification of the SLM process is the segregation of aluminum: distinguishable dark bands are believed to represent an intermetallic phase. At higher energy density, i.e. higher laser power applied per material volume, precipitates of second phases increase as a consequence of the lower cooling rates and the longer diffusion times available for the formation of second phases. Although the as-built microstructure is dominated by martensite due to the rapid cooling rates, a post heat treatment can transform the alpha prime martensite into alpha and beta phases. Owing to the prevalent thermal profiles, in particular the cyclic heating of the SLM process, which may allow the formation of alpha and beta phases, a graded microstructure along the build direction may form: recently, a decomposition of alpha prime martensite into lamellar alpha and beta phases varying from the bottom to the top of the sample was observed, which in turn might result in a material strength decreasing from bottom to top.

(Figure: grain morphology of Inconel samples built by EBM with different beam settings.) Raghavan et al. analyzed the resulting microstructures in Inconel, shown in the figure. The sample processed with the lower beam current exhibits directional, columnar grains along the build direction, while the sample for which the higher beam current was employed shows equiaxed grains. The reason for this observation is that the higher beam current induces a higher sample temperature, acting as a local preheat temperature in the unmolten material regions; the thermal gradient is therefore lower, and consequently equiaxed grains result. Dehoff et al. demonstrate the use of this effect to change the local crystal orientation by design: three scanning strategies relying on different beam powers and velocities were applied in order to vary the thermal gradients, cooling rates, and solidification front velocities, which eventually allows tailored grain orientations to be achieved. The working principle of this strategy was proven by writing prescribed letters into a block of Inconel, the letters being represented by misoriented, equiaxed grains.
surrounding bulk materials exhibits oriented columnar grains methodology provides fundamentally new design tool tailor microstructures also several classes stainless steel alloys practical interest slm processing steel parts manufactured means slm satisfy typical requirements applications especially terms example mould tool applications steel grades available typically austenitic stainless steels commonly exhibit completely austenitic microstructure process characterized elongated textured grains example stainless steel typical microstructure elongated grains build direction observed precipitation chromium carbides grain boundaries would observable lower cooling rates prevalent traditional processes casting typically avoided means slm similar observations also made stainless steel moreover gradient microstructure observed parts accordingly increasing distance build platform resulted coarser microstructures due lower cooling rates future directions practical implementation indisputably modeling simulation approaches play key role enabling slm ebm powder bed processes build highly accurate complex parts achieve stringent quality specifications current implementations macroscopic mesoscopic microscopic models shed light underlying physical mechanisms slm relations process parameters part characteristics terms residual porosity residual stresses metallurgical microstructure resulting different length scales goal must combine three model classes based proper exchange information yield integrated modeling scheme capable predicting final part characteristics three length scales see also figure clearly computational efficiency robustness also coupling schemes play key role regard integrated process model hand prediction process output given input data also inverse problem determining locally optimal process parameters order optimize resulting part characteristics measured properly defined objective function realized via methods numerical optimization highest practical 
An overview of inverse analysis schemes can be found in the literature. In combination with integrated simulation and optimization methodologies, the capability of SLM to locally control temperature gradients and cooling rates opens the door to improved dimensional quality as well as engineered microstructures. This includes, but is not limited to, metallurgical microstructure designs that are optimal with respect to prescribed requirements, such as inhomogeneous and anisotropic distributions of material strength and ductility, part density and porosity, material dissipation and the resulting mechanical and acoustic damping, or thermal and electrical conductivity and internal heat capacity. Of course, the predictive accuracy of the respective modeling approaches is strongly related to their degree of abstraction. In this context, the correlation of in-situ metrology data with the models is essential to build accurate and computationally efficient representations of the process physics at a proper degree of abstraction. Furthermore, by employing in-situ measurements as a means of fault identification, process control can be achieved: eventually, the comparison of in-situ data of relevant physical fields, such as temperature field evolutions, with the fields predicted by numerical simulation opens the door to a proper manipulation of process input parameters according to the principles of control theory.

Simulations and experiments must be pursued to understand the fundamental limits of SLM processes. For instance, the process and machine parameters, as well as the heating and cooling rates resulting from their interaction with the boundary conditions of the melt pool, impose a range of accessible heating and cooling rates, which in turn limits the potential for microstructure and stress control. Also, the ability to perform SLM with powders from a broader candidate set of shape and size distributions is of practical interest, both to increase the range of processible materials and geometrical resolutions and to access powder production techniques of lower intrinsic cost. Currently, two main approaches to powder manufacturing are typically applied. The first is favorable in terms of lower production costs; however, the resulting powder has an irregular particle shape and therefore typically leads to poor flow and packing characteristics. The second is more costly, yet advantageous in producing spherical particles of higher purity. Consideration of powder purity and transient composition changes is critical.
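The inverse problem mentioned above, determining process parameters that meet a quality target, can be illustrated with a deliberately simple grid search. The volumetric energy density E = P / (v * h * t) is a common scalar proxy for process windows; the window bounds, hatch spacing, and layer thickness below are assumed values, not ones from the text.

```python
import numpy as np

# Pick laser power P [W] and scan speed v [m/s] so that the volumetric energy
# density E = P / (v * h * t) stays inside an assumed "stable" window, while
# maximizing throughput (scan speed). All numbers below are assumed.
h, t = 100e-6, 30e-6          # hatch spacing and layer thickness [m]
E_lo, E_hi = 50e9, 120e9      # assumed process window [J/m^3]

def energy_density(P, v):
    return P / (v * h * t)

best = None
for P in np.linspace(100.0, 400.0, 31):      # candidate powers [W]
    for v in np.linspace(0.2, 3.0, 57):      # candidate speeds [m/s]
        if E_lo <= energy_density(P, v) <= E_hi:
            if best is None or v > best[1]:
                best = (P, v)

P_opt, v_opt = best
print(P_opt, v_opt, energy_density(P_opt, v_opt))
```

In practice the objective and constraints would come from calibrated process models rather than a single scalar proxy, and gradient-based or surrogate optimizers would replace the grid.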
Oxidation of the powder and substrate surfaces might considerably decrease the wetting behavior of the liquid melt, possibly leading to melt pool balling, reduced surface quality, defects such as pores and inclusions, and reduced adhesion, which might induce delamination. Furthermore, in practice SLM powder must be kept dry, since the evaporation of water content in the melt process induces an undesirable recoil pressure and fosters surface oxidation; the water content is commonly reduced as far as possible by combined approaches such as preheating and a controlled inert gas flow flushing possible sources of contamination out of the build chamber.

There is a paramount need to assist the quality control and certification of the resulting parts and materials. First, the complexity of the SLM process and of the underlying physical phenomena leads to a higher sensitivity of part quality with respect to process parameter variations than is the case for rather traditional manufacturing technologies such as casting or forging. Second, owing to the combination of limited practical experience and a comparatively new technology, the frequency of defective parts is considerably higher. Third, the types of defects, as well as the resulting mechanisms of failure, are specific to the physical phenomena governing the SLM process and consequently require sophisticated surface and bulk metrology such as tomography. In this context, the measurement and recording of certain physical fields, such as the temperature field, might be useful for locating different types of defects within SLM parts.

The success of SLM as a manufacturing technology is of course also strongly determined by economic aspects. Currently, the limited understanding of the underlying physical phenomena leads to strong restrictions on the SLM process in the form of tight process windows. For example, a maximum scan velocity may not be exceeded in order to avoid melt pool instabilities resulting in high surface roughness and possible porosity in the final part, while an excessive energy input as a consequence of high power densities can lead to overheating, decreased surface quality, high residual stresses, and crack propagation. However, a high process throughput via increased scan velocities, beam spot sizes, and power densities would be desirable. In this regard, the study of novel scanning strategies, laser configurations, and beam shaping approaches seems promising in order to find regimes of stable processing at yet higher production throughput.
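The per-layer energy balance behind such lower-bound estimates is simply the sensible heat plus the latent heat of the powder in one layer. The sketch below uses assumed, roughly Ti-6Al-4V-like property values and an assumed part cross-section; it ignores remelting and all thermal losses, which make the real energy demand considerably higher.

```python
# Lower-bound energy to heat and melt one powder layer: sensible + latent heat.
# Property values are assumed (loosely Ti-6Al-4V-like), as are the dimensions.
rho = 4430.0    # solid density [kg/m^3]
phi = 0.6       # powder bed packing fraction (assumed)
c_p = 560.0     # mean specific heat [J/(kg K)] (assumed constant)
L_m = 2.86e5    # latent heat of melting [J/kg]
T_m = 1923.0    # melting temperature [K]
T_0 = 293.0     # initial powder temperature [K]

A = 0.1 * 0.1   # melted cross-section per layer [m^2] (10 cm x 10 cm)
t = 30e-6       # layer thickness [m]

m = rho * phi * A * t                      # powder mass in one layer [kg]
E_layer = m * (c_p * (T_m - T_0) + L_m)    # lower bound per layer [J]
print(E_layer)  # on the order of 1 kJ for this layer
```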
A rough estimate of the lower energy bound of SLM, per layer, is given by the amount of energy required to heat and melt the powder material of one layer, due to its heat capacity and latent heat. The amount of energy required by the actual manufacturing process is considerably higher: substantial parts of the neighboring tracks and underlying layers are remelted in order to provide good adhesion, and there are various thermal losses due to emission, convection, and evaporation at the powder bed surface as well as heat conduction across the build platform. Energy consumption is an insignificant part of the overall cost of SLM at present, yet it may become more important as the currently critical cost drivers, namely material and equipment cost, improve. Besides the pure energy balance, the avoidance of evaporation is also essential in terms of material savings, melt track quality, and the avoidance of machine and instrument contamination due to the condensation of metallic vapor, which is especially undesirable if condensate is remelted when processing different alloys. Machine-side conductive and emissive losses can be minimized via proper thermal insulation and shielding of the build platform and machine casing. Research in the field of powder material production may yield improved grain surface properties in terms of heat conduction and radiation absorption. In summary, SLM and related technologies present a rich set of multidisciplinary scientific and practical challenges, along with the opportunity to transform the design of advanced products and manufacturing systems.

Acknowledgements

Financial support for the preparation of this article was provided by the German Academic Exchange Service (DAAD) to Meier, by Honeywell Federal Manufacturing & Technologies to Penny and Hart, by a Swiss National Science Foundation Early Postdoc.Mobility fellowship to Zou, and by a grant of the United States Navy Engineering Duty Officer program to Gibbs. This work was partially funded by Honeywell Federal Manufacturing & Technologies, LLC, which manages and operates the Department of Energy's Kansas City National Security Campus under contract with the United States Government. The publisher, by accepting the article for publication, acknowledges that the United States Government retains a nonexclusive, paid-up, irrevocable license to publish or reproduce the published form of this manuscript, or to allow others to do so, for United States Government purposes.

Nomenclature

Abbreviations: AM, additive manufacturing; CA, cellular automaton; CAD, computer aided design; DEM, discrete element method; DIC, digital image correlation; EBM, electron beam melting; EBW, electron beam welding; FDM, finite difference method; FEM, finite element method; FSI, fluid structure interaction; FVM, finite volume method; HAZ, heat affected zone; LBW, laser beam welding; LWIR, long wave infrared; NIR, near infrared; PFM, phase field method; RTE, radiation transport equation; SLM, selective laser melting; SLS, selective laser sintering; VHN, Vickers hardness number; VOF, volume of fluid method; XFEM, extended finite element method.

SLM process: powder bed thickness; process material; radius of a spherical particle; laser power; laser velocity; hatch spacing; laser beam radius and associated dimensionless quantity; powder bed dimension in scan direction; powder bed dimension in hatch direction; factor describing the fraction of effectively absorbed laser energy.

Mechanical problem: Cauchy stress tensor; vector of volume forces; displacement vector; solid strain tensor; surface tension; wetting angle; interface curvature; pressure in the melt pool; pressure of the ambient gas; evaporation energy per particle; sticking coefficient; gas constant; molar mass.

Thermal problem: temperature; temperature of the ambient atmosphere; initial temperature; melting temperature; boiling temperature; liquidus temperature; solidus temperature; unit vector describing the direction of radiation transfer; spatial position vector; radiation intensity; scattering coefficient; absorption coefficient; extinction coefficient; albedo; scattering phase function; Stefan-Boltzmann constant; general heat flux density; radiation heat flux density; emissive heat flux density; evaporation heat flux density; conductive heat flux density; convective heat flux density; incident energy density; perpendicular component of the polarized absorptivity; parallel component of the polarized absorptivity; complex refraction index; angle of radiation incidence; direction of wave propagation; wavelength; wave direction; electric field; surface normal vector; general unit vector; thermal conductivity; effective thermal conductivity; thermal diffusivity; general absorptivity; flat surface absorptivity; latent heat of melting; powder density; coordination number; radius of the spherical contact area; dimensionless contact size; density; specific heat at constant pressure; time; velocity vector; thermal conductivity tensor; problem domain; problem boundary; convection coefficient; radiation emissivity.

Phase field method: phase field variable; concentration; rotation vector describing the grain orientation; free energy; free energy density; boundary layer thickness parameter of the phase field method.
Degenerations of graded Cohen-Macaulay modules

Naoya Hiramatsu

Abstract. We introduce a notion of degenerations of graded modules and the relation it induces. We also introduce several partial orders on graded modules, graded analogies of the hom order, the degeneration order, and the extension order, and prove that these orders are identical for graded Cohen-Macaulay modules over a graded ring of graded finite Cohen-Macaulay representation type which is representation directed.

Introduction

The notion of degenerations of modules appears in the geometric methods of the representation theory of finite dimensional algebras. Yoshino gave a scheme-theoretical definition of degenerations, under which degenerations can be considered for modules over a noetherian algebra that is not necessarily finite dimensional. The theory of degenerations has also been considered in derived module categories and stable module categories, and the degeneration problem for modules has been studied by many authors, who study several order relations on modules: the hom order, the degeneration order, and the extension order have been introduced, and the connections among them have been studied. In a previous paper, the author gave a complete description of the degenerations of Cohen-Macaulay modules over the ring of a simple hypersurface singularity of type A.

In the present paper we consider degenerations of graded modules over a graded Gorenstein ring with a graded isolated singularity. We first consider an order relation on the category of graded Cohen-Macaulay modules, called the hom order, and we shall show that it is actually a partial order when the graded ring is Gorenstein with a graded isolated singularity. In Section 3 we propose degenerations of graded modules and state several of their properties; in particular, we show that if the graded ring is of graded finite Cohen-Macaulay representation type and representation directed, then the hom order, the degeneration order, and the extension order are identical for graded Cohen-Macaulay modules. We also consider a stable analogue of degenerations of graded modules in Section 4.

Date: April. Mathematics Subject Classification: Primary; Secondary. Key words and phrases: degeneration, graded module, finite representation type.

Hom order for graded modules

Throughout the paper, R is a commutative noetherian graded ring over a field k of characteristic zero. A graded ring is said to be *local if the set of graded proper ideals has a unique maximal element; thus R is *local, since it has a unique maximal graded ideal. We denote by mod^Z(R) the category of finitely generated graded R-modules whose morphisms are the homogeneous morphisms, i.e. those that preserve degrees; for M and N in mod^Z(R), the morphism set of mod^Z(R) consists of the homogeneous morphisms of degree 0, and we write Hom_R(M, N)_n for the set of homogeneous morphisms of degree n. For a graded prime ideal p, we denote by M_(p) the homogeneous localization of M at p.
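The hom order studied in this section can be displayed compactly. The rendering below reconstructs the inequality from the standard ungraded definition of the hom order (with the subscript 0 denoting degree-preserving homomorphisms), so the precise formula should be read as an assumption rather than a quotation:

```latex
% Hom order on graded maximal Cohen-Macaulay modules: N dominates M
% if every test module X receives at least as many homomorphisms from N.
\[
  M \le_{\hom} N
  \quad :\Longleftrightarrow \quad
  \dim_k \operatorname{Hom}_R(M, X)_0 \;\le\; \dim_k \operatorname{Hom}_R(N, X)_0
  \quad \text{for all } X \in \operatorname{CM}^{\mathbb{Z}}(R).
\]
```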
modz take graded free resolution lth syzygy module imdl say graded said graded extir dim particular condition equivalent extir canonical module denote cmz full subcategory modz consisting graded assumption modz cmz namely object decomposed indecomposable objects isomorphism uniquely modz denote sequence dimk integers easy see hilbert series moreover also homr see theorem remark give class grothendieck group element modz however converse hold general let deg deg set modz fact let ideal rankr rankr note modz yields modz thus never happen motivation paper investigate graded degenerations graded modules terms order relations reason consider following relation cmz known hom order degenerations graded modules definition cmz cmz abbreviation dimk homr remark modz thus consider relation consequence dimension partial order cmz since cmz closed kernels cases lemma let graded gorenstein ring let indecomposable graded suppose modz finite projective dimension proof graded projective dimension also projective dimension since gorenstein graded thus take graded free resolution apply homr resolution get exact sequence homr homr homr homr hence also since gorenstein equality free module therefore paper use theory abbr sequences graded modules detail recommend reader look chapter denote full subcategory cmz consisting cmz graded prime ideal theorem let noetherian gorenstein local ring admits sequences definition say graded isolated singularity graded localization regular graded prime ideal easy see cmz graded isolated singularity cmz admits sequences denote multiplicity direct summand naoya hiramatsu theorem let graded gorenstein ring algebraically closed field let graded assume graded isolated singularity cmz particularly partial order cmz proof decompose indecomposable graded free take sequence ending translation apply homr homr sequence since algebraically closed homr homr homr homr homr homr counting dimensions terms conclude free may assume let maximal ideal consider 
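The hom order just introduced can be written out explicitly; the following is a sketch in standard notation (the precise grading and category conventions are those of the paper):

```latex
M \leq_{\mathrm{hom}} N
\quad\Longleftrightarrow\quad
\dim_k \operatorname{Hom}_R(X, M) \;\leq\; \dim_k \operatorname{Hom}_R(X, N)
\quad \text{for all } X \in \mathrm{CM}^{\mathbb{Z}}(R).
```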
approximation (see Auslander-Buchweitz). Note that the approximation module has finite graded injective dimension, and also finite projective dimension since R is Gorenstein. Let f be the composition of the map coming from the approximation with the natural inclusion; we get the following commutative diagram. By a diagram chase we see that the induced map is surjective. Since the term involved has finite projective dimension, we obtain an exact sequence of Hom modules, and according to the Lemma the relevant dimensions for M and N agree on it; thus the free parts coincide. Consequently M ≅ N.

Degenerations of graded modules

We now give the notion of degenerations of graded modules.

Definition. Let R be a noetherian ring and let V be a discrete valuation ring with the trivial gradation, with t a generator of its maximal ideal and V_t the localization. For finitely generated graded R-modules M and N, we say that M gradually degenerates to N (N is a graded degeneration of M) if there is a finitely generated graded R ⊗ V-module Q satisfying the following conditions: Q is t-torsion free; Q/tQ ≅ N as graded R-modules; and Q_t ≅ M ⊗ V_t as graded modules.

Yoshino gives a necessary and sufficient condition for degenerations of modules; one can also show a graded version in a similar way (see also the corresponding theorem of Yoshino).

Theorem. The following conditions are equivalent for finitely generated graded R-modules M and N: (1) M gradually degenerates to N; (2) there is a short exact sequence of finitely generated graded R-modules 0 → Z → Z ⊕ M → N → 0.

Remark. (1) Yoshino has shown that the endomorphism of Z appearing in the sequence of the theorem can be taken to be nilpotent. Note that we do not need the nilpotency assumption here: since the endomorphism ring End_R(Z) is artinian, using Fitting's theorem we can describe the endomorphism as a direct sum of an isomorphism and a nilpotent morphism (see also Yoshino's remark). (2) If M and N are graded Cohen-Macaulay modules, one shows that Z is also graded Cohen-Macaulay (see the remark above). (3) If there is an exact sequence of finitely generated graded R-modules, then the middle term gradually degenerates to the direct sum of the end terms (see the remark); for instance, this applies to any finitely generated graded module and a short exact sequence containing it. (4) Suppose M gradually degenerates to N; then, as for modules, M and N give the same class in the Grothendieck group, which is also proved in a way similar to the proof of the theorem. (5) If M gradually degenerates to N in mod^Z(R) and N gradually degenerates to L, then one can show that M gradually degenerates to L by the same consideration. The fact that gradual degeneration induces a partial order follows.

Definition. For finitely generated graded modules M, N, the relation M ≤_deg N, meaning that M gradually degenerates to N, is called the degeneration order. We also consider the relation ≤_ext on modules generated by short exact sequences (the extension order).

For M, N in CM^Z(R), take a syzygy module and apply Hom_R(−, −) to the resulting sequence; denote the dimensions involved as before.

Lemma. Let R be a graded isolated singularity and let M, N be graded Cohen-Macaulay modules. Assume the hom inequalities hold against the relevant graded Cohen-Macaulay modules. Then there exists an integer n_0 beyond which the Hom dimensions under shift agree as claimed.

Proof. For X in CM^Z(R) take an exact sequence; note that its first term is a direct sum of shifted copies of R over various integers. Applying Hom_R(−, M) to the sequence we obtain a long exact sequence involving Hom and Ext^1 modules. Since R is a graded isolated singularity, Ext^1_R(X, M) has finite length and therefore vanishes in all sufficiently large degrees; similarly there exists an integer with the analogous property for Ext^1_R(X, N). Set n_0 to be the maximum of these bounds; comparing the dimensions of the Hom modules in the resulting sequences, we therefore obtain the desired equation. Suppose now that the claimed inequality fails beyond n_0; since the equation holds, we see a contradiction with the assumption. Therefore the integer n_0 has the stated property.

We say that the category CM^Z(R) is of graded finite representation type if the number of isomorphism classes of indecomposable graded Cohen-Macaulay modules up to shift is finite. Note that if CM^Z(R) is of finite representation type then R is a graded isolated singularity (see Yoshino's book for the details). An immediate consequence of the lemma is the following.

Corollary. Let R be of finite representation type and let M, N be graded Cohen-Macaulay modules. Then the hom inequalities need only be checked against finitely many indecomposable graded Cohen-Macaulay modules.

For graded modules M and N, consider the following set of isomorphism classes of indecomposable graded modules occurring in the comparison; note that by the corollary this set is finite, and note that in this case the hom order is determined on it.

Proposition. Let R be a graded Gorenstein ring of graded finite representation type and let M, N be graded Cohen-Macaulay modules with M ≤_hom N and M not isomorphic to N. Then there exists a graded Cohen-Macaulay module to which the degeneration passes.

Proof. Although a proof is essentially given in the author's previous paper, we recall the argument here to keep the present paper self-contained. Since R is a graded isolated singularity, CM^Z(R) admits AR sequences. Take the AR sequence starting from the relevant indecomposable module and consider the sequence obtained as a direct sum of copies of AR sequences, where the sum runs over the indecomposable modules of CM^Z(R). We obtain equalities of the dimensions of the Hom modules, which implies the required equality; thus the hom inequality holds in CM^Z(R), and hence the sequence yields a degeneration. Since the middle term degenerates, the claimed degeneration therefore follows.

We now focus on the case where R is a graded Gorenstein ring of graded finite representation type that is representation directed. We say that a graded Cohen-Macaulay ring is representation directed if its AR quiver has no oriented cyclic paths. Bongartz studied this situation in the case of finite-dimensional algebras; in the graded setting similar results hold. Actually, we shall prove the following.

Theorem. Let R be a graded Gorenstein ring of graded finite representation type which is representation directed. Then for M, N in CM^Z(R) the following conditions are equivalent: M ≤_deg N; M ≤_ext N; M ≤_hom N.

To prove the theorem we modify the arguments of Bongartz and Zwara.

Lemma (cancellation property). Let M, N and X be finitely generated graded modules. (1) If M ⊕ X gradually degenerates to N ⊕ X, then M gradually degenerates to N. (2) Assume R is Gorenstein and M, N are graded Cohen-Macaulay R-modules. If M ⊕ F gradually degenerates to N ⊕ F for a graded free R-module F, then M gradually degenerates to N.

Proof. (1) Since M ⊕ X gradually degenerates to N ⊕ X, there exist a module Z and an exact sequence as in the theorem above. Construct the pushout diagram whose middle column is that sequence; on the other hand, the bottom row is a split sequence. Thus the middle column sequence splits, and therefore we get the required sequence; namely, M gradually degenerates to N. (2) Since M ⊕ F gradually degenerates to N ⊕ F, there is an exact
sequence. Suppose Z contains a graded free module as a direct summand. Construct the pushout diagram whose left column is the split sequence induced by the decomposition of Z; note that the middle term is also a graded Cohen-Macaulay module. Since R is Gorenstein, the middle column sequence is also split, hence the diagram yields a smaller sequence of the same shape. Hence we may assume that Z has no graded free modules as direct summands. Consider the composition of the surjection with the projection onto F. If this composition did not split, F would contain a direct summand of Z, a contradiction. Hence M gradually degenerates to N ⊕ F with the free summand split off, and by (1) we conclude that M gradually degenerates to N.

For indecomposable graded modules X and Y, we write X ⇝ Y if there exists a path from X to Y in the AR quiver.

Lemma. Let R be a graded Gorenstein ring of graded finite representation type which is representation directed, and let M, N in CM^Z(R) with M ≤_hom N. Assume that M and N have no common direct summands. Then there is an indecomposable module X in CM^Z(R) with the property that the middle term of the AR sequence starting from X relates M and N as required.

Proof. As in the proof of the proposition above, construct a sequence in CM^Z(R) by taking the direct sum of AR sequences starting from the relevant modules. An isomorphism would imply an identity in CM^Z(R); thus it is enough to show that the required equality holds for this sequence. One shows that if X were a direct summand of the wrong term, there would be an oriented cyclic path, a contradiction since R is representation directed; thus the terms in question are not isomorphic. By the inequality one also shows that X is not a direct summand of the middle term: otherwise there would be an irreducible map, hence a direct summand, again a contradiction. Consequently the equality holds, and therefore the lemma follows.

Proof of the Theorem. The implications from the extension order to the degeneration order and from the degeneration order to the hom order are the standard ones; note that Ext^1_R(X, X) = 0 for each indecomposable X in CM^Z(R), since R is representation directed, so the first implication is shown by the usual argument. We shall show that the hom order implies the extension order by induction on the dimension of the relevant Hom modules. If M and N share a common summand, cancel it and apply the induction hypothesis. For the inductive step, take X as in the lemma above and let E be the middle term of the AR sequence starting from X. By virtue of the two lemmas, it is enough to verify the property for the modified modules obtained from the sequence: if the middle term is indecomposable the claim is immediate; otherwise, by the remark, the modified module is contained in the relation, and since its Hom dimension is smaller, the induction hypothesis applies.

Remark. The implication does not hold in general without the assumptions. Let R = k[x] with the indicated degrees of the generators; then M gradually degenerates to N — in fact, there is an exact sequence exhibiting the degeneration — while the converse fails, since an indecomposable graded module being isomorphic to a proper shift of itself can never happen (see also the earlier remark).

Proposition. Let R be a graded Gorenstein ring of graded finite representation type and let M, N be graded Cohen-Macaulay modules with M ≤_deg N. Assume the test module is an indecomposable graded module; then the following equality of Hom dimensions holds.

Proof. By the assumption, as in the proof of the earlier proposition, we construct an exact sequence; it is enough to show that the equality holds when tested against each indecomposable module. Moreover, since the sequence is a direct sum of AR sequences, we may assume it is the AR sequence starting from a single indecomposable module. Let X in CM^Z(R) be indecomposable and non-projective; by the property of the AR sequence, since the resulting Hom sequences are exact, we get the equality.

Remarks on stable degenerations of graded modules

In the rest of the paper we consider a stable analogue of degenerations of graded modules. Let R be a graded Gorenstein ring and let V be as before, with the trivial gradation. Note that the base-changed rings R ⊗ V and R ⊗ V_t are graded Gorenstein rings as well, so the stable categories of graded Cohen-Macaulay modules over them are triangulated categories. We denote by the corresponding functors the triangle functors given by localization (resp. by taking the quotient by t); see also Yoshino.

Definition. Let M, N be in CM^Z(R). We say that M stably degenerates to N if there exists a graded module Q in the stable category of CM^Z(R ⊗ V) such that Q_t ≅ M in the stable category over R ⊗ V_t and Q/tQ ≅ N in the stable category over R.

One can show the following characterization of stable degenerations, similarly to the proof of Yoshino's theorem.

Theorem. Let R be a graded Gorenstein *local ring over a field k. The following conditions are equivalent: (1) M, with graded free modules added, gradually degenerates to N with graded free modules added; (2) there is a triangle of the required form in the stable category of CM^Z(R); (3) M stably degenerates to N.

Proof. Note the implications follow since, in the *local graded setting, graded projective modules (resp. their stable analogues) are graded free (resp. trivial); hence it remains to show the last implication, which follows as in the artinian case in the proof of Yoshino's theorem.

Remark. A theory of degenerations in derived categories has been studied by Jensen and Zimmermann, who have shown that a complex in the bounded derived category of a finite-dimensional algebra degenerates to another if and only if there exists a triangle of the form appearing in the theorem above.

Theorem. Let R be a graded Gorenstein ring over an algebraically closed field k. It has been shown that if R is a simple singularity then there exist a Dynkin quiver and a triangle equivalence between the stable category of CM^Z(R) and the bounded derived category of the category of finitely generated left modules over the path algebra of the quiver. By virtue of the theorem above we can describe the stable degenerations in terms of graded degenerations in CM^Z(R); since such a graded ring is of graded finite representation type and representation directed, these have already been described by the main theorem.

Acknowledgments. The author expresses his deepest gratitude to Tokuji Araya and Yuji Yoshino for valuable discussions and helpful comments.

References

T. Araya, Exceptional sequences over graded Cohen-Macaulay rings, Math. J. Okayama Univ.
M. Auslander and R.-O. Buchweitz, The homological theory of maximal Cohen-Macaulay approximations, Mém. Soc. Math. France.
M. Auslander and I. Reiten, Almost split sequences for Cohen-Macaulay modules, in: Singularities, Representation of Algebras, and Vector Bundles (Lambrecht), Lecture Notes in Math., Springer, Berlin.
M. Auslander and I. Reiten, Almost split sequences for abelian group graded rings, J. Algebra.
K. Bongartz, A generalization of a theorem of M. Auslander, Bull. London Math. Soc.
K. Bongartz, On degenerations and extensions of finite-dimensional modules, Adv. Math.
W. Bruns and J. Herzog, Cohen-Macaulay Rings, Cambridge Studies in Advanced Mathematics, Cambridge University Press, Cambridge, revised edition.
N. Hiramatsu and Y. Yoshino, Examples of degenerations of Cohen-Macaulay modules, to appear in Proc. Amer. Math. Soc.
O. Iyama and R. Takahashi, Tilting and cluster tilting for quotient singularities, to appear in Math. Ann.
B. T. Jensen and A. Zimmermann, Degenerations in derived categories, J. Pure Appl. Algebra.
H. Kajiura, K. Saito and A. Takahashi, Matrix factorization and representations of quivers: the type ADE case, Adv. Math.
C. Riedtmann, Degenerations for representations of quivers with relations, Ann. Scient. École Norm. Sup.
Y. Yoshino, Cohen-Macaulay Modules over Cohen-Macaulay Rings, London Mathematical Society Lecture Note Series, Cambridge University Press, Cambridge.
Y. Yoshino, On degenerations of modules, J. Algebra.
Y. Yoshino, On degenerations of Cohen-Macaulay modules, J. Algebra.
Y. Yoshino, Stable degenerations of Cohen-Macaulay modules, J. Algebra.
G. Zwara, A degeneration-like order for modules, Arch. Math.
G. Zwara, Degenerations for modules over representation-finite algebras, Proc. Amer. Math. Soc.
G. Zwara, Degenerations of finite-dimensional modules are given by extensions, Compositio Math.

Department of General Education, Kure National College of Technology, Agaminami, Kure, Hiroshima, Japan
E-mail address: hiramatsu
BUILD A COMPACT BINARY NEURAL NETWORK THROUGH BIT-LEVEL SENSITIVITY AND DATA PRUNING

Yixing Li, Fengbo Ren
School of Computing, Informatics, and Decision Systems Engineering, Arizona State University
{yixingli, renfengbo}

Abstract. Convolutional neural networks (CNNs) have been widely used in computer-vision tasks. Due to their high computational complexity and memory storage requirements, it is hard to deploy a CNN directly on embedded devices, so hardware-friendly designs are needed. Among the emerging solutions adopted for neural network compression — weight and network pruning, and network quantization — the binarized neural network (BNN) is believed to be a hardware-friendly framework due to its small network size and low computational complexity, but no existing work has shrunk the size of a BNN further. In this work, we explore the redundancy in a BNN and build a compact BNN (CBNN) based on bit-level sensitivity analysis and data pruning. The input data is first converted to a high-dimensional bit-sliced format. In the preliminary stage, we analyze the impact of different bit slices on accuracy; by pruning the redundant input bit slices and shrinking the network size accordingly, we are able to build a compact BNN. Our results show that we can scale down the network size of a BNN with only a small accuracy drop, and that the actual runtime is reduced compared with the baseline BNN and its full-precision counterpart, respectively.

Introduction

Computer-vision applications are found on many embedded devices for classification, recognition, detection and tracking tasks (Tian et al.). Specifically, the convolutional neural network (CNN) has become the core architecture for such tasks (LeCun et al.), since it outperforms conventional feature-based algorithms in terms of accuracy. It has become popular in advanced driver-assistance systems (ADAS), where CNNs are used either for guiding autonomous driving or for alerting the driver to predicted risk (Tian et al.); it is obvious that an ADAS depends on the vision system to get a timely reaction. Artificial-intelligence applications have also exploded on smartphones, such as automatically tagging photos and face detection (Schroff et al.), and Apple has reported working on an "Apple Neural Engine" to partially move processing modules onto the device (Lumb). Processing users' requests by sending data to a data center incurs much overhead in latency and power consumption caused by communication and processing, so the future trend is to balance power efficiency and latency with on-device processing. However, CNNs are known for their high computational complexity, which makes them hard to deploy directly on embedded devices. Therefore, compressed CNNs are in demand. Early research on compressing CNNs focused on reducing the network precision to a small number of bits at an early
stage (Suda et al.), which either gives only a limited reduction or suffers a severe accuracy drop. Lately, more aggressive techniques have been brought in, achieving much higher compression ratios. BinaryConnect, BNN, XNOR-Net and TernaryNet (Courbariaux et al.; Hubara et al.; Rastegari et al.; Zhu et al.) push to reduce the weights to binary or ternary values. Network pruning (Han et al.) reduces the network size — the memory size of the parameters — by reducing the number of connections. Regarding network size, the pruned network and TernaryNet achieve considerable reductions (Zhu et al.; Han et al., respectively), while BinaryConnect and BNN achieve larger ones. In terms of computational complexity, a BNN with binarized weights and activations simply replaces the convolution operation with bitwise XNOR and bit-count operations. XNOR-Net, however, requires an additional scaling factor for the filters in each layer, which brings overhead in memory and computation cost, so it is not the optimal solution for hardware deployment when considering both network size and computational complexity. In addition, the BNN has drawn a lot of attention in the hardware community. However, no existing study has explored scaling down a BNN for more efficient inference.

This work is, to our knowledge, the first that explores and proves that there is still redundancy in a BNN. We propose a flow to reduce the network size that is triggered by conversion and analysis of the input data rather than of the network body, which is rarely seen in previous work. In the proposed novel flow, we prune the redundant input bit slices and rebuild a compact BNN guided by sensitivity analysis. Experiment results show a substantial compression ratio in network size while achieving only a small accuracy drop.

The rest of the paper is organized as follows. We first discuss related work on network compression and explain why the BNN is a superior solution to deploy on hardware. We then validate the hypothesis of BNN redundancy and propose a novel flow to build a compact BNN. Experiment results and discussion follow, and the final section concludes the paper.

Related Work

Since we are targeting hardware-oriented designs, it is fair to emphasize that compressing the network size and the computational complexity are both essential. This section first discusses and evaluates related work on network compression, emphasizing both factors, and also presents a simple benchmark study to help the reader better understand computational complexity in terms of hardware resources. As the existing work will reveal, the BNN is the superior solution to be deployed on hardware.

BinaryConnect (Courbariaux et al.) is an early-stage study of binarized-weight neural networks. In a BinaryConnect
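As a concrete illustration of the XNOR/bit-count identity behind binarized convolution (a minimal sketch, not any paper's implementation): for vectors with entries in {-1, +1}, encoding -1 as bit 0 and +1 as bit 1 gives a·w = 2·popcount(XNOR(a, w)) − n, so the multiply-accumulate collapses into bit operations and one count.

```python
import numpy as np

def binary_dot(a, w):
    """Dot product of two {-1,+1} vectors via XNOR + popcount.

    Encode -1 as bit 0 and +1 as bit 1; then a_i * w_i == +1 exactly
    when the bits agree, i.e. when XNOR == 1.  With m agreements out
    of n entries, a . w = m - (n - m) = 2m - n.
    """
    a_bits = (a > 0).astype(np.uint8)
    w_bits = (w > 0).astype(np.uint8)
    xnor = 1 - (a_bits ^ w_bits)   # 1 where the signs agree
    return 2 * int(xnor.sum()) - len(a)

rng = np.random.default_rng(0)
a = rng.choice([-1, 1], size=64)
w = rng.choice([-1, 1], size=64)
assert binary_dot(a, w) == int(a @ w)
```

On real hardware the XNOR is applied to packed machine words and the sum becomes a single popcount instruction, which is what makes the BNN cheap on GPUs, FPGAs and ASICs.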
network, the weights are binary values while the activations can still take arbitrary values, so a multiplication is equivalent to a conditional bitwise operation. Hence, the convolution operation can be decomposed into conditional bitwise operations plus accumulation — a big step in moving from multiplications to much simpler bitwise operations. The BNN (Hubara et al.) is the first network built with both binary weights and binary activations, in which the convolution operation is simplified into bitwise XNOR and bit-count operations, so the hardware resource cost is minimized for GPU, FPGA and ASIC implementations. In a GPU implementation, a bitwise XNOR can be executed in a single clock cycle on one CUDA core. FPGA and ASIC implementations no longer need to use DSP (digital signal processor) resources; relatively cheap simple logic elements — LUTs (look-up tables) — can be used to map the bitwise XNOR and bit-count operations, which makes it easy to map onto highly parallel computing engines and achieve high throughput and low latency. XNOR-Net (Rastegari et al.) also builds a network based on binary weights and activations; however, it introduces filter scaling factors in each convolutional layer to ensure a better accuracy rate. Since additional non-binary convolution operations are then needed in each convolutional layer, it costs additional time and computing resources and thus is not a strictly fully binarized network. TernaryNet (Zhu et al.) holds ternary weights in the network; increasing the precision level of the weights enhances the accuracy rate, but since ternary weights are encoded in two bits, the computational complexity is at least double that of BinaryConnect. Network pruning (Han et al.) has been revealed as a popular technique for compressing CNNs whose weights usually range from 8 to 16 bits (Suda et al.); it compresses the network by pruning away useless weights and gains speedup mainly from the reduced network size. Unlike the techniques mentioned above, neither the weights nor the activations of a pruned network are binary or ternary, so the computational complexity of each operation is still much higher than in a BNN.

We implement matrix multiplication on a Xilinx FPGA board to analyze the computational complexity of the different architectures mentioned above, with the precision of the matrix elements matching the precision of the weights and activations in each architecture. The matrix multiplication is fully mapped onto the FPGA — in other words, there is no reuse of hardware resources — so the resource utilization is a good reflection of computational complexity. Since 8 bits are enough to maintain an accuracy rate close to full precision
for the network (Suda et al.), we set the precision to 8 bits for the full-precision weights and activations and for the pruned network, and we set the pruned weights of the pruned network to zero for a fair comparison. Since the pruned network gets its reduction from removed connections (Han et al.) while the BNN stores every weight in a single bit, the pruned network's total weight storage can still be larger than in the binarized-weight cases. As shown in Fig. 1, the BNN apparently consumes the least amount of hardware resources among all the architectures. In summary, of the methods mentioned above, pruning can be categorized as connection reduction, while the rest can be categorized as precision reduction. However, neither kind of method can be applied to a BNN: pruning removes the weights close to zero in value, but the weights of a BNN are already constrained to binary values; and as for precision reduction, a BNN has already reached the lower bound. Since CNNs are believed to have huge redundancy, we hypothesize that a BNN also has redundancy, so that we are able to get a more compact BNN. To the best of our knowledge, no existing work has been inspired to analyze and reduce the input precision to explore BNN redundancy via analysis of the input data. We validate this hypothesis step by step in the next section.

Figure 1: resource consumption of matrix multiplication on a Xilinx FPGA for the different architectures.

In the following paragraphs, "BNN" refers to Hubara et al.'s binarized CNN, which is our baseline model, while "reconstructed BNN" and "CBNN" refer to the reconstructed model used for sensitivity analysis and to the final compact BNN with shrunk network size, respectively.

Build a Compact BNN

BNN reconstruction. This subsection first illustrates reformatting the input and modifying the first layer for BNN reconstruction; the training method is presented afterwards.

Binarized input. Existing binarized or ternarized neural networks take fixed-point data as the network input, which means the computation of the first layer is an inner product between a non-binary matrix and a binary matrix, whereas in all other layers the convolution operation can be implemented as an XNOR dot-product operation whose accumulation is simplified into a bit count (on GPU, with XNOR logic). By contrast, the computation of the first layer is much more complicated. Intuitively, a bit-sliced input would enable the XNOR operation in the first layer as well, which inspires us to explore the feasibility of converting the dataset into a bit-sliced binary format. A single image in the dataset is represented by its width, height and number of channels, as shown in Fig. 2. The raw data is usually stored in 8-bit integer format with a maximum value of 255, so a lossless conversion from integer to binary format can be defined as a bit-slicing function. We first need to reconstruct and train a new model for the sensitivity analysis.
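The lossless integer-to-binary conversion described above can be sketched as follows (a minimal illustration with hypothetical helper names, not the paper's code): each 8-bit channel is expanded into 8 binary bit-plane channels.

```python
import numpy as np

def to_bit_slices(img):
    """Losslessly expand an 8-bit image of shape (H, W, C) into binary
    bit slices of shape (H, W, 8*C): each channel is split into its 8
    bit planes, least significant bit first."""
    h, w, c = img.shape
    out = np.zeros((h, w, 8 * c), dtype=np.uint8)
    for ch in range(c):
        for b in range(8):
            out[:, :, 8 * ch + b] = (img[:, :, ch] >> b) & 1
    return out

img = np.arange(12, dtype=np.uint8).reshape(2, 2, 3)
bits = to_bit_slices(img)

# the conversion is lossless: the original pixels can be reassembled
recon = np.zeros_like(img)
for ch in range(3):
    for b in range(8):
        recon[:, :, ch] |= bits[:, :, 8 * ch + b] << b
assert np.array_equal(recon, img)
```

An RGB image thus becomes a 24-channel binary tensor, which is the input format assumed for the reconstructed network.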
The following subsections demonstrate the model reconstruction from the BNN to the reconstructed BNN, prove that redundancy exists in the BNN and decide on the prunable bit slices in the sensitivity analysis stage, and finally present the guide for rebuilding the compact BNN (CBNN), triggered by input-data pruning.

Figure 2: conversion of the 8-bit input into binary bit-sliced input.

Although the network size increases with the bit-sliced input, the growth is somewhat limited. For example, in the BNN of Hubara et al., the first layer is only a small fraction of the entire network, and since it has been proved that 8-bit quantization of weights is enough to preserve accuracy (Suda et al.), with the bit-sliced input the network size only slightly increases, which is negligible. With the bit-sliced input and the modified first layer, we reconstruct the BNN model and refer to it as the reconstructed BNN. Although its computational complexity is not reduced, the new structure helps to expose and reduce the redundancy of the BNN, as elaborated in the following sections.

Binary-constrained training. We adopt the training method proposed by Hubara et al. In the objective function, one set of terms represents the weights of the first layer and another the weights of the binary layers, and the loss function is a hinge loss. In the training stage, full-precision reference weights are used for backward propagation while the binarized weights are used for forward propagation (Hubara et al.). As Tang et al. propose, the reference weights of the binary layers are punished when not close to binary values, and a regularization term is also applied to the first layer. With the conversion, each channel of an image is expanded into 8 channels in binary format. For the first layer, experimental observation shows that a fully binarized first layer with bit-sliced input has a negative impact on the accuracy rate, for two main reasons. First, since the input data is in bit-sliced binary format, preprocessing methods such as mean removal, normalization and ZCA whitening cannot be applied, which results in an accuracy drop. In addition, compared with the standard first layer of the BNN, the computational complexity of the first layer drops, which may also hurt the accuracy rate. Therefore, we assign the first layer enough weights to keep its computational complexity on par with the standard first layer of a BNN.

Sensitivity analysis. We use the training method above to train the reconstructed BNN model with the bit-sliced input and the modified first layer. In this stage, we evaluate the sensitivity of the input bit slices with respect to accuracy. As shown in Fig. 3, for the reconstructed BNN, the n-th bit slices (the n-th least significant bit) of the RGB channels are substituted with binary random bit slices. The reason we use binary random bit slices rather than pruning them outright is that pruning would reduce the size of the network, and we want to eliminate other factors that influence accuracy. If the difference between the actual inference error and the reference point is less than a threshold, the n-th bit
slices are classified as prunable.

Figure 3: sensitivity analysis of the reconstructed BNN. The n-th bit slices of the RGB channels are replaced by binary random bits; a slice is classified as prunable when |err_inf − err_ref| < err_th, and as sensitive otherwise.

Without retraining the network, the error brought in by the random bit slices propagates throughout the entire network. If there is merely a negligible accuracy drop in the inference stage, it can be inferred that those bit slices have low sensitivity with respect to accuracy and are useless in the training stage as well. This also indicates existing redundancy in the BNN, which allows us to shrink the network size. Besides evaluating the sensitivity of a single bit slice, we also analyze the sensitivity of stacks of bit slices using the same method, so as to find the collection of insensitive bit slices that are prunable in the training stage. If a fraction of the slices is categorized as accuracy-insensitive, the number of input channels — and hence the size of the input array — can be reduced by the corresponding factor, say t times.

Rebuild the compact network. In popular CNN architectures such as AlexNet (Krizhevsky et al.), VGG (Simonyan & Zisserman) and ResNet (He et al.), the depth-incremental ratio of the feature maps from one layer to the next is either kept fixed or follows a regular pattern. Intuitively speaking, it is useful to keep the depth-incremental ratio across the entire network. Thus, a good starting point for rebuilding the compact BNN (CBNN) is shrinking the depth of all layers by t times. Since there is a quadratic relation between depth and network size, the reduction in network size of the CBNN is expected to be roughly quadratic in t. Although we have not explored how to build the most accurate model or how to optimize the compression ratio further, we emphasize that the entire flow presented in this section reduces the redundancy of the BNN and enables speedup in the inference stage of the CBNN. In the next section we present and discuss the performance results corresponding to each subsection above.

Results and Discussion

We first walk through the flow presented above with experimental results on the CIFAR-10 classification task, and then present additional results on the SVHN and GTSRB datasets.
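The randomize-and-measure test described above can be sketched in a few lines (a toy illustration; the helper names, the threshold value, and the stand-in "network" are assumptions, not the paper's code):

```python
import numpy as np

def slice_is_prunable(err_fn, x_bits, bit, channels=3, bits_per_ch=8,
                      err_th=0.01, rng=None):
    """Replace the `bit`-th slice of every color channel with random
    binary noise (rather than deleting it, so the network shape stays
    unchanged) and report whether the inference error moves by less
    than err_th relative to the clean input."""
    rng = rng or np.random.default_rng(0)
    err_ref = err_fn(x_bits)
    x = x_bits.copy()
    for ch in range(channels):
        idx = bits_per_ch * ch + bit
        x[..., idx] = rng.integers(0, 2, size=x[..., idx].shape)
    return abs(err_fn(x) - err_ref) < err_th

# toy stand-in for a trained network whose error depends only on the
# four most significant bit planes of each channel
def toy_err(xb):
    hi = [8 * ch + b for ch in range(3) for b in range(4, 8)]
    return float(xb[..., hi].mean())

x = np.random.default_rng(1).integers(0, 2, size=(8, 8, 24)).astype(np.uint8)
assert slice_is_prunable(toy_err, x, bit=0)   # LSB slice: error unchanged
```

In the actual flow, `err_fn` would be the trained reconstructed BNN evaluated on the test set, and the same routine is repeated for stacks of low bit slices rather than one slice at a time.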
dataset house number dataset google street view images digits training digits testing image size campos dataset contain characters natural images synthesized images images serve training set rest serve testing set image size gtsrb german traffic sign benchmark stallkamp dataset includes traffic signs resize traffic sign images training data testing data experiment subsections show experimental results corresponding methodology section respectively bnn reconstruction following input data conversion method section raw data dataset denoted cifar pixel value represented integer magnitude thus bits enough lossless binary representation input denoted cifarb illustrated section proposed structure reconstructed bnn different original bnn input format first layer table compares performance results three network structure different input layer baseline bnn design one hubara full precision input binarized layer define cnn input binarized weights activations fbnn fbnn bit slices input bnn training method section fbnn shows accuracy drop compared bnn accuracy affected computational complexity degradation layer unnormalized input data also gives insights fbnn hard get good accuracy rate accord tang table sensitivity analysis single bit slice channel random noise injected arch bit arch bit bnn fnn recon fnn bnn table sensitivity analysis multiple bit slices channel random noise injected arch arch bits bits bnn fnn recon fnn bnn opinion tang introducing bit slices input layer reconstruct bnn proposed section accuracy drop compensated shown table even get better error rate baseline bnn slightly increased network size also gives margin compressing network sensitivity analysis reconstructed bnn reconstructed bnn presented last section sensitivity analysis stated section first analyze sensitivity single bit slice results shown table data shows table figure error rate randomizing one multiple bit slices sensitivity analysis table performance cbnns network size gops err arch ratio ratio 
bits bnn cbnn average trials addition reconstructed bnn also evaluate sensitivity input counterpart denoted fnn fnn intend show data redundancy reflected binary domain domain pattern take architecture first row reference design row err column errref others errinf errref bnn reference design reconstructed bnn fnn input reference design fullprecision ones interesting bit slices sensitivity level concluded almost unchanged define turning point error sensitivity analysis point flips sign increases abruptly turning point bit second analyze sensitivity bit slices stacks stack contains nth bit slices color channel results shown table bit slices makes difference distortion injected one makes slight difference around accuracy drop bit also turning point around accuracy drop even randomize entire input values bit slices variation propagates entire network accuracy change much therefore bits useless training stage validates hypothesis bnn still redundancy fig error rate turning point circled bit slice trend error rate binary domain domain shown fig align well order make entire process automatic simple set threshold errth determine many bits prunable errth set conclude bit slices redundant prunable sensitivity analysis accordingly reconstructed bnn shrunk reduce redundancy get compact architecture rebuild compact bnn cbnn since bit slices prunable rebuild compact bnn depth layer shrunk half performance cbnn shown table ratio represents compression ratio gops stands giga operations one operation either addition multiplication regarding network size use bits measuring weights layer since proved bit enough maintain accuracy suda table performance results cbnns datasets dataset svhn gtsrb err bits network size ratio gops ratio also show alternatives pruning bit slices shrink layerwise depth results align sensitivity analysis bit slices little impact classification performance choice pruning bit slices optimal one maximize compression ratio accuracy drop since size layer larger bnn 
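The quadratic relation between layer depth and network size noted above can be sanity-checked directly (a sketch with illustrative layer dimensions):

```python
# A conv layer with kernel k and depths c_in -> c_out stores
# k*k*c_in*c_out weights, so halving every depth cuts each middle
# layer's weight count by 4x.  Boundary layers scale differently,
# which is why the whole-network ratio can fall short of the ideal.
def conv_params(k, c_in, c_out):
    return k * k * c_in * c_out

assert conv_params(3, 128, 256) == 4 * conv_params(3, 64, 128)
```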
We cannot achieve the ideal network-size compression ratio for the entire network, since not every layer shrinks by the full factor; regarding the entire network, we therefore report both the actual compression ratio of the network size and the reduction ratio of the number of GOPs.

Experiments on more datasets. In this section, we skip the sensitivity analysis and directly show the comparison between the baselines and the final CBNNs obtained with the same procedure. For the SVHN dataset we use the baseline architecture with the depth of each layer halved; the one for GTSRB uses the baseline architecture with a smaller unit filter configuration, since the input size of GTSRB is larger, so the GTSRB network's depth is larger relative to the width and height of each layer.

Figure 5: runtime comparison of different network compression techniques (FNN, BNN, CBNN) normalized to the full-precision CNN.

Table 5 shows the performance results of the CBNNs evaluated on the different datasets, with the network setting of each baseline shown in the first row of each dataset region. For GTSRB, the CBNNs are able to maintain only a small accuracy drop while achieving substantial network-size reductions. For the SVHN dataset, the accuracy drop when pruning more bits is large, so in order to preserve a small accuracy drop, pruning fewer bits yields a smaller network-size reduction.

Table 5: performance results of the CBNNs on the additional datasets (dataset / error / pruned bits / network size and its ratio / GOPs and its ratio).

Runtime evaluation. We evaluate the actual runtime performance of the CBNNs on an NVIDIA Titan GPU, with the batch size fixed across experiments. We use the GPU kernel of Hubara et al. for the CBNN implementation, and the computational time is calculated by averaging multiple runs. Fig. 5 illustrates the actual runtime and the runtime speedup of the CBNNs compared with the baseline BNNs; the highlighted configurations are the ones in Tables 4 and 5. For the CBNNs processing the CIFAR-10 and GTSRB datasets, the network size and total GOPs shrink considerably, resulting in a clear speedup; for the CBNN processing the SVHN dataset, the network size and total GOPs shrink less, resulting in a smaller speedup. In Fig. 5 we normalize the runtime performance to that of the full-precision CNN (FNN). It has been proved by Han et al. that, by combining pruning, trained quantization and Huffman coding, an FNN can achieve a speedup (shown as the green bar), and Hubara et al. demonstrate that a multilayer-perceptron BNN gets a speedup compared with its full-precision counterpart. On top of the BNN, the proposed CBNN gives extra speedup; therefore the CBNN achieves a large overall speedup compared with the FNN.

Conclusion

In this paper, we propose a novel flow to explore the redundancy in a BNN and to remove that redundancy through sensitivity analysis and data pruning, in order to build a compact BNN. One can follow three steps: first, reconstruct the BNN with bit-sliced input and a modified first layer; second, inject randomly binarized bit slices and analyze the sensitivity level of each bit slice with respect to classification error rate; third, prune the accuracy-insensitive bit slices out of the total slices and rebuild
the CBNN with the depth of each layer shrunk by the corresponding factor. Experiment results show that the error-variation trend in the sensitivity analysis of the reconstructed BNN is well aligned with that of the CBNN. In addition, the CBNN is able to achieve a network compression ratio and a computational-complexity reduction in terms of GOPs with little accuracy loss compared with the BNN, and the actual runtime is reduced compared with the baseline BNN and its full-precision counterpart, respectively.

References

M. Courbariaux, Y. Bengio and J.-P. David. BinaryConnect: training deep neural networks with binary weights during propagations. In Advances in Neural Information Processing Systems.
I. Hubara, M. Courbariaux, D. Soudry, R. El-Yaniv and Y. Bengio. Binarized neural networks. In Advances in Neural Information Processing Systems.
M. Rastegari, V. Ordonez, J. Redmon and A. Farhadi. XNOR-Net: ImageNet classification using binary convolutional neural networks. In European Conference on Computer Vision. Springer, Cham.
C. Zhu, S. Han, H. Mao and W. J. Dally. Trained ternary quantization. arXiv preprint.
S. Han, J. Pool, J. Tran and W. Dally. Learning both weights and connections for efficient neural networks. In Advances in Neural Information Processing Systems.
A. Krizhevsky, I. Sutskever and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems.
K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint.
K. He, X. Zhang, S. Ren and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
N. Suda, V. Chandra, G. Dasika, A. Mohanty, Y. Ma, S. Vrudhula, J. Seo and Y. Cao. Throughput-optimized OpenCL-based FPGA accelerator for large-scale convolutional neural networks. In Proceedings of the International Symposium on Field-Programmable Gate Arrays. ACM.
Y. Tian, P. Luo, X. Wang and X. Tang. Pedestrian detection aided by deep learning semantic tasks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
F. Schroff, D. Kalenichenko and J. Philbin. FaceNet: a unified embedding for face recognition and clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
G. Hu, Y. Yang, D. Yi, J. Kittler, W. Christmas, S. Z. Li and T. Hospedales. When face recognition meets with deep learning: an evaluation of convolutional neural networks for face recognition. In Proceedings of the IEEE International Conference on Computer Vision Workshops.
W. Tang, G. Hua and L. Wang. How to train a compact binary neural network with high accuracy? In AAAI.
D. Lumb. Apple's "Neural Engine" chip could power future iPhones.
Y. Li, Z. Liu, K. Xu, H. Yu and F. Ren. An FPGA accelerator for binarized convolutional neural networks. In Proceedings of the International Symposium on Field-Programmable Gate Arrays. ACM.
A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Technical report.
Y. LeCun. The MNIST database of handwritten digits.
Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu and A. Y. Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning.
Y. LeCun, Y. Bengio and G. Hinton. Deep learning. Nature.
E. Alpaydin. Introduction to Machine Learning. MIT Press.
T. E. de Campos, B. R. Babu and M. Varma. Character recognition in natural images. In Proceedings of the International Conference on Computer Vision Theory and Applications.
J. Stallkamp, M. Schlipsing, J. Salmen and C. Igel. The German Traffic Sign Recognition Benchmark: a multi-class classification competition. In Neural Networks (IJCNN), International Joint Conference on. IEEE.
S. Han, H. Mao and W. J. Dally. Deep compression: compressing deep neural networks with pruning, trained quantization and Huffman coding. In International Conference on Learning Representations.
| 1 |
Hierarchical Reinforcement Learning: Approximating Optimal Discounted TSP Using Local Policies

Tom Zahavy, Avinatan Hasidim, Haim Kaplan, Yishay Mansour
(Google; Technion; Bar-Ilan University; Tel Aviv University. Correspondence: Tom Zahavy, tomzahavy)

Abstract. In this work we provide theoretical guarantees for reward decomposition in deterministic MDPs. Reward decomposition is a special case of hierarchical reinforcement learning that allows one to learn many policies in parallel and combine them into a composite solution. Our approach builds on mapping this problem to a reward-discounted traveling salesman problem (TSP) and then deriving approximate solutions for it. In particular, we focus on approximate solutions that are local, that is, solutions that only use information about the current state. Local policies are easy to implement and do not require substantial computational resources, as they do not perform planning. While local deterministic policies, like nearest neighbor, are being used in practice for hierarchical reinforcement learning, we propose three stochastic policies that guarantee better performance than any deterministic local policy.

1. Introduction

One of the unique characteristics of human problem solving is the ability to represent the world at different granularities. When we plan a trip, we first choose the destinations we want to visit and only then decide what to do at each destination. Hierarchical reasoning enables us to map the complexities of the world around us into simple plans that are computationally tractable to reason about. Nevertheless, the most successful reinforcement learning (RL) algorithms still perform planning at a single abstraction level. RL provides a general framework for optimizing decisions in dynamic environments; however, scaling it to real-world problems suffers from the curses of dimensionality: coping with exponentially large state spaces, action spaces, and long horizons. One approach that deals with large state spaces is to introduce function approximation of the value function or the policy, making it possible to generalize across different states. Two famous examples are Tesauro's temporal-difference network for backgammon and the deep Q-network (DQN) of Mnih et al., which introduced a deep neural network (DNN) to approximate the value function, leading to high performance in solving video games. A different approach that deals with long horizons is to use a policy network to search among game outcomes more efficiently (Silver et al.), leading to super-human performance in playing chess and poker (Silver et al.; Moravcik et al.). However,
utilizing this approach when it is not possible to simulate the environment is still an open problem. A third approach for dealing with long horizons is to introduce hierarchy into the problem (see Barto and Mahadevan for a survey). We focus on the options framework of Sutton et al. as our hierarchy formulation. Options are local policies that map states to actions and are learned to achieve subgoals, while a policy over options selects among the options to accomplish the final goal of the task. It was recently demonstrated that learning a selection rule among options using a DNN delivers promising results in challenging environments like Minecraft and Atari (Tessler et al.; Kulkarni et al.), and other studies have shown that it is even possible to learn the options jointly with the policy over options (Vezhnevets et al.; Bacon et al.). In this work we focus on a specific type of hierarchy: reward-function decomposition, which dates back to the works of Humphrys and Karlsson and has recently been studied by different research groups (e.g., van Seijen et al.). In this formulation, each option learns to maximize a local reward function, while the final goal is to maximize the sum of the rewards. Each option is trained separately and provides its own value function and policy; the policy over options uses these local values to select among the options. This way, each option is responsible for solving a simple task, the options can be learned in parallel across different machines, and the higher-level policy over options can be trained using SMDP algorithms (Sutton et al.). Different research groups have suggested rules for selecting among the options, for example, choosing the option with the maximal value function (Humphrys; Barreto et al.), or choosing the action that maximizes the sum of the option value functions (Karlsson). Using such rules, one can derive policies for the composite MDP by learning options without ever learning the MDP itself, i.e., the learning is fully decentralized. Although in many cases one could reconstruct the original MDP from the options, doing so would defeat the entire purpose of using options. In this work we concentrate on local rules that select among the available options. Specifically, we consider a set of MDPs with deterministic dynamics that share all of their components except the reward. Given a set of options, one per reward, each optimal for collecting its own reward, we are interested in deriving the optimal policy for collecting all of the rewards, i.e., in solving the composite MDP. In this setting, the optimal policy can be derived by solving an SMDP whose actions are the optimal policies for collecting the single rewards. We specifically focus on collectible rewards, a special type of reward that is common in navigation domains like Minecraft
(Tessler et al.), DeepMind Lab (Teh et al.; Beattie et al.), and VizDoom (Kempka et al.). The challenge in dealing with collectible rewards is that the state space changes over time: once we collect a reward, it is gone. One can think of the subset of available rewards as part of the state, but since all combinations of remaining items have to be considered, the state space grows exponentially with the number of rewards. We show that solving the SMDP under these considerations is equivalent to solving a reward-discounted traveling salesman problem (Blum et al.). Similarly to the classical TSP, computing the optimal solution is NP-hard; furthermore, obtaining an approximate solution whose value is at least a constant fraction of the optimal discounted return is NP-hard as well (Blum et al.). A brute-force approach to solving the problem requires evaluating all possible tours connecting the rewards. One can also adapt the dynamic-programming algorithm for the TSP (Bellman; Held and Karp) to solve it (see the algorithm in the appendix); this scheme is identical to a tabular SMDP solver but still requires exponential time. Moreover, any algorithm approximating the optimal return to within a factor larger than a certain threshold must have a worst-case running time that grows faster than polynomial (assuming a widely believed complexity conjecture). These hardness results rule out efficient general solutions, although for special MDPs exact solutions exist (we provide examples in the appendix). Blum et al. proposed a polynomial-time planning algorithm that computes a policy collecting at least a constant fraction of the optimal discounted return, later improved by Farbstein and Levin. However, such planning algorithms need to know the entire SMDP in order to compute approximately optimal policies. In contrast, our work focuses on deriving and analyzing policies that only use local information to make decisions. Local policies are simpler to implement and more efficient, and the reinforcement learning community is already using simple local approximation algorithms in practice; we hope our research can provide important theoretical support for comparing local heuristics. In addition, we introduce new, reasonable local heuristics. Specifically, we prove guarantees on the reward collected by these algorithms relative to the reward of the optimal tour, and we also prove upper bounds on the maximum relative reward that local algorithms can collect. In our experiments, we compare the performance of the local algorithms in particular scenarios. Our main contributions are as follows.

Results. We establish impossibility results for local policies, showing that no deterministic local policy can guarantee a reward larger than O(OPT/n) on every MDP, and that no stochastic local policy can guarantee a reward larger than a somewhat larger, but still vanishing, fraction of OPT on every MDP.
These impossibility results imply that the nearest-neighbor (NN) algorithm, which iteratively collects the closest reward and thereby collects in total at least a 1/n fraction of the optimal reward, is optimal up to a constant factor amongst deterministic local policies. On the positive side, we propose three simple stochastic policies that outperform NN. The best of them combines NN with a randomized depth-first search (NN-RDFS) and guarantees the strongest performance relative to OPT. A policy that combines jumping to a random reward with sorting the rewards by distance (NN-sort) has a slightly worse guarantee. Finally, a simple modification of NN that first jumps to a random reward and then continues with NN (R-NN) already improves the NN guarantee by a log(n) factor.

2. Problem formulation

We now define the problem explicitly, starting with a general transfer framework (Definition 1) and then a specific transfer-learning setting, collectible reward decomposition (Definition 2).

Definition 1 (general transfer framework). Given a set of MDPs that differ only in their reward signal, derive the optimal policy of the composite MDP given the optimal policies of the individual MDPs.

Definition 1 describes a general transfer-learning problem, similar to the transfer framework of Barreto et al., which assumes a set of MDPs sharing all components but the reward signal. We are interested in transfer learning, i.e., in using quantities learned in the individual MDPs in the composite MDP. Specifically, given the set of optimal options and their value functions, we are interested in transfer to the composite MDP: deriving policies that solve it without learning it.

Definition 2 (collectible reward decomposition). (i) Reward decomposition: the reward represents a sum of local rewards. (ii) Collectible rewards: each reward signal represents a collectible prize, i.e., it is nonzero iff a particular state (or state-action pair) is visited for the first time, and each reward can be collected only once. (iii) Deterministic dynamics: the transition matrix is deterministic, i.e., for each state and action, exactly one entry of the corresponding row equals one and all other entries equal zero.

Property (i) of Definition 2 requires that the reward decomposes into the individual rewards, and property (ii) requires each local reward to be a collectible prize. While these properties limit the generality of the model, models that satisfy them have been investigated both in theory and in simulation (Barreto et al.; Tessler et al.; Higgins et al.; van Seijen et al.; Humphrys). Given properties (i) and (ii), the value functions of the optimal local policies correspond to shortest paths: the value of a reward at a state is attained by following the corresponding option from that state, and the length of the shortest path to the reward is determined by the value function. Notice that at any state, an optimal
policy always follows the shortest path to one of the rewards. To see this, assume there exists an optimal policy that is not following a shortest path at some state; then it can be improved by taking the shortest path instead, contradicting its optimality. This observation implies that the optimal policy is a composition of the local options. Property (iii) of Definition 2 requires deterministic dynamics. While this property is perhaps the most limiting of the three, it appears in numerous domains, including many maze-navigation problems, the Arcade Learning Environment (Bellemare et al.), and games like chess. Given a deterministic transition matrix, the optimal policy only has to make decisions at states that contain rewards; in other words, once the policy has arrived at a reward state and decided which option to follow next, it follows that option until it reaches the next reward state.

Under collectible reward decomposition (Definition 2), the optimal policy can be derived from an SMDP (Sutton et al.): its state space contains only the initial state and the reward states; its action space is replaced by the set of options, where each option corresponds to following the optimal policy for reaching one reward state; its transition matrix is deterministic, since the underlying dynamics are; and the reward signal and discount factor remain those of the original MDP. In general, optimal policies of SMDPs are only guaranteed to be optimal in the original MDP if the SMDP includes the options as well as the regular primitive actions (Sutton et al.). In a related study, Mann et al. analyzed landmark options, a specific type of option that plans to reach a designated state in a deterministic MDP. Landmark options are related to our reward decomposition, since collectible rewards can also be represented as policies that plan to reach a specific state of the MDP. Given a set of landmark options, Mann et al. analyzed the errors incurred by searching for the optimal solution in the SMDP (planning with landmark options) instead of searching in the original MDP (planning with primitive actions). In our setting, the given policies and value functions are optimal, so these errors equal zero, and thus a solution of the SMDP is guaranteed to be optimal in the MDP. In addition, the analysis of Mann et al. provides error bounds for dealing with suboptimal options and nondeterministic dynamics, which may help to extend our analysis to such cases in future work.

Finally, the optimal policy can be derived by solving the problem in Definition 3. Consider the graph whose nodes are the initial state and the reward states, and define the length of an edge in this graph as the distance covered by following the corresponding option between its endpoints. For a path in this graph, given by a sequence of node indices, let d_i denote the cumulative length of the path up to (and including) its i-th node.

Definition 3 (discounted-reward TSP). Given an undirected graph with n nodes and edge lengths, find a path in the graph that maximizes the discounted cumulative return, i.e., the path attaining argmax over paths of the sum over i of gamma^{d_i}.
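Definition 3 can be made concrete in a few lines of code. The sketch below is illustrative, not the paper's implementation: it assumes a hypothetical `dist` mapping of pairwise shortest-path lengths (as induced by the options). `tour_value` evaluates the discounted return of a fixed visiting order, and `held_karp_discounted` is the exponential-time dynamic program mentioned in the introduction, adapted to the discounted objective.

```python
from functools import lru_cache

def tour_value(dist, origin, order, gamma):
    """Discounted return of visiting rewards in a fixed order:
    the sum of gamma**d_i, where d_i is the cumulative distance
    travelled when the i-th reward is collected (Definition 3)."""
    t, value, at = 0.0, 0.0, origin
    for r in order:
        t += dist[at][r]      # travel along the shortest path to r
        value += gamma ** t   # reward r is discounted by the time so far
        at = r
    return value

def held_karp_discounted(dist, origin, rewards, gamma):
    """Exact solution by dynamic programming over reward subsets
    (O(n^2 * 2^n) time, feasible only for small n).  Uses the identity
    h(S, j) = max_{k in S} gamma**dist[j][k] * (1 + h(S - {k}, k)),
    which holds because the discount is multiplicative in distance."""
    rewards = tuple(rewards)

    @lru_cache(maxsize=None)
    def h(mask, j):
        best = 0.0
        for i, k in enumerate(rewards):
            if mask & (1 << i):
                best = max(best,
                           gamma ** dist[j][k] * (1 + h(mask & ~(1 << i), k)))
        return best

    return h((1 << len(rewards)) - 1, origin)
```

For instance, with two rewards at distances 1 and 2 from the origin, one unit apart from each other, and gamma = 0.5, the best order collects 0.5 + 0.25 = 0.75, which both functions recover.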
To summarize, our modeling approach allows us to deal with the curse of dimensionality in three different ways. First, each option is learned with function-approximation techniques (Tessler et al.; Bacon et al.) to deal with raw, high-dimensional inputs like vision and text. (Note that this is no longer true with stochastic dynamics: a stochastic shortest path is only shortest in expectation, so stochasticity may lead to states that require changing the decision.) Second, formulating the problem as an SMDP reduces the state space to include only the reward states, which effectively reduces the planning horizon (Sutton et al.; Mann et al.). Third, the formulation lets us derive approximate solutions despite the exponentially large state spaces that emerge from modeling events like collectible rewards.

3. Local heuristics

We now define the class of policies that we analyze: local policies. A local policy is a mapping whose inputs are the current state and a history containing the previous steps taken by the policy; in particular, the history encodes which collectible states were already visited and the discounted return collected so far, and the policy observes the distance from the current state to each remaining reward. Its output is a distribution over options. Notice that a local policy does not have full information on the graph, only local distances. Formally:

Definition 4 (local policy). A local policy is a mapping from the current state, the set of remaining rewards, and the local distances to those rewards, to a distribution over the options.

The basic idea behind the NN guarantee (proof in the supplementary material) is that the first reward NN collects is worth at least a 1/n fraction of the reward that OPT collects. Next, we propose a simple, easy-to-implement stochastic adjustment of the vanilla NN algorithm with a better guarantee, which we call R-NN. The algorithm starts by collecting one of the rewards chosen at random, and then continues by executing NN.

Algorithm (R-NN). Input: a graph with n nodes and a first node. Flip a fair coin. If the outcome is heads, perform NN. Otherwise, visit a node chosen uniformly at random, and from there follow NN.

The following theorem shows that this stochastic modification improves the NN guarantee by a log(n) factor.

Theorem (performance of R-NN). For any discounted-reward TSP instance with n nodes, R-NN collects at least on the order of OPT * log(n) / n. (Due to space considerations, all proofs can be found in the supplementary material.)

While the improvement may seem small (a log(n) factor), the observation that stochasticity provably improves the performance guarantees of local policies is essential to this work; in the following sections we derive more sophisticated randomized algorithms with better performance guarantees.

3.1 Performance and impossibility results for deterministic local policies

We start our analysis with one of the most natural heuristics for the TSP, the famous nearest-neighbor (NN) algorithm. In the context of our problem, this is the policy
that selects the option with the highest estimated value, exactly like GPI (Barreto et al.); we shall slightly abuse notation and use the name NN both for the algorithm and for its value when no confusion can arise. For the TSP without discount on general graphs, it is known (Rosenkrantz et al.) that NN can be far from optimal. For our discounted problem, however, NN guarantees a value of at least OPT/n, as the following theorem states; and since in the next subsection we prove an upper bound of O(OPT/n) for all deterministic local policies, this implies that NN is optimal (up to constants) among deterministic local policies. The observation that NN collects at least OPT/n also motivates its use as a component of our stochastic algorithms.

Theorem (performance of NN). For any discounted-reward TSP instance with n nodes and discount factor gamma, NN collects at least OPT/n.

Next, we show an impossibility result for deterministic local policies, indicating that no such policy can guarantee more than O(OPT/n), which makes NN optimal among deterministic local policies.

Theorem (impossibility for deterministic local policies). For any deterministic local policy there exists a graph with n nodes and a discount factor gamma on which the policy collects at most O(OPT/n).

Proof sketch. Consider a family of graphs, each consisting of a star with a central vertex and n leaves, with the starting vertex at the center and a reward at each leaf, all leaves at the same distance from the center. Each graph in the family corresponds to a different subset of leaves that are connected pairwise by short edges, while the remaining leaves are connected only through the central vertex. From the central vertex, a local policy cannot distinguish among the rewards, as they all lie at the same distance from the origin. Therefore, for any choice the policy makes, there exists a graph in the family on which the pairwise-connected (adjacent) rewards are visited last, and the claim follows by simple algebra.

3.2 Stochastic local policies

In the previous subsection we saw that deterministic local policies cannot guarantee more than O(OPT/n), which makes NN optimal among them, and we have already seen that a small stochastic adjustment (R-NN) improves the guarantee. These observations motivate us to look for better local policies in the broader class of stochastic local policies. We begin by providing an impossibility result for this class.

Theorem (impossibility for stochastic local policies). For any stochastic local policy there exists a graph with n nodes and a discount factor gamma on which the expected collected reward is at most a fraction of OPT that is larger than the deterministic bound but still vanishes with n.

The proof (in the supplementary material) is similar to the previous one, but considers a family of graphs in which subsets of leaves are connected to form a clique. No known policy achieves this upper bound; instead, we propose and analyze two stochastic policies that substantially improve on the deterministic upper bound of O(OPT/n).
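The two policies introduced so far, NN and its stochastic adjustment R-NN, are straightforward to state in code. The sketch below is illustrative (it assumes a hypothetical `dist` mapping of pairwise shortest-path distances and represents rewards as node ids); it returns the discounted return of the induced tour rather than executing options in an environment.

```python
import random

def nn_value(dist, origin, rewards, gamma, start_time=0.0):
    """Greedy NN: repeatedly move to the closest remaining reward.
    Returns the discounted return of the resulting tour."""
    at, t, value, left = origin, start_time, 0.0, set(rewards)
    while left:
        nxt = min(left, key=lambda r: dist[at][r])
        t += dist[at][nxt]
        value += gamma ** t
        left.remove(nxt)
        at = nxt
    return value

def r_nn_value(dist, origin, rewards, gamma, rng=random):
    """R-NN: with probability 1/2 run plain NN; otherwise jump to a
    uniformly random reward, collect it, and continue with NN."""
    if rng.random() < 0.5:
        return nn_value(dist, origin, rewards, gamma)
    first = rng.choice(list(rewards))
    rest = [r for r in rewards if r != first]
    t = dist[origin][first]
    # the continuation is discounted by the time already elapsed
    return gamma ** t + nn_value(dist, first, rest, gamma, start_time=t)
```

Passing `start_time` to the NN continuation keeps the discounting consistent: the whole tour is discounted by total elapsed distance, as in Definition 3.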
As we shall see, these policies satisfy an Occam's razor principle: the policies with the better guarantees are also more complicated and require more computational resources.

3.3 NN with randomized DFS (NN-RDFS)

We now describe the NN-RDFS policy (Algorithm 1), the best-performing local policy that we were able to derive. The policy performs NN with probability 1/2, and with probability 1/2 a local policy that we call RDFS. RDFS starts at a random node and continues by performing a depth-first search on edges that are shorter than a threshold chosen at random (as we specify below). Once it runs out of edges shorter than the threshold, RDFS continues by performing NN.

Algorithm 1 (NN-RDFS). Input: a graph with n nodes and a first node. Flip a fair coin. If the outcome is heads, perform NN. Otherwise: visit a node chosen at random; choose an exponent uniformly at random from a range of order log(n) and fix the pruning threshold accordingly; initiate a DFS restricted to edges shorter than the threshold; at the end, follow NN on the remaining rewards.

The performance guarantees of this method are stated in the theorem below. The analysis is conducted in three steps. The first two steps assume that OPT achieves its value by collecting its rewards in a segment of bounded length; the first step considers a favorable choice of the threshold, and the second step removes this requirement and analyzes the performance for the worst value of the threshold. The third step handles the value collected by OPT when it is not necessarily collected in such a segment, which completes the proof. The second and third steps each lose a logarithmic factor: one because we only use a segment in which OPT collects at least a 1/log(n) fraction of its value, and one for guessing a good enough approximation when setting the threshold.

Theorem (performance of NN-RDFS). For any discounted-reward TSP instance with n rewards, NN-RDFS collects at least OPT/sqrt(n), up to logarithmic factors, when OPT is of order n, and a slightly weaker polynomial-in-n guarantee otherwise.

Proof, step 1. Assume OPT collects a set S_OPT of rewards in a segment of length L, where the distance from the first reward to the last includes the distance from the starting point to the first reward. Let d_min and d_max be the shortest and longest distances from the origin to a reward in S_OPT, respectively; by the triangle inequality, d_max - d_min <= L. We may assume that the value OPT collects from rewards outside S_OPT is negligible. We characterize where RDFS starts with the following lemma.

Lemma. A path of length L has fewer than L/d edges that are longer than d.

Proof: by contradiction. If the path had L/d or more edges longer than d, its total length would exceed L, contradicting the assumption on the path length.

The lemma assures us that pruning the edges longer than d leaves the rewards of S_OPT in at most L/d connected components (CCs), and in addition, all edges inside each connected component are shorter than d. Next, we lower-bound the total gain of RDFS. Say RDFS starts at a reward of some component. Since all of the component's edges are shorter than d, RDFS collects either all of the component's rewards or at least as many as its discount budget allows; thus RDFS collects at least the minimum of these two quantities. Notice that the first random step leads RDFS to a vertex of one of these components with
probability proportional to the number of rewards they contain, and at least half of the rewards of S_OPT lie in such CCs. Letting m denote the number of CCs, a short calculation lower-bounds the expected value collected by RDFS in terms of the size of S_OPT, the number of components m, and the discount gamma^{d_max}.

Step 2. Finally, consider the general case, in which OPT may collect its value in a segment of length larger than L. Notice that the value OPT collects from rewards that follow the first few segments of its tour is negligible, since the discount decays geometrically; this means that there exists at least one segment of length L in which OPT collects at least a 1/log(n) fraction of its value. Combining this with the analysis of the previous step completes the argument for a known threshold. Since the right threshold is not known, RDFS guesses it: choosing the exponent at random guarantees that with probability 1/log(n) the guess approximates the true value up to a constant factor, and averaging over the random choices (by Jensen's inequality) degrades the bound by at most a logarithmic factor. Setting the parameters accordingly guarantees the claimed value for RDFS; since d_max - d_min <= L, the final inequality follows from the triangle inequality.

Step 3. Assume OPT gets its value from rewards it collects in some segment, while the rewards it collects beforehand contribute negligible value. Recall that the policy performs either NN (with probability 1/2) or RDFS (with probability 1/2). If the single reward closest to the starting point accounts for a large fraction of OPT, then NN gets at least that value; otherwise, with constant probability RDFS starts at one of the rewards picked by OPT, and the analysis of the previous steps applies to the value RDFS then collects. The lower bound is the smaller of the two cases.

3.4 NN with sorting (NN-sort)

First notice that, since the segment length is not known, it must be guessed in order to choose the threshold; this is done by setting the threshold at random, as described above. We now describe a policy in a similar spirit, NN-sort (Algorithm 2). The policy performs NN with probability 1/2, and with probability 1/2 a local policy that we call sort. Sort starts at a random node, sorts the remaining nodes in increasing order of their distance from it, and visits the nodes in that order. The algorithm is simpler to implement and does not require guessing parameters as RDFS does; however, this comes at the cost of a worse worst-case bound.

Algorithm 2 (NN-sort). Input: a graph with n nodes and a first node. Flip a fair coin. If the outcome is heads, perform NN. Otherwise: visit a node chosen at random; sort the remaining nodes by increasing distance from it, and visit them in that order.

The performance guarantees of this method are given in the following theorem. The analysis follows the steps of the proof for Algorithm 1. We emphasize that here the pruning parameter is used for analysis purposes only and is not part of the algorithm; consequently, only one logarithmic factor appears in the performance bound of this theorem, in contrast to two in the previous one.

Theorem (performance of NN-sort). For any discounted-reward TSP instance with n nodes and discount factor gamma, NN-sort collects at least a bound of the same form as NN-RDFS, with a slightly worse polynomial dependence on n, up to logarithmic factors.
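The two stochastic components, RDFS and sort, can be sketched as follows. This is illustrative code, not the paper's implementation: the pruning-threshold schedule is a hypothetical stand-in for the one in Algorithm 1, backtracking during the DFS is charged to the tour length, and at least two rewards are assumed.

```python
import math
import random

def _nn_value(dist, at, left, gamma, t=0.0):
    """Greedy NN continuation; returns the discounted return collected."""
    left, value = set(left), 0.0
    while left:
        nxt = min(left, key=lambda r: dist[at][r])
        t += dist[at][nxt]
        value += gamma ** t
        left.remove(nxt)
        at = nxt
    return value

def rdfs_value(dist, origin, rewards, gamma, rng=random):
    """RDFS: jump to a random reward, run a randomized DFS restricted to
    edges shorter than a random threshold, then finish with NN."""
    start = rng.choice(list(rewards))
    d_min = min(dist[u][v] for u in rewards for v in rewards if u != v)
    d = d_min * 2 ** rng.randrange(int(math.log2(len(rewards))) + 1)
    t = dist[origin][start]
    value, visited, stack = gamma ** t, {start}, [start]
    while stack:
        at = stack[-1]
        nbrs = [r for r in rewards if r not in visited and dist[at][r] <= d]
        if not nbrs:
            stack.pop()
            if stack:
                t += dist[at][stack[-1]]   # pay for backtracking
            continue
        nxt = rng.choice(nbrs)             # randomized DFS order
        t += dist[at][nxt]
        value += gamma ** t
        visited.add(nxt)
        stack.append(nxt)
    # the DFS unwinds back to `start`; continue with NN on the leftovers
    return value + _nn_value(dist, start, set(rewards) - visited, gamma, t)

def sort_value(dist, origin, rewards, gamma, rng=random):
    """Sort: jump to a random reward, then visit the remaining rewards in
    increasing order of their distance from that first reward."""
    first = rng.choice(list(rewards))
    t, at = dist[origin][first], first
    value = gamma ** t
    for r in sorted((r for r in rewards if r != first),
                    key=lambda r: dist[first][r]):
        t += dist[at][r]
        value += gamma ** t
        at = r
    return value
```

The full NN-RDFS and NN-sort policies flip a fair coin between these components and plain NN, as in Algorithms 1 and 2.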
4. Simulations

In this section we evaluate and compare the performance of the deterministic and stochastic local policies by measuring the cumulative discounted reward achieved by each algorithm on different instances, as a function of the number of rewards n. In each scenario we place the rewards such that OPT can collect almost all of them within a constant discount, and we always place the initial state at the origin; we also define a short characteristic distance that depends on n and on the discount factor. We next describe the five scenarios (Figure 1, ordered left to right) considered in the evaluation. For each graph type we generate n_maps different graphs and report the reward achieved by each algorithm, averaged over the n_maps graphs (Figure 1, top), and the minimum among the n_maps scores (Figure 1, bottom). As some of the algorithms are stochastic, for them we report the average score over n_alg runs of the algorithm on each graph. Finally, we provide visualizations of the different graphs and of the tours taken by the different algorithms, which help in understanding the numerical results.

(1) Random cities. A vanilla TSP scenario with rewards randomly distributed in the plane. It is known that on such inputs the NN algorithm yields a tour that is longer than the optimal one by a logarithmic factor on average (Johnson and McGeoch); we use a similar input to compare our algorithms, but with respect to the discounted cumulative reward. The rewards are generated from a uniform distribution over a square. Inspecting Figure 1, we can see that NN performs best, both on average and in the worst case. This observation suggests that when the rewards are distributed at random, selecting the nearest reward is a reasonable thing to do. In addition, we can see that NN-RDFS performs best among the stochastic policies, as predicted by our theoretical results. On the other hand, the NN-sort policy performs worst among the stochastic policies; this happens because sorting the rewards by their distances from a random first reward introduces an undesired zig-zag behavior when collecting rewards at nearly equal distances.

(2) Line. This graph demonstrates a scenario in which greedy algorithms like NN are likely to fail. The rewards are located in three different groups, each containing about a third of the rewards. Group A is a cluster of rewards located to the left of the origin; group B is a cluster located to the right of the origin, a bit closer than group A; and group C is also located to the right, with its rewards placed at increasing distances one after the other. Inspecting the results, we see that NN and R-NN indeed perform worst here. To understand why, consider the tour that NN takes: it goes to group B first, is then tempted into group C, and loses a lot by going right before coming back for group A. The stochastic tours depend on which group the first (random) reward belongs to.

Figure 1: Evaluation of the deterministic
and stochastic local policies on the different scenarios (random cities, line, random clusters, circle, rural-urban): average results (top) and minimum results (bottom). The cumulative discounted reward of each policy is reported for the average and worst-case settings.

If the first reward belongs to group A, the stochastic policies first collect the rewards on the left in ascending order, then come back and collect the remaining rewards on the right, performing relatively well. However, if it belongs to group B or C, they visit that group first and, like NN, are tempted into group C, losing a lot by going right.

(3) Random clusters. This graph demonstrates the advantage of the stochastic policies. We first randomly place cluster centers on a circle around the origin, and draw rewards at small random Gaussian distances from the centers: to place a reward, we first draw a cluster center j uniformly and then draw the reward's coordinates from Gaussians centered at (c_jx, c_jy). The scenario is motivated by maze-navigation problems, where the collectible rewards are located in rooms (clusters) and some rooms contain fewer rewards to collect. Inspecting the results, we can see that the stochastic policies perform best, in particular in the worst-case scenario. The reason is that NN picks the nearest reward, so its value comes mostly from the rewards collected in the nearest cluster, whereas the stochastic algorithms visit the larger clusters first with higher probability and thus achieve a higher value.

(4) Circle. In this graph there are several circles centered at the origin with increasing radii, and on the i-th circle the rewards are placed at equal distances. NN performs best among the policies, since it collects the rewards on the closer circles first. The greedy continuations of the stochastic policies, on the other hand, are tempted to collect rewards that take them to the outer circles, which results in lower values.

(5) Rural-urban. The rewards are sampled from a mixture of two normal distributions: half of the rewards are located in a "city", i.e., their position is a Gaussian random variable with a small standard deviation, and half are located in a "village", i.e., their position is a Gaussian random variable with a large standard deviation. In this graph we can see that in the worst-case scenario the stochastic policies perform much better: NN may mistakenly choose rewards that take it to remote places in the rural area, while the stochastic algorithms remain near the city with high probability and collect its rewards.

Visualization. We now present visualizations of the tours taken by the different algorithms. In order to distinguish the algorithms and compare them qualitatively, we compare NN with the stochastic components RDFS and sort (without the balancing coin flip). In all the graphs we present, the rewards are displayed as gray dots on a grid. For each graph type we present a single graph, sampled from the appropriate distribution, and on top of it we display the tours taken by the different algorithms, where each row corresponds to a single algorithm.
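The scenario generators described above can be sketched in a few lines. The parameter names and constants below are illustrative (the paper's exact values are not reproduced here); the sketch only mirrors the two sampling schemes qualitatively.

```python
import math
import random

def random_clusters(n, k, radius, sigma, rng=random.Random(0)):
    """Sample n rewards around k cluster centers placed on a circle of the
    given radius; each reward first draws a center uniformly, then adds
    Gaussian noise (so cluster sizes are random, as in the text)."""
    centers = [(radius * math.cos(2 * math.pi * j / k),
                radius * math.sin(2 * math.pi * j / k)) for j in range(k)]
    rewards = []
    for _ in range(n):
        cx, cy = rng.choice(centers)          # draw a cluster uniformly
        rewards.append((rng.gauss(cx, sigma), rng.gauss(cy, sigma)))
    return rewards

def rural_urban(n, city_sigma, village_sigma, rng=random.Random(0)):
    """Half the rewards come from a tight 'city' Gaussian and half from a
    wide 'village' Gaussian, both centered at the origin."""
    pts = []
    for i in range(n):
        s = city_sigma if i < n // 2 else village_sigma
        pts.append((rng.gauss(0.0, s), rng.gauss(0.0, s)))
    return pts
```

Feeding the pairwise Euclidean distances of such point sets into the policies above reproduces the qualitative comparisons of Figure 1.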
For the stochastic algorithms we present the best (Figure 2) and worst (Figure 3) tours among the different runs; for NN we display a single tour, since it is deterministic. Finally, for better interpretability, we display only the first portion of each tour, in which the policy collects most of its value, unless mentioned otherwise.

Discussion. (1) Random cities: inspecting the worst tours, we can see that the stochastic tours are longer than the NN tour, due to the distance to the first reward, which is drawn at random. In addition, we can observe the zig-zag behavior of the sort tour when collecting rewards that are almost equally distant from the random first reward, which causes sort to perform worst in this scenario. The best tours exhibit similar behavior, but in these cases the first reward happens to be located closer to the origin. (2) Line: recall that in this graph the rewards are located in three groups, ordered from left to right: group A, a cluster of rewards located to the left of the origin; group B, a cluster located to the right at a slightly shorter distance from the origin; and group C, also located to the right, with rewards placed at increasing distances one after the other. For visualization purposes we added small variance to the locations of the rewards within each group and rescaled the axes; the two vertical lines of rewards represent the two clusters, and we cropped the graph so that only the first rewards of group C are observed. Finally, we chose to display the first half of each tour, so that we can see which of the first two groups is visited by each tour. Examining the best tours (Figure 2), we can see that NN first visits group B and is then tempted to go right towards group C, which harms its performance. The stochastic algorithms, on the other hand, stay closer to the origin and collect the rewards of groups A and B first; in both the best-case and the worst-case tours, all the stochastic algorithms perform relatively well. (3) Random clusters: we can see that NN first visits the cluster nearest to the origin, but the nearest cluster is not necessarily the largest one; in practice, NN collects the rewards of the nearest cluster and only then traverses the remaining clusters, which results in lower performance.
see indeed taking tours lead outer circles hand rdfs staying closer origin local behavior beneficial rdfs achieves best performance scenario however performs well best case performance much worse algorithms worst case hence average performance worst scenario rural urban graph large rural area city located near origin hard visualize improve visualization chose first half tour displayed since half rewards belong city choosing ensures tour reaching city first segment tour tour reaches city displayed looking best tours see taking longest tour reaches city stochastic algorithms reach earlier stochastic algorithms collect many rewards traversing short distances therefore perform much better scenario related work rules option selection used several studies karlsson suggested policy chooses greedily withprespect sum local qvalues argmaxa humphrys suggested choose option highest local argmaxa greedy combination local policies optimized separately may necessarily perform well barreto considered transfer framework similar definition focus collectible reward decomposition definition instead proposed framework rewards linear reward features wit similar humphrys suggested using rule option selection referred gpi addition authors provided performance guarantees using gpi form additive based regret error bounds provide impossibility results contrast prove multiplicative performance guarantees well three stochastic policies also proved first time impossibility results local option selection methods notice definition special case framework considered barreto different approach tackle challenges multitask learning optimize options parallel policy options russell zimdars sprague ballard van seijen one method achieves goal local sarsa algorithm russell zimdars sprague ballard similar karlsson function learned locally option concerning local reward however local functions learnt using sarsa respect policy options argmaxa instead learned learning russell zimdars showed policy options 
updated in parallel with the local SARSA updates, the local SARSA algorithm is guaranteed to converge to the optimal value function.

6. Conclusions

In this work we provided theoretical guarantees for reward decomposition in deterministic MDPs, which allows one to learn many policies in parallel and combine them into a composite solution efficiently and safely. In particular, we focused on approximate solutions that are local, and therefore easy to implement and undemanding of computational resources. While local deterministic policies, like nearest neighbor, are being used in practice for hierarchical reinforcement learning, our study provides, for the first time, theoretical guarantees on the reward collected by three stochastic policies, as well as impossibility results for any local policy. Our policies outperform NN in the worst case, and when evaluated empirically they do better in both the average and worst-case scenarios, suggesting that they should be preferred in practice.

References

Bacon, P.-L., Harb, J., and Precup, D. The option-critic architecture.
Barreto, A., Munos, R., Schaul, T., and Silver, D. Successor features for transfer in reinforcement learning. Advances in Neural Information Processing Systems.
Barto, A. and Mahadevan, S. Recent advances in hierarchical reinforcement learning. Discrete Event Dynamic Systems.
Mann, T. A., Mannor, S., and Precup, D. Approximate value iteration with temporally extended actions. Journal of Artificial Intelligence Research (JAIR).
Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A., Veness, J., Bellemare, M., Graves, A., Riedmiller, M., Fidjeland, A., Ostrovski, G., et al. Human-level control through deep reinforcement learning. Nature.
Beattie, C., Leibo, J., Teplyashin, D., Ward, T., Wainwright, M., Lefrancq, A., Green, S., Sadik, A., et al. DeepMind Lab. arXiv preprint.
Moravcik, M., Schmid, M., Burch, N., Lisy, V., Morrill, D., Bard, N., Davis, T., Waugh, K., Johanson, M., and Bowling, M. DeepStack: Expert-level artificial intelligence in heads-up no-limit poker. Science.
Bellemare, M., Naddaf, Y., Veness, J., and Bowling, M. The Arcade Learning Environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research (JAIR).
Oh, J., Guo, X., Lee, H., Lewis, R., and Singh, S. Action-conditional video prediction using
deep networks in Atari games. Advances in Neural Information Processing Systems.
Bellman, R. Dynamic programming treatment of the travelling salesman problem. Journal of the ACM (JACM).
Oh, J., Singh, S., Lee, H., and Kohli, P. Zero-shot task generalization with multi-task deep reinforcement learning. Proceedings of the International Conference on Machine Learning.
Blum, A., Chawla, S., Karger, D., Lane, T., Meyerson, A., and Minkoff, M. Approximation algorithms for orienteering and discounted-reward TSP. SIAM Journal on Computing.
Rosenkrantz, D., Stearns, R., and Lewis, P. An analysis of several heuristics for the traveling salesman problem. SIAM Journal on Computing.
Farbstein, B. and Levin, A. Discounted reward TSP. Algorithmica.
Russell, S. and Zimdars, A. Q-decomposition for reinforcement learning agents.
Held, M. and Karp, R. A dynamic programming approach to sequencing problems. Journal of the Society for Industrial and Applied Mathematics.
Silver, D., Huang, A., Maddison, C., Guez, A., Sifre, L., van den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., et al. Mastering the game of Go with deep neural networks and tree search. Nature.
Higgins, I., Pal, A., Rusu, A., Matthey, L., Burgess, C., Pritzel, A., Botvinick, M., Blundell, C., and Lerchner, A. DARLA: Improving zero-shot transfer in reinforcement learning. Proceedings of the International Conference on Machine Learning.
Humphrys, M. Action selection methods using reinforcement learning. From Animals to Animats.
Johnson, D. and McGeoch, L. The traveling salesman problem: A case study in local optimization. Local Search in Combinatorial Optimization.
Karlsson, J. Learning to solve multiple goals. PhD thesis.
Kempka, M., Wydmuch, M., Runc, G., Toczek, J., and Jaskowski, W. ViZDoom: A Doom-based AI research platform for visual reinforcement learning. Computational Intelligence and Games (CIG), IEEE Conference.
Kulkarni, T., Narasimhan, K., Saeedi, A., and Tenenbaum, J. Hierarchical deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation. Advances in Neural Information Processing Systems.
Silver, D., Hubert, T., Schrittwieser, J., Antonoglou,
I., Lai, M., Guez, A., Lanctot, M., Sifre, L., Kumaran, D., Graepel, T., et al. Mastering chess and shogi by self-play with a general reinforcement learning algorithm. arXiv preprint.
Sprague, N. and Ballard, D. Multiple-goal reinforcement learning with modular Sarsa(0).
Sutton, R., Precup, D., and Singh, S. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence.
Teh, Y., Bapst, V., Pascanu, R., Heess, N., Quan, J., Kirkpatrick, J., Czarnecki, W., and Hadsell, R. Distral: Robust multitask reinforcement learning. Advances in Neural Information Processing Systems.
Tesauro, G. Temporal difference learning and TD-Gammon. Communications of the ACM.
Tessler, C., Givony, S., Zahavy, T., Mankowitz, D., and Mannor, S. A deep hierarchical approach to lifelong learning in Minecraft. AAAI.
van Seijen, H., Fatemi, M., Romoff, J., Laroche, R., Barnes, T., and Tsang, J. Hybrid reward architecture for reinforcement learning. Advances in Neural Information Processing Systems.
Vezhnevets, A. S., Osindero, S., Schaul, T., Heess, N., Jaderberg, M., Silver, D., and Kavukcuoglu, K. FeUdal networks for hierarchical reinforcement learning. Proceedings of the International Conference on Machine Learning.
Yao, A. Probabilistic computations: Toward a unified measure of complexity. Proceedings of the Annual Symposium on Foundations of Computer Science, IEEE Computer Society.

Appendix

A. Performance of NN

Theorem (performance of NN). For any discounted-reward TSP instance with n nodes and discount factor gamma, NN collects at least OPT/n.

Proof. Denote by d_0 the distance from the origin to the nearest reward. The distance from the origin to the first reward collected by OPT is at least d_0; thus, with the rewards ordered in the order in which OPT collects them, each of the at most n rewards contributes at most gamma^{d_0} to OPT, and we get OPT <= n * gamma^{d_0}. On the other hand, the NN heuristic chooses the nearest reward in the first round, so its cumulative reward is at least gamma^{d_0}. Combining the two inequalities gives NN >= OPT/n.

B. Impossibility results for stochastic local policies

Theorem (impossibility for stochastic local policies). For any stochastic local policy there exists a graph with n nodes and a discount factor gamma on which the expected collected reward is at most a vanishing fraction of OPT.

Proof. Consider a family of graphs, each consisting of a star with a central vertex and n leaves, with the starting vertex at the center and
a reward at each leaf, where the lengths of the edges from the center to the leaves are all chosen to be equal. Each graph in the family corresponds to a different subset of leaves that are pairwise connected to form a clique. Since the policy cannot distinguish the clique leaves from the rest, the argument below shows that its expected return on some graph in the family is at most a vanishing fraction of OPT.

C. Impossibility results for deterministic local policies

Theorem (impossibility for deterministic local policies). For any deterministic local policy there exists a graph with n nodes and a discount factor gamma on which the policy collects at most O(OPT/n).

Proof. Consider a family of graphs, each consisting of a star with a central vertex and n leaves, with the starting vertex at the center, a reward at each leaf, and all center-to-leaf edges of equal length. Each graph in the family corresponds to a different subset of leaves that are connected pairwise by short edges, while the remaining leaves are connected only through the central vertex. From the central vertex, a local policy cannot distinguish among the rewards, as they all lie at the same distance from the origin; therefore, for any choice the policy makes, there exists a graph in the family on which the pairwise-connected (adjacent) rewards are visited last, and the bound follows.

For the stochastic case, a local policy at the central vertex likewise cannot distinguish among the rewards; therefore, whatever distribution it uses to pick the first reward, it continues to choose rewards from the same distribution until it hits the first vertex of the clique. To argue formally that every policy has small expected reward on some graph in the family, we use Yao's principle (Yao) and consider the expected reward of a deterministic policy against the uniform distribution over the graphs in the family. Let p_1 be the probability that the policy picks a clique vertex first; assuming the first pick is not a clique vertex, let p_2 be the probability that the second pick is a clique vertex, and define p_3, p_4, and so on similarly. Once the policy hits a clique vertex it can collect all the clique rewards with little further discount; however, each time it misses the clique it collects only a single reward and suffers an additional discount. Neglecting the rewards collected before hitting the clique and summing the resulting series shows that the total expected value is at most a vanishing fraction of OPT. Since this bound holds in expectation over the family, for every policy there exists a graph on which its value is at most that fraction; the last inequality follows from the triangle inequality.

D. Performance of the stochastic policies: detailed steps

Theorem (performance of NN-sort, restated). For any discounted-reward TSP instance with n nodes and discount factor gamma, NN-sort collects at least the bound stated in the main text.

Step 3. Assume OPT gets its value from rewards it collects in some segment, while the rewards it collects beforehand contribute negligible value. Recall that the policy performs either NN (with probability 1/2) or the stochastic component (with probability 1/2). If the single reward closest to the starting point accounts for a large fraction of OPT, then NN gets at least that value; otherwise, with constant probability the stochastic component starts at one of the rewards picked by OPT, and the analysis of step 1 applies. The lower bound is the smaller of the two cases.

Step 1. Assume OPT collects a set S_OPT of rewards in a segment of bounded length, where
distance starting point first reward let dmin dmax shortest longest distances reward sopt respectively triangle inequality dmax dmin assume dmin value opt collects rewards sopt negligible let threshold fix denote ccs sopt created deleting edges longer among vertices sopt lemma assume starts vertex component since diameter collects first vertices including within total distance collects least rewards traveling total distance collects least rewards shall omit floor function sequal follows collects brevity min rewards notice first random step leads rdfs vertex probability half rewards ccs min dmax dmax dmax dmax min dmax jensen dmax dmax setting guarantee value least dmax since dmax dmin dmax opt min lower bound smallest case collect opt step arguments step analysis follows opt log opt otherwise log analyze performance guarantees method analysis conducted two steps first step assume opt achieved value collecting rewards consider case second step considers general case analyzes performance worst value emphasize unlike previous two algorithms assume time opt collects rewards segment length theorem discounted reward traveling salesman graph nodes half rewards sopt ccs pmore let number ccs notice get dmax opt opt log proof step assume opt collects rewards define log replace fractional power affect asymptotics result denote ccs obtained pruning edges longer define large contains log rewards observe since ccs lemma least one large exists lemma assume large component let path covered starting reaches large component let length let number rewards collected including last reward back large component including note therefore perform third step like analysis previous methods approximating optimal discounted tsp using local policies proof let prefix ends ith reward let length let distance ith reward reward since ith reward neighbor distance reward thus initial condition solution recurrence lemma log visits large ccs large exists unvisited reward distance shorter proof let reward 
large component collected rewards therefore exists reward collected distance lemma lemma imply following corollary corollary assume log let path kth reward large reward large connected component let denote length number rewards excluding first including last following lemma concludes analysis step lemma let prefix length let number segments connect rewards large ccs contain internally rewards small ccs let number rewards collects ith segment log assume splits exactly segments fact last segment may incomplete requires minor adjustment proof proof since log lemma follows assume log corollary rmax argmax since equation rmax implies since log get taking logs lemma follows lemma guarantees collects log rewards traversing distance next notice chance defined algorithm belongs one large ccs nnlog larger finally similar assume value opt greater constant fraction means opt must collected first rewards traversing distance denote fraction rewards sopt denote dmin dmax see recall traversing distance opt achieved less since already traversed achieve less remaining rewards thus contraction assumption achieved shortest longest distances sopt respectively triangle inequality dmax dmin therefore constant get sopt taking expectation probability first random pick follows log dmax log opt dmin log step similar analysis assume opt collects value rewards collects segment length rewards opt collects negligible value recall either probability random pick probability followed picking single reward closets starting point gets least value opt notice need assume anything length tour opt takes collect rewards since use step follows log log opt thus worst case scenario log implies log therefore log opt approximating optimal discounted tsp using local policies exact solutions possible states optimal policy could present variation algorithm rdtsp note similar tsp denotes length tour visiting cities last one tsp length shortest tour however formulation required definition additional recursive 
quantity accounts value function discounted sum rewards shortest path using notation observe identical tabular smdp since known exponential complexity follows solving using smdp algorithms also exponential complexity since able classify state space polynomial size contains states optimal policy describe dynamic programming scheme algorithm finds optimal policy algorithm computes table maximum value get collecting rewards starting defined analogously starting algorithm first initializes entries either entries correspond cases rewards left right agent collected cases agent continues collect remaining rewards one one order iterates counter number rewards left collect value define combinations partitioning rewards right left agent fill increasing value fill entry take largest among value collect rewards appropriately discounted value collect fill analogously algorithm tsp blue rdtsp black input graph nodes end arg end end end opt opt return opt exact solutions simple geometries provide exact polynomial time solutions based dynamic programming simple geometries like line star note solutions exact polynomial derived general geometries dynamic programming line given instance rewards located single line denoted integers left right easy see optimal policy collects time either nearest reward right left current location thus time set rewards already collected lie continuous interval first uncollected reward left origin denoted first uncollected reward right origin denoted action take next either collect collect follows state optimal agent uniquely defined triple current location agent observe therefore optimal value starting position note algorithm computes value function get policy one merely track argmax maximization step dynamic programming star consider instance rewards located rewards connected central connection point via one lines rewards along ith line denote rewards ith line mij ordered origin end line focus case agent starts easy see optimal policy collects time 
uncollected reward nearest origin along one lines thus time set rewards already collected lie continuous intervals origin first uncollected reward along line denoted action take next collect one nearest uncollected rewards follows state optimal agent uniquely defined current location agent observe tuple therefore dnd possible states optimal policy could since able classify state space polynomial size contains states optimal policy describe dynamic programming scheme algorithm finds optimal policy algorithm computes table maximum value get collecting rev wards mini starting algorithm first initializes entries except exactly one entry entries correspond cases rewards collected except one line segment cases agent continues collect remaining rewards one one order iterates counter number rewards left collect value define combinations partitioning rewards among lines fill increasing value take fill entry largest among values collecting rewards general case solved applying algorithm origin reached followed algorithm approximating optimal discounted tsp using local policies algorithm optimal solution line rewards denoted left right denote distance reward reward denote maximum value get collecting rewards starting reward similarly denote maximum value get collecting rewards starting leftmost rightmost reward collected define input graph nodes init end max end max end end end mini mdnd appropriately discounted note algorithm computes value function get policy one merely track argmax maximization step algorithm optimal solution denote amount rewards collect ith line denote mij rewards along line center star end line denote dmti mkj distance reward line reward line first uncollected reward along line denoted maximum value get collecting remaining rewards starting reward defined rewards collected line define input graph rewards init mdnd end end end mij max end end end end
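The polynomial-time dynamic program for rewards on a line described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's pseudocode: the function name, the memoized state (i, j, pos), and unit rewards are our own assumptions. It relies on the observation stated in the text that the collected rewards always form a contiguous interval around the origin, so the state is how many rewards have been taken on each side plus the agent's current position.

```python
from functools import lru_cache

def best_discounted_line(positions, gamma):
    """Optimal discounted reward collection on a line (sketch).

    positions: reward locations on the real line; the agent starts at 0.
    Because collected rewards form a contiguous interval around the
    origin, the state (i, j, pos) -- i rewards taken on the left,
    j on the right, agent at pos -- suffices, giving O(n^2) states.
    """
    left = sorted((p for p in positions if p < 0), reverse=True)  # nearest first
    right = sorted(p for p in positions if p >= 0)                # nearest first

    @lru_cache(maxsize=None)
    def value(i, j, pos):
        # Either stop, extend the interval leftward, or extend it rightward.
        best = 0.0
        if i < len(left):
            d = abs(pos - left[i])
            best = max(best, gamma ** d * (1 + value(i + 1, j, left[i])))
        if j < len(right):
            d = abs(pos - right[j])
            best = max(best, gamma ** d * (1 + value(i, j + 1, right[j])))
        return best

    return value(0, 0, 0.0)
```

For example, with rewards at 1 and -2 and gamma = 0.5, collecting the right reward first is optimal even though a greedy agent at the origin is indifferent only to distance, matching the text's point that the optimal policy always moves to one of the two nearest uncollected rewards but must choose the side by look-ahead.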
quantum mds codes constacyclic codes large minimum distance liangdong lua wenping maa ruihu lib yuena mab yang liub hao caoa mar state key laboratory integrated services networks xidian university shaanxi china email kelinglv college science air force engineering university shaanxi china abstract formalism allows arbitrary classical linear codes transform quantum error correcting codes eaqeccs using entanglement sender receiver work propose decomposition defining set constacyclic codes using method construct four classes quantum mds eaqmds codes based classical constacyclic mds codes exploiting less maximally entangled states show class eaqmds minimum distance upper limit greater much larger minimum distance known quantum mds qmds codes length eaqmds codes new sense parameters covered codes available literature index terms quantum error correcting codes eaqeccs mds codes cyclotomic cosets constacyclic code introduction quantum codes play important role quantum information processing quantum computation stabilizer formalism allows standard quantum codes constructed classical codes however condition forms barrier development quantum coding theory ref brun proposed stabilizer formalism shown classical codes used construct eaqeccs shared entanglement available sender receiver let prime power eaqecc encodes information qubits channel qubits help pairs bell states ebits correct errors minimum distance code called standard quantum code denote eaqecc currently many works focused construction eaqeccs based classical linear codes see classical coding theory one central tasks quantum coding theory construct quantum codes codes best possible minimum distance theory singleton bound eaqecc satisfies eaqecc achieving bound called eaqmds code bound quantum singleton bound code achieving bound called quantum qmds code classical linear codes qmds codes eaqmds codes form important family quantum codes constructing qmds codes eaqmds codes become central topic quantum codes many 
classes qmds codes constructed different methods particular constructions obtained constacyclic codes negacyclic codes containing hermitian dual according mds conjecture mds code exceed therefore larger distance code length one need construct mds code following proposition one frequently used construction methods proposition classical code parity check matrix stabilizes eaqecc number maximally entangled states required conjugate matrix resent years many scholars constructed several quantum codes good parameters proposed concept decomposition defining set cyclic codes construct good quantum codes help concept constructed families quantum codes primitive quaternary bch codes paper construct several families mds codes length classical constacyclic codes chen points constacylic codes exist order divisor construct mds codes constacyclic codes negacyclic codes respectively precisely main contribution new quantum mds codes follows odd prime power even odd prime power form even odd prime power form even odd prime power divisor construction obtain mds codes minimal distance upper limit greater consuming four maximally entangled states construction consumed one pair maximally entangled states eaquantum mds code half length comparing standard qmds code minimum distance constructed construction consuming one pair maximally entangled states obtain family mds codes minimal distance larger standard quantum mds codes ref comparing parameters known mds codes find quantum mds codes new sense parameters covered codes available literature paper organized follows section basic notations results eaquantum codes constacyclic codes provided concept decomposition defining set constacyclic codes stated extended decomposition defining set cyclic codes proposed section give new classes mds codes conclusion given section preliminaries section review basic results constacyclic codes bch codes eaqeccs purpose paper details bch codes constacyclic codes found standard textbook coding theory 
eaqeccs please see refs let prime number power denotes finite field elements conjugation denoted given two vectors hermitian inner product defined linear code length hermitian dual code defined called hermitian dual containing code called hermitian code recall results classical constacyclic codes negacyclic code cyclic codes vector linear code length called invariant shift nonzero element moreover called cyclic code called negacyclic code constacyclic code codeword customarily represented polynomial form code turn identified set polynomial representations codewords proper context studying constacyclic codes residue class ring corresponds constacyclic shift ring know linear code length constacyclic ideal quotient ring follows generated monic factors called generator polynomial let primitive rth root unity let gcd exists primitive root unity extension field field hence let let coset modulo containing let code length generator polynomial set called defining set let integer coset modulo contains defined set mod smallest positive integer mod see defining set union cosets module dim lemma let constacyclic code length defining set contains hermitian dual code denotes set mod let constacyclic code defining set denoting deduce defining set see ref since striking similarity cyclic codes constacyclic code give correspondence defining skew aymmetric skew asymmetric follows cyclotomic coset skew symmetric mod otherwise skew asymmetric otherwise skew asymmetric cosets come pair use denote pair following results cosets dual containing constacyclic codes bases discussion lemma let positive divisor order let code length defining set one following holds skew asymmetric coset skew asymmetric cosets pair according lemma described relationship cyclotomic coset however defining set classical codes gave definition decomposition defining set cyclic codes chen defined decomposition defining set negacyclic codes order construct mds codes larger distance code length introduce fundamental 
definition decomposition defining set constacyclic codes definition let primitive rth root unity let code length defining set denote tss tsas factor tss tsas called decomposition defining set determine tss tsas give following lemma characterize lemma let positive divisor order let gcd ordrn mod skew symmetric form skew asymmetric pair mod mod using decomposition defining set constacyclic code one give decomposition follow lemma let constacyclic code defining set tss tsas decomposition denote constacyclic codes defining set tsas tss respectively using decomposition defining set one calculate number needed ebits algebra method lemma let defining set constacyclic code tss tsas decomposition using stabilizer optimal number needed ebits tss proof according definition denote defining sets constacyclic codes tss tsas respectively parity check matrix parity check matrix respectively let since parity check matrix defining set tsas therefore according one obtain rank rank since parity check matrix defining set tss matrix hence rank lemma bch bound constacyclic codes let code length primitive rth root unity let primitive root unity extension field assume generator polynomial roots include set minimum distance least theorem let constacyclic code defining set decomposition tss tsas stabilizes eaqecc proof dimension proposition lemma lemma know stabilizes eaqecc parameters new mds codes section consider codes length construct eaquantum codes give sufficient condition decomposition defining set codes length contain hermitian duals first compute cosets modulo constacyclic codes negacyclic codes mds codes lenght mod subsection construct classes mds codes length prime odd prime power form mod first let consider negacyclic codes length construct codes give decomposition defining set sufficient condition negacyclic codes length contain hermitian duals lemma let mod cosets modulo containing odd integers form lemma let mod ary negacyclic code length define set decomposition defining set 
tss tsas forms pair proof since mod mod hence forms pair let according ref one obtain means since forms pair tss prises set least according concept decomposition defining set one obtain tsas order testify definition lemma need testify skew symmetric cyclotomic coset two cyclotomic coset form skew asymmetric pair tsas let need testy mod let lemma skew symmetric cyclotomic coset form skew asymmetric pair mod hence skew symmetric cyclotomic cosets two cyclotomic coset form skew asymmetric pair implies tss defining set theory let odd prime power mod exists mds codes even proof consider negacyclic codes length defining set odd prime power mod lemma since every coset must odd number obtain consists integers implies minimum distance least hence negacyclic code parameters combining theory singleton bound obtain mds code parameters even table mds codes length mod even even even even even even even even even even even even mds codes lenght mod subsection construct classes mds codes length prime odd prime power form mod let primitive rth root unity give decomposition defining set sufficient condition codes length contain hermitian duals lemma let mod cosets modulo containing odd integers form lemma let mod constacyclic code length define set decomposition defining set tss tsas forms skew asymmetric pair proof let lemma since mod forms pair let let need testy mod let lemma skew symmetric cyclotomic coset form skew asymmetric pair mod tss since forms pair tss comprises set least according concept decomposition defining set one obtain tsas order testify definition lemma need testify skew symmetric cyclotomic coset two cyclotomic coset form skew asymmetric pair tsas hence skew symmetric cyclotomic cosets two cyclotomic coset form skew asymmetric pair implies tss defining set theory let odd prime power mod exists mds codes even proof consider constacyclic codes length defining odd prime power mod set lemma since every coset must odd number obtain consists integers mod implies 
minimum distance least hence constacyclic code parameters combining theory singleton bound obtain mds code parameters even table mds codes length mod even even even even even even mds codes lenght subsection construct new mds codes length odd prime power form let primitive rth root unity give result coset modulo lemma let coset modulo containing odd integers form lemma let prime power form negacyclic code length define set decomposition defining set tss tsas skew symmetric proof since mod skew symmetric let since skew symmetric tss comprises set least according concept decomposition defining set one obtain tsas order testify definition lemma need testify skew symmetric cyclotomic coset two cyclotomic coset form skew asymmetric pair tsas let need testy mod tss implies lemma skew symmetric cyclotomic coset form skew asymmetric pair mod divide three parts hence skew symmetric cyclotomic cosets two cyclotomic coset form skew asymmetric pair implies tss defining set theory let odd prime power form positive integer let exists mds codes even defining set proof consider negacyclic codes length odd prime power form lemma since every coset must odd number obtain consists integers implies minimum distance least hence negacyclic code parameters combining theory singleton bound obtain mds code parameters even recently based constacyclic codes zhang constructed family new quantum mds codes parameters even however paper construction minimal distance quantum mds codes parameters even theory help ebit construct family new mds codes parameters table eaqmds codes even even even even lemma let prime power form negacyclic code length define set decomposition defining set tss tsas skew symmetric proof since mod skew symmetric let since skew symmetric tss comprises set least according concept decomposition defining set one obtain tsas order testify definition lemma need testify skew symmetric cyclotomic coset two cyclotomic coset form skew asymmetric pair tsas let need testy mod tss 
implies lemma skew symmetric cyclotomic coset form skew asymmetric pair mod divide three parts hence skew symmetric cyclotomic cosets two cyclotomic coset form skew asymmetric pair implies tss defining set theory let odd prime power form positive integer let exists mds codes even defining set proof consider negacyclic codes length odd prime power form lemma since every coset must odd number obtain consists integers implies minimum distance least hence negacyclic code parameters combining theory eaquantum singleton bound obtain mds code parameters even table eaqmds codes even even even even mds codes lenght let odd prime power suppose let primitive root unity since clearly every coset modulo contains exactly one element using hermitian construction codes chen constructed family new quantum mds codes parameters odd prime power hard enlarge minimal distance type quantum mds codes subsection adding one ebit construct new family family new mds codes parameters lemma let odd prime power constacyclic code length define set decomposition defining set tsas skew symmetric proof since skew symmetric let mod since skew symmetric tss comprises set least according concept decomposition defining set one obtain tsas order testify definition lemma need testify skew symmetric cyclotomic coset two cyclotomic coset form skew asymmetric pair tsas let need testy mod tss implies lemma skew symmetric cyclotomic coset form skew asymmetric pair mod since hence divide three parts integer since hence integer divide two parts divide two parts hence skew symmetric cyclotomic cosets two cyclotomic coset form skew asymmetric pair implies defining tss set theory let odd prime power exists definh proof consider constacyclic codes length ing set odd prime power lemma since every coset one element must odd number obtain consists integers implies minimum distance least stacyclic code parameters hence combining theory singleton bound obtain mds code parameters table eaqmds codes summary paper propose 
concept decomposition defining set constacyclic codes using concept construct four classes quantum mds eaqmds codes based classical constacyclic mds codes exploiting less maximally entangled states table list quantum mds codes constructed paper consuming four maximally entangled states obtain mds codes minimal distance upper limit greater even mds codes improved parameters codes ref consumed one pair maximally entangled states mds code length half length comparing standard qmds code minimum distance constructed ref moreover consuming one pair maximally entangled states obtain family mds codes minimal distance larger standard quantum mds codes ref comparing parameters known mds codes find quantum mds codes new sense parameters covered codes available literature table new parameters eaqmds codes class length distance even even even remark paper submitted journal designs codes cryptography acknowledgment work supported national key program china grant national natural science foundation china grant national natural science foundation china grant natural science foundation shaanxi grant project grant key project science research anhui province grant references shor scheme reducing decoherence quantum memory phys rev calderbank shor good quantum codes exist phys rev calderbank rains shor sloane quantum error correction via codes ieee trans inf theory vol ketkar klappenecker kumar sarvepalli nonbinary stabilizer codes finite fields ieee trans inform theory vol brun devetak hsieh correcting quantum errors entanglement science hsieh devetak brun general quantum errorcorrecting codes wilde brun optimal entanglement formulas quantum coding phys rev lai brun entanglement increases ability quantum codes phys rev lai brun wilde duality quantum error correction ieee trans inf theory lai brun wilde dualities identities quantum codes quantum inf process hsieh yen hsu high performance quantum ldpc codes need little entanglement ieee trans inf theory fujiwara clark vandendriessche 
boeck tonchev entanglementassisted quantum codes phys rev vol wilde hsieh babar quantum turbo codes ieee trans inf theory quantum codes constructed primitive quaternary bch codes int quantum guo maximal entanglement quantum codes constructed linear codes quantum inf process guo linear plotkin bound quantum codes phys rev construction quantum mds codes odd prime power phys rev chen ling zhang application constacyclic codes quantum mds codes ieee trans inform theory chen new quantum mds codes distances bigger quantum inf jin kan wen quantum mds codes relatively large minimum distance hermitian codes des codes kai zhu new quantum mds codes negacyclic codes ieee trans inform theory kai zhu constacyclic codes new quantum mds codes ieee trans inform theory wang zhu new quantum mds codes derived constacyclic codes quantum inf zhang new classes quantum mds codes constacyclic codes ieee trans inform theory zhang quantum mds code large minimum distance des codes zuo liu study skew asymmetric coset application air force eng univ nat sci chinese zuo liu hermitian dual containing bch codes construction new quantum codes quantum inf comp macwilliams sloane theory codes amsterdam netherlands huffman pless fundamentals codes cambridge university press cambridge yang cai constacyclic codes finite fields designs codes vol qian zhang quantum codes arbitrary binary linear codes des codes qian zhang mds linear complementary dual codes entanglementassisted quantum codes des codes chen huang feng chen quantum mds codes constructed negacyclic codes quantum inf doi
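The q²-cyclotomic cosets modulo rn and the skew-symmetry test that drive all of the defining-set decompositions above can be computed mechanically. The sketch below is ours (function names and the dict layout are assumptions); the classification criterion — C_s is skew symmetric iff −qs mod rn lies in C_s, and otherwise C_s, C_{−qs} form a skew asymmetric pair — follows the lemma stated in the paper.

```python
def cyclotomic_cosets(q, r, n):
    """q^2-cyclotomic cosets modulo rn (sketch; assumes gcd(rn, q) = 1).

    Returns a dict mapping the smallest representative s to its coset
    C_s = {s, s*q^2, s*q^4, ...} (mod rn), sorted.
    """
    m = r * n
    seen, cosets = set(), {}
    for s in range(m):
        if s in seen:
            continue
        coset, x = [], s
        while x not in coset:      # multiply by q^2 until the orbit closes
            coset.append(x)
            x = (x * q * q) % m
        seen.update(coset)
        cosets[min(coset)] = sorted(coset)
    return cosets

def is_skew_symmetric(coset, q, m):
    """C_s is skew symmetric iff -q*s mod m lies in C_s itself."""
    s = coset[0]
    return (-q * s) % m in coset
```

For instance, with q = 3 and negacyclic length n = 5 (so r = 2, m = 10), the coset {5} is skew symmetric while {1, 9} and {3, 7} form a skew asymmetric pair, which is the kind of bookkeeping used when splitting a defining set T into T_ss and T_sas to count the needed ebits.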
finding large set covers faster via representation jesper department mathematics computer science eindhoven university technology netherlands aug abstract fastest known algorithm set cover problem universes elements still essentially simple dynamic programming algorithm consequences algorithm known motivated chasm study following natural question instances set cover solve faster simple dynamic programming algorithm specifically give monte carlo algorithm determines existence set cover size time approach also applicable set cover instances exponentially many sets reducing task finding chromatic number given graph set cover natural way show randomized algorithm given integer outputs yes constant probability high level results inspired representation method joux eurocrypt obtained evaluating randomly sampled subset table entries dynamic programming algorithm acm subject classification graph algorithms hypergraphs keywords phrases set cover exact exponential algorithms complexity digital object identifier introduction set cover problem determining satisfiability cnf formulas boolean circuits one canonical problems directly models many applications practical settings also algorithms routinely used tools theoretical algorithmic results problem whose study led development fundamental techniques entire field approximation however exact exponential time complexity set cover still somewhat mysterious know algorithms need use time assuming denoting universe size time assuming exponential time hypothesis large exponential clear particular consequences algorithm currently known even though one canonical problems amount studies exact algorithms set cover pales comparison amount literature exact algorithms many works focus finding algorithms funded nwo veni project work partly done author visiting simons institute theory computing program complexity algorithm design fall wikipedia page set cover quotes textbook vazirani jesper nederlof licensed creative commons license annual 
european symposium algorithms esa editors piotr sankowski christos zaroliagis article leibniz international proceedings informatics schloss dagstuhl informatik dagstuhl publishing germany finding large set covers faster via representation method special cases among others bounded clause width bounded clause density projections improved exponential time algorithms special cases problems also studied graph coloring traveling salesman graphs bounded degree paper interested exponential time complexity set cover study properties sufficient improved exponential time algorithms interest finding faster exponential time algorithms set cover stem canonical problem also unclear relation intriguingly one hand set cover similarities cnfsat problems take annotated hypergraph input improvability complexity essentially equivalent improvability complexity hitting set set cover hand problems quite different understanding algorithms set cover use dynamic programming variant inclusion exclusion algorithms based branching connection exponential time complexities problems known see one hope would better understanding exact complexity set cover might shed light unclarity moreover cygan also show would like improve run time several parameterized algorithms first need find algorithm set cover parameterized algorithms include classic algorithm subset sum well recent algorithms connected vertex cover steiner tree relevant previous work algorithmic results set cover relevant work follows folklore dynamic programming algorithm runs time notable special case set cover solved time due koivisto gives algorithm runs time algorithm sets size show problem solved poly time faster number sets exponentially large give randomized algorithm assumes sets size determines whether exist pairwise disjoint sets time depends main results investigate sufficient structural properties instances set cover closely related set partition picked sets need disjoint problems solvable time significantly faster currently 
known algorithms outline main results theorem monte carlo algorithm takes instance set cover elements sets integer input determines whether exists set cover size time remark generalizes result koivisto sense solves larger class instances time set sizes bounded constant set partition needs consist least sets theorem applies one way stating hitting set context instance set cover problem aim find time algorithm denotes number sets jesper nederlof although gives slower algorithm koivisto special case moreover seems hard extend approach koivisto general setting second result demonstrates techniques also applicable set cover instances exponentially many sets canonical example graph coloring theorem randomized algorithm given graph integer time outputs yes constant probability representation method set cover feel main technique used paper equally interesting result therefore elaborate origin technique high level inspired following simple observation ingeniously used howgravegraham joux suppose set solutions implicitly given seek solution listing sets performing pairwise checks see two combine restrict search various pairs guiding subsequent ways since many works including idea used speed attacks also called birthday attacks chapter refer uses idea representation method since crucially relies fact many representations pairs indicate power technique context set cover set partition show without changes already gives monte carlo algorithm set partition problem sets even general linear satisfiability problem variables latter problem improves time algorithm based attack fastest known first sight representation method seemed inherently useful improving algorithms based attack however main conceptual contribution work show also useful settings least improving dynamic programming algorithm set cover set partition problems solution size large high level show follows case set subset elements set partition instance define minimum number disjoint sets needed cover elements stated 
Slightly oversimplifying, we argue that in a minimal set partition of large size most sets must be small, and that consequently many subfamilies of the solution have a union whose size is close to half the universe. We make this precise in later sections; for now let us only remark that we refer to such a union as a witness half. We subsequently exploit the presence of many witness halves using a dynamic programming algorithm: it samples a subfamily of the input sets and, for every subset of the universe of size close to half, evaluates the table entries for the sample plus the table entries required to compute them.

Organization. The paper is organized as follows. We first recall preliminaries and introduce notation. We then discuss new observations and basic results that we feel are useful for developing a better understanding of the complexity of Set Cover with respect to several structural properties of its instances. Next we formally present the notion of witness halves and prepare tools for exploiting the existence of many witness halves. Afterwards we prove our main results, and we conclude by suggesting directions for further research.

Preliminaries and notation. For a real number x, |x| denotes its absolute value, and for a boolean predicate p we let [p] denote 1 if p is true and 0 otherwise. On the other hand, for an integer i, [i] denotes {1, ..., i} as usual, and N denotes the positive integers. Running times of algorithms are often stated using O*() notation, which suppresses factors polynomial in the input size. To avoid superscripts we sometimes use exp() to denote the exponential function, and log denotes the base-2 logarithm. We extend the [.] notation to reals by letting [x, y] denote the interval of reals between x and y. A false positive (respectively, false negative) of an algorithm is an instance on which it incorrectly outputs yes (respectively, no). In this work we call an algorithm Monte Carlo if it has no false positives and every instance is a false negative with probability at most 1/2. We denote vectors in boldface for clarity.

For a real number 0 <= a <= 1, h(a) denotes the binary entropy. It is well known (and, for example, proved using Stirling's approximation) that binomial coefficients of the form C(n, an) can be estimated in terms of 2^{h(a)n}. It is easy to see from the definition that h is symmetric in the sense that h(a) = h(1 - a). The following elementary bounds on h, used later, can be verified using standard calculus. We also use the Hoeffding bound: if X_1, ..., X_n are independent bounded random variables, then the probability that their sum deviates from its expectation by t is at most exp(-Omega(t^2/n)).

Set Cover and Set Partition. In the Set Cover problem we are given a bipartite graph (F, U, E) — shorthand for a family F and a universe U — together with an integer k, and the task is to determine whether there exists a solution: sets S_1, ..., S_k in F whose union is U. In the Set Partition problem the input is the same, and we need to determine whether there exists a solution in which additionally every pair of sets is disjoint. We refer to solutions of these problems as set covers and set partitions. Throughout the paper we let n and m respectively denote |U| and |F|, and we quantify running times in these parameters.

Since this work concerns Set Cover and Set Partition with large solutions, we record the following basic observation, which follows by constructing, for groups of sets of the original instance, a single set of the output instance. Observation (grouping). There is a polynomial-time algorithm that takes a constant c dividing k and a Set Cover (resp. Set Partition) instance as input, and outputs an equivalent Set Cover (resp. Set Partition) instance with solution size parameter k/c.

It is often useful to dispense with sets of linear size. To this end, the following is achieved simply by iterating over the large sets and checking, for each, whether there is a solution containing it, using a polynomial-time reduction to Set Cover or Set Partition on the remaining disjoint part of the instance. Observation (no large sets). There is an algorithm that, given a real e > 0, takes a Set Cover instance as input and outputs equivalent instances satisfying that every set has size at most en; the algorithm runs in poly(n + m) time per produced instance.

As a theorem below shows, it makes a difference for the Set Partition problem whether empty sets are allowed, since we need to find a set partition of size exactly k; when we exclude such sets we simply say the instance is without empty sets.

Observations and basic results on Set Cover and Set Partition. In this section we improve our understanding of which properties of instances of Set Cover and Set Partition allow faster algorithms, and which techniques are useful for obtaining such algorithms, by recording observations and basic results. We stress that the proof techniques of this section are not our main technical contribution, and we postpone the proofs to the appendix. We prefer to state results in terms of Set Cover where this is slightly more natural or common; since Set Partition is often easier to deal with for our purposes, we sometimes use the following easy reduction, whose steps are contained in earlier work. Theorem (cover to partition). There is an algorithm that, given a real e > 0, takes a Set Cover instance as input and outputs an equivalent Set Partition instance whose sets have size at most en, within the stated time bound.

For completeness we show that, in fact, Set Cover and Set Partition are equivalent with respect to being solvable in O*((2 - e)^n) time; to the best of our knowledge this was never stated in print. The proof uses standard ideas and can be found in the appendix. Theorem (equivalence). There is an O*((2 - e)^n) time algorithm for Set Cover, for some e > 0, if and only if there is such an algorithm for Set Partition.

The following natural result is a rather direct consequence of a paper of Koivisto, and reveals a similarity with the Exact Satisfiability problem: Koivisto showed that Set Cover with bounded maximum set size can be solved in an analogously improved time bound. Similarly, the following result is a counterpart of algorithms whose running time improves with the density of clauses. The result was never explicitly stated in print to the best of our knowledge, and is therefore proved in the appendix. Theorem (counting). There is an algorithm solving (a counting version of) Set Cover and Set Partition within an improved time bound, times poly, when the sets are small relative to the universe.
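As a quick numerical sanity check of the entropy estimate mentioned above (our own illustration, not part of the paper), one can verify C(n, an) <= 2^{h(a)n} for a few values of n:

```python
import math

def h(a):
    """Binary entropy h(a) = -a*log2(a) - (1-a)*log2(1-a), with h(0)=h(1)=0."""
    if a in (0.0, 1.0):
        return 0.0
    return -a * math.log2(a) - (1 - a) * math.log2(1 - a)

# Check the standard estimate C(n, k) <= 2^(h(k/n) * n) for all k.
for n in (10, 20, 40):
    for k in range(n + 1):
        assert math.comb(n, k) <= 2 ** (h(k / n) * n) + 1e-6
print("C(n, an) <= 2^(h(a)n) verified for n in {10, 20, 40}")
```

The small additive tolerance only guards against floating-point rounding; the inequality itself holds exactly.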
Relevant to this work is the following subtlety about solution sizes for Set Partition. It shows that for Set Partition with empty sets, finding large solutions is as hard as the general case. The proof is postponed to the appendix. Theorem (padding). Suppose there exists an algorithm solving instances of Set Partition with large solutions in O*((2 - e)^n) time; then there exists an O*((2 - e')^n) time algorithm for Set Partition in general.

Koivisto showed that Set Partition with sets of bounded cardinality admits faster algorithms; by the straightforward reductions of this section, such results carry over to Set Cover.

ESA — Finding large set covers faster via the representation method, Jesper Nederlof

Finally, it is insightful to see how well the representation method performs on the Set Partition problem when we consider running times in terms of the number of sets m. A straightforward meet-in-the-middle attack leads directly to an O*(2^{m/2}) time algorithm. We show that the representation method, combined with careful analysis, in fact solves the more general Linear Sat problem faster. In Linear Sat one is given an integer matrix A together with vectors b and w, and the task is to find a vector x in {0,1}^m satisfying Ax = b while minimizing the cost w . x. Theorem (Linear Sat). There is a Monte Carlo algorithm solving Linear Sat in O*(2^{cm}) time for a constant c < 1/2. To the best of our knowledge no comparably fast algorithm for Linear Sat was known. To determine the smallest set partition we get the following. Corollary. Given a bipartite graph (F, U, E), the smallest size of a set partition can be determined in O*(2^{cm}) time for a constant c < 1/2.

We take this as a first signal that the representation method is useful for solving Set Partition and Set Cover instances with a small universe. To see that the corollary is a consequence of the theorem, note that we can reduce the problem to Linear Sat as follows: for every set in F, add its incidence vector as a column of A, let b be the all-ones vector, and let the cost of picking a column be one; then a minimum-cost solution of Ax = b over the integers selects a minimum number of pairwise disjoint sets whose union is U, i.e., a minimum set partition. Let us also remark that earlier work solves the counting version of Set Partition in a comparable time bound, and that Drori and Peleg solve the problem by means of a sophisticated branching algorithm that remains the fastest in some settings. We nevertheless find it intriguing that the representation method works quite well even for the seemingly more general Linear Sat problem.

Exploiting the presence of many witness halves. For convenience we work with Set Partition in this section; the results straightforwardly extend to Set Cover, which we will need in the subsequent section. Definition (witness half). Given an instance (F, U, k) of Set Partition, a subset W of U is said to be a witness half if there exist pairwise disjoint sets S_1, ..., S_j in F with union W and pairwise disjoint sets S_{j+1}, ..., S_k in F with union U \ W. Note that this is similar to the intuitive definition outlined earlier, except that we require a definition adjusted to the Set Partition problem. Since a witness half decomposes a set partition of size k, we see that if a witness half exists the instance is automatically a yes-instance. In this section we give randomized algorithms that solve Set Partition fast on the promise that the instance has an exponential number of witness halves that are sufficiently balanced, i.e., of size close to n/2. In the first subsection we outline the basic algorithm.
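To make the definition concrete, here is a toy illustration (our own, with a hypothetical `slack` balance parameter; not the paper's algorithm): every subfamily of a set partition solution induces a candidate witness half, namely the union of its sets, and the balanced ones are the useful ones.

```python
from itertools import combinations

def witness_halves(solution, universe, slack):
    """Enumerate the witness halves induced by a given set partition.

    `solution` is a list of pairwise-disjoint sets partitioning `universe`.
    Every subfamily of the solution yields a candidate half (the union of
    its sets); we keep those whose size is within `slack` of |universe|/2,
    mirroring the balance requirement sketched in the text.
    """
    n = len(universe)
    halves = set()
    for j in range(len(solution) + 1):
        for sub in combinations(solution, j):
            w = frozenset().union(*sub) if sub else frozenset()
            if abs(len(w) - n / 2) <= slack:
                halves.add(w)
    return halves

# A partition of {0,...,7} into four sets of size 2: every 2-subfamily
# gives a perfectly balanced half, so we expect C(4,2) = 6 of them.
sol = [frozenset({0, 1}), frozenset({2, 3}), frozenset({4, 5}), frozenset({6, 7})]
print(len(witness_halves(sol, set(range(8)), slack=0)))  # → 6
```

Because the sets of a solution are disjoint and non-empty, distinct subfamilies have distinct unions, which is exactly why many subfamilies translate into many distinct witness halves.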
In the second subsection we show how tools from the literature can be combined with this approach to also give a faster algorithm when the number of sets is exponential. We attempted to find a more recent, faster algorithm but did not find one, though we would not be surprised if, using recent tools for branching algorithms, one were able to significantly outperform our algorithm for Set Partition.

The basic algorithm. Theorem (many witness halves). There exists an algorithm that takes a Set Partition instance and real parameters satisfying natural balance constraints as input, and runs within a time bound, times poly, that is strictly smaller than 2^n for admissible parameters, with the following property: if there exist sufficiently many balanced witness halves it returns yes with at least constant probability, and if there exists no set partition of size k it returns no. Note that the theorem does not guarantee anything when the algorithm is run on an instance that has a set partition but few balanced witness halves; we address this later. A high-level description of the algorithm is given in the following figure. The algorithm assumes its parameters satisfy the constraints above, and outputs an estimate of whether there exists a set partition of size k.

  For every admissible integer j:
    Sample A from F by including every set independently with probability 1/2.
    For every sufficiently balanced W, compute the table entries certifying that W is the
      disjoint union of j sampled sets and that U \ W is the disjoint union of k - j sets.
    If some W is certified on both sides, return yes.
  Return no.

Figure: high-level description of the algorithm implementing the theorem.

For a set family A, define c_A[X, j] to be true if there exist pairwise disjoint sets S_1, ..., S_j in A with union X. The following lemma concerns the computation invoked by the algorithm; it is proved via known dynamic programming techniques, and the proof is postponed to the appendix. Lemma (table computation). There exists an algorithm that, given a bipartite graph (A, U, E), computes all required table entries c_A[X, j] in time polynomial per entry.

Thus, in preparation for the proof of the theorem, we only need to analyze the maximum number of table entries evaluated by the algorithm of the figure. Lemma (entry count). Let the parameters be (possibly negative) reals satisfying the balance constraints; then the expected number of evaluated entries is bounded by an exponential function strictly smaller than 2^n. Proof. The number of balanced sets W is bounded via the binary entropy function; for the second bound, note that the entries required form subsets of bounded size, so the total is upper bounded by a maximum, over the two tests, of a minimum of two exponential terms. The remainder of the proof is devoted to upper bounding this quantity, which we establish by evaluating the terms of the minimum at suitable parameter settings. For the first term, the entropy lemma applies by the assumed balance constraints; for the second term, the entropy lemma applies as well, and the penultimate inequality follows since the relevant expression is monotone increasing. Substituting the expressions yields the claimed upper bounds, and we are ready to wrap up this section with the proof of the theorem.

Proof of the theorem (running time). We implement the tabulation step by invoking the algorithm of the table computation lemma, which takes time polynomial per evaluated entry; this is clearly the bottleneck of the algorithm, so it remains to upper bound the expected number of entries. Applying the entry-count lemma — noting that its assumptions indeed hold — the expected running time is as claimed.
Correctness is easily checked: the algorithm never returns false positives, since a certified W exhibits k pairwise disjoint sets with union U. Moreover, if there exist sufficiently many balanced witness halves, then in some iteration of the outer loop there are many witness halves of the appropriate size; by the defining property of a witness half, the sampling step splits one of them correctly with good probability. To actually get an upper bound on the running time, rather than on its expectation, by Markov's inequality we may simply abort iterations in which the number of evaluated entries exceeds twice its expectation. Whenever a correctly split witness half exists, the algorithm returns yes, since the definition of a witness half is exactly what the tables certify; therefore, performing independent trials, the algorithm returns yes with probability at least a constant.

An improvement in the case of exponentially many input sets. In this section we show that, under mild conditions, the existence of many witness halves can also be exploited in the presence of exponentially many sets. This largely builds upon machinery developed in earlier work. To state the result as generally as possible, we assume the sets are given via an oracle whose running time is sublinear in the number of sets. Theorem (oracle version). There exists an algorithm that, given oracle access to a Set Partition instance and real parameters satisfying the balance constraints, runs within the analogous time bound, with the following property: if there exist sufficiently many balanced witness halves it outputs yes with constant probability, and if there exists no set partition of size k it outputs no. Here the oracle is an algorithm that accepts a set X as input and decides whether X is in F within the stated query time.

The proof of this theorem is identical to the proof of the previous theorem and is therefore omitted, except that we use the following lemma instead of the table computation lemma. Lemma (oracle tables). There exists an algorithm that, given oracle access to a bipartite graph (F, U, E), computes the required table values in time polynomial per value, where the oracle accepts a set X as input and decides whether X is in F within the stated query time. The lemma mainly reiterates previous work; since it was developed after the results it builds upon, we prove the lemma in the appendix.

Finding large set covers faster. In this section we use the tools from the previous sections to prove our main results. We first connect the existence of large solutions with the existence of many witness halves via the following lemma. Lemma (large solutions give witness halves). A Set Partition instance without empty sets satisfying k >= sn has a solution if and only if there exist exponentially many (in n) sufficiently balanced witness halves. Proof. Note that the backward direction is trivial, since by definition the existence of a witness half implies the existence of a solution. For the forward direction, suppose S_1, ..., S_k is a set partition with k >= sn, and suppose W is obtained by including every set of the solution independently with probability 1/2. Since the size of the union of the included sets is a sum of independent random variables, the Hoeffding bound shows that the probability that it deviates far from n/2 is exponentially small:
the deviation probability is at most exp(-Omega(k)); the second inequality in this estimate follows since there are 2^k subfamilies in total. Thus exponentially many subfamilies of the solution have a balanced union. Since a subfamily of pairwise disjoint non-empty sets is determined by its union, each such subfamily gives rise to a distinct witness half, so there are at least exponentially many balanced witness halves, as claimed.

We are now ready to prove our first main theorem, which we restate for convenience. Theorem (restated). There is a Monte Carlo algorithm that takes an instance of Set Cover with n elements, m sets and an integer k >= sn as input, and determines whether there exists a set cover of size at most k in O*((2 - e)^n) time, for some e > 0 depending on s. Proof. The algorithm implementing the theorem is given in the following figure.

  Ensure, using the no-large-sets observation, that every set has small size.
  For every admissible integer j:
    Create a Set Partition instance, constructed by adding a padding vertex where needed.
    Run the witness-half algorithm; if it returns yes, return yes.
  Pick an arbitrary balanced subset W of U; find the sizes of the smallest set covers of
    the instances induced by the elements of W and by the elements of U \ W, respectively,
    via standard dynamic programming; if their sum is small enough, return yes.
  Else return no.

Figure: the algorithm for Set Cover with large solutions implementing the theorem.

We first focus on the correctness of the algorithm. It is clear that the algorithm never returns false positives in the witness-half step, and by the choice of the final threshold the algorithm also has this property when yes is returned in the last step. It is also clear that it returns yes with good probability if a solution exists. Suppose S_1, ..., S_k is a set cover of size k >= sn, and first suppose the witness-half case applies; consider the corresponding iteration of the loop. Notice that the constructed instance is a Set Partition instance without empty sets in which every set is small, so the witness-half lemma applies and we see there are exponentially many balanced witness halves; thus the witness-half theorem finds one with constant probability. Otherwise, note that by picking every element twice, the solution becomes a multiset cover implying that for every W the sizes of the smallest set covers of the two induced instances, as defined by the algorithm, sum to at most 2k; thus the final step finds such covers and the algorithm returns yes. For the running time: the first step takes poly time per instance due to the observation, the loop runs within the stated bound due to the witness-half theorem, and the final dynamic programming step is no slower; in total the running time is as claimed in the theorem statement.

As a direct consequence of the tools of the previous section, we also get the following result for Set Partition. Theorem (Set Partition, oracle). There exists a Monte Carlo algorithm for Set Partition that, given oracle access to an instance satisfying k >= sn in which every set is non-empty, runs within an analogous time bound, where the oracle accepts a set X as input and decides whether X is in F within the stated query time. Proof. The witness-half lemma implies that a yes-instance has exponentially many balanced witness halves; thus the oracle theorem implies the statement. Note that the theorem also implies an algorithm for Set Partition when the sets are given explicitly: construct a binary search tree to implement the oracle, and run the algorithm with logarithmic query time.

Remark. It would be interesting to see whether the assumption that the instance has no empty sets is needed; removing the assumption seems to require ideas beyond the ones in this work.
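The overall shape of the approach can be caricatured in a few lines. The following is our own simplified Monte Carlo sketch of the witness-half idea, not the paper's tuned algorithm: split the family at random, tabulate both halves, and accept only when a *balanced* tabulated half pairs up with its complement.

```python
import random

def mc_set_partition(n, family, k, slack=1, trials=200):
    """Toy Monte Carlo sketch: repeatedly split the family into random
    halves A and B, tabulate which subsets of {0,...,n-1} are the disjoint
    union of j sets of A, and accept if some balanced half W pairs with a
    complement that is the disjoint union of k - j sets of B.
    One-sided error: a yes answer is always correct."""
    full = (1 << n) - 1
    masks = [sum(1 << e for e in s) for s in family]

    def table(ms):  # t[x][j]: x is a disjoint union of j sets from ms
        t = [[False] * (k + 1) for _ in range(1 << n)]
        t[0][0] = True
        for x in range(1, 1 << n):
            for j in range(1, k + 1):
                t[x][j] = any(m and (x & m) == m and t[x ^ m][j - 1]
                              for m in ms)
        return t

    for _ in range(trials):
        pick = [random.random() < 0.5 for _ in masks]
        ta = table([m for m, p in zip(masks, pick) if p])
        tb = table([m for m, p in zip(masks, pick) if not p])
        for w in range(1 << n):
            if abs(bin(w).count("1") - n / 2) <= slack:
                if any(ta[w][j] and tb[full ^ w][k - j] for j in range(k + 1)):
                    return True
    return False

random.seed(0)
singletons = [{i} for i in range(6)]
print(mc_set_partition(6, singletons, 6))        # many balanced witness halves
print(mc_set_partition(6, [{0, 1}, {2, 3}], 6))  # no: elements 4 and 5 uncovered
```

In this toy version the savings of restricting the tables to balanced halves are not realized (everything is tabulated); the point is only the structure of the argument — with many small solution sets, a random split produces a balanced witness half with good probability.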
For example, a solution consisting of three sets of size n/3 yields no witness half that is sufficiently balanced. Alternatively, handling empty sets via the grouping observation seems too slow. However, we can settle for an additive loss to deal with the issue in a simple way. In particular, as a consequence we obtain the second result mentioned in the beginning of the paper, restated here for convenience. Theorem (coloring, restated). There is a randomized algorithm that, given a graph G on n vertices and a sufficiently large integer k, runs in O*((2 - e)^n) time and outputs yes with constant probability on yes-instances. Proof. Define a Set Partition instance in which F contains every independent set of G and U is the vertex set, so every set is an independent set and every element a vertex. It is easy to see that this instance has a set partition solution of size k if and only if G is k-colorable. First check whether an independent set of large size exists; if such an independent set is found, remove the set from the graph and return yes if the graph obtained is colorable with one fewer color, using the known O*(2^n) time coloring algorithm for this second step. This procedure clearly runs in the claimed time, and always finds a coloring using at most one more color than the minimum number of colors whenever a large enough independent set exists. If, on the other hand, the maximum independent set is small, all sets of F are small and we may apply the Set Partition theorem, since it can be verified in polynomial time whether a given set is independent (this implements the oracle). The theorem follows.

Directions for further research. In this section we relate the presented work to some notorious open problems. The obvious open question is to determine the exact complexity of the Set Cover problem. Open problem. Can Set Cover be solved in O*((2 - e)^n) time, for some e > 0? This question was already stated in several places. It is known that a version of Set Cover in which the number of solutions is counted modulo a fixed integer cannot be solved in such time unless the Strong Exponential Time Hypothesis fails; we refer to the literature for details.
Less ambitiously, it is natural to wonder whether the dependency of the savings on the solution size can be improved. Our algorithm and analysis seem loose, but we feel the gain from sharpening the analysis would not outweigh the technical effort: to currently get a better dependence we would need a better bound in the witness-half lemma, and a way to reduce set sizes more efficiently than the no-large-sets observation. For further research we suggest finding a different algorithmic way to deal with the case of many unbalanced witness halves; this alone would not suffice to give a linear dependence, since the witness-half lemma cannot be expected to give a linear dependence even then. It would also be interesting to see which other instances of Set Cover can be solved in O*((2 - e)^n) time; for one, it might be worthwhile to study whether this includes instances whose optimal set covers have total set size noticeably larger than n, as there one may hope to find exponentially many balanced witness halves as well.

Other authors also give a reduction from Subset Sum to Set Partition, so the exact complexity of Subset Sum with small integers is also something we would like to state explicitly as an open problem, especially since the O*(2^{n/2}) time algorithm for Subset Sum with target t is perhaps one of the most famous exponential-time algorithms. Open problem. Can Subset Sum be solved in O*(2^{(1/2 - e)n}) time, for some e > 0? We cannot exclude this possibility even assuming the Strong Exponential Time Hypothesis; note that a related question was asked before by the present author. It would be interesting to study the complexity of Subset Sum in a vein similar to this paper: which special properties of instances allow a faster algorithm? For example, a faster algorithm for instances of high density may be used for improving the algorithm of Horowitz and Sahni; note that the density of a Subset Sum instance is the inverse of what one would expect when relating the usual definition of density to this formula.

Another question that is already open: Open problem. Can Graph Coloring be solved in O*((2 - e)^n) time, for some e > 0? Could the techniques of this paper be used to make progress towards resolving this question? Our algorithm seems to benefit from the existence of many optimal colorings; interestingly, a recent paper actually shows that the existence of few optimal colorings can be exploited on graphs of small pathwidth. Related is also the Hamiltonicity problem: by our current understanding the problem becomes easier on the promise of many Hamiltonian cycles, which also allows derandomizations of known probabilistic algorithms in some cases. A natural approach here would be to deal explicitly with instances with many solutions by sampling the dynamic programming table, in a vein similar to what is done in this paper.

Acknowledgements. The author would like to thank Per Austrin, Petteri Kaski and Mikko Koivisto for the collaborations resulting in the works that mostly inspired this paper, Karl Bringmann for discussions on applications of Set Partition, and the anonymous reviewers for useful comments.

References

Per Austrin, Petteri Kaski, Mikko Koivisto, and Jesper Nederlof. Subset sum in the absence of concentration. In Ernst W. Mayr and Nicolas Ollinger, editors, International Symposium on Theoretical Aspects of Computer Science (STACS 2015), Garching, Germany, LIPIcs. Schloss Dagstuhl – Leibniz-Zentrum fuer Informatik.

Per Austrin, Petteri Kaski, Mikko Koivisto, and Jesper Nederlof. Dense subset sum may be the hardest. In Nicolas Ollinger and Heribert Vollmer, editors, Symposium on Theoretical Aspects of Computer Science (STACS 2016), France, LIPIcs. Schloss Dagstuhl – Leibniz-Zentrum fuer Informatik.

Anja Becker, Jean-Sébastien Coron, and Antoine Joux. Improved generic algorithms for hard knapsacks. In Kenneth G. Paterson, editor, Advances in Cryptology – EUROCRYPT 2011, Tallinn, Estonia, Lecture Notes in Computer Science. Springer.
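For reference, the Horowitz–Sahni O*(2^{n/2}) meet-in-the-middle algorithm mentioned above can be sketched as follows (a standard textbook rendition, not code from the paper):

```python
from bisect import bisect_left

def subset_sum_mitm(items, target):
    """Horowitz-Sahni meet-in-the-middle: split the items into two halves,
    enumerate all 2^(n/2) subset sums of each, sort one side, and search it
    for the complement of each sum on the other side.
    O*(2^(n/2)) time and space."""
    half = len(items) // 2
    left, right = items[:half], items[half:]

    def sums(part):
        out = [0]
        for v in part:
            out += [s + v for s in out]
        return out

    rs = sorted(sums(right))
    for s in sums(left):
        i = bisect_left(rs, target - s)
        if i < len(rs) and rs[i] == target - s:
            return True
    return False

print(subset_sum_mitm([3, 34, 4, 12, 5, 2], 9))   # 4 + 5 → True
print(subset_sum_mitm([3, 34, 4, 12, 5, 2], 30))  # no subset sums to 30 → False
```

Beating the 2^{n/2} barrier of this classic algorithm in general is exactly the open problem stated above.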
Anja Becker, Antoine Joux, Alexander May, and Alexander Meurer. Decoding random binary linear codes in 2^{n/20}: how 1+1=0 improves information set decoding. In Advances in Cryptology – EUROCRYPT 2012, Lecture Notes in Computer Science. Springer.

Andreas Björklund. Uniquely coloring graphs over path decompositions. CoRR.

Andreas Björklund, Holger Dell, and Thore Husfeldt. The parity of set systems under random restrictions with applications to exponential time problems. In Kazuo Iwama, Naoki Kobayashi, and Bettina Speckmann, editors, Automata, Languages, and Programming – ICALP 2015, Kyoto, Japan, Lecture Notes in Computer Science. Springer.

Andreas Björklund, Thore Husfeldt, Petteri Kaski, and Mikko Koivisto. Narrow sieves for parameterized paths and packings. CoRR.

Andreas Björklund, Thore Husfeldt, Petteri Kaski, and Mikko Koivisto. Trimmed Moebius inversion and graphs of bounded degree. Theory of Computing Systems.

Andreas Björklund, Thore Husfeldt, Petteri Kaski, and Mikko Koivisto. The travelling salesman problem in bounded degree graphs. ACM Transactions on Algorithms.

Andreas Björklund, Thore Husfeldt, and Mikko Koivisto. Set partitioning via inclusion-exclusion. SIAM Journal on Computing.

Chris Calabro, Russell Impagliazzo, and Ramamohan Paturi. A duality between clause width and clause density for SAT. In Annual IEEE Conference on Computational Complexity (CCC), Prague, Czech Republic. IEEE Computer Society.

Timothy M. Chan and Ryan Williams. Deterministic APSP, orthogonal vectors, and more: quickly derandomizing Razborov-Smolensky. In Robert Krauthgamer, editor, Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), Arlington, VA, USA. SIAM.

Suresh Chari, Pankaj Rohatgi, and Aravind Srinivasan. Randomness-optimal unique element isolation with applications to perfect matching and related problems. SIAM Journal on Computing.

Marek Cygan, Holger Dell, Daniel Lokshtanov, Dániel Marx, Jesper Nederlof, Yoshio Okamoto, Ramamohan Paturi, Saket Saurabh, and Magnus Wahlström. On problems as hard as CNF-SAT. In Proceedings of the Conference on Computational Complexity (CCC), Porto, Portugal. IEEE.

Marek Cygan and Marcin Pilipczuk. Faster exponential-time algorithms in graphs of bounded average degree. Information and Computation.

Vilhelm Dahllöf. Exact algorithms for exact satisfiability problems. PhD thesis, Linköping University, Institute of Technology.

Evgeny Dantsin, Andreas Goerdt, Edward A. Hirsch, Ravi Kannan, Jon Kleinberg, Christos Papadimitriou, Prabhakar Raghavan, and Uwe Schöning. A deterministic (2 - 2/(k+1))^n algorithm for k-SAT based on local search. Theoretical Computer Science.

Limor Drori and David Peleg. Faster exact solutions for some NP-hard problems. Theoretical Computer Science.

Fedor V. Fomin, Fabrizio Grandoni, and Dieter Kratsch. A measure & conquer approach for the analysis of exact algorithms. Journal of the ACM.

Alexander Golovnev, Alexander S. Kulikov, and Ivan Mihajlin. Families with infants: a general approach to solve hard partition problems. In Javier Esparza, Pierre Fraigniaud, Thore Husfeldt, and Elias Koutsoupias, editors, Automata, Languages, and Programming – ICALP 2014, Copenhagen, Denmark, Lecture Notes in Computer Science. Springer.

Godfrey H. Hardy and Srinivasa Ramanujan. Asymptotic formulae in combinatory analysis. Proceedings of the London Mathematical Society.

Wassily Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association.

Ellis Horowitz and Sartaj Sahni. Computing partitions with applications to the knapsack problem. Journal of the ACM.

Nick Howgrave-Graham and Antoine Joux. New generic algorithms for hard knapsacks. In Henri Gilbert, editor, Advances in Cryptology – EUROCRYPT 2010, French Riviera, Lecture Notes in Computer Science. Springer.

Thore Husfeldt, Ramamohan Paturi, Gregory B. Sorkin, and Ryan Williams. Exponential algorithms: algorithms and complexity beyond polynomial time (Dagstuhl seminar). Dagstuhl Reports.

Russell Impagliazzo, Shachar Lovett, Ramamohan Paturi, and Stefan Schneider. 0-1 integer linear programming with a linear number of constraints. CoRR.

Antoine Joux. Algorithmic Cryptanalysis. Chapman & Hall/CRC, 1st edition.

Petteri Kaski, Mikko Koivisto, and Jesper Nederlof. Homomorphic hashing for sparse coefficient extraction. In Dimitrios M. Thilikos and Gerhard J. Woeginger, editors, Parameterized and Exact Computation – IPEC 2012, Ljubljana, Slovenia, Lecture Notes in Computer Science. Springer.

Mikko Koivisto. Partitioning into sets of bounded cardinality. In Jianer Chen and Fedor V. Fomin, editors, Parameterized and Exact Computation – IWPEC 2009, Copenhagen, Denmark, Revised Selected Papers, Lecture Notes in Computer Science. Springer.

Robert J. McEliece. A public-key cryptosystem based on algebraic coding theory. Deep Space Network Progress Report.

Daniël Paulusma, Friedrich Slivovsky, and Stefan Szeider. Model counting for CNF formulas of bounded modular treewidth. Algorithmica.

Sigve Hortemo Sæther, Jan Arne Telle, and Martin Vatshelle. Solving MaxSAT and #SAT on structured CNF formulas. In Carsten Sinz and Uwe Egly, editors, Theory and Applications of Satisfiability Testing – SAT 2014, Vienna, Austria, Lecture Notes in Computer Science. Springer.

Uwe Schöning. A probabilistic algorithm for k-SAT based on limited local search and restart. Algorithmica.

Vijay V. Vazirani. Approximation Algorithms. Springer, New York, NY, USA.

Missing proofs

Proof of the cover-to-partition theorem. Given a Set Cover instance, use the no-large-sets observation to assure that every set is small. It is then easy to see that the instance is equivalent to the Set Partition instance on the bipartite graph obtained by including, on one side, every subset of the neighborhood of every vertex (that is, every subset of every input set): a cover can be made disjoint by shrinking its sets, and any set partition in the new instance induces a set cover in the original one.

Proof of the equivalence theorem. For the backward direction, an algorithm for Set Partition concatenated with the reduction of the previous theorem gives an O*((2 - e)^n) time algorithm for Set Cover. For the forward direction, given a Set Partition instance, first use the grouping observation to obtain an instance of Set Partition in which, without loss of generality (decreasing k and adding sets and elements as needed), a convenient constant divides k. Then iterate over the unordered integer partitions of k; for every such partition, solve a Set Cover instance constructed as follows from the Set Partition instance and the assumed algorithm: for every element add an element, and for every combination of sets prescribed by the integer partition add a set; note that we possibly make at least two copies of every set. It is easy to see that the new Set Cover instance corresponds to the original Set Partition instance: a set partition of size k in which every set contains at least one element yields a set cover of the required size, and conversely a cover of that size needs to correspond to a disjoint family satisfying the chosen partition. By a result of Hardy and Ramanujan, the number of unordered integer partitions is subexponential; hence concatenating the reduction with the assumed O*((2 - e)^n) time Set Cover algorithm leads to an O*((2 - e')^n) time algorithm for Set Partition.
Proof of the counting theorem. We prove the following statement: given an instance and an integer k, where the family consists of subsets of bounded cardinality, the number of set partitions of the universe into k members of the family can be counted within the stated time bound, times poly. Note that this result also immediately implies an algorithm deciding Set Cover within the same time, times poly, since we may reduce a Set Cover instance to a Set Partition instance by adding all subsets of the given sets, whence the relevant counts are easily seen to be sandwiched. The algorithm implementing the theorem is given in the following figure:

  If some set is large: branch on it, i.e., recurse on the instance with the set removed
    and on the instance with the set taken into the solution, and return the combined result.
  Else: use the algorithm of the bounded-cardinality theorem.

Figure: the algorithm for Set Cover and Set Partition with many sets, implementing the theorem.

We may bound the running time of the algorithm by analyzing the branching tree of the recursive calls of an execution, where reaching the base case represents a leaf. Note that the depth of the branching tree is bounded; refer to the second recursive call as a right branch. Any path from the root to a leaf has few right branches, since each such recursion step decreases the size of the universe by at least the size of a large set; thus the number of root-to-leaf paths with a given number of right branches is bounded by a binomial coefficient. At the leaves all sets are small, so the bounded-cardinality theorem applies, and we spend its running time, times poly, per leaf. Thus, denoting by T the total running time, T can be written as a sum over the leaves; the equality collapsing this sum follows from the binomial theorem, and the subsequent inequality uses the entropy estimates. Setting the branching threshold appropriately and taking asymptotics — which is allowed since we may assume the running time is monotone increasing in the complexity parameters — the running time becomes the stated bound, times poly.

Proof of the padding theorem. Given an arbitrary instance of Set Partition consisting of a family, a universe and an integer k, first construct an equivalent instance using the grouping observation. Then add vertices and replace every set by copies; it is easy to see that the resulting Set Partition instance has set partitions of a prescribed large size if and only if the original instance has one of size k. Thus we can transform the new instance into an equivalent one with a large solution size parameter simply by adding sufficiently many isolated vertices. Running the assumed algorithm for instances with large solutions on this instance yields the answer for the original instance, and thus an improved-time algorithm for general Set Partition.

Proof of the Linear Sat theorem. Recall from the main text that the Linear Sat problem is defined as follows: given an integer matrix A and vectors b and w, the task is to find a vector x in {0,1}^m satisfying Ax = b of minimum cost. Let a_1, ..., a_m denote the column vectors of A, and let |x| denote the Hamming weight of x. The algorithm implementing the theorem, outlined in the figure below, assumes that the Hamming weight of a cost-minimizing vector is given (the minimum is recovered by iterating over all possibilities). In order to facilitate the running time analysis, we may bound this weight, since a minimum set of columns summing to b must consist of linearly independent columns.
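The inclusion–exclusion flavor of counting that such arguments build upon — in the vein of set partitioning via inclusion–exclusion — can be illustrated as follows (our own rendition for intuition, not the appendix's exact procedure): the number of ordered k-tuples of sets covering the universe is a signed sum over the subsets avoided.

```python
from itertools import product

def cover_tuples_ie(n, family, k):
    """Count ordered k-tuples of sets from `family` whose union is the
    whole universe {0,...,n-1}, via inclusion-exclusion: sum over X of
    (-1)^|X| * a(X)^k, where a(X) is the number of family sets avoiding X.
    O(2^n * |family|) time."""
    masks = [sum(1 << e for e in s) for s in family]
    total = 0
    for x in range(1 << n):
        a = sum(1 for m in masks if m & x == 0)
        sign = -1 if bin(x).count("1") % 2 else 1
        total += sign * a ** k
    return total

# Brute-force check on a tiny instance.
fam = [{0}, {1}, {0, 1}, {2}]
n, k = 3, 2
full = (1 << n) - 1
ms = [sum(1 << e for e in s) for s in fam]
brute = sum(1 for t in product(ms, repeat=k) if t[0] | t[1] == full)
print(cover_tuples_ie(n, fam, k), brute)  # the two counts agree: 2 2
```

The cancellation works because every non-empty avoided set has equally many even- and odd-sized subsets — the same fact invoked in the oracle-tables lemma below.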
Because such a solution needs to consist of linearly independent columns, the problem can alternatively be solved by finding a column basis and iterating over subsets of the columns of the basis, computing for each the unique way (if it exists) to extend it to a set of columns summing to b; this is a standard technique called information set decoding. Note that, to ensure the Hamming weight is small, we may simply assume it, looking for the complement of a solution otherwise, since this also handles the resulting maximization problem.

Running time. First note that the sampling and construction lines of the figure can be implemented in time linear in their output, and the enumeration lines with linear delay by elementary methods; thus these parts run within the time of the rest of the algorithm. For the list-building lines, observe that every vector of the right weight is included in the output, so these lines run in expected time proportional to the number of enumerated partial solutions. Similarly, taking the expectation over the iterations, the set of pairs to be combined divides evenly: since the probability that a pair needs to satisfy the matching condition on its partial sums is controlled by the randomization, the combination lines per iteration take expected time proportional to the number of surviving pairs. Since a pair also needs to satisfy the weight constraint, denoting the relevant counts via the entropy function and combining the bounds, we obtain an upper bound on the running time; it is easily verified by standard calculus (or a Mathematica computation) where the exponent of this expression is maximized. Note that the lookup lines can be implemented in the stated time using standard data structures, and that we can similarly efficiently enumerate the pairs considered; also note that the number of enumerated pairs is bounded since the enumerated vectors are all different. Thus the algorithm indeed runs in the claimed running time.

Correctness. First note that whenever yes is returned, the answer is clearly correct; it remains to bound the success probability on yes-instances of Linear Sat.
number least happens probability least apply reasoning show contains probability contains probability pair considered loop line since latter two probabilities independent see solution exists find probability least thus may repeat algorithm times ensure finds solution constant probability exists proof lemma let arbitrarily ordered integers define cij true exists every see true values read cji computing cij need entries cij restrict computation computing cij runtime bound follows proof lemma define true note within claimed time bound create table storing values using oracle define let define gxj see gxn compute gxj using gxj thus straightforward dynamic programming using compute every time next define gxi esa finding large set covers faster via representation method equivalently number tuples note fixed given entries every compute time poly using standard dynamic programming fast fourier transformation denote easily seen number disjoint sets whose union exactly note since every counted counted union since set equally many subsets inclusion exclusion see define hjx similarly see hnx compute hjx using hjx consequently use equation straightforward dynamic programming algorithm determine equals easy see true
Hierarchical hyperlink prediction for the WWW

Labarta — Barcelona Supercomputing Center (BSC), Universitat Politècnica de Catalunya BarcelonaTech; Suzumura — IBM Watson Research Center, Barcelona Supercomputing Center (BSC). November.

Abstract. Hyperlink prediction is the task of proposing new links between webpages. It can be used to improve search engines, expand the visibility of web pages, and increase the connectivity and navigability of the web. Hyperlink prediction is typically performed on webgraphs composed of thousands or millions of vertices, where the average webpage contains less than fifty links. Algorithms processing graphs this large and sparse must be both scalable and precise, a challenging combination. Similarity-based algorithms are among the most scalable solutions within the link prediction field, due to their parallel nature and computational simplicity: these algorithms independently explore the nearby topological features of every missing link in the graph in order to determine its likelihood. Unfortunately, the precision of these algorithms is limited, which has prevented their broad application so far. In this work we explore the performance of similarity-based algorithms on the particular problem of hyperlink prediction in large webgraphs, and propose a novel method which assumes the existence of hierarchical properties. We evaluate the new approach on several webgraphs and compare its performance with that of the current best algorithms. Its remarkable performance leads us to argue for the applicability of the proposal, identifying several use cases of hyperlink prediction. The paper also describes the approach we took for the computation of large graphs from the perspective of high-performance computing, providing details of the implementation and parallelization of the code.

Keywords: web mining, link prediction, large scale graph mining.

Introduction. Link prediction is the link mining task focused on discovering new edges within a graph: typically, one first observes the relationships (links) between pairs of entities of a network (graph), and then tries to predict unobserved links. Link prediction methods are relevant to the field of artificial intelligence whenever a knowledge base can be represented as a graph, as these methods may produce new knowledge in the form of relations. In most cases, link prediction methods use one of two different types of graph data for learning: the graph structure (topology) or the properties of graph components (node attributes); combinations of both have also been proposed. Solutions based on the properties of components must rely on a previous study of which properties and attributes are present and of what relevance. Such solutions are therefore applicable only to domains where those properties hold, and can hardly be generalized. On the other hand, solutions based on the structural information of the graph are easily applicable to most domains, as this information is found in all graphs and its interpretation is broadly shared: the topology of a graph is an interpretation of the connectivity of its elements. In this article we focus on link prediction methods based on structural data.

The problem of link prediction is commonly focused on undirected, unweighted graphs, since a plain edge has universal semantics: the existence of a relationship between two entities. Additional properties such as directionality and weights may have varying interpretations — the difference between the origin and destination of an edge, or the impact of its weight. Link prediction methods taking into account one of these additional properties must make certain assumptions regarding their semantics, and these assumptions limit the domains to which they can be applied. However, when the semantics of such properties are shared by a large set of domains, the development of specific link prediction methods under those assumptions may be beneficial: these methods may increase the quality of results while remaining applicable to a relevant set of domains.

Hyperlink prediction. The World Wide Web naturally represents a webgraph: a large directed graph composed of web pages connected by hyperlinks. In contrast with traditional table-based data sets, webgraphs define highly distributed structures, often referred to as networks. Networks became popular data structures with the explosion of the Internet, which motivated a change in how data is processed in fields like data mining and machine learning; these fields shifted their focus towards finding patterns and relations in data. Data mining, statistical relational learning, link mining, network and link analysis, network science and structural mining are some of the names used to identify knowledge discovery tools sharing the precept of exploiting the structural properties of data sets to learn relational patterns among entities. For the sake of simplicity, we will generally refer to this field of science with the term graph mining.

A defining contribution of the graph mining field is the identification of certain problems that cannot be successfully tackled by traditional techniques. Tasks like community detection, frequent subgraph discovery and object ranking have become main targets of graph mining algorithms. One of these tasks is the link prediction problem: finding new relations in a data set, i.e., adding new edges to a given graph. This is a particularly relevant task, with one of the widest ranges of applications: one can automatically produce new knowledge (edges) within a domain (graph) using only the language of topology (vertices and edges), avoiding an interpretation step. This infrequent and enabling feature would make the technique widely applicable to any domain that can be represented as a graph. Unfortunately, scalable methods are often imprecise, and this problem has constrained its popularization so far.

The prototypical domain of application is the WWW. The webgraph is a large, sparse graph, growing continuously and in a distributed manner. Algorithms to be applied to large graphs such as webgraphs need to be scalable; in fact, their size often motivates the use of high-performance computing (HPC) tools and infrastructure. Even when using scalable methods, the challenges arising from the computation of large networks have been noticed by the HPC community, as demonstrated by the popularization of a benchmark to evaluate how fast supercomputers process graphs, and by the development of graph-oriented parallel programming models. A collaboration between graph mining and HPC therefore seems inevitable in the near future, as both fields may benefit: graph mining obtaining the means for its goals, and HPC obtaining purpose for its capabilities. In this context, the work presented here provides the following contributions:

WWW: We discuss the importance of hierarchical properties in the topological organization of the WWW. We study the performance of four different scores on the specific problem of hyperlink prediction, including one that exploits the hierarchical properties of the graph.

Graph mining: We propose an improved version of the inference score for directed link prediction based on hierarchies, able to process informal graphs that are not explicitly hierarchical, and compare its results with the current best algorithms. We study two distinct types of local algorithms and propose a combined solution that significantly improves performance.

HPC: We present a feasible and applicable use case of graph mining. We discuss the scalability and parallelism of the algorithms, and how they can be extended to a distributed memory setting for the computation of larger graphs.

The rest of the paper is structured as follows. The first two sections survey related work: one summarizes the current state of link prediction, and the other contains a brief survey of previous attempts to perform link prediction on WWW data and reviews the role of hierarchies in the WWW according to previous work. After the related work is presented, we introduce and formalize the hybridINF score, our novel algorithm for the task
hyperlink prediction describe experiments including webgraphs used evaluation methodologies analysis results results derive various possible applications discuss computational details describing implementation parallelization approach well extension distributed memory setting finally present conclusions context www motivate future work link prediction background algorithms often classified three families statistical relational models srm maximum likelihood algorithms mla algorithms sba srm may include probabilistic methods probabilistic srm build joint probability distribution representing graph based edges estimate likelihood edges found graph inference frequently based markov networks bayesian networks srm often based tensor factorization useful approach predicting edges heterogeneous networks composed one type relation also used combination probabilistic models second type algorithms mla assume given structure within graph hierarchy set communities etc fit graph model maximum likelihood estimation based model mla calculate likelihood edges mla provide relevant insight composition graph topology defined information used purposes beyond example mla hierarchical random graph builds dendrogram model representing hierarchical abstraction graph obtains connection probabilities accurately represents graph hierarchically third type methods sba compute similarity score pair vertices given topological neighbourhood edge evaluated thus potentially parallel without previously computing graph model unlike srm mla score edge sba focus paths pair vertices result sba categorized three classes according maximum path length explore local consider paths length global consider paths without length constrain consider limited path lengths larger global sba effectively build model whole graph consequent computational cost poor scalability sba compromise efficacy efficiency often based random walk model number paths pair vertices beyond previous classification methods heterogeneous 
These solutions decompose the process into two steps: a first step obtains clusters of the graph satisfying certain structural properties, and a second step tries to find new links between clusters based on attribute properties. This type of work is particularly appropriate for heterogeneous networks that grow over time. In a related approach, mesoscale and microscale components are combined into a statistical inference model, with the microscale component based on the resource allocation algorithm. A different solution was proposed by authors who identify topological features of directed and weighted links to be used as input for a set of traditional classifiers; to make the approach feasible, link features are obtained at the local level, considering only vertices a few hops away. Their results indicate that the proposed algorithm, HPLP, which includes this feature extraction, implements an ensemble of classifiers (bagging, random forests) and performs undersampling to correct the class imbalance, outperforms SBA on directed and undirected weighted graphs.

Local algorithms. Local SBA are among the most scalable approaches, as these methods only need to explore the direct neighbourhood of a given pair of vertices to estimate the likelihood of an edge. Since this estimation can be done in parallel and does not depend on the likelihood of other edges, these methods scale efficiently and hardly waste resources. From this perspective, local similarity scores are essential SBA, as most others originate in a local one (with the number of steps set to one, a quasi-local SBA is equivalent to a local SBA). This motivates the research for better local SBA, since from a better local SBA, better quasi-local and global SBA can be derived. In the remainder of this paper we focus on SBA methods and algorithms.

Local algorithms were first compared in a study testing nine algorithms on five different scientific collaboration graphs from the field of physics, compared against a random predictor baseline. Although no method clearly outperformed the rest on all data sets, three methods consistently achieved the best results, with common neighbours achieving the best results among local algorithms, and Katz among global ones. Later, a new local algorithm called resource allocation was proposed and compared with other local algorithms on six different data sets; that comparison showed that common neighbours provides the best results among the previously existing local algorithms, and also that resource allocation is capable of improving on common neighbours. The common neighbours algorithm computes the similarity of two vertices as the size of the intersection of their sets of neighbours. Formally, let Γ(x) denote the set of neighbours of vertex x.
Definition (s_CN). s_CN(a,b) = |Γ(a) ∩ Γ(b)|.

The Adamic-Adar algorithm follows the same idea, but also considers the rareness of the edges of the shared neighbours, weighting them through their degree. The score becomes:

Definition (s_AA). s_AA(a,b) = Σ_{c ∈ Γ(a) ∩ Γ(b)} 1 / log|Γ(c)|.

The resource allocation algorithm is motivated by the resource allocation process of networks, although the algorithm has a simpler implementation: each vertex distributes a single resource unit evenly among its neighbours, and in this case the similarity between two vertices becomes the amount of resource obtained by one from the other:

Definition (s_RA). s_RA(a,b) = Σ_{c ∈ Γ(a) ∩ Γ(b)} 1 / |Γ(c)|.

Link prediction for the WWW. The first application that comes to mind when considering link prediction in the context of the WWW is to discover missing hyperlinks. However, few contributions specifically focus on this application of SBA. In contrast with similar problems, such as predicting relations in social network graphs, which have been thoroughly studied using a wide variety of methodologies, webgraphs have only occasionally been used to evaluate scores, always in combination with graphs of other types and never as an independent case study. When webgraphs have been used, they were relatively small graphs of a few thousand vertices, typically due to the complexity of the models used. An exception to this lack of attention to hyperlink prediction is the Wikipedia webgraph, composed of Wikipedia articles and the hyperlinks among them. The encyclopedic information contained in the vertices of this graph has served as motivation for researchers, who have used it to enrich the learning process. As a result, the methods used on the Wikipedia webgraph are heterogeneous combinations of methods that neither scale to large graphs nor generalize to other webgraphs. Some authors propose an algorithm combining clustering, natural language processing and information retrieval for finding missing hyperlinks among Wikipedia pages; with a more specific target in mind, other authors use supervised learning to train a disambiguation classifier in order to choose the appropriate target article for hyperlinks with multiple possible targets. Contributions also exist within the semantic web community, typically using RDF data to implement link inference methods.

Hierarchies and the WWW. Hierarchies are widely used for knowledge organization, and this structure can be found in many domains: the human brain, metabolic networks, terrorist groups, protein interactions, food webs, social networks and many others. Within the field of link prediction, hierarchies have been acknowledged as a relevant source of information due to their power to describe knowledge. However, link prediction methods have predominantly focused on undirected structures.
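The three local scores defined earlier in this section (s_CN, s_AA, s_RA) translate directly into code. Below is a minimal Python sketch with an adjacency-set graph representation; the function and variable names are our own, not those of the paper's implementation:

```python
import math

def common_neighbours(gamma, a, b):
    """s_CN(a,b): size of the intersection of the neighbour sets."""
    return len(gamma[a] & gamma[b])

def adamic_adar(gamma, a, b):
    """s_AA(a,b): shared neighbours weighted by the rareness of their
    edges, i.e., penalized by the logarithm of their degree."""
    return sum(1.0 / math.log(len(gamma[c])) for c in gamma[a] & gamma[b])

def resource_allocation(gamma, a, b):
    """s_RA(a,b): each shared neighbour distributes one resource unit
    evenly among its neighbours."""
    return sum(1.0 / len(gamma[c]) for c in gamma[a] & gamma[b])

# Toy undirected graph as adjacency sets.
gamma = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(common_neighbours(gamma, 1, 2))    # 1: vertex 3 is the only shared neighbour
print(resource_allocation(gamma, 1, 2))  # 1/3, since |Γ(3)| = 3
```

Note that s_AA is undefined for shared neighbours of degree one (log 1 = 0); on the connected graphs considered here a shared neighbour always has degree at least two.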
For the sake of simplicity, link prediction has mostly targeted undirected graphs, while hierarchies are necessarily directed; this structural divergence complicates the use of hierarchies in traditional link prediction, and is the main reason why the combination has not been exploited so far. One of the main contributions of this work is the adaptation and application of a score based on hierarchies to the problem of hyperlink prediction, implicitly assuming that the WWW topology contains partly defined hierarchical properties. This assumption is discussed and empirically evaluated here; although it has been exploited in the past, as presented in this section, the novel contribution lies in its successful application to a use case of hyperlink discovery. As we will see, we also consider hierarchies at a much smaller scale than usual, thus enabling novel applications of hierarchical properties in the context of the WWW.

The topology of the routing units of the WWW, and of its requests and traffic, were among the first aspects of the internet shown to have hierarchical properties. The webgraph topology was then found to compose a structure defined by a set of nested entities, indicating a natural hierarchical characterization of the structure of the web. This idea was extended by authors who showed that key properties found in webgraphs and other real networks, such as a high degree of clustering, could only be satisfied by a hierarchical organization. In that work, Ravasz et al. propose a hierarchical network model fitting those networks, composed of small, dense groups that are recursively combined to compose larger and sparser groups.

The importance of hierarchies to model the structure of real networks has also been explored in its application to generative models, i.e., models built to produce artificial large scale networks mimicking the topological properties of real networks. The work of Leskovec et al. is of particular interest here: the authors define generative models satisfying complex properties never captured before, such as densification and shrinking diameters. The most complex model proposed by Leskovec et al. is the forest fire model (FFM), which builds a network by iteratively adding one new vertex to the graph. For each new vertex, the model randomly provides an entry point in the form of a vertex, the ambassador, to which the new vertex points its first edge. In a second step, a random number of edges are added to the new vertex, chosen among the edges of the ambassador, with outgoing edges having a higher probability of being selected than incoming ones. This process is repeated recursively for each vertex the new vertex gets connected to, with the restriction that each ambassador is considered only once. Notice that the FFM methodology contains no explicit hierarchy, yet it generates data that is hierarchically structured. Later on, we discuss what happens when comparing the FFM and our proposed algorithm.
The local SBA obtaining the best predictive results so far are based on the accumulated number of shared neighbours. Additionally, some of these scores weight the evidence provided by each neighbour through its degree, thus increasing the importance of rare edges. Scores of this type have been shown to achieve the best results among local SBA on citation and social networks, are capable of achieving similar results in several other domains, and can even improve on the rest on a webgraph of political blogs. To emphasize the importance of this type of scores, let us remark that the current best SBA scores are based on one of them, and that alternative solutions also include them. This illustrates the potential benefits of obtaining better local SBA. Given the scarce information available to a local SBA (the topological neighbourhood around a pair of vertices), it seems difficult to find scores significantly more precise than the current best ones.

One of the main contributions of this article is to argue that, in the presence of large amounts of data, implicit graph models may naturally exist, and that if those models are used, they can significantly improve the performance of prediction. In our approach the model is assumed to exist and is exploited without learning, with the benefit of providing additional knowledge on the structure of the network without adding the complexity of building a model. As discussed, the WWW and its webgraph are strongly related to hierarchical organizations, which leads us to believe that the hyperlink prediction process could be improved by the assumption of a hierarchical model.

Hierarchies are the core of the inference (INF) score, which assumes that hierarchical properties of the topology can guide the prediction process. INF was designed for mining explicitly hierarchical graphs, implementing properties like edge transitivity. Its original definition, however, seems inappropriate for graphs without an explicit hierarchical structure. INF assumes that the computed graph implicitly implements a hierarchy, and based on this assumption it defines a score for each ordered pair of vertices, measuring their hierarchical evidence and providing higher scores to hierarchically coherent edges. So far, INF has been shown to be a good predictor on graphs containing explicitly hierarchical relations, such as relations originating in ontologies and the linguistic relations of WordNet. Given a directed graph and a vertex v, the original definition of INF refers to the vertices linked by an edge originating in v as the ancestors of v, A(v), and to the vertices linking to v through an edge as the descendants of v, D(v). Based on these sets, INF defines two coherencies, named deductive (DED) and inductive (IND).
DED follows a reasoning process from the abstract to the specific, resembling a weighted deductive inference: "all four of my grandparents are mortal, hence I am probably mortal". In this case, information about a vertex (I am mortal) is obtained from its ancestors (each grandparent is mortal), in the proportion of the number of times the relation is satisfied (four out of four living grandparents); hence, if most generalizations of a vertex share a certain property, the vertex probably has that property as well. Applied to the sets above, DED(a→b) is the proportion of ancestors of a that are also descendants of b: DED(a→b) = |A(a) ∩ D(b)| / |A(a)|. IND, on the other hand, follows a reasoning process from the specific to the generic, resembling a weighted inductive inference: "all of this author's publications are meticulous, hence the author is likely meticulous". Here, information about a vertex (the author is meticulous) is obtained from its descendants (each publication is meticulous, and each publication points to its author), again as the proportion of the number of times the relation is satisfied; hence, if most specializations of a vertex share a certain property, the vertex probably has that property as well: IND(a→b) = |D(a) ∩ D(b)| / |D(a)|. See Figure 1 for a graphical representation of both processes. INF is then defined as the combination of both coherencies:

Definition (s_INF). s_INF(a→b) = DED(a→b) + IND(a→b).

The INF algorithm, so defined, maximizes its predictions for links that follow the natural sense of the hierarchy, where the evidence is the one provided by the common neighbour setting.

Figure 1 (caption): the dashed edge represents the link under inference. Left: graphical representation of the deductive process for estimating the link likelihood. Right: graphical representation of the inductive process for estimating the link likelihood. In both cases, nodes represent the evidence considered and not considered when evaluating the score of the edge.

Although potentially beneficial for implicitly hierarchical domains such as webgraphs, INF is a purely proportional score, which makes it unsuited for real world data.

Proportional and accumulative scores. INF was originally defined to target formal graphs, whose relations have high reliability due to an expert validation process (e.g., WordNet) or originate in the formal properties of ontologies. INF exploits this fact by being entirely based on proportions of evidence (see the definition above): in formal graphs, a single reliable relation is as good as many, regardless of set size (one out of one is as reliable as ten out of ten). Unfortunately, proportional evidence is not equally reliable when working with informal graphs (webgraphs, social networks, etc.), whose edges often contain errors, outliers and imbalanced data. In this setting, considering a one-out-of-one evidence set as a certain scenario would be precipitous and prone to error. The alternative to proportional evidence is accumulative evidence, the frequent approach used by link prediction algorithms.
Proportional scores such as INF or the Jaccard coefficient weight the evidence of edges according to their local context, providing a normalized similarity per edge regardless of degree; this makes proportional scores unbiased towards edges among high-degree vertices. Accumulative scores, on the other hand, measure the absolute amount of evidence, ignoring the local context (the source vertex degree) in which the scores of edges are evaluated and ranked; this perspective benefits predictions around high-degree vertices. Previous results have shown that, in general, accumulative scores perform better than proportional scores (the top three scores found in the bibliography are accumulative), which can be explained by the importance of the preferential attachment process ("the rich get richer"): accumulative scores implicitly satisfy it, while proportional scores avoid it. The generally poor results of proportional scores are also caused by the volatility of their predictions among low-degree vertices, which proportional scores do not neglect even though they are often less reliable due to their arbitrariness.

INF, a hybrid score. To adapt the INF score to informal domains such as webgraphs, we extend it to consider both proportional and accumulative evidence. Although the current best solutions are purely accumulative, we build a hybrid solution, with the goal of combining the certainty of proportional scores with the reliability of accumulative ones. We first normalize the evidence given its local context, proportionally, through a division by the size of the evidence set; we then weight this proportion by the absolute size of the set, accumulatively, through a logarithm. By using a logarithmic function to weight the score, the accumulative evidence dominates the score for low-degree vertices, while the proportional evidence remains important for high-degree vertices. The resulting variant, INF_LOG, is defined as follows:

Definition (s_INF_LOG). s_INF_LOG(a→b) = DED(a→b) · log|A(a)| + IND(a→b) · log|D(a)|.

The original INF definition combines the evidence provided by DED and IND through a simple addition, considering both equally reliable. However, in a preliminary evaluation we found that DED consistently achieves higher precisions than IND in most domains. Our hypothesis to understand this behaviour is that, while in formal graphs the deductive reasoning process DED is based on may be as reliable as the inductive reasoning process IND is based on, in informal graphs this may not hold. The behaviour may be caused by the fact that a vertex typically has a higher responsibility in defining the set of vertices it points to than the set of vertices pointing to it: incoming links are often imposed onto webmasters, who rarely choose who links to their site. This higher reliability of the outgoing edges of a vertex would make DED more precise.
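The hybrid score just defined can be sketched in code. The formalization below is our own reading of the textual description (A(v): vertices v points to; D(v): vertices pointing to v; DED and IND as the proportions illustrated by the deductive/inductive examples), so the exact set choices should be treated as assumptions; the ded_factor parameter stands in for the DED multiplying factor discussed next, whose tuned value is not reproduced here:

```python
import math
from collections import defaultdict

def build_sets(edges):
    """A[v]: ancestors, the vertices v links to; D[v]: descendants,
    the vertices linking to v (our reading of the INF definitions)."""
    A, D = defaultdict(set), defaultdict(set)
    for src, dst in edges:
        A[src].add(dst)
        D[dst].add(src)
    return A, D

def inf_log(A, D, a, b, ded_factor=1.0):
    """Hybrid INF_LOG sketch: proportional evidence (DED, IND) weighted
    accumulatively by the logarithm of the evidence set size."""
    ded = len(A[a] & D[b]) / len(A[a]) if A[a] else 0.0  # abstract -> specific
    ind = len(D[a] & D[b]) / len(D[a]) if D[a] else 0.0  # specific -> generic
    score = 0.0
    if A[a]:
        score += ded_factor * ded * math.log(len(A[a]))
    if D[a]:
        score += ind * math.log(len(D[a]))
    return score

edges = [("a", "m"), ("a", "n"), ("p", "a"), ("q", "a"), ("m", "b"), ("p", "b")]
A, D = build_sets(edges)
print(inf_log(A, D, "a", "b"))  # 0.5*log(2) + 0.5*log(2) = log(2) ≈ 0.693
```

Note how the logarithm makes the proportional terms vanish for singleton evidence sets (log 1 = 0), which is exactly the "one out of one" scenario the hybrid design distrusts.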
It is the defining vertex's own edges that are used by DED. Even though DED is more reliable than IND, the contribution of IND to the INF score remains relevant, as it is capable of detecting a type of relational evidence DED does not consider. We solve this problem by adding a multiplying factor to DED, as seen in the following definition:

Definition (s_INF_LOG_xD). s_INF_LOG_xD(a→b) = x · DED(a→b) · log|A(a)| + IND(a→b) · log|D(a)|.

We sampled the value of this factor in several domains and found an optimal value for our graphs, which provides a consistent score over the evaluation set of tests; it is this implementation of the score that is identified as INF_LOG in the experiments.

Experiments. We next evaluate the INF_LOG algorithm on the hyperlink prediction problem, within its family of algorithms: local SBA scores. Although hyperlink prediction is a directed link prediction problem, we use undirected link prediction scores as baseline, as there are currently no analogous directed local SBA scores in the bibliography. We compare the performance of INF_LOG on six webgraphs obtained from different sources: a webgraph of the University of Notre Dame domain (WEBND), a webgraph of Stanford (WEBSB), a webgraph provided for the Google programming contest (WEBGL), a domain-level webgraph from the WebGraph project, and the webgraphs of the Chinese encyclopedias Hudong (HUDONG) and Baidu (BAIDU). The sizes of the graphs are shown in Table 1; all graphs used are publicly available at the referenced sources.

Table 1 (caption): size (vertices and edges) of the computed webgraphs WEBND, WEBSB, WEBGL, HUDONG and BAIDU.

Evaluation. The hyperlink prediction problem is reduced to a binary classification problem. In this context, one has a set of correct instances (edges missing from the graph but known to be correct) and a set of wrong instances (the rest of the missing edges); the goal of predictors is to classify both sets of edges as well as possible. A frequently used metric to evaluate the performance of predictors in this context is the receiver operating characteristic (ROC) curve and the area under it. The ROC curve plots the true positive rate (TPR) against the false positive rate (FPR), making the metric unbiased towards either class regardless of their sizes. Unfortunately, this treatment of errors can result in mistakenly optimistic interpretations: ROC curves represent misclassifications relative to the total number of errors that could be made, and in domains where the negative class is very large one can make millions or even billions of errors. Showing mistakes relative to the negative class size, as the FPR does, may hide their actual magnitude and complicate a realistic assessment of predictive performance. Furthermore, in large, highly imbalanced domains, most of the ROC curve becomes irrelevant in practice.
In such domains, most of the ROC curve represents inapplicable precisions. Precision-recall (PR) curves are the alternative to ROC curves; they show precision on the y axis and recall on the x axis. PR curves do not show the number of correct classifications of the negative class; instead, they represent the relative number of predictions made, which gives an idea of the actual predictive quality in absolute terms and makes the whole curve relevant regardless of the problem size. In fact, ROC and PR curves are strongly related: a curve that dominates another in ROC space also dominates it in PR space. The main difference lies in how errors are represented, a difference with a huge visual impact. Consider, for example, how the two types of curve represent a random classifier, which always performs poorly on a large webgraph.

Table 2 (caption): for each graph (WEBND, WEBSB, WEBGL, HUDONG, BAIDU), the number of correct edges (positive class) and incorrect edges (negative class) computed, in the order of millions, and the resulting class imbalance.

In the evaluated setting, with highly imbalanced data sets, a ROC curve represents a random classifier as a straight diagonal line regardless of the class imbalance, and better-than-random classifiers are represented by lines above that diagonal. The more demanding PR curves, in contrast, represent random classifiers on imbalanced data sets as a flat line on the x axis, since precision in imbalanced settings is always close to zero. Due to the previously outlined motives, and to the huge class imbalance found in our graphs (see Table 2), we use the area under the PR curve (AUPR) to compare the various scores evaluated. Let us remark that this is the recommended approach in this setting, even though it has not been fully assimilated by the community yet.

To calculate a PR curve, one needs a source graph, on which we wish to predict edges, and a test set of edges missing from the source graph but known to be correct. To build the test set, we randomly split the edges of each graph, using one part as source graph and the remainder as test set; the sizes of the test sets evaluated for each graph are shown in Table 2. Vertices becoming disconnected from the source graph through the removal of test edges are not considered in the score evaluation. A frequently used methodology to build test sets is cross-validation; however, due to the size of the graphs used, this was not necessary: by the law of large numbers, a significant portion of a large domain tends towards a stable sample, thus making a single run a representative and accurate sample of performance. To build the curves, all possible edges of each graph are exhaustively computed.
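The PR-based evaluation described above can be made concrete with a small sketch: given a list of scored candidate edges and the number of known-correct edges, it computes one precision/recall point per prediction rank and a step-wise area under the curve. The step-wise area rule is one common choice, assumed here rather than taken from the paper:

```python
def pr_curve(scored, total_positives):
    """scored: list of (score, is_correct) pairs for candidate edges.
    Returns (precision, recall) points, one per prediction rank,
    ranking candidates by decreasing score."""
    points, tp = [], 0
    for k, (score, correct) in enumerate(sorted(scored, key=lambda t: -t[0]), 1):
        tp += correct                       # correct is 0 or 1
        points.append((tp / k, tp / total_positives))
    return points

def aupr(points):
    """Step-wise area under the PR curve."""
    area, prev_recall = 0.0, 0.0
    for precision, recall in points:
        area += precision * (recall - prev_recall)
        prev_recall = recall
    return area

# Three candidate edges, two of which are actually correct.
pts = pr_curve([(0.9, 1), (0.8, 0), (0.7, 1)], total_positives=2)
print(pts)  # [(1.0, 0.5), (0.5, 0.5), (0.666..., 1.0)]
```

With a huge negative class, the recall axis stays meaningful while precision directly exposes the cost of each wrong prediction, which is why the flat near-zero curve of a random classifier is so informative here.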
Approximate and potentially dishonest methodologies, such as test sampling, are thus avoided. Computing all possible edges of graphs the size of the ones used here equals the evaluation of billions of edges (see Table 2), so the proposed solution needs to be scalable in order to be feasible. SBA fit this requirement perfectly, as they can be parallelized with maximum computational efficiency (see below for details on the implementation and parallelization of our solution).

Results analysis. The PR curves of the four scores on the six graphs can be seen in Figure 2. INF_LOG achieves the best predictive performance on all graphs, by a large margin. The huge improvement in precision obtained by INF_LOG (much higher values on the y axis of the curves) results in an increase of one order of magnitude in AUPR over the current best scores (see Table 3). Significantly, INF_LOG achieves precisions at small recall that the other scores never reach.

Figure 2 (caption): PR curves for the six webgraphs, with recall on the x axis and precision on the y axis; the HUDONG and BAIDU curves are zoomed for clarity.

Given these results, INF_LOG is the score we recommend when predicting thousands of edges with higher reliability, i.e., making fewer mistakes in the process. The leap in performance of INF_LOG, and its consistency across webgraphs, also stresses the importance of hierarchical properties for hyperlink prediction: there is much to be gained by integrating implicit network models within predictive algorithms.

Hybrid improvement. INF_LOG, a hybrid score, outperforms the best accumulative scores on all six webgraphs (see Table 4). So far, purely accumulative scores had obtained the best results on almost every informal graph evaluated, which makes these results important to validate the relevance of the hybrid approach.

Table 3 (caption): AUPR obtained by each score (whose curves are shown in Figure 2), together with the improvement of INF_LOG over the best accumulative score, as a percentage, for WEBND, WEBSB, WEBGL, HUDONG and BAIDU.

Table 4 (caption): for each graph tested, the basic INF score, the top accumulative score, and the percentage of improvement.

We also compared against purely proportional scores, testing the best-known proportional score on the six webgraphs: the Jaccard coefficient. With Γ(x) the set of neighbours of x, the Jaccard coefficient is defined as:

Definition (s_Jaccard). s_Jaccard(a,b) = |Γ(a) ∩ Γ(b)| / |Γ(a) ∪ Γ(b)|.

The results showed that Jaccard's performance is incomparably worse on all webgraphs: plotted together with the other scores, the Jaccard curve is a flat line on the x axis of all graphs. Jaccard is therefore not shown in the tables, as its AUPR is always zero given the five decimals used.
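For completeness, the proportional baseline just defined can be sketched in the same style as the other local scores (names are our own):

```python
def jaccard(gamma, a, b):
    """s_Jaccard(a,b): shared neighbours normalized by the union of both
    neighbourhoods; a purely proportional score, blind to absolute degree."""
    union = gamma[a] | gamma[b]
    return len(gamma[a] & gamma[b]) / len(union) if union else 0.0

gamma = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(jaccard(gamma, 1, 2))  # 1/3: one shared neighbour out of the union {1, 2, 3}
```

The normalization is what makes the score volatile on informal graphs: a pair of degree-one vertices sharing their single neighbour scores a perfect 1.0 on the same footing as a pair of hubs with massive shared evidence.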
Consistently with previous research, we find that proportional scores are imprecise; however, this is the first time it is shown that proportional scores can help improve significantly more precise accumulative scores when integrated with them. To explore the impact of making hierarchical assumptions on webgraphs, consider the performance of the basic INF score: INF, a purely proportional score that assumes a hierarchical model of the graph, actually outperforms the best accumulative scores on two of the six webgraphs (see Table 4), and INF is also incomparably better than the equally proportional Jaccard coefficient. This shows that the handicaps of using a proportional approach can be overcome by exploiting the implicit data models found in the graph. Finally, let us compare the performance of INF_LOG with that of INF. As defined, INF_LOG is the hybrid version of the purely proportional score INF, so this comparison is a reliable test of the benefits of turning a proportional score into a hybrid one. In this regard, Table 4 shows that INF_LOG consistently improves the performance of INF, by up to two orders of magnitude in the AUPR measure on some graphs, regardless of INF achieving competitive results by itself.

Applications. Link prediction on large graphs is frequently imprecise, hard to scale, or hard to generalize; these three reasons have constrained its application. SBA are clearly generalizable, as they can be applied to virtually any graph composed of vertices and directed edges, and clearly scalable, as they can be perfectly parallelized to compute graphs of any size. Unfortunately, current SBA are rather imprecise, hardly reaching useful precisions on graphs close to a million vertices (see Figure 2).

Table 5 (caption): percentage of improvement achieved by INF_LOG on WEBND, WEBSB, WEBGL, HUDONG and BAIDU.

In this context, we have shown that by considering inherent topological features (hierarchies) and by combining different approaches (proportional and accumulative into hybrid scores), one can overcome the limitation of imprecision. The results obtained by INF_LOG are a huge improvement over the previous scores (see Table 5 for details). Most importantly, the results show that SBA can reach precisions enabling the reliable discovery of tens of thousands of hyperlinks in a straightforward fashion, without building a model of the graph. From these unprecedented results, many applications can be derived.

Web search engines. Current search engines are composed of a wide variety of interacting metrics, which together produce a complete ranking of web page relevance. The measure of hierarchical similarity between webpages provided by INF_LOG may represent a different sort of evidence that could be used to enrich the ranking of web pages.
From a different perspective, the utility of the INF_LOG score lies in how it characterizes webpages within the webgraph: the validated scores of INF_LOG could be combined with algorithms like PageRank, spreading relevance to webs not directly connected, and also with algorithms that estimate potential neighbours with high reliability.

Web connectivity. The hierarchical properties of the WWW spontaneously emerge at the global level but, as we will see, at the local level things are rather chaotic: a webmaster must find the appropriate webpages to link for a domain among billions of websites, and no human can be aware of every single webpage online. A hyperlink recommender could therefore be useful for webmasters to find relevant web pages to link to. Such a tool, which depending on the score used may require edge directionality (as INF_LOG does), could improve the connectivity coherency of the WWW or of a specific web domain, as well as significantly enrich web directories.

Table 6 (caption): size of the computed webgraphs (WEBND, WEBSB, WEBGL, HUDONG, BAIDU), missing edges evaluated, and time spent evaluating them, ranging from seconds to minutes.

Taxonomy building. Taxonomies are frequently used by online shops and encyclopedias, among others, to organize their content. These taxonomies are often defined by external experts, requiring a continuous and expensive process of manual data mapping (fitting web pages or articles into the taxonomy entities). According to our results, it seems feasible to develop an automatic taxonomy building system, proposing a taxonomy of web pages based on their interrelations, similarly to what was proposed by Clauset et al. Such a taxonomy would benefit from originating in the data itself, making it necessarily relevant for the domain in question, and it would also be easily updated. A taxonomy like this could be used, for example, to optimize the commercial organization of an online shop by considering user navigation paths.

Algorithms implementation. The tests performed for this paper had our algorithms compute billions of edges per processed graph (see Table 6). Computing these edges sequentially, one by one, is clearly impractical, which makes high performance computing (HPC) and parallelism necessary for feasibility. In this part of the work we review the algorithmic design and code parallelization we developed to maximize efficiency, and we consider the different computational contexts in which large scale graph mining problems such as ours can be tackled.
Parallelization. That large graphs need parallel computing reinforces the importance of SBA as extremely efficient parallel algorithms. The main challenge in the implementation of a graph processing algorithm lies in the connectivity of graphs, which easily translates into data dependencies. Dependencies determine execution order constraints among portions of code, and imply synchronization points where one portion of code must wait for another portion to be executed first. In the presence of dependencies, the work flow of some threads is halted to wait for other threads; in essence, dependencies define the bottlenecks of a parallel execution of code and reduce the efficiency of computational resource usage. A related concept within the field of parallel computing is that of embarrassingly parallel problems, a notion that applies to algorithms that can be parallelized without the definition of significant dependencies. Such problems can achieve huge efficiency through parallelization, with almost no idle resources during computation: embarrassingly parallel problems are capable of decreasing computational time almost linearly with the number of computing units available, while in other problems the various threads must endure waiting times.

As said, one of the key features of similarity-based algorithms is that the score of each edge is calculated independently of the rest. This particularity gains huge relevance here, as it allows us to define an embarrassingly parallel problem. Fully testing an algorithm on a graph equals calculating the similarity of all possible ordered pairs of vertices (all possible directed edges); since each similarity is calculated independently, we can evaluate all of them simultaneously and without dependencies, which matters considering the huge number of edges to test (at times in the order of billions). The design of the code parallelization defines the efficiency of the algorithm and, eventually, the size of the graphs we can process.

Our algorithmic design is divided into two parallel sections. The first one calculates the similarity of each possible edge in parallel, storing the results obtained for the edges originating in each vertex separately, in a partial scores data structure. The score of an edge is not computed in one shot, however; instead, evidence is accumulated as we find the paths between vertices, leading from a given source vertex to the target vertices it can reach. An overview of this code can be seen in Algorithm 1. Notice that the iterations of the outermost loop can be computed in parallel without dependencies, with the exception of the storage of results.

Algorithm 1: code skeleton of the similarity evaluation of all edges of a graph.
  Input: a graph G
  Output: link prediction scores for all pairs of vertices, stored in a map structure
  // store the scores as sourceId -> (targetId -> score)
  map<int, map<int, float>> graph_scores
  for each vertex v1 of G do                    // parallel loop
      // store the partial scores of v1 as targetId -> partial score
      map<int, float> partial_scores
      for each neighbor v2 of v1 do
          for each neighbor v3 of v2 do
              if v3 is a new target then
                  // initialize the partial score
                  partial_scores[v3] = initial_score(v1, v2, v3)
              else
                  // update the partial score
                  partial_scores[v3] += partial_score(v1, v2, v3)
              end if
          end for
      end for
      // push the edges computed for v1 back into the graph scores
      graph_scores[v1] = partial_scores
  end for
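The per-vertex independence that makes the outermost loop of the skeleton above embarrassingly parallel can be illustrated outside OpenMP as well. The Python sketch below, with our own naming and common neighbours as the accumulated evidence, runs one outermost-loop iteration per task; the paper's actual implementation is OpenMP over C-like code:

```python
from concurrent.futures import ThreadPoolExecutor

def scores_from(gamma, v1):
    """All common-neighbour partial scores for edges originating in v1:
    the body of one outermost-loop iteration of Algorithm 1."""
    partial = {}
    for v2 in gamma[v1]:                  # first step: neighbours of v1
        for v3 in gamma[v2]:              # second step: their neighbours
            if v3 != v1:
                partial[v3] = partial.get(v3, 0) + 1  # accumulate evidence
    return v1, partial

gamma = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
# Each source vertex is an independent task: no dependencies between
# iterations, so any pool size gives the same result.
with ThreadPoolExecutor(max_workers=4) as pool:
    graph_scores = dict(pool.map(lambda v: scores_from(gamma, v), gamma))
print(graph_scores[1])  # {3: 1, 2: 1, 4: 1}: e.g. path 1-3-4 supports edge (1,4)
```

The only shared state is the final result map, mirroring the "storage of results" exception noted above; everything else is thread-private, which is what allows near-linear scaling.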
Between the first and second parts of the code we perform a reduction, combining the results obtained for all vertices. Since we are interested in evaluating performance, we need to know the number of true positive and false positive predictions achieved at every distinct threshold throughout the graph; this reduction allows us to simplify the second part of the code and define it without dependencies. The second part of the code calculates the performance of the algorithms: given the total number of true positive and false positive predictions found at each distinct similarity value, we calculate the points composing the curves discussed above. After the aggregation, this task is also embarrassingly parallel, as the performance at each threshold point within the curves can be calculated independently of the rest of the thresholds. It is important to parallelize this task because the number of distinct similarity values in large graphs is also large (millions of values). An overview of this second section of the code can be seen in Algorithm 2; notice that its outermost loop can also be parallelized, with the only dependency being the writing of results.

Computational setting. For this paper we computed graphs of up to two million vertices. Graphs of this size are already challenging to process exhaustively, due to the total number of potential edges. Nevertheless, the hyperlink prediction problem targets much larger graphs, and recent research is moving in that direction, as shown by work capable of training a factorization model on a graph of millions of vertices. Regardless of that size, the most interesting graphs to process remain several orders of magnitude larger; the webgraph, for example, has been measured at billions of web pages connected by billions of hyperlinks. For graphs representing this kind of problem, SBA are the only feasible solution so far, and our main concern when targeting them becomes computational resources. For medium-sized graphs such as the ones we use, one can work in a shared memory context, where all computational units have direct access to a centralized memory space.
This approach assumes that all the graph data is stored in a single memory location, an assumption hardly satisfied as graphs grow. Furthermore, even if we manage to store a graph in a single memory location, the number of computing units (cores) available for the parallel execution is physically constrained. To solve this limitation, HPC researchers use the distributed memory setting, where memory is split among several locations; in distributed memory, each memory location is directly accessible only to a subset of the computing units, and data in other locations must be fetched when necessary through communication channels. Using a distributed memory context entails several additional problems: how to distribute the work among the computational resources while achieving balance, how to split the input data among the locations, and how to minimize communications. These problems remain open issues that need to be addressed if we want to process larger graphs. See Figure 3 for a graphical representation of both paradigms.

Algorithm 2: code skeleton of the full graph performance evaluation.
  Input: the list of distinct similarities found in the graph, together with their corresponding true and false positive predictions
  Output: a list of performance rates, one per distinct similarity value
  // store the results obtained for all thresholds
  vector<pair<int, int>> full_results
  for each similarity value s1 of G do          // parallel loop
      int true_pos = 0
      int false_pos = 0
      for each similarity value s2 >= s1 of G do
          true_pos += true_pos(s2)
          false_pos += false_pos(s2)
      end for
      full_results.push_back(true_pos, false_pos)
  end for

Figure 3 (caption): left, structure of a shared memory architecture; right, structure of a distributed memory architecture.

The code used in the tests presented in this paper is parallelized using OpenMP, a shared memory model API. This API provides a set of compiler directives extending Fortran and C-based languages; OpenMP directives define which code is to be parallelized and which data is shared. We chose OpenMP for being a portable, scalable and flexible standard. Within the first section of the code, the parallelization is done at the external loop of Algorithm 1, effectively distributing its iterations among the different threads. This design guarantees that the similarities of edges originating in a given vertex (a full iteration of the outermost loop) are calculated by a single thread, thus avoiding dependencies. The second section of the code is also parallelized at its external loop (Algorithm 2); this design guarantees that each possible threshold point within the curves is calculated by a unique thread, avoiding dependencies.
When parallelizing a loop with OpenMP, one must decide how to distribute its iterations among the team of running threads. Of the different ways of splitting the iterations, the one we found most efficient for our problem is dynamic scheduling: splitting the iterations into chunks of a given size, which get assigned to threads on request. It is key to define the chunk sizes according to the problem size in order to minimize imbalances: if the chunk size is too large, at the end of the computation some of the threads will remain idle for a long time while the rest of the threads finish the last chunks; if the chunks are too small, the constant scheduling of threads that finish their chunks and request more may slow down the whole process. In our case, the larger and denser the graph, the smaller the chunks must be, as the iterations of a large graph are more time consuming, thus increasing the possibilities of imbalance. For our code we found an optimal chunk size, which was used in all tests. Tests were run on the MareNostrum supercomputer provided by the BSC, using a single node with Intel processors; the limit of RAM per node translates into the number of parallel threads that can run at a time. The time spent computing each graph was of up to roughly one hour and a half, as shown in Table 6. These times include the computation of the four scores (INF_LOG included) for all missing edges of each graph; they do not include the graph loading time, the curve building time, or the writing of results.

Distributed memory. With local scores like the ones evaluated here, we can compute graphs of a few million vertices in a shared memory environment. Eventually though, if we are interested in working with larger graphs, distributed memory will be needed even for local methods. Consider, for example, the webgraph defined by the whole internet, with billions of web pages, or the brain connectome graph, composed of billions of neurons. For these kinds of data sets, the only feasible solution nowadays is a distributed memory space. Beyond the space requirement, there is also one of time complexity: hundreds of cores computing in parallel are needed to mine such graphs, while the number of computing cores accessing a single shared memory space is rarely beyond the dozens. To run our methods in a distributed memory environment, while keeping the algorithmic design and parallelization described, we use the OmpSs programming model, which supports OpenMP-like directives adapted to work on clusters of distributed memory. Even though in the OmpSs version the graph data must be communicated among computing entities (as the graph data is distributed among locations), our preliminary results show no relevant overhead added by communication: with our algorithmic design it is easy to predict which edge will be evaluated next, and therefore which graph data will be needed next by each thread (which vertices and their neighbours must be brought to memory).
Thanks to this foreseeability, data can be sent before it is needed, thus avoiding idle threads and the consequent communication overhead. From a computational point of view, this means we can scale almost linearly in distributed memory contexts as well.

Conclusions. We have introduced a novel method that assumes the existence of hierarchical properties to improve the task of hyperlink prediction. Our first conclusion, according to the results shown, is that the task of hyperlink prediction can be significantly improved by the consideration of hierarchical properties: in our tests, INF_LOG, a hierarchical score, outperformed the non-hierarchical scores on six different webgraphs, doubling the AUPR measure in the worst case. The size and variety of the graphs used, and the thoroughness of the evaluation methodology, guarantee the consistency of these results, which align with the previous work discussed above, providing further evidence of the importance of hierarchies in defining the topology of webgraphs. From a practical point of view, our main conclusion is the leap in performance obtained: we have shown that we can predict thousands of hyperlinks of a webgraph with almost perfect precision in a scalable manner, as seen in Figure 2, where INF_LOG achieves precisions close to certainty for its most confident predictions. According to these results, when hierarchies are considered, hyperlink prediction becomes a feasible problem, which immediately enables multiple interesting cases of application, including but not limited to: increasing and improving the connectivity of web pages, optimizing the navigability of web sites, tuning search engine results through web similarity analysis, and refining product recommendation through item page linkage.

From the general WWW perspective, this work continues the analysis of the relation between hierarchies and the WWW, but as a step in a different direction: contributions have typically focused on defining generative models of real networks. Generative models and our problem are strongly related, since generative models must also produce new edges within a given graph. To illustrate this, let us consider the FFM briefly described above. The link adding process of the FFM is in fact a particular case of DED (see Figure 1), which alone indicates that INF_LOG and the FFM are based on close hierarchical principles. Nevertheless, there are huge differences between both: the FFM does not use the vertices pointing to a vertex to determine new links, like INF_LOG does through IND; INF_LOG calculates and rates edges based on a similarity score, while the FFM randomly accepts or rejects edges; one seeks faithfulness at the vertex level, while the other seeks topological coherency at the graph level. Finally, the FFM
explores edges far away from the ambassador vertex, even though through various iterations, while INF_LOG, thanks to its computational simplicity, performs a one-step exploration (it is a local score); building a quasi-local version of INF_LOG is, however, one of our main lines of future work (see below). To sum up, the FFM and INF_LOG share a set of precepts, but each models and uses them for a different purpose: the FFM uses them to define a large, coherent topology (a model at graph scale), while INF_LOG uses them to define a high-confidence, exhaustive edge likelihood score (applicable at vertex level). In this regard, the good results achieved by both methods in their respective fields partly support the assumptions of one another.

In this paper we also reached an interesting conclusion for the link prediction field in general. In the analysis of results, we defined a simple categorization of SBA based on how they consider evidence: proportionally or accumulatively. The score we propose, INF_LOG, is actually a hybrid combining both approaches. While results so far indicated that accumulative solutions were more competitive than proportional ones, the good results of INF_LOG open the door to the consideration of hybrid solutions. Their relevance for the field motivates the integration of proportional features into the current accumulative scores, with great potential benefits.

From the code implementation and parallelization, we derive conclusions for the graph mining and HPC communities. As discussed, SBA can be defined as parallel processes without dependencies; thus, any amount of resources made available to SBA can be used efficiently. Significantly, this feature is consistent in shared memory and distributed memory settings through the use of OpenMP and OmpSs, which opens the door to the computation of graphs of arbitrary size.

Future work. Our main goal from here is to develop more precise prediction scores. Hybrid SBA are a promising family of methods that needs to be thoroughly studied. The good performance of INF_LOG also motivates the design of scores based on other underlying models, which may achieve even more precise predictions: INF_LOG assumes a hierarchical structure, and thus works well for domains that satisfy that model to a degree; scores that assume and compute different underlying models (e.g., communities) should also be explored. Our final line of future work regards larger graphs. Our current implementation allows us to compute arbitrarily large graphs with OmpSs, and thanks to it we intend to develop a hyperlink recommender. For this purpose, the questions arising in the distributed memory context must be considered: how to split the graph data among the different physical locations, which data is allocated
location must data transfered code parallelized given new restrictions solving issues intend conclude argument starting work link prediction field bright future instead one challenging present acknowledgements work partially supported joint study agreement deep learning center agreement spanish government programa severo ochoa spanish ministry science technology project generalitat catalunya contracts references sisay fissaha adafre maarten rijke discovering missing links wikipedia proceedings international workshop link discovery acm lada adamic eytan adar friends neighbors web social networks charu aggarwal yan xie philip framework dynamic link prediction heterogeneous networks statistical analysis data mining issn doi albert hawoong jeong internet diameter web nature openmp arb openmp application program interface http javier bueno productive cluster programming ompss parallel processing springer aaron clauset cristopher moore mark newman hierarchical structure prediction missing links networks nature diane cook structural mining molecular biology data engineering medicine biology magazine ieee mark crovella azer bestavros world wide web traffic evidence possible causes networking transactions jesse davis mark goadrich relationship precisionrecall roc curves proceedings international conference machine learning acm stephen dill web acm transactions internet technology toit enming dong link prediction networks chaos solitons fractals alejandro duran ompss proposal programming heterogeneous architectures parallel processing letters ian foster designing building parallel programs addison wesley publishing company dario ulises link prediction large directed graphs exploiting hierarchical properties parallel workshop knowledge discovery data mining meets linked open data extended semantic web conference dario evaluating link prediction large graphs proceedings international conference catalan association artificial intelligence lise getoor christopher diehl link 
mining survey acm sigkdd explorations newsletter lise getoor ben taskar introduction statistical relational learning mit press roger marta missing spurious interactions reconstruction complex networks proceedings national academy sciences lawrence holder diane cook data encyclopedia data warehousing mining akihiro inokuchi takashi washio hiroshi motoda algorithm mining frequent substructures graph data principles data mining knowledge discovery springer brian karrer mark newman stochastic blockmodels community structure networks physical review leo katz new status index derived sociometric analysis psychometrika ashraf khalil yong liu experiments pagerank computation indiana university department computer science url http indiana pdf markus denny denny vrandecic max wikipedia semantic missing links proceedings wikimania citeseer kunegis konect koblenz network collection proceedings international conference world wide web companion international world wide web conferences steering committee jure leskovec jon kleinberg christos faloutsos graphs time densification laws shrinking diameters possible explanations proceedings eleventh acm sigkdd international conference knowledge discovery data mining acm david jon kleinberg problem social networks journal american society information science technology ryan lichtenwalter jake lussier nitesh chawla new perspectives methods link prediction proceedings acm sigkdd international conference knowledge discovery data mining acm weiping liu linyuan link prediction based local random walk epl europhysics letters linyuan jin tao zhou similarity index based local paths link prediction complex networks physical review linyuan tao zhou link prediction complex networks survey physica statistical mechanics applications grzegorz malewicz pregel system graph processing proceedings acm sigmod international conference management data acm robert meusel graph structure web revisited international world wide web conference seoul korea apr kurt 
miller michael jordan thomas griffiths nonparametric latent feature models link prediction advances neural information processing systems bengio curran associates david milne ian witten learning link wikipedia proceedings acm conference information knowledge management acm tsuyoshi murata sakiko moriyasu link prediction based structural properties online social networks new generation computing tsuyoshi murata sakiko moriyasu link prediction social networks based weighted proximity measures web intelligence international conference ieee mark newman clustering preferential attachment growing networks physical review maximilian nickel volker tresp kriegel model collective learning data proceedings international conference machine learning lawrence page pagerank citation ranking bringing order romualdo alexei alessandro vespignani dynamical correlation properties internet physical review letters ravasz hierarchical organization complex networks physical review sid redner networks teasing missing links nature han hee song scalable proximity estimation link prediction online social networks proceedings acm sigcomm conference internet measurement conference acm ilya sutskever ruslan salakhutdinov joshua tenenbaum modelling relational data using bayesian clustered tensor nips hanghang tong christos faloutsos yehuda koren fast directionaware proximity graph mining proceedings acm sigkdd international conference knowledge discovery data mining acm koji ueno toyotaro suzumura highly scalable graph search benchmark proceedings international symposium parallel distributed computing acm yang yang ryan lichtenwalter nitesh chawla evaluating link prediction methods knowledge information systems hyokun yun nomad stochastic algorithm asynchronous decentralized matrix completion arxiv preprint tao zhou linyuan zhang predicting missing links via local information european physical journal
theorem blichfeldt benjamin jun june abstract let permutation group objects let number fixed points let expository note give proof theorem blichfeldt asserts order divides also discuss sharpness bound keywords blichfeldt theorem number fixed points permutation character ams classification let consider permutation group finite set consisting elements lagrange theorem applied symmetric group follows order divisor order strengthen divisibility relation denote number fixed points subgroup moreover let hgi every maillet proved following see also cameron book theorem maillet let divides using newly established character theory finite groups blichfeldt showed suffices consider cyclic subgroups maillet theorem rediscovered kiyota theorem blichfeldt let divides convenience reader present elegant argument found theorem proof blichfeldt theorem since permutation character function sending generalized character difference ordinary complex characters conclude multiple regular character particular divides seems elementary proof avoiding character theory blichfeldt theorem published far aim note provide proof proof blichfeldt theorem suffices show fachbereich mathematik kaiserslautern kaiserslautern germany sambale since summands vanish expanding product see enough prove obviously arguing induction may assume let orbits let stabilizers conjugate particular recall orbit stabilizer theorem gives implies byproduct proof observe number orbits formula sometimes inaccurately called burnside lemma see one orbit group called transitive case rank number orbits stabilizer known blichfeldt theorem improved considering fixed point numbers elements prime power order seen follows let sylow every prime divisor since prime power order theorem implies divides every since orders pairwise coprime also divisor hand suffice take fixed point numbers elements prime order example given dihedral group order every involution moves exactly four letters independently chillag obtained another generalization 
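Blichfeldt's divisibility bound is easy to check mechanically on small permutation groups. The sketch below (pure Python; the helper names are mine, not from the note) generates a group as the closure of a set of generator permutations, collects the distinct fixed-point numbers f of its non-identity elements, and verifies that the group order divides the product of (n − f) over those values. Both test groups happen to be sharp, so the bound holds with equality.

```python
def group_closure(gens):
    """Close a set of permutations (tuples g with g[i] = image of i)
    under composition; for a finite group this yields the whole group."""
    n = len(gens[0])
    group = {tuple(range(n))} | set(gens)
    changed = True
    while changed:
        changed = False
        for g in list(group):
            for h in list(group):
                p = tuple(g[h[i]] for i in range(n))  # composition g∘h
                if p not in group:
                    group.add(p)
                    changed = True
    return group

def fixed_points(g):
    return sum(1 for i, x in enumerate(g) if i == x)

def blichfeldt_product(group):
    """Product of (n - f) over the distinct fixed-point numbers f
    of the non-identity elements, as in Blichfeldt's theorem."""
    n = len(next(iter(group)))
    identity = tuple(range(n))
    fs = {fixed_points(g) for g in group if g != identity}
    prod = 1
    for f in fs:
        prod *= (n - f)
    return prod

# S3 on 3 points: transpositions fix 1 point, 3-cycles fix 0,
# so the product is (3-1)(3-0) = 6 = |S3| -- a sharp group.
s3 = group_closure([(1, 0, 2), (1, 2, 0)])
assert blichfeldt_product(s3) % len(s3) == 0
```

The dihedral group of the square acting on its four vertices, mentioned above as a sharp permutation group, gives fixed-point numbers {0, 2} and product (4 − 0)(4 − 2) = 8, again equal to the group order.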
theorem assumed generalized character replaced degree dual version conjugacy classes instead characters appeared chillag numerous articles addressed question equality blichfeldt theorem easy examples given regular permutation groups transitive groups whose order coincides degree fact cayley theorem every finite group regular permutation group acting multiplication wider class examples consists sharply permutation groups every pair tuples exists unique setting see element fixes less points hence hand fixed precisely choices follows therefore equality theorem note sharply regular thing interesting family sharply groups comes affine groups aff fpm fpm fpm fpm fpm field elements generally sharply groups frobenius groups abelian kernel definition frobenius group transitive satisfies kernel subset fixed point free elements together identity frobenius theorem asserts normal subgroup sharply groups proved elementary fashion see exercise far proof full claim known dihedral group order illustrates every frobenius group sharply typical example sharply group natural action set onedimensional subspaces leave claim exercise interested reader sharply groups eventually classified zassenhaus using near fields see passman book theorems hand many sharply groups large fact classical theorem jordan supplemented mathieu theorem jordan mathieu sharply permutation groups given follows symmetric group degree alternating group degree iii mathieu group degree mathieu group degree remark mathieu groups degree smallest members sporadic simple groups accordance examples permutation groups equality theorem called sharp permutation groups coined apart ones already seen examples instance symmetry group square acting four vertices order dihedral group fixed point numbers recently brozovic gave description primitive sharp permutation groups permutation group primitive transitive stabilizer maximal subgroup complete classification sharp permutation groups widely open finally use opportunity mention 
related result bochert divisibility relation replaced inequality usual denotes largest integer less equal theorem bochert primitive unless symmetric group alternating group degree acknowledgment work supported german research foundation project daimler benz foundation project references blichfeldt theorem concerning invariants linear homogeneous groups applications trans amer math soc bochert ueber die zahl der verschiedenen werthe die eine function gegebener buchstaben durch vertauschung derselben erlangen kann math ann brozovic classification primitive sharp permutation groups type comm algebra cameron permutation groups london mathematical society student texts vol cambridge university press cambridge cameron kiyota sharp characters finite groups algebra chillag character values finite groups eigenvalues nonnegative integer matrices proc amer math soc chillag congruence blichfeldt concerning order finite groups proc amer math soc ito kiyota sharp permutation groups math soc japan jordan sur limite des groupes bull soc math france kiyota inequality finite permutation groups combin theory ser maillet sur quelques des groupes substitutions ordre ann fac sci toulouse sci math sci phys mathieu sur des fonctions plusieurs sur les former sur les substitutions qui les laissent invariables math pures appl neumann lemma burnside math sci passman permutation groups dover publications mineola revised reprint original zassenhaus kennzeichnung endlicher linearer gruppen als permutationsgruppen abh math sem univ hamburg zassenhaus endliche abh math sem univ hamburg
improved approximation algorithms set problems mar zeev nutov open university israel nutov abstract graph paths every pair nodes subset nodes graph kconnected set subgraph induced set every least neighbors set short mdominating set problem goal find minimum weight graph consider case obtain following approximation ratios unit obtain ratio improving ratio general graphs obtain first approximation ratio introduction graph internally disjoint paths every pair nodes subset nodes graph set subgraph induced set every least neighbors set set short graph graph nodes located euclidean plane edge nodes iff euclidean distance consider following problem general graphs graphs set input graph node weights integers output minimum weight case set problem let denote best known ratio set currently graphs general graphs maximum degree input graph problem studied extensively recent papers zhang zhou fukunaga obtained ratio problem graphs graphs zhang also obtained improved ratio related paper zhang obtained ratio general graphs unit weights mentionning approximation algorithm arbitrary weights known let say graph designated set terminals root node contains every ratios expressed terms best ratio following known problem rooted subset input graph set terminals root node integer output minimum subgraph let denote best known ratios rooted subset kconnectivity problem respectively currently graphs general graphs also correction vakilian algorithm analysis see also main results summarized following theorem theorem suppose set problem admits ratio rooted subset problem admits ratios admits ratios general graphs graphs furthermore graphs admits ratio algorithm uses main ideas well partial results papers zhang fukunaga let say graph contains paths every pair nodes papers consider graphs reduce problem subset problem given graph edge costs subset terminals find minimum cost subgraph problem admits trivial ratio best known ratios see also fact ratios derived applying times algorithm rooted 
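The two defining conditions of the problem — m-domination of the chosen set and connectivity of the subgraph it induces — are easy to state as explicit checks. The sketch below (pure Python; function names are mine, not from the paper) verifies the m-domination condition, i.e. every vertex outside S has at least m neighbors in S, and, as a weak stand-in for k-connectivity in the case k = 1, plain connectivity of the induced subgraph; a full k-connectivity test would need Menger-type disjoint-path or max-flow computations, omitted here.

```python
from collections import deque

def is_m_dominating(adj, S, m):
    """Every vertex outside S must have at least m neighbors inside S.
    adj: dict mapping each vertex to its set of neighbors."""
    S = set(S)
    return all(len(adj[v] & S) >= m for v in adj if v not in S)

def is_connected_induced(adj, S):
    """BFS restricted to the subgraph induced by S (the k = 1 case)."""
    S = set(S)
    if not S:
        return True
    start = next(iter(S))
    seen = {start}
    q = deque([start])
    while q:
        v = q.popleft()
        for w in adj[v] & S:
            if w not in seen:
                seen.add(w)
                q.append(w)
    return seen == S

# A 5-cycle on vertices 0..4 as a running example.
adj = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
assert is_m_dominating(adj, {0, 2, 3}, 1)
assert not is_m_dominating(adj, {0}, 2)
```

On the 5-cycle, {0, 2, 3} is 1-dominating but induces a disconnected subgraph, illustrating that domination and connectivity are genuinely separate constraints.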
subset problem main reason improvement ratios reduction easier rooted subset problem small values present refined reduction unit disc graphs performance algorithm coincide since subset rooted subset admit ratio proof theorem arbitrary graph let denote maximum number internally disjoint say namely every let denote set neigbors proof following known statement found see also part lemma relies maders undirected critical cycle theorem lemma let let graph made adding set new edges furthermore forest suppose note edge set lemma computed polynomial time starting clique repeatedly removing edge remains reason case easier given following lemma lemma graph set proof known characterization graphs sufficient show holds subpartition edge since otherwise say since set result follows finally need following known fact lemma given pair nodes graph problem finding minimum weight node set pst pst internallydisjoint admits algorithm arbitrary show following algorithm achieves desired approximation ratio algorithm compute set construct graph adding new node connected set nodes set new edges compute node set subgraph induced let let forest new edges lemma graph every find node set puv puv let puv return prove solution computed feasible lemma computed solution feasible namely end algorithm proof since set superset thus node set returned algorithm set remains prove set first prove graph computed step menger theorem iff let holds since since set thus every node least neighbors cases holds hence graph implies graph thus furthermore set since applying lemma graph get graphs required lemma algorithm ratio proof let optimal solution clearly claim note feasible solution problem considered step algorithm solution reason set feasible solution problem considered step set puv computed solution thus puv finally note thus lemma follows concludes proof case general general graphs let consider unit disc graphs use following result theorem zhang zhou graph spanning subgraph maximum degree note graph 
minimum degree thus theorem implies searching subgraph unit disc graph one convert invoking ratio factor case case specifically given node weights define cuv subgraph maximum degree minimum degree since since may use conversion steps algorithm specifically step concludes proof case general graphs case use result mader graph least nodes degree step algorithm guess node edges incident edgeminimal optimal solution remove edges incident run step omitting steps lemma graph already references auletta dinitz nutov parente algorithm finding optimum spanning subgraph algorithms dinitz nutov algorithm finding optimum spanning subgraphs algorithms fleischer jain williamson iterative rounding algorithms vertex connectivity problems computer system sciences approximating domination general graphs analco pages fukunaga algorithms highly connected multidominating sets unit disk graphs fukunaga spider covers network activation problem soda pages kortsarz nutov approximating node connectivity problems via set covers algorithmica laekhanukit improved approximation algorithm subset icalp pages mader ecken vom grad minimalen graphen archive der mathematik mader vertices degree minimally graphs digraphs combinatorics paul eighty nutov approximating minimum cost connectivity problems via uncrossable bifamilies acm transactions algorithms nutov approximating subset problems discrete algorithms nutov improved approximation algorithms connectivity augmentation problems csr pages vakilian survivable network design problems master thesis university illinois zhang zhou approximation algorithm minimum weight virtual backbone unit disk graphs transactions networking appear zhang zhou approximation algorithm connected dominating set wireless networks infocom pages
linking cutting spanning trees russoa andreia teixeiraa alexandre franciscoa department computer science engineering instituto superior universidade lisboa jan abstract consider problem uniformly generating spanning tree connected undirected graph process useful compute statistics namely phylogenetic trees describe markov chain producing trees cycle graphs prove approach outperforms existing algorithms general graphs obtain analytical bounds experimental results show chain still converges quickly yields algorithm also due use proper fast data structures bound mixing time chain describe coupling analyse cycle graphs simulate graphs keywords spanning tree uniform generation markov chain mixing time link cut tree pacs msc introduction spanning tree undirected connected graph tree connected set edges without cycles spans every vertex every vertex occurs edge figure shows example vertexes graph represented circles set vertexes denoted edges represented dashed lines set edges represented edges spanning tree represented thick grey lines also use mean respectively size set size set number vertexes number edges case expression interpreted figure spanning tree graph corresponding author preprint submitted january set instead number avoid ambiguity writing respectively aim compute one spanning trees uniformly among possible spanning trees number trees may vary depending underlying graph borchardt cayley aigner chapter computing tree uniformly challenging several reasons number trees usually exponential structure resulting trees largely heterogeneous underlying graphs change contributions paper following present new algorithm given graph generates spanning tree uniformly random algorithm uses tree data structure compute randomizing operations log amortized time per operation hence overall algorithm takes log time obtain uniform spanning tree mixing time markov chain dependent theorem summarizes result propose coupling bound mixing time analysis coupling yields bound cycle 
graphs theorem graphs consists simple cycles connect bridges articulation points theorem also simulate procedure experimentally obtain bounds graphs tree data structure also key process section shows experimental results including classes graphs structure paper follows section introduce problem explain subtle nature section explain approach point using link cut tree data structure much faster repeating dfs searches section thoroughly justify results proving underlying markov chain necessary properties providing experimental results algorithm section describe related work concerning random spanning trees link cut trees mixing time markov chains section present conclusions challenge start describing intuitive process generating spanning trees obtain uniform distribution therefore produces trees higher probability others serves illustrate problem harder may seem glance moreover explain process biased using counting argument simple procedure build consists using data structure galler fisher guarantee contain cycle note structures strictly incremental meaning used detect cycles used remove edge cycle therefore possible action discard edge creates cycle let analyse concrete example resulting distribution spanning trees shall show distribution uniform first generate permutation process edges order edge produce cycle added edges would otherwise produce cycles discarded procedure continues next edge permutation consider complete graph vertexes focus probability generating star graph centered vertex beled figure illustrates star graph graph edges hence permutations produce star graph one permutation sary edges selected edge appears general edges must occur one permutation generates star graph moved figure star graph centhe right locations know sequences tered generate star graph reasoning applied moved right total counted sequences generate star graph centered sequences possible permute vertexes amongst hence multiplying previous count total counted sequences generate star 
graph therefore total probability obtaining star graph according cayley formula probability obtain star graph centered hence many sequences generating star graph centered next section bias discarding edge potential cycle necessarily edge creates main idea generate uniform spanning tree start generating arbitrary spanning tree one way obtain tree compute depth search case necessary time general wish mixing time chain much smaller specially dense graphs initial tree generated subsequent trees obtained randomizing process randomize repeat next process several times choose edge uniformly random consider set already belongs process stops otherwise contains cycle complete process choose edge uniformly remove hence step set transformed set illustration process shown figure edge swapping process adequately modeled markov chain states corresponds spanning trees transitions among states correspond process described section study ergodic properties chain let focus data structures used compute transition procedure simple solution problem would compute depth search dfs starting terminating whenever reached would allow identify time recall contains exactly elements edge could easily removed besides elements would also need represented adjacency list data structure purposes approach computation central algorithm complexity becomes factor overall performance hence explain perform operation log amortized time link cut tree data structure link cut tree lct data structure used represent forest rooted trees representation dynamic edges removed added whenever edge removed original tree cut two adding edge two trees links figure edge swap procedure inthem structure proposed sleator tarjan serting edge iniboth link cut operations computed log amortized time tial tree generates cycle edge removed algorithm edge swapping process procedure edgeswap lct representation current spanning tree chosen uniformly log time makes root reroot access chosen uniformly select cut link obtains 
representation path obtain edge end end procedure lct represent trees therefore edge swap procedure must cut edge afterwards insert edge link operation randomizing process needs identify select lct also compute process log amortized time lct works partitioning represented tree disjoint paths path stored auxiliary data structure edges accessed log amortized time compute process force path become disjoint path means completely stored one auxiliar data structure hence possible select edge moreover size also computed exact process force auxiliar structure make root represented tree access algorithm shows edge swapping procedure inspection process computed log amortized time bound crucial main result theorem graph spanning tree spanning tree chosen uniformly spanning trees log time mixing time ergodic edge swapping markov chain section prove process described indeed ergodic markov chain thus establishing result section pointing detail algorithm comment line point property must checked log time achieved time keeping array booleans indexed moreover also achieved log amortized time using lct data structure essentially delaying determined verifying details ergodic analysis section analyse markov chain induced edge swapping process clear process markov property probability reaching state depends previous state words next spanning tree depends current tree prove procedure correct must show stationary distribution uniform states let establish stationary distribution exists note given graph number spanning trees also precisely complete graphs cayley formula yields spanning trees value upper bound graphs spanning trees certain graph also spanning trees complete graph number vertexes therefore chain show irreducible aperiodic follows ergodic mitzenmacher upfal corollary therefore stationary distribution mitzenmacher upfal theorem chain aperiodic may occur transitions underlying state change transitions occur already therefore probability least edges spanning tree establish chain 
irreducible enough show pair states probability path first note probability transition chain least chosen elements chosen contains edges obtain path let represent respective trees consider following cases use transition otherwise possible choose note set equality follows assumption belongs last property note existed contradiction tree cycle mentioned probability transition least step resulting tree necessarily closer tree precisely necessarily however set smaller original size decreases edge exists second set therefore process iterated resulting set empty therefore resulting tree coincides maximal size size value occurs share edges multiplying probabilities process transforming obtain total probability least stationary distribution guaranteed exist show coincides uniform distribution proving chain time reversible mitzenmacher upfal theorem prove pair states exists transition probability exists necessarily transition probability transition exists means edges belongs cycle contained hence also means tree obtained tree adding edge removing edge words process figure similar top bottom process valid transition chain cycle transitions cycle contained obtain result observing transition factor comes choice factor choice transition factor comes choice factor choice hence established algorithm propose correctly generates spanning trees uniformly provided sample stationary distribution hence need determine mixing time chain number edge swap operations need performed initial tree distribution resulting trees close enough stationary distribution analyzing mixing time chain point possible use faster version chain choosing uniformly instead makes chain faster proving aperiodic trickier chain state prove state enough show values follow fact greatest common divisor case use time reverse property following deduction case observe cycle must contain least edges obtain insert remove move state state inserting removing finally move back inserting removing hence case coupling section 
focus bounding mixing time obtain general analytical bounds existing analysis techniques couplings levin peres mitzenmacher upfal strong stopping times levin peres canonical paths sinclair coupling technique yielded bound cycle graphs moreover simulation resulting coupling converges ladder graphs diving reasoning section need understanding cycles generated process consider closed walk sequence vertexes starting ending vertex two consecutive vertexes adjacent case cycles consider simple sense consist set edges closed walk formed traverses edges cycle moreover vertex repetitions allowed except vertex repeated end formally stated cycles occur randomizing process even regular cordless cycle graph cycle two vertices cycle connected edge belong cycle cycles produce also property otherwise chord existed would form cycle tree contradiction fact spanning tree graph alternatively set edges pair vertexes exactly one path linking coupling association two copies markov chain case edge swapping chain goal coupling make two chains meet fast possible obtain small value point say chains coalesced two chains may share information cooperate towards goal however analysed isolation chain must indistinguishable original chain obtaining high probability implies time chain well mixed precise statements claims given section use random variable represent state chain time variable represents state second chain consider chain state chain state one step chain transition state chain transition state set contains edges exclusive likewise set contains edges exclusive number edges provides distance measures far apart two states refer distance edge distance coupling extended states farther apart using path coupling technique bubley dyer use path coupling technique alter behavior chain general determined previous element path denote edge gets added edge gets removed corresponding cycle case cycle exists likewise represents edge inserted edge gets removed corresponding cycle case cycle exists edge 
chosen uniformly random chosen uniformly random edges obtained trying mimic chain still exhibiting behavior sense information let analyse means corresponding trees also equal case uses transition inserting set removing set edges exist distinct also need following sets set represents edges common set represents edges exclusive cycle point view confused represents edge exclusive figure schematic representation relations tree point view likewise represents edges exclusive also consider cycle cycle contained necessarily contains following lemma describes precise structure sets lemma either therefore form simple paths following properties hold notice particular means case partition schematic representation lemma shown figure several cases described aside lucky cases usually choose tries copy likewise possible would like set possible must choose ideally would choose must extra careful process avoid loosing behavior maintain behaviour must sometimes choose since provides information type edges use chosen uniformly select edges also twist choice makes coupling non markovian verify conditions equations choose often would otherwise permissible keeping track determined obtained deterministically example initial selection possible general might figure case determined changes case want take advantage underlying randomness therefore keep track random processes occur exact information store set edges moreover set contains edges equally likely information used set however action information must purged illustrate possible cases use figures edges drawn double lines represent generic path may contain several edges none precise cases following chain loops also loops therefore set change set set case chains coalesce swap states see figure set set case chains coalesce see figure chains coalesce edges longer exist set longer relevant figure case set cases depend whether start simpler establishes basic situations use bernoulli random variables balance probabilities whenever possible 
reduce cases considered possible present corresponding new situation following situations set case chains coalesce see figure set case chains coalesce fact exclusive edges remain unchanged see figures set remains equal likewise remains equal see figure otherwise set assign see figure iii select uniformly set see figure case set alternative considered next case figure case case case figure case case case figure case case case figure case case case true figure case case case set case shown figure case distance coupled states increases therefore include new state edge edge edge set contain edges provide alternatives case set choose higher probability therefore use bernoulli random variable success probability follows lemma prove properly balances necessary probabilities note expression yields coherent following cases yields true use choices following situations possible reduce case yields true figure case fails fails set see figure new case occurs fails situation set see figure reduce case yields true set see figures fails new situation set chains preserve distance see figure alternative considered next case iii fails new situation set distance increases see figure set use case figure otherwise use case figure following situations use case set see figure chains coalesce use case set see figures figure case figure case figure case true iii use new bernoulli random variable success probability follows lemma prove properly balances necessary probabilities note expression yields becomes coherent returns false use choices case fails considered next case successful new situation set chosen uniformly see figure fails use another bernoulli random variable success probability follows lemma prove properly balances necessary probabilities case yields true use case set see figure otherwise use case figure use case figure notice case applies thus solving situation particular case case shown figure may even case disjoint case drawn formally coupling markovian equations hold 
coupling pair chains chain represents original chain establish vital insight coupling structure start studying markovian lemma process described markovian coupling proof coupling equation alter behavior chain hence main part proof focus equation first let prove edge probability possibilities following occurs case may occurs case case case otherwise cases therefore occurs cases decisive condition choice therefore occurs case case focusing prove bernoulli random variables well expression denominators values analysis need well cycle must contain least edges therefore hence guarantees denominator argument proves thus implying expressions positive also establish hypothesis case guarantees therefore analysis seen analysis therefore denominators moreover also need prove general moreover hypothesis case therefore obtained removing sides implies therefore thus establishing last denominator also let establish note expression expressions parenthesis second property use new expression simplify deduction straightforward using equality obtained removing left side properties establish desired result analysis established analysis analysis also established note case also assumes hypothesis moreover analysis also established implies therefore last denominator also let also establish second property instead prove expressions parenthesis use following deduction equivalent inequalities establish last inequality part hypothesis case let focus edge wish establish analyse edge according following cases cycles equal involves cases occurs case determined fact therefore occurs case determined fact case therefore cycles size case possibilities following occurs case case determined fact therefore note according lemma hypothesis case never occurs occurs case case determined fact sets therefore occurs case case determined fact moreover sets uniformly selected following deduction use fact events independent implies involves case cases following occurs case true case occurrs make following 
deduction uses fact events independent success probability true true occurs case true case determined fact sets make following deduction uses fact events independent success probability true true occurs case also cases false following deduction uses event independence fact cases disjoint events succes probability false false false false false false false false false true true true concerns case cases following occurs case case true use following deduction true true false true occurs case case true make following deduction true true occurs case false following deduction false false false false false lemma process described coupling proof context markovian coupling analyse transition given information case use less information assume random variable provides know moreover otherwise chain makes move provides information let consider cases chooses nothing changes cases happen used therefore focus attention except trivial cases must hence instead alter condition otherwise note reasonable process established lemma case partition therefore also partition means loosing part process dividing cases process also reason even cases considered lemma must changed substitute original cases also substitute case cases remain unaltered except case previous deductions still apply exemplify deduction changes consider situation remaining situations use general argument hence precise assumptions correct according assumption general argument follows derivation whenever markovian coupling would produce obtain probability moreover every edge produced markovian coupling obtain another probability totals desired also occurs cases derivations become cumbersome finally argue property set initialised contain edge coupling proceeds chosen represent edges chosen simply restricted change precisely case chose case chose case chose obtained general bounds coupling presented may even bounds exponential even markov chain polynomial mixing time fact kumar ramesh proved case markovian couplings chain 
jerrum sinclair note according kumar ramesh coupling present considered markovian hence result applies type coupling using albeit considering chains immediate indeed exist polynomial markovian couplings chain presented cycle graphs class graphs establish polynomial bounds see figure theorem cycle graph mixing time edge swap chain normal version chain fast version proof two trees maximum distance edge hence coupling applies directly fast version case occur chosen hence cases might apply cases case chains preserve distance last case distance reduced hence corresponds probability case step coupling independent means use previous result markov inequality obtain use probability coupling lemma obtain variation distance solving following equation slow version chain case applies time choices takes steps standard chain behave fast chain therefore time result stark contrast alternative algorithms random walk wilson see section require time levin peres recent algorithms also least case moreover graph connect set cycles connected bridges articulation points also establish similar result figure shows one graph figure cycles connected bridges articulation points theorem graph consists simple cycles connect bridges articulation points size smallest cycle mixing time fast edge swap chain following log log mixing time slow version obtained using instead previous expression proof obtain result use path coupling argument two chains distance assume edge occurs largest cycle general edges inserted deleted alter situation hence term however probability chain inserts edge creates cycle diference occurs case probability chains coalesce hence applying path coupling mixing time must verify following equation slow version chain choose correct edge probability instead experimental results convergence testing looking performance algorithm started testing convergence edge swap chain estimate variation distance varying number iterations results shown figures describe structure consider example 
figure structure following bottom left plot shows graph properties number vertexes axis number edges axis dense case graph vertexes edges moreover graph vertexes edges graph indexes used remaining plots top left plot show number iterations chain axis estimated variation distance axis graphs top right plot similar top left axis contains number iterations divided besides data plot also show plot reference bottom right plot top right plot using logarithmic scale axis avoid plots becoming excessively dense plot points experimental values instead plot one point however lines pass experimental points even explicit variation distance two distributions countable state space given real value however size quickly becomes larger compute instead compute simpler variation distance reduced set spanning trees set integers correspond edge distance section generated tree random spanning tree precisely generate random trees using random walk algorithm described section trees compute simpler distance stationary distribution distribution obtained computing steps edge swapping chain obtain start initial tree execute chain figure ladder graph figure cycle graph times process repeated several times obtain estimates corresponding probabilities keep two sets estimates stop kmt moreover estimate values use criteria estimate case trees generated random walk algorithm value obtained maximum value obtained trees generated dense graphs sparse graphs graphs sparse graphs ladder graphs illustration graphs shown figure cycle graphs consist single cycle shown figure dense graphs actually complete graphs also generated dense graphs labelled bik consisted two complete graphs connected two edges also generated graphs based duplication model dmp let undirected unweighted graph given partial duplication model builds graph partial duplication follows chung start time time perform duplication step uniformly select random vertex add new vertex edge sparse figure estimation variation distance function 
number iterations sparse graphs see section details neighbor add edge probability values end dmp namely figures correspond choices graphs show convergence markov chain moreover seems reasonable bound still results entirely binding one hand estimation variation distance groups several spanning trees distance means within group distribution might uniform even global statistics good actual variation distance may larger convergence might slower hand chose exponent experimentally trying force data graphs converge point actual value may smaller larger coupling simulation mentioned obtained general bounds coupling presented fact experimental simulation coupling converge classes graphs obtained cycle figure estimation variation distance function number iterations cycle graphs see section details dense figure estimation variation distance function number iterations dense graphs see section details bik figure estimation variation distance function number iterations bik graphs see section details figure estimation variation distance function number iterations dmp graphs see section details figure estimation variation distance function number iterations dmp graphs see section details figure estimation variation distance function number iterations dmp graphs see section details experimental convergence cycle graph expected theorem ladder graphs remaining graphs used optimistic version coupling always assumes fails assumptions cases increase distance states eliminated coupling always converges note approach yield sound coupling practice procedure obtained good experimental variation distance moreover variation distance estimation tests simpler distance actual experimental variation distance obtained generating several experimental trees average possible tree obtained times simulation path coupling proceeds generating path steps essentially selecting two trees distance path obtained computing steps fast chain recall implementation simulations use fast version chain simulation 
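The transition these simulations execute — insert an edge absent from the current tree and delete an edge of the cycle this closes — can be sketched as follows. This is a sketch under assumptions: edges are represented as frozensets, the cycle is recovered by a plain BFS rather than the link-cut-tree machinery of the actual implementation, and the weighted selection (only suggested in the conclusions) reduces to the uniform chain when all weights are one.

```python
import random
from collections import deque

def cycle_edges(tree, e):
    """Tree edges on the unique cycle closed when the non-tree edge
    e = {u, v} is added: the u-v path in the tree, found by BFS."""
    u, v = tuple(e)
    adj = {}
    for x, y in map(tuple, tree):
        adj.setdefault(x, []).append(y)
        adj.setdefault(y, []).append(x)
    parent = {u: None}
    q = deque([u])
    while q:
        x = q.popleft()
        for y in adj.get(x, []):
            if y not in parent:
                parent[y] = x
                q.append(y)
    path = []
    while v != u:
        path.append(frozenset((v, parent[v])))
        v = parent[v]
    return path

def edge_swap_step(tree, non_tree, weight):
    """One step of the (weighted) edge-swap chain: pick a non-tree edge
    with probability proportional to its weight, then remove an edge of
    the created cycle with probability proportional to its weight; the
    inserted edge itself may be removed, leaving the tree unchanged."""
    cand = list(non_tree)
    e_in = random.choices(cand, weights=[weight[e] for e in cand])[0]
    cyc = cycle_edges(tree, e_in) + [e_in]
    e_out = random.choices(cyc, weights=[weight[e] for e in cyc])[0]
    if e_out != e_in:
        tree.remove(e_out); tree.add(e_in)
        non_tree.remove(e_in); non_tree.add(e_out)
```

With unit weights this is the uniform chain used in the convergence experiments; repeated calls perform the walk on the set of spanning trees.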
ends path contracts size let number steps process point obtained estimate mixing time general wish obtain probability two general chains coalesce least hence repeat process times choose second largest value estimate table summarizes results experimental variation distance number possible spanning trees graph computed theorem generated times number possible trees computed variation distance stated got good results variation distance getting median well tested graph topologies present experimental results larger graphs use optimistic coupling experiments conducted computer intel xeon cpu cores ram present running times graph topologies sizes figures note beforehand coupling estimate needs computed graph estimate known generate many spanning trees want although edge swapping method always faster compared random walk wilson algorithm competitive practice dmp torus graphs faster bik cycle sparse ladder graphs expected less competitive dense graphs hence experimental results seem point edge swapping method competitive practice instances harder random walk based methods namely bik cycle graphs results bik dmp particular interest table variation distance different graph topologies median maximum computed runs network since dmp graphs random results dmp computed different graphs size graph median max dense bik cycle sparse torus dmp real networks seem include kind topologies include communities chung related work detailed exposure probability trees networks see lyons peres chapter far know initial work generating uniform spanning trees aldous broader obtained spanning trees performing random walk underlying graph author also studied properties random trees aldous namely giving general closed formulas counting argument presented section random walk process vertex chosen step vertex swapped adjacent vertex neighboring vertices selected equal probability time vertex visited time corresponding edge added growing spanning tree process ends vertexes get visited least amount steps 
known cover time obtain algorithm faster cover time wilson proposed approach vertex initially chosen uniformly goal hit vertex second vertex also chosen uniformly process random walk loop erasure feature whenever path second vertex intersects edges corresponding loop must removed path path eventually reaches becomes part spanning tree process continues choosing third vertex dense coupling estimate edge swapping random walk wilson algorithm time figure running times dense fully connected graphs averaged five runs including running time computing optimistic coupling estimate running time generating spanning tree based estimate edge swapping algorithm running time generating spanning tree random walk also running time wilson algorithm bik coupling estimate edge swapping random walk wilson algorithm time figure running times bik graphs averaged five runs including running time computing optimistic coupling estimate running time generating spanning tree based estimate edge swapping algorithm running time generating spanning tree random walk also running time wilson algorithm cycle coupling estimate edge swapping random walk wilson algorithm time figure running times cycle graphs averaged five runs including running time computing optimistic coupling estimate running time generating spanning tree based estimate edge swapping algorithm running time generating spanning tree random walk also running time wilson algorithm ladder coupling estimate edge swapping random walk wilson algorithm time figure running times sparse ladder graphs averaged five runs including running time computing optimistic coupling estimate running time generating spanning tree based estimate edge swapping algorithm running time generating spanning tree random walk also running time wilson algorithm torus coupling estimate edge swapping random walk wilson algorithm time figure running times square torus graphs averaged five runs including running time computing optimistic coupling estimate running 
time generating spanning tree based estimate edge swapping algorithm running time generating spanning tree random walk also running time wilson algorithm torus coupling estimate edge swapping random walk wilson algorithm time figure running times rectangular torus graphs averaged five runs including running time computing optimistic coupling estimate running time generating spanning tree based estimate edge swapping algorithm running time generating spanning tree random walk also running time wilson algorithm duplication model coupling estimate edge swapping random walk wilson algorithm time figure running times dmp graphs averaged five runs including running time computing optimistic coupling estimate running time generating spanning tree based estimate edge swapping algorithm running time generating spanning tree random walk also running time wilson algorithm also computing loop erasure path time necessary hit precisely enough hit vertex branch already linked process continues choosing random vertexes computing loop erasure paths hit spanning tree already computed implemented algorithms accessible although several theoretical results obtained recent years aware implementation algorithms survey results another approach problem relies theorem counts number spanning trees computing determinant certain matrix related graph relation researched kulkarni yielded time algorithm result improved colbourn day nel colbourn myrvold neufeld exponent corresponds fastest algorithm compute matrix multiplication improvements random walk approach obtained kelner culminating time algorithm straszak relies insight provided resistance metric interestingly initial work broader contains reference edge swapping chain presented paper section named swap chain author mentions mixing time chain albeit details omitted far tell natural approach problem receive much attention precisely due lack implementation even though link cut trees known time sleator tarjan application problem established 
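Wilson's loop-erased random-walk procedure described above admits a compact sketch. Storing only the last exit taken from each vertex (the `nxt` map) performs the loop erasure implicitly; that bookkeeping is an implementation choice, not spelled out in the text.

```python
import random

def wilson(adj):
    """Sample a uniform spanning tree via Wilson's algorithm: grow the
    tree by walking from each unvisited vertex until the current tree is
    hit, erasing loops as they form, then attaching the loop-erased
    path.  Returns parent pointers towards an arbitrary root."""
    vertices = list(adj)
    root = vertices[0]
    in_tree = {root}
    parent = {}
    for v in vertices:
        if v in in_tree:
            continue
        # random walk from v; keeping only the last exit from each
        # vertex erases loops implicitly
        u = v
        nxt = {}
        while u not in in_tree:
            nxt[u] = random.choice(adj[u])
            u = nxt[u]
        # retrace the loop-erased path and attach it to the tree
        u = v
        while u not in in_tree:
            parent[u] = nxt[u]
            in_tree.add(u)
            u = nxt[u]
    return parent
```

Unlike the Aldous–Broder walk, the expected time here is governed by hitting times rather than the cover time.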
prior work initial application network goldberg tarjan also found another reference edge swap work sinclair proposal canonical path technique author mentions particular chain motivating application canonical path technique still details omitted able obtain analysis considered lct version auxiliary trees implemented splay trees sleator tarjan auxiliary data structures mentioned section splay trees means step algorithm vertexes involved path get stored splay tree path oriented approach link cut trees makes suitable goals opposed dynamic connectivity data structures euler tour trees henzinger king splay trees binary search trees therefore vertexes ordered way inorder traversal tree coincides sequence vertexes obtained traversing also size set also obtained log amortized time node simply stores size values updated splay process consists sequence rotations moreover values also used select edge path starting root comparing tree sizes determine vertex desired edge left root right likewise second vertex edge question operations splay vertexes obtain therefore total time depends splay operation precise total time splay operation log however log term accumulate successive operations thus yielding bound log theorem general log term bottleneck graphs always case consists single cycle may large figure shows example graph section reviewing formal variational distance mixing time mitzenmacher upfal definition variation distance two distributions countable space given definition let stationary distribution markov chain state space let ptx represent distribution state chain starting state steps define max variation distance stationary distribution ptx maximum values states also define min max refer mixing time mean finally coupling lemma coupling approach lemma let coupling markov chain state space suppose exists every initial state variation distance distribution state chain steps stationary distribution distance property obtained using condition condition use markovian inequality 
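The variation distance recalled in the definitions above, d(mu, nu) = (1/2) sum_x |mu(x) - nu(x)|, is estimated in the convergence experiments from sampled spanning trees. A minimal histogram-based estimator, an assumed and simplified stand-in for the reduced-statistic estimate used in the experiments, is:

```python
from collections import Counter

def empirical_tv(samples_p, samples_q):
    """Estimate the total variation distance between two distributions
    from two sets of samples: histogram both sample sets over their
    joint support and compare the empirical frequencies."""
    p = Counter(samples_p)
    q = Counter(samples_q)
    n_p, n_q = len(samples_p), len(samples_q)
    support = set(p) | set(q)
    # Counter returns 0 for missing keys, so absent atoms are handled
    return 0.5 * sum(abs(p[x] / n_p - q[x] / n_q) for x in support)
```

In the paper's setting one sample set would come from the edge-swap chain after t steps and the other from a reference sampler such as the random-walk algorithm.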
path coupling technique bubley dyer constructs coupling chaining several chains distance therefore obtain xtd xtd conclusions future work paper studied new algorithm obtain spanning trees graph uniform way underlying markov chain initially sketched broader early study problem extended work proving necessary markov chain properties using link cut tree data structure allows much faster implementation repeating dfs procedure may actually reason approach gone largely unnoticed time main shortcoming approach lack general theoretical bound mixing time bound might possible using new approaches insight resistance metric straszak tarnawski although general analysis would valuable addressed problem simulating coupling sound optimistic lack analysis shortcoming algorithm practical implemented compared existing alternatives experimental results show competitive theoretical bound would still valuable specially case algorithm alternatives one hand computing mixing time underlying chain complex time consuming hard analyse theory hand user process certain number steps execute useful parameter used swap randomness time depending type application user may randomness underlying trees obtain faster results contrary spend extra time guarantee randomness existing algorithms provide possibility note point approach generalized assigning weights edges graph edge inserted selected probability corresponds weight divided global sum weights moreover edge remove cycle removed according weight probability weight divided sum cycle weights ergodic analysis section generalizes easily case chain also generates spanning trees uniformly albeit analysis coupling section might need adjustments proper weight selection might obtain faster mixing timer possibly something similar resistance edge acknowledgements work funded part european union horizon research innovation programme marie grant agreement national funds para tecnologia fct reference bibliography references aigner ziegler quarteroni proofs book 
vol. Springer.
Aldous, D. A random tree model associated with random graphs. Random Structures & Algorithms.
Aldous, D. The random walk construction of uniform spanning trees and uniform labelled trees. SIAM J. Discrete Math.
Borchardt, C. W. Über eine Interpolationsformel für eine Art symmetrischer Functionen und über deren Anwendung. Math. Abh. der Akademie der Wissenschaften zu Berlin.
Broder, A. Generating random spanning trees. IEEE Symposium on Foundations of Computer Science.
Bubley, R. and Dyer, M. Path coupling: a technique for proving rapid mixing in Markov chains. Proceedings of the Annual Symposium on Foundations of Computer Science.
Cayley, A. A theorem on trees. Quart. J. Pure Appl. Math.
Chung, F. Complex graphs and networks. American Mathematical Soc.
Chung, F., Dewey, G. and Galas, D. J. Duplication models for biological networks. Journal of Computational Biology.
Colbourn, C., Day, R. and Nel, L. Unranking and ranking spanning trees of a graph. Journal of Algorithms.
Colbourn, C., Myrvold, W. and Neufeld, E. Two algorithms for unranking arborescences. Journal of Algorithms.
Galler, B. and Fisher, M. An improved equivalence algorithm. Commun. ACM.
Goldberg, A. and Tarjan, R. Finding minimum-cost circulations by canceling negative cycles. J. ACM.
Guénoche, A. Random spanning tree. Journal of Algorithms.
Henzinger, M. and King, V. Randomized dynamic graph algorithms with polylogarithmic time per operation. Proceedings of the Annual ACM Symposium on Theory of Computing, ACM.
Jerrum, M. and Sinclair, A. Approximating the permanent. SIAM Comput.
Kelner, J. and Mądry, A. Faster generation of random spanning trees. Annual IEEE Symposium on Foundations of Computer Science.
Kirchhoff, G. Über die Auflösung der Gleichungen, auf welche man bei der Untersuchung der linearen Vertheilung galvanischer Ströme geführt wird. Annalen der Physik.
Kulkarni, V. Generating random combinatorial objects. Journal of Algorithms.
Kumar, V. S. A. and Ramesh, H. Markovian coupling vs. conductance for the Jerrum–Sinclair chain. Annual Symposium on Foundations of Computer Science.
Levin, D. and Peres, Y. Markov chains and mixing times. American Mathematical Soc.
Lyons, R. and Peres, Y. Probability on trees and networks. Cambridge University Press.
Mądry, A. From graphs to matrices, and back: new techniques for graph algorithms. Thesis, Massachusetts Institute of Technology.
Mitzenmacher, M. and Upfal, E. Probability and computing: randomized algorithms and probabilistic analysis. Cambridge University Press, New York, USA.
Mądry, A., Straszak, D. and Tarnawski, J. Fast generation of random spanning trees and the effective resistance metric. In Indyk, P. (ed.), Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), San Diego, USA. SIAM.
Sinclair, A. Improved bounds for mixing rates of Markov chains and multicommodity flow. Combinatorics, Probability and Computing.
Sleator, D. and Tarjan, R. A data structure for dynamic trees. Proceedings of the Thirteenth Annual ACM Symposium on Theory of Computing (STOC), ACM, New York, USA.
Sleator, D. and Tarjan, R. Self-adjusting binary search trees. J. ACM.
Wilson, D. Generating random spanning trees more quickly than the cover time. Proceedings of the Annual ACM Symposium on Theory of Computing (STOC), ACM, New York, USA.
Maximizing Diversity in Multimodal Optimization
F. Olivetti de França, CMCC, Federal University of ABC (UFABC), Brazil

Introduction. Multimodal optimization algorithms use so-called niching methods in order to promote diversity during the search, while others, like artificial immune systems, try to find multiple solutions as their main objective. One of these algorithms introduced the line distance, which measures the distance between two solutions with regard to their basins of attraction. In this short abstract we propose the use of the line distance measure as the main objective in order to locate multiple optima with a small population.

Line Distance. The line distance between two solutions is calculated by taking their middle point and building three vectors, from which the distance of one point to the line formed by the other two is calculated. When both solutions lie around the same optimum, the line formed is close to the function contour, resulting in a small distance; for solutions around different optima, the distance is proportionally larger.

Maximizing Diversity. The line distance between two points is at its maximum when the points are at different optima, so by maximizing this objective it is expected that the algorithm returns several different optima of the objective function. This may be used as the main goal of the optimization process or as a supporting operator (e.g., crossover or mutation) in a populational algorithm. To test this claim we use a simple algorithm: create an initial population with a single solution drawn uniformly at random from the problem domain; next, for a given number of iterations, repeat two procedures: expansion of the current solutions and suppression of similar solutions. In the expansion process, for each solution new solutions are created by maximizing the line distance along a direction drawn uniformly at random, with the step of the walk in that direction found by a unidimensional search method on the line distance. A new solution is aggregated to the population if its distance to the original solution is greater than a threshold; if for a given solution no new solutions are generated, it is discarded after being moved to its local optimum, as the population is assumed to have already discovered the nearby optima around it. In the suppression step, the solutions of the population are compared and, whenever two solutions have a line distance below a threshold (i.e., they likely belong to the same basin), the worst of the two is discarded. Notice that the optimization process is indirectly performed through the distance function, which is at its maximum whenever one of the solutions is located at a local optimum.

Experimental Results. In a simple experiment, two multimodal functions were chosen, the Rastrigin and Griewank functions, tested with the parameters of each function
chosen for the Rastrigin and Griewank cases. Over the repetitions of this experiment, after a number of iterations this simple procedure could find the different local optima of both Rastrigin and Griewank and, in both cases, the global optima were also found. The results are depicted in Fig. 1 (optima found by maximizing diversity, for Rastrigin and Griewank).

Conclusion. In this extended abstract we argued that the direct maximization of diversity may be better suited for multimodal optimization algorithms. It was shown that, using an adequate diversity measure and a proper algorithm, it is possible to find many different local optima. It must be noticed that some objective functions may present an explosive growth of the number of local optima with the growth of the dimension, in which case this procedure may be best suited as an evolutionary operator that promotes diversity.

References
Mahfoud, S. Niching methods for genetic algorithms. Urbana.
Coelho, G., de Castro, L. N. and Von Zuben, F. J. Conceptual and practical aspects of the aiNet family of algorithms. International Journal of Natural Computing Research (IJNCR).
de França, F. O., Von Zuben, F. J. and de Castro, L. N. An artificial immune network for multimodal function optimization on dynamic environments. Proceedings of the Conference on Genetic and Evolutionary Computation, ACM.
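A plausible reading of the line-distance computation described in this abstract — lift the two solutions and their midpoint to (x, f(x)) space and measure the distance from the lifted midpoint to the line through the lifted endpoints — can be sketched as below; the lifting and the orthogonal-projection details are assumptions for illustration.

```python
import numpy as np

def line_distance(x1, x2, f):
    """Line distance between two solutions x1, x2 of objective f: take
    the midpoint, lift all three points to (x, f(x)) space, and return
    the distance from the lifted midpoint to the line through the two
    lifted endpoints.  Within a single basin the line hugs the function
    contour and the distance is small; across basins it is large."""
    xm = (x1 + x2) / 2.0
    p1 = np.append(x1, f(x1))
    p2 = np.append(x2, f(x2))
    pm = np.append(xm, f(xm))
    d = p2 - p1
    # residual of the orthogonal projection of pm onto the line p1 + t*d
    t = np.dot(pm - p1, d) / np.dot(d, d)
    return float(np.linalg.norm(pm - (p1 + t * d)))
```

For a linear objective the midpoint lies exactly on the line and the distance is zero, matching the intuition that a flat contour carries no multimodal information.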
Diffusion Adaptation over Clustered Multitask Networks Based on the Affine Projection Algorithm
Vinay Chakravarthi Gogineni and Mrityunjoy Chakraborty, Department of Electronics and Electrical Communication Engineering, Indian Institute of Technology, Kharagpur, India

Abstract — Distributed adaptive networks achieve better estimation performance by exploiting temporal as well as spatial diversity while consuming few resources. Recent works have studied the single-task distributed estimation problem, in which the nodes estimate a single optimum parameter vector collaboratively. However, in many important applications multiple parameter vectors have to be estimated simultaneously, in a collaborative manner. This paper presents multitask diffusion strategies based on the affine projection algorithm (APA); the usage of APA makes the algorithm robust against correlated input. The performance of the proposed diffusion APA algorithm is analyzed in the mean and mean-square sense. A modified diffusion strategy is also proposed that improves the performance in terms of convergence rate and steady-state EMSE. Simulations are conducted to verify the analytical results.

I. Introduction

Distributed adaptation over networks has emerged as an attractive and challenging research area with the advent of multiagent (wireless or wireline) networks; recent results in the field can be found in the literature. Adaptive networks consist of interconnected nodes that continuously learn and adapt, as well as perform the assigned tasks such as parameter estimation from observations collected at dispersed agents. Consider a connected network consisting of nodes observing temporal data arising from different spatial sources with possibly different statistical profiles. The objective is to enable the nodes to estimate a parameter vector of interest, wopt, from the observed data. In a centralized approach, the data or local estimates from all nodes would be conveyed to a central processor, where they would be fused and the vector of parameters estimated. In order to avoid the powerful central processor and the extensive amount of communications required by the traditional centralized solution, distributed solutions were developed that rely on local data exchange and interactions between immediate neighborhood nodes while retaining the estimation accuracy of the centralized solution. In distributed networks, since the individual nodes share the computational burden, the communications can be
reduced compared centralized network power bandwidth usage also reduced due merits distributed estimation received attention recently widely used many applications precision agriculture environmental monitoring military surveillance transportation instrumentation mode cooperation allowed among nodes determines efficiency distributed implementation incremental mode cooperation node transfers information corresponding adjacent node sequential manner using cyclic pattern collaboration approach reduces communications nodes improves network autonomy compared centralized solution practical wireless sensor networks may difficult establish cyclic pattern required incremental mode cooperation number sensor nodes increase hand diffusion mode cooperation node exchanges information neighborhood set neighbors including directed network topology exist several useful distributed strategies sequential data processing networks including consensus strategies incremental strategies diffusion strategies diffusion strategies exhibit superior stability performance consensus based algorithms existing literature distributed algorithms shows works focus primarily case nodes estimate single optimum parameter vector collaboratively shall refer problems type problems however many problems interest happen oriented consider general situation connected clusters nodes cluster parameter vector estimate estimation still needs performed cooperatively across network data across clusters may correlated therefore cooperation across clusters beneficial concept relevant context distributed estimation adaptation networks initial investigations along lines traditional diffusion strategy appear well known case single adaptive filter one major drawback lms algorithm slow convergence rate colored input signals apa algorithm better alternative lms environment distributed networks highly correlated inputs also deteriorate performance algorithm paper therefore focus new distributed learning scheme networks 
obtain good compromise convergence performance computational cost analyze performance terms error convergence rate etwork odels ulti task learning consider network nodes deployed certain geographical area every time instant every node access time realizations denoting scalar zero mean reference signal regression vector covariance matrix utk data node assumed related via linear measurement model utk unknown optimal parameter vector estimated node observation noise variance assumed zero mean white noise also independent considering number parameter vectors estimated shall refer number tasks distributed learning problem oriented therefore distinguish among following three types networks illustrated fig depending figure three types networks direct links nodes communicate one hop network network clustered network parameter vectors related across nodes networks nodes network estimate parameter vector case networks node network determine optimum parameter vector however assumed similarities relationships exist among parameters neighboring nodes denote writing sign represents similarity relationship sense meaning become clear soon introduce expression ahead many situations practice objective parameters identical across clusters inherent relationships therefore beneficial exploit relationships enhance performance focus promoting similarity objective parameter vectors via distance clustered networks nodes grouped clusters one task per cluster optimum parameter vectors constrained equal within cluster optimum parameter vectors constrained equal within cluster similarities neighboring clusters allowed exist namely whenever connected denote two cluster indexes say two clusters connected exists least one edge linking node one cluster node cluster one observe networks particular cases clustered network case nodes clustered together clustered network reduces network hand case cluster involves one node clustered network becomes network building literature diffusion strategies 
networks shall generalize usage analysis distributed learning clustered networks results also applicable networks setting number clusters equal number nodes iii roblem ormulation clustered multitask networks nodes grouped cluster estimate coefficient vector thus consider cluster node belongs certain settings order provide independence input data correlation statistics introduce normalized updates respect input regressor node local cost function associated node assumed hessian matrix cost function positive local cost function defined utk wck kuk depending application may certain properties among optimal vectors mutual information among tasks could used improve estimation accuracy among possible options simple yet effective euclidian distance based regularizer enforced squared euclidean distance regularizer given wck wcl kwck wcl estimate unknown parameter vectors shown local cost regularizer combined level cluster formulation led following estimation problem defined terms nash equilibrium problems cluster estimates minimizing regularized cost function jcj wcj jcj wcj wcj wck kuk jcj wcj kwck wcl wcj parameter vector associated cluster regularization parameter symbol set difference note kept notation wck equation make role regularization term clearer even though wcj notation denotes collection weight vectors estimated clusters wcq wcj coefficients aim adjusting regularization strength coefficients chosen satisfy conditions otherwise impose since nodes belonging cluster estimate parameter vector solution problem requires every node network access statistical moments pud cluster however node assumed direct access information neighborhood may include nodes part cluster therefore enable distributed solution relies measured data neighborhood mentioned cost function relaxed following form clk utl kuk kwk blk kwk wol coefficients clk satisfy conditions clk clk coefficients blk also following line reasoning case extending argument problem using properties following procedure 
mentioned following diffusion strategy atc clustered normalized lms nlms derived distributed manner alk extending clustered diffusion strategy case derive following affine projection algorithm apa based clustered diffusion strategy utk alk denotes regularization parameter small positive value employed avoid inversion rank deficient matrix utk input data matrix desired response vector given follows clustered diffusion apa algorithm given algorithm diffusion apa clustered networks start repeat utk utk alk network single cluster consists entire set nodes get expression reduces diffusion adaptation strategy described algorithm algorithm diffusion apa networks start repeat utk utk alk case network size cluster one algorithm degenerates algorithm instantaneous gradient counterpart node algorithm diffusion apa networks start repeat utk utk mean square error performance analysis network global model structure algorithm leads challenge performance analysis proceed first let define global representations col col diag col block diagonal matrix diagonal matrices defined diag diag collect local regularization parameters linear model form global model network level obtained global optimal weight noise vectors given follows col col facilitate analysis network topology assumed static alk alk assumption compromise algorithm derivation operation used analysis analysis presented serves basis work using expressions global model diffusion apa therefore formulated follows iln denoting kronecker product symmetric matrix defines network topology asymmetric matrix defines regularizer strength among nodes objective study performance behavior diffusion apa governed form mean error behavior analysis global error vector related local error vectors col global weight error vector rewritten denoting using results recursive update equation global weight error vector written dut dut dut iln taking expectation sides using statistical independence assumption recalling also independent thus write
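The adapt-then-combine (ATC) clustered diffusion APA described above can be sketched in a few lines of numpy. This is a paraphrase of the garbled recursion, under stated assumptions: `eps` is the small regularization avoiding inversion of a rank-deficient U_k U_k^T, `A` is the left-stochastic combiner over in-cluster neighbors, and the `rho`/`inter_cluster` regularization term and all variable names are my reading of the text, not the paper's exact equations:

```python
import numpy as np

def diffusion_apa_step(w, d, U, mu, eps, A, rho=0.0, inter_cluster=None):
    """One adapt-then-combine iteration of a clustered diffusion APA (sketch).
    w: (N, M) current weight estimates; d[k]: desired-response vector of node k;
    U[k]: (P, M) input-data matrix of node k; A: (N, N) left-stochastic combiner
    with columns summing to one. inter_cluster[k], if given, lists neighbors of
    node k outside its cluster, toward which w[k] is regularized with strength rho."""
    N, _ = w.shape
    psi = np.zeros_like(w)
    for k in range(N):
        e = d[k] - U[k] @ w[k]                      # a-priori error vector
        G = np.linalg.inv(eps * np.eye(len(e)) + U[k] @ U[k].T)
        psi[k] = w[k] + mu * U[k].T @ (G @ e)       # APA adaptation step
        if inter_cluster:
            for l in inter_cluster[k]:              # multitask regularizer
                psi[k] += mu * rho * (w[l] - w[k])
    return A.T @ psi                                # combine: w_k = sum_l a_lk psi_l
```

With a single cluster spanning all nodes the regularizer vanishes and this reduces to plain diffusion APA, mirroring the degeneration described in the text.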
iln initial condition order guarantee stability diffusion apa strategy mean sense step size chosen satisfy iln denotes maximum eigenvalue argument matrix therefore using norm inequalities recalling fact combining matrix left stochastic matrix block maximum norm equal one iln iln iln let matrix gershgorin circle theorem therefore using result recalling fact right stochastic matrix sufficient condition hold choose maxk result clearly shows mean stability limit clustered utk utk diffusion apa lower diffusion apa due presence asymptotic mean bias given iln iln lim error behavior analysis recursive update equation weight error vector also rewritten follows dut iln using standard independence assumption mean square weight error weighted positive matrix free choose satisfies following relation vector eke eke ati order study behavior diffusion apa algorithm following moments must evaluated extract matrix expectation terms weighted variance relation introduced using column vectors bvec bvec bvec denotes block vector operator addition bvec also used recover original matrix one property bvec operator working block kronecker product used work namely bvec denotes block kronecker product two block matrices using block vectorization following terms right side given bvec iln bvec iln bvec iln iln bvec bvec iln iln iln bvec bvec iln iln iln bvec bvec iln iln iln bvec bvec dat iln iln iln bvec bvec bvec bvec therefore linear relation corresponding vectors formulated matrix given iln iln iln iln iln iln iln iln iln iln iln iln iln iln iln iln let denote diagonal matrix whose entries noise variances given diag using independence assumption noise signals term written vec vec vec finally let define last three terms right hand side term evaluated follows let consider term written bvec rtb bvec simplified follows consider second term iln written follows way third term bvec iln iln iln iln iln iln bvec therefore behavior diffusion apa algorithm summarized follows eke eke eke rtb
therefore diffusion strategy presented mean square stable matrix stable iterating recursion starting get eke eke matrix stable first second terms equation initial condition converge finite value let consider third term rhs know uniformly bounded bibo stable recursion bounded driving term therefore written rtb provided stable exist matrix norm denoted kfkp applying norm using matrix norms triangular inequality write cip given small positive constant therefore eke converges bounded value algorithm said mean square stable selecting iln eke relate eke eke follows eke eke rewrite last two terms equation rtb therefore recursion presented rewritten eke eke eke rtb msd diffusion apa strategy given follows lim eke new approach improve performance clustered diffusion apa clustered diffusion strategy presented mainly drawbacks time instance assume node exhibiting poor performance node diffusion strategy forces node learn node adaptation step affects performance transient state underlying system however multitask diffusion strategy forces node learn node even steady state hampers steady state performance algorithm address problems control variable called similarity measure introduced control regularizer term diffusion strategy time instance node access neighborhood filter coefficient vectors since node learning neighborhood filter coefficient vectors reasonable check similarity among filter coefficient vectors similarity measure calculated follows sign estimated error variances calculated utk utk positive constant explain suppose index node performs better node node similarity measure sign implies node would learn weight information node adding difference current weight vectors correction term weight update hand suppose node perform better node node implies node would neglect weight vector similarity measure sign thus improves convergence rate steady state performance diffusion strategy presented therefore taking similarity measure account modified clustered diffusion apa 
given utk utk utk alk therefore using expressions global model modified diffusion apa formulated follows diag matrix indicates hadamard product asymmetric matrix defines regularizer strength among nodes empty matrices matrices defined network model section objective study performance behavior diffusion apa governed form mean error behavior analysis recursive update equation global weight error vector written denoting iln dut taking expectation sides addition statistical independence assumption assuming statistical independence also recalling also independent thus write iln quantity given follows otherwise initial condition order guarantee stability modified diffusion apa strategy mean sense step size chosen satisfy iln using arguments used iln iln gershgorin circle theorem sufficient condition hold choose maxk maxk recalling fact equal either write imply therefore presence similarity measure makes modified diffusion strategy mean stability better multitask diffusion strategy mentioned however lower diffusion apa due presence asymptotic mean bias given iln iln lim error behavior analysis recursive update equation modified diffusion apa weight error vector also rewritten iln addition standard independence assumption taken assume statistical independence mean square weight error vector weighted positive matrix free choose satisfies following relation eke eke following procedure mentioned extract matrix expectation terms weighted variance relation introduced using column vectors bvec bvec linear relation corresponding vectors matrix given iln iln iln iln iln iln iln iln iln iln iln iln iln iln iln iln iln noise term finally let define last three terms right hand side rtb bvec iln iln iln iln iln iln iln iln iln bvec bvec therefore behavior modified diffusion apa algorithm summarized follows eke eke therefore modified diffusion apa strategy presented mean square stable matrix stable iterating recursion starting get eke eke matrix stable first second terms
equation initial condition converge finite value let consider third term rhs know uniformly bounded bibo stable recursion bounded driving term therefore written rtb provided stable exist matrix norm denoted applying norm using matrix norms triangular inequality write vcip given small positive constant therefore eke converges bounded value algorithm said mean square stable selecting iln relate eke eke follows eke eke eke rewrite last two terms equation rtb therefore recursion presented rewritten eke eke eke rtb msd modified diffusion apa strategy given follows lim eke simulation results network consists nodes topology shown fig considered simulations nodes divided clusters first theoretical performance comparison purpose first considered randomly generated two-dimensional vectors form wck input regressors taken zero mean gaussian distribution correlation matrices observation noises gaussian random variables independent signals noise variance diffusion apa algorithm run different step sizes regularization parameters regularization strength set figure network topology settings usually leads asymmetrical regularization weights coefficient matrix taken identity matrix combiner coefficients alk set according metropolis rule simulations carried illustrate performance several learning strategies apa algorithm algorithm algorithm clustered algorithm algorithm algorithm obtained assigning cluster node setting algorithm obtained assigning cluster node setting note algorithm considered comparison since estimation method normalized msd taken performance parametric compare diffusion strategies projection order taken initial taps chosen zero simulated msd theoretical msd normalized msd iterations figure comparison transient nmsd different regularization parameters secondly modified diffusion apa compared diffusion apa randomly generated coefficient vectors form wck taps length chosen input signal vectors taken zero mean gaussian distribution correlation statistics shown fig
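The simulation sets the combiner coefficients a_lk according to the Metropolis rule. A sketch of one common form of that rule (conventions differ on whether neighborhood sizes include the node itself; here the degrees exclude it, which is an assumption, not the paper's stated choice):

```python
import numpy as np

def metropolis_weights(adj):
    """Left-stochastic combiner matrix A from an adjacency matrix via the
    Metropolis rule: for connected k != l, a_lk = 1 / max(n_k, n_l) with n_k
    the degree of node k (excluding self here), and the self-weight a_kk
    absorbs the remainder so each column sums to one."""
    N = adj.shape[0]
    deg = adj.sum(axis=1)
    A = np.zeros((N, N))
    for k in range(N):
        for l in range(N):
            if l != k and adj[k, l]:
                A[l, k] = 1.0 / max(deg[k], deg[l])
        A[k, k] = 1.0 - A[:, k].sum()  # column sums to one (left stochastic)
    return A
```

The left-stochastic property of the resulting matrix is exactly what the mean-stability argument in the analysis above relies on.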
observation noises gaussian random variables independent signals noise variances shown fig pole node number figure input signal statistics node number figure noise statistics projection order taken initial taps chosen zero regularization parameters adjusted compare steady state msd convergence rate properly simulation results obtained averaging runs learning curves diffusion strategies presented fig observed performance strategy poor nodes collaborate additional benefit case diffusion strategy performance improved strategy due regularization nodes cluster information addition regularization among nodes clustered multitask results better performance multitask diffusion strategies extra information information regularization among nodes results great improvement performance modified clustered diffusion strategy conclusions paper presented diffusion apa strategies suitable networks also robust correlated input conditions performance analysis proposed diffusion apa presented mean mean square sense introducing similarity measure modified diffusion apa algorithms proposed achieve improved performance diffusion strategies existing literature references chen zhao towfic diffusion strategies adaptation learning networks ieee signal process vol may cooperative apa diffusion apa clustered diffusion apa modified clustered diffusion apa normalized msd iterations figure comparison modified diffusion apa diffusion apa chellappa theodoridis diffusion adaptation networks academic press library signal processing eds amsterdam netherlands elsevier adaptive networks proc ieee vol april xiao boyd fast linear iterations distributed averaging syst control vol braca marano matta running consensus wireless sensor networks proc fusion cologne germany kar moura distributed consensus algorithms sensor networks imperfect communication link failures channel noise ieee trans signal vol bertsekas new class incremental gradient methods least squares problems siam journal vol lopes sayed
incremental adaptive strategies distributed networks ieee trans signal vol leilei chambers lopes sayed distributed estimation adaptive incremental network based affine projection algorithm ieee trans signal vol lopes sayed diffusion least-mean squares adaptive networks formulation performance analysis ieee trans signal vol jul cattivelli sayed diffusion lms strategies distributed estimation ieee trans signal vol may leilei chambers distributed adaptive estimation based apa algorithm diffusion networks changing topology proc ieee ssp chen sayed diffusion adaptation strategies distributed optimization learning networks ieee trans signal vol sayed diffusion strategies outperform consensus strategies distributed estimation adaptive networks ieee trans signal vol zhao sayed clustering via diffusion adaptation networks proc cip parador baiona spain may bogdanovic berberidis distributed lms node specific parameter estimation adaptive networks proc ieee icassp vancouver canada may chen richard sayed diffusion lms clustered multitask networks proc ieee icassp florence italy may chen richard sayed multitask diffusion adaptation networks ieee trans signal vol chen richard sayed diffusion lms multitask networks appear ieee trans signal basar olsder dynamic noncooperative game theory london academic press edition sayed fundamentals adaptive filtering john wiley sons ozeki umeda adaptive filtering algorithm using orthogonal projection affine subspace properties transactions iece japan vol shin sayed performance family affine projection algorithms ieee trans acoustics speech signal vol january yousef sayed unified approach steady state tracking analyses adaptive filters ieee trans acoustics speech signal vol february koning neudecker wansbeek block kronecker products vecb operator economics institute economics research univ groningen groningen netherlands research memo tracy singh new matrix product applications partitioned matrix differentiation statistica vol haykin adaptive
filter theory prentice hall adaptive filters john wiley sons manolakis ingle kogon statistical adaptive signal processing
| 3 |
dec combining representation learning logic language processing tim dissertation submitted partial fulfillment requirements degree doctor philosophy university college london department computer science university college london december paula emily sabine lutz acknowledgements deeply grateful supervisor mentor sebastian riedel always great source inspiration supportive encouraging matters particularly amazed trust put first time met freedom gave pursue ambitious ideas contagious optimistic attitude many opportunities presented absolutely way could wished better supervisor would like thank second supervisors thore graepel daniel tarlow feedback well sameer singh whose collaboration guidance made start smooth motivating fun furthermore thank edward grefenstette karl moritz hermann thomas phil blunsom fortunate work internship deepmind summer thankful guidance thomas demeester andreas vlachos pasquale minervini pontus stenetorp isabelle augenstein jason naradowsky time university college london machine reading lab addition thanks dirk weissenborn many fruitful discussions would also like thank ulf leser philippe thomas roman klinger preparing well studies berlin pleasure work brilliant students university college london thanks michal daniluk luke hewitt ben eisner vladyslav kolesnyk avishkar bhoopchand nayen pankhania hard work trust life would much fun without lab mates matko bosnjak george spithourakis johannes welbl marzieh saeidi ivan sanchez thanks matko dirk pasquale thomas johannes sebastian feedback thesis furthermore thank examiners luke dickens charles sutton extremely helpful comments corrections thesis thank federation coffee brixton making best coffee world many thanks microsoft research supporting work scholarship programme google research fellowship natural language processing thanks generous support well funding university college london computer science department able travel conferences wanted attend greatly indebted thankful parents sabine lutz 
sparked interest science importantly ensured fantastic childhood always felt loved supported well protected many troubles life furthermore would like thank christa ulrike tillmann hella hans gretel walter unconditional support last years lastly thanks two wonderful women life paula emily thank paula keeping ups downs love motivation support thank giving family making feel home wherever emily greatest wonder joy life declaration tim confirm work presented thesis information derived sources confirm indicated thesis rockt abstract current many natural language processing automated knowledge base completion tasks held representation learning methods learn distributed vector representations symbols via optimization require little features thus avoiding need preprocessing steps assumptions however many cases representation learning requires large amount annotated training data generalize well unseen data labeled training data provided human annotators often use formal logic language specifying annotations thesis investigates different combinations representation learning methods logic reducing need annotated training data improving generalization introduce mapping logic rules loss functions combine neural link prediction models using method logical prior knowledge directly embedded vector representations predicates constants find method learns accurate predicate representations little training data available time generalizing predicates explicitly stated rules however method relies grounding logic rules scale large rule sets overcome limitation propose scalable method embedding implications vector space regularizing predicate representations subsequently explore tighter integration representation learning logical deduction introduce differentiable prover neural network recursively constructed prolog backward chaining algorithm constructed network allows calculate gradient proofs respect symbol representations learn representations proving facts knowledge base addition 
incorporating complex rules induces interpretable logic programs via gradient descent lastly propose recurrent neural networks conditional encoding neural attention mechanism determining logical relationship two natural language sentences impact statement machine learning representation learning particular ubiquitous many applications nowadays representation learning requires little features thus avoiding need assumptions time requires large amount annotated training data many important domains lack large training sets instance annotation costly domain expert knowledge generally hard obtain combination neural symbolic approaches investigated thesis recently regained significant attention due advances representation learning research certain domains importantly lack success domains research conducted investigated ways training representation learning models using explanations form logic rules addition individual training facts opens possibility taking advantage strong generalization representation learning models still able express domain expert knowledge hope work particularly useful applying representation learning domains annotated training data scarce empower domain experts train machine learning models providing explanations contents introduction aims contributions thesis structure background function approximation neural networks computation graphs symbols subsymbolic representations backpropagation logic syntax deduction backward chaining inductive logic programming automated knowledge base completion matrix factorization neural link prediction models models regularizing representations logic rules matrix factorization embeds ground literals embedding propositional logic embedding logic via grounding stochastic grounding baseline experiments training details results discussion contents relation learning relations distant labels comparison complete data related work summary lifted regularization predicate representations implications method experiments 
training details results discussion restricted embedding space constants relation learning relations distant labels incorporating background knowledge wordnet computational efficiency lifted rule injection asymmetry related work summary differentiable proving differentiable prover unification module module module proof aggregation neural inductive logic programming optimization training objective neural link prediction auxiliary loss computational optimizations experiments training details results discussion related work summary contents recognizing textual entailment recurrent neural networks background recurrent neural networks independent sentence encoding methods conditional encoding attention attention attention experiments training details results discussion qualitative analysis related work bidirectional conditional encoding generating entailing sentences summary conclusions summary contributions limitations future work appendices annotated rules annotated wordnet rules bibliography list figures two simple computation graphs inputs gray parameters blue intermediate variables outputs dashed operations shown next nodes computation graph backward pass computation graph shown fig simplified pseudocode symbolic backward chaining cycle detection omitted brevity see russell norvig gelder gallaire minker details full proof tree small knowledge base knowledge base inference matrix completion true training facts green unobserved facts question mark relation representations red entity pair representations orange complete computation graph single training example bayesian personalized ranking computation graph rule denotes placeholder output connected node given set training ground atoms matrix factorization learns predicate constant pair representations also consider additional logic rules red seek learn symbol representations predictions completed matrix comply given rules weighted map scores relation learning weighted map various models fraction freebase training 
facts varied freebase training facts get relation learning results presented table curve demonstrating joint method incorporates annotated rules derived data outperforms existing factorization approaches list figures example implications directly captured vector space weighted map injecting rules function fraction freebase facts module mapping upstream proof state left list new proof states right thereby extending substitution set adding nodes computation graph neural network representing proof success exemplary construction ntp computation graph toy knowledge base indices arrows correspond application respective rule proof states blue subscripted sequence indices rules applied underlined proof states aggregated obtain final proof success boxes visualize instantiations modules omitted unify proofs fail due rule applied twice overview different tasks countries dataset visualized nickel left part shows atoms removed task dotted lines right part illustrates rules used infer location test countries task facts corresponding blue dotted line removed training set task additionally facts corresponding green dashed line removed finally task also facts red line removed computation graph rnn cell computation graph lstm cell independent encoding premise hypothesis using lstm note sentences compressed dense vectors indicated red mapping conditional encoding two lstms first lstm encodes premise second lstm processes hypothesis conditioned representation premise
output representations list tables example knowledge base using prolog syntax left list representation used backward chaining algorithm right example proof using backward chaining top rules five different freebase target relations implications extracted matrix factorization model manually annotated premises implications shortest paths entity arguments dependency tree present simplified version make patterns readable see appendix list annotated rules weighted map relations appear annotated rules omitted evaluation difference pre joint significant according weighted map reimplementation matrix factorization model compared restricting constant embedding space injecting wordnet rules fsl orginial matrix factorization model riedel denoted average score facts constants appear body facts head rule results countries mrr hits kinship nations umls results snli corpus chapter introduction attempting replace symbols vectors replace logic yann lecun vast majority knowledge produced mankind nowadays available digital unstructured form images text hard algorithms extract meaningful information data resources let alone reason issue becoming severe amount unstructured data growing rapidly recent years remarkable successes processing unstructured data achieved representation learning methods automatically learn abstractions large collections training data achieved processing input data using artificial neural networks whose weights adapted training representation learning lead breakthroughs applications automated knowledge base completion nickel riedel socher chang yang neelakantan toutanova trouillon well natural language processing nlp applications like paraphrase detection socher yin machine translation bahdanau image captioning speech recognition chorowski sentence summarization rush name representation learning methods achieve remarkable results usually rely large amount annotated training data moreover since representation learning operates subsymbolic level instance replacing 
words lists real numbers vector representations embeddings hard determine obtain certain prediction let alone correct systematic errors incorporate domain commonsense knowledge fact recent general data protection regulation european union introduces right chapter introduction explanation decisions algorithms machine learning models affect users council european union enacted profound implications future development research machine learning algorithms goodman flaxman especially nowadays commonly used representation learning methods moreover many domains interests enough annotated training data renders applying recent representation learning methods difficult many issues exist purely symbolic approaches instance given facts logic rules use prolog obtain answer well proof query furthermore easily incorporate domain knowledge adding rules however system generalize new questions instance given apple fruit apples similar oranges would like infer oranges likely also fruits summarize symbolic systems interpretable easy modify need large amounts training data easily incorporate domain knowledge hand learning subsymbolic representations requires lot training data trained models generally opaque hard incorporate domain knowledge consequently would like develop methods take best worlds aims thesis investigating combination representation learning logic rules reasoning representation learning methods achieve strong generalization learning subsymbolic vector representations capture similarity even logical relationships directly vector space mikolov symbolic representations hand allow formalize domain commonsense knowledge using rules instance state every human mortal every grandfather father parent rules often worth many training facts furthermore using symbolic representations take advantage algorithms reasoning like prolog backward chaining algorithm colmerauer backward chaining widely used question answering kbs also provides proofs addition answer question however symbolic 
reasoning relying complete specification background commonsense knowledge logical form example let assume asking grandpa person know grandfather person explicit rule connecting grandpa grandfather find answer however given large contributions use representation learning learn grandpa grandfather mean thing lecturer similar professor nickel riedel socher chang yang neelakantan toutanova trouillon becomes relevant want reason structured relations also use textual patterns relations riedel problem seek address thesis symbolic logical knowledge combined representation learning make use best worlds specifically investigate following research questions efficiently incorporate domain commonsense knowledge form rules representation learning methods use rules alleviate need large amounts training data still generalizing beyond explicitly stated rules synthesize representation learning symbolic reasoning used automated theorem proving learn rules directly data using representation learning determine logical relationship natural language sentences using representation learning contributions thesis makes following core contributions regularizing representations logic rules introduce method incorporating logic rules directly vector representations symbols avoids need symbolic inference instead symbolic inference regularize symbol representations given rules logical relationships hold implicitly vector space chapter achieved mapping propositional logical rules differentiable loss terms calculate gradient given rule respect symbol representations given logic rule stochastically ground free variables using constants domain add resulting loss term propositional rule training objective neural link prediction model automated completion allows infer relations little training facts mapping logical rules soft rules using algebraic operations long tradition fuzzy logic contribution connection representation learning using rules chapter introduction directly learning better vector 
representations symbols used improve performance downstream task automated completion content chapter first appeared following two publications tim matko bosnjak sameer singh sebastian riedel embeddings logic proceedings association computational linguistics workshop semantic parsing tim sameer singh sebastian riedel injecting logical background knowledge embeddings relation extraction proceedings north american chapter association computational linguistics human language technologies naacl hlt lifted regularization predicate representations implications subclass logic implication rules present scalable method independent size domain constants generalizes unseen constants used broader class training objectives chapter instead relying stochastic grounding use implication rules directly regularizers predicate representations compared method chapter method independent number constants ensures given implication two predicates holds possible pair constants test time method based order embeddings vendrov contribution extension task automated completion requires constraining entity representations chapter based following two publications thomas demeester tim sebastian riedel regularizing relation representations implications proceedings north american chapter association computational linguistics naacl workshop automated knowledge base construction akbc thomas demeester tim sebastian riedel lifted rule injection relation embeddings proceedings empirical methods natural language processing emnlp contribution work conceptualization model design experiments extraction commonsense rules wordnet differentiable proving current representation learning neural link prediction models deficits comes complex inferences transitive reasoning bouchard nickel automated theorem provers hand long tradition computer science contributions provide elegant ways reason symbolic knowledge chapter propose neural theorem provers ntps differentiable theorem provers automated completion based neural 
networks that are recursively constructed as inspired by Prolog's backward chaining algorithm. By calculating the gradient of proof success with respect to symbol representations, NTPs can learn symbol representations directly from facts and can make use of similarities between symbol representations in proofs with provided rules. In addition, we demonstrate that NTPs can induce interpretable rules of predefined structure. On three out of four benchmark knowledge bases, our method outperforms ComplEx (Trouillon et al.), a state-of-the-art neural link prediction model. The work in this chapter appeared in: Tim Rocktäschel and Sebastian Riedel, "Learning Knowledge Base Inference with Neural Theorem Provers", in Proceedings of the NAACL Workshop on Automated Knowledge Base Construction (AKBC, 2016); and Tim Rocktäschel and Sebastian Riedel, "End-to-End Differentiable Proving", in Advances in Neural Information Processing Systems (NIPS, 2017).

Recognizing Textual Entailment with Recurrent Neural Networks. Representation learning models such as recurrent neural networks (RNNs) can be used to map natural language sentences to vector representations, and have been successfully applied to various downstream NLP tasks, including recognizing textual entailment (RTE). RTE is the task of determining the logical relationship between two natural language sentences. So far it has been approached either with NLP pipelines and hand-engineered features, or with neural network architectures that independently map the two sentences to vector representations. Instead of encoding the two sentences independently, we propose a model that encodes the second sentence conditioned on an encoding of the first. Furthermore, we apply a neural attention mechanism to bridge the hidden-state bottleneck of the RNN. The work in this chapter first appeared in: Tim Rocktäschel, Edward Grefenstette, Karl Moritz Hermann, Tomáš Kočiský and Phil Blunsom, "Reasoning about Entailment with Neural Attention", in Proceedings of the International Conference on Learning Representations (ICLR, 2016).

Thesis Structure. In the background chapter we provide an overview of representation learning, computation graphs, and the first-order logic notation used throughout this thesis; furthermore, we explain the task of automated knowledge base completion and describe neural link prediction approaches proposed for this task. The subsequent chapter introduces our method for regularizing symbol representations
with first-order logic rules. The chapter after that focuses on direct implications between predicates, a subclass of first-order logic rules; for this class we provide an efficient method for directly regularizing predicate representations. We then introduce the recursive construction of a neural network for automated knowledge base completion based on Prolog's backward chaining algorithm, and afterwards present an RNN for RTE based on conditional encoding and a neural attention mechanism. The final chapter concludes the thesis with a discussion of limitations, open issues, and avenues for future research.

Background. This chapter introduces the core methods used in this thesis. The first section explains function approximation with neural networks and backpropagation. Subsequently, the second section introduces first-order logic, the backward chaining algorithm, and inductive logic programming. The final section discusses prior work on automated knowledge base completion, linking the first two sections together.

Function Approximation and Neural Networks. In this thesis we consider models that can be formulated as differentiable functions f_θ parameterized by θ. Our task is to find such functions and to learn their parameters from a set of training examples, where each example consists of an input x_i and a desired output y_i. These can be structured objects; for instance, x_i could be a fact about the world like directedBy(INTERSTELLAR, NOLAN) and the corresponding target y_i a truth score TRUE. We define a loss function L that measures the discrepancy between the provided output y_i and the predicted output f_θ(x_i), given the current setting of the parameters θ. Seeking parameters that minimize this discrepancy over the training set, the learning problem can be written as

    θ* = arg min_θ L(θ).

Note that L is also a function of the training data, and that we can include a regularizer on the parameters in L to improve generalization, which we sometimes omit for brevity. As f_θ and L are differentiable functions, we can use gradient-based optimization methods such as stochastic gradient descent (SGD; Nemirovski and Yudin), iteratively updating the parameters based on the training data:

    θ_{t+1} = θ_t − η_t ∇_θ L(θ_t),

where η_t denotes the learning rate at time step t and ∇_θ L(θ_t) the gradient of the loss with respect to the parameters given the current batch. (There are many alternative methods for minimizing a loss; the models in this thesis are optimized with variants of SGD.)

Computation Graphs. A useful abstraction for defining models as differentiable functions are computation graphs, which illustrate precisely the computations carried out by a model (Goodfellow et al.): a directed acyclic graph in which nodes represent variables.
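The setup above, a differentiable model trained by SGD, can be sketched in a few lines of NumPy for the logistic-regression computation graph. This is an illustrative toy with synthetic data and batch size one, not code from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: x in R^2, label y = 1 iff x1 + x2 > 0 (linearly separable).
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(2)   # parameters theta = (w, b)
b = 0.0
eta = 0.5         # learning rate

def sigm(z):
    return 1.0 / (1.0 + np.exp(-z))

# SGD: theta <- theta - eta * gradient of the negative log-likelihood
# of the current example.
for epoch in range(20):
    for i in rng.permutation(len(X)):
        p = sigm(X[i] @ w + b)   # forward pass of the computation graph
        g = p - y[i]             # d loss / d (pre-sigmoid score)
        w -= eta * g * X[i]      # chain rule through the dot product
        b -= eta * g

acc = np.mean((sigm(X @ w + b) > 0.5) == (y == 1))
```

After a few epochs the learned weight vector aligns with the separating direction (1, 1), illustrating how repeated small gradient steps minimize the global loss.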
Directed edges from one or multiple nodes to another node correspond to a differentiable operation. As variables we consider scalars, vectors, matrices, and more generally tensors. We denote scalars by lower-case letters, vectors by bold lower-case letters, matrices by bold capital letters, and higher-order tensors by Euler script letters. Variables are either inputs, outputs, or parameters of the model. The first of the two computation graphs in the figure below calculates sigm(x⊤y), where sigm refers to the scalar sigmoid operation and dot denotes the dot product of two vectors; furthermore, we name the i-th intermediate expression z_i. The second, slightly more complex computation graph has two parameters; this graph in fact represents logistic regression.

(Figure: Two simple computation graphs. Inputs are gray, parameters blue, outputs dashed; the operations, sigm and dot, are shown next to the nodes.)

Symbols to Subsymbolic Representations. In this thesis we use neural networks to learn representations of symbols, for instance symbols for words, constants, or predicate names. When we say that we learn a subsymbolic representation of symbols, we mean that we map symbols to vectors or, more generally, tensors. (For implementation purposes one can also consider structured objects like tuples and lists; current deep learning libraries such as Theano, Torch (Collobert et al.) and TensorFlow (Abadi et al.) come with support for tuples and lists, but for brevity we leave them out of the description.) This is done by first enumerating all symbols and assigning a number to each. Let the i-th symbol be represented by its one-hot vector, i.e. the vector with a one at index i and zeros everywhere else.

(Figure: A computation graph in which embedding lookups are realized as matrix multiplications (matmul).)

The figure above shows a computation graph whose inputs are one-hot vectors of symbols or, equivalently, their indices. In a first layer, these one-hot vectors are mapped to dense vector
computation graph respect parameters embedding lookup matrices case backpropagation learning data need able calculate gradient loss respect model parameters assume operations computation graph differentiable recursively apply chain rule calculus chain rule calculus assume given composite function chain rule allows decompose calculation gradient entire computation respect follows goodfellow jacobian matrix matrix partial derivatives gradient respect note approach generalizes matrices tensors reshaping vectors gradient calculation vectorization back original shape afterwards backpropagation uses chain rule recursively define efficient calculation gradients parameters inputs computation graph avoiding recalculation previously calculated expressions achieved via dynamic programming storing previously calculated gradient expressions reusing later gradient calculations refer reader goodfellow details order run backpropagation differentiable operation want use computation graph need ensure function differentiable respect one inputs example let take computation graph depicted fig example assume given upstream gradient want compute gradient respect inputs computations carried backpropagation function approximation neural networks sigm dot figure backward pass computation graph shown fig depicted fig instance recursively applying chain rule calculate follows note computation reused calculating get gradient entire computation graph including upstream nodes respect via later use computation graphs nodes used multiple downstream computations nodes receive multiple gradients downstream nodes backpropagation summed calculate gradient computation graph respect variable represented node chapter use backpropagation computing gradient differentiable propositional logic rules respect vector representations symbols develop models combine representation learning logic chapter take construct computation graph possible proofs knowledge base using backward chaining algorithm allow 
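The backward pass described above can be made concrete on the sigm(dot(x, y)) graph: compute the forward pass, apply the chain rule by hand, and check the result against finite differences. A minimal sketch with made-up input values:

```python
import numpy as np

def sigm(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.0, 2.0])
y = np.array([1.5, 0.3, -0.2])

# Forward pass: z1 = dot(x, y), z = sigm(z1).
z1 = x @ y
z = sigm(z1)

# Backward pass via the chain rule; the upstream gradient dz/dz is 1.
dz_dz1 = z * (1.0 - z)   # derivative of the sigmoid
dz_dx = dz_dz1 * y       # dz_dz1 is reused, as backpropagation reuses subexpressions
dz_dy = dz_dz1 * x

# Numerical check of dz/dx by central finite differences.
eps = 1e-6
num = np.zeros_like(x)
for i in range(len(x)):
    xp = x.copy(); xp[i] += eps
    xm = x.copy(); xm[i] -= eps
    num[i] = (sigm(xp @ y) - sigm(xm @ y)) / (2 * eps)
```

The reuse of dz_dz1 for both input gradients is exactly the dynamic-programming aspect of backpropagation described above.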
This allows us to calculate the gradient of proofs with respect to symbol representations and to induce rules using gradient descent. Finally, we use recurrent neural networks, i.e. computation graphs that are dynamically constructed for every input, to encode input sequences of word representations.

First-order Logic. We now turn to a brief introduction of first-order logic to the extent that it is used in subsequent chapters. This section follows the syntax of Prolog and Datalog (Gallaire and Minker) and is based on Lloyd, on Nilsson and Małuszyński, and on Džeroski.

(Table: An example knowledge base in Prolog syntax (left) and in the list representation used by the backward chaining algorithm (right): the facts fatherOf(ABE, HOMER), parentOf(HOMER, LISA), parentOf(HOMER, BART), grandpaOf(ABE, LISA) and grandfatherOf(ABE, MAGGIE), and the rules grandfatherOf(X, Y) :– fatherOf(X, Z), parentOf(Z, Y) and grandparentOf(X, Y) :– grandfatherOf(X, Y), over the constants ABE, HOMER, LISA, BART and MAGGIE.)

Syntax. We start by defining an atom as a predicate symbol together with a list of terms. We use lower-case names to refer to predicate and constant symbols, such as fatherOf and BART, and upper-case names for variables, such as X. In Prolog one also considers function terms and defines constants as function terms of arity zero; in this thesis, however, we work with a function-free subset of first-order logic, as in Datalog, and hence a term is either a constant or a variable. For instance, grandfatherOf(X, BART) is an atom with the predicate grandfatherOf and two terms, the variable X and the constant BART. The arity of a predicate is the number of terms it takes as arguments; thus grandfatherOf is a binary predicate. A literal is a negated or non-negated atom, and a ground literal is a literal without variables (see the table above). Furthermore, we consider first-order rules of the form h :– b, where the body b (also called condition or premise) is a possibly empty conjunction of atoms, represented as a list, and the head h (also called conclusion, consequent, or hypothesis) is an atom. (We use predicate and relation synonymously throughout this thesis, and likewise rule, clause, and formula.) Examples of such rules are given in the table above. Rules with exactly one atom in the head are called definite rules; in this thesis we only consider definite rules. All variables in a rule are universally quantified. A rule is a ground rule if all of its literals are ground.
We call a ground rule with an empty body a fact; hence the first rules in the table above are facts. We define a set of symbols containing all constant, predicate, and variable symbols, and we call a set of definite rules like the one in the table a knowledge base or logic program. A substitution is an assignment of variable symbols to terms; applying a substitution to an atom replaces every occurrence of a variable by its respective term.

So far we have only defined the syntax of the first-order logic subset used in this thesis. To assign meaning to this language, we need semantics, i.e. a way to derive the truth value of facts. Here we focus on proof theory, deriving the truth of a fact from other facts and rules, and in the next subsection explain the backward chaining algorithm for deductive reasoning, which derives new atoms from existing atoms by applying rules. (See Džeroski for other methods of assigning semantics.)

Deduction with Backward Chaining. Once we represent knowledge, i.e. facts and rules, in symbolic form, we can appeal to automated deduction systems to infer new facts. For instance, given the logic program in the table above, we can automatically deduce that grandfatherOf(ABE, LISA) is a true fact by applying the grandfatherOf rule to known facts. Backward chaining is a common method for such automated theorem proving; we refer the reader to Russell and Norvig, to Van Gelder, and to Gallaire and Minker for details. The figure below gives an excerpt of its pseudocode in the style of a functional programming language, making particular use of pattern matching to check properties of the arguments passed to a module. Note that the wildcard pattern matches every argument, and that the order of definitions matters: once the arguments match the patterns of a line, subsequent lines are not evaluated. We denote sets by Euler script letters, lists by small capital letters, and lists of lists by blackboard bold letters, and we write x :: L for prepending an element x to a list L. In this representation, an atom is the list of its predicate symbol and terms, and a rule is a list of atoms, i.e. a list of lists, whose first element is the head and whose remaining elements form the body, for example [[grandfatherOf, X, Y], [fatherOf, X, Z], [parentOf, Z, Y]]; we sometimes simply call such a list the rule when this is clear from context.

(Figure: Simplified pseudocode for symbolic backward chaining, consisting of the functions or, and, unify, and substitute; cycle detection is omitted for brevity. See Russell and Norvig, Van Gelder, and Gallaire and Minker for details.)

Given a goal such as [grandparentOf, ABE, Q], backward chaining finds substitutions of free variables with constants from facts. This is achieved by recursively iterating through the rules that translate a goal into subgoals, which it then attempts to prove, thereby exploring all possible
proofs. For our example, the knowledge base contains the following rule, which can be applied to find answers for that goal: [[grandfatherOf, X, Y], [fatherOf, X, Z], [parentOf, Z, Y]]. The proof exploration is divided between two functions, or and and, which together perform a depth-first search through the space of possible proofs. The function or attempts to prove a goal by unifying it with the head of every rule, yielding intermediate substitutions. Unification iterates over the pairs of symbols in the two lists corresponding to the atoms being unified, and updates the substitution set if one of the two symbols is a variable; it returns a failure if the two symbols are distinct non-variables, or if the two atoms are of different arity. When unification of the goal with the head of a rule succeeds, the body of that rule and the substitution are passed to and, which attempts to prove every atom in the body sequentially, by first applying the substitution and subsequently calling or. This is repeated recursively until unification fails, until atoms are proven by unification with facts in the knowledge base, or until a certain proof depth is exceeded. The function substitute replaces variables in an atom by the corresponding symbols whenever the substitution list contains an assignment for them.

(Table: An example proof using backward chaining for the example knowledge base, listing the rule applied and the remaining goals at each step.)

(Figure: The full proof tree for a small knowledge base and the query grandfatherOf(ABE, BART); failed branches are marked failure, the successful branch success.)

The table above shows the proof of the query grandparentOf(ABE, Q) for the example knowledge base using this method, and the figure shows the full proof tree for a small knowledge base and the query grandfatherOf(ABE, BART). The numbers above the arrows correspond to applications of the respective rules; recursive calls to or and and are visualized together with the proof depth on the right side, and some proofs are aborted early due to unification failure.

Though first-order logic can be used for complex reasoning, a drawback of such symbolic inference is that it cannot generalize beyond the explicitly specified facts and rules.
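The or/and/unify/substitute scheme described above can be sketched in Python. This is a simplified illustration over the example family knowledge base, not the thesis pseudocode: variables are upper-case strings, rule variables are not renamed between applications, and recursion is bounded by a fixed depth instead of full cycle detection:

```python
# Atoms are tuples, rules are lists of atoms with the head first.
KB = [
    [("fatherOf", "abe", "homer")],
    [("parentOf", "homer", "lisa")],
    [("parentOf", "homer", "bart")],
    [("grandfatherOf", "X", "Y"), ("fatherOf", "X", "Z"), ("parentOf", "Z", "Y")],
]

def is_var(t):
    return isinstance(t, str) and t[0].isupper()

def substitute(atom, subst):
    return tuple(subst.get(t, t) for t in atom)

def walk(t, subst):
    while t in subst:          # follow chains of variable bindings
        t = subst[t]
    return t

def unify(a, b, subst):
    """Return an extended substitution unifying atoms a and b, or None."""
    if len(a) != len(b):
        return None            # different arity: failure
    subst = dict(subst)
    for s, t in zip(a, b):
        s, t = subst.get(s, s), subst.get(t, t)
        if s == t:
            continue
        if is_var(s):
            subst[s] = t
        elif is_var(t):
            subst[t] = s
        else:
            return None        # two distinct constants: failure
    return subst

def or_(goal, subst, depth=0):
    """Try to prove `goal` with every rule in the KB (the OR step)."""
    if depth > 4:
        return
    for rule in KB:
        head, body = rule[0], rule[1:]
        s = unify(goal, head, subst)
        if s is not None:
            yield from and_(body, s, depth)

def and_(goals, subst, depth):
    """Prove all subgoals sequentially (the AND step)."""
    if not goals:
        yield subst
        return
    for s in or_(substitute(goals[0], subst), subst, depth + 1):
        yield from and_(goals[1:], s, depth)

answers = sorted({walk("Q", s) for s in or_(("grandfatherOf", "abe", "Q"), {})})
```

Running the query grandfatherOf(abe, Q) against this toy KB yields both grandchildren of abe, mirroring the substitution-based proof search described above.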
For instance, given a large knowledge base, we would like to learn automatically that in many cases where we observe fatherOf we also observe parentOf. This can be approached with statistical relational learning and inductive logic programming, as well as with neural networks for knowledge base completion, which we discuss in the remainder of this chapter.

Inductive Logic Programming. While backward chaining is used for deduction, i.e. inferring facts given rules and other facts, inductive logic programming (ILP; Muggleton) combines logic programming with inductive learning in order to learn logic programs from training data. The training data can include not only facts but also a provided logic program that the ILP system is supposed to extend. Specifically, given facts and rules, the task of an ILP system is to find further regularities in the form of hypotheses covering unseen facts (Džeroski); crucially, a hypothesis is formulated in first-order logic. There are different variants of ILP learning tasks (De Raedt); here we focus on learning from entailment (Muggleton): given examples of positive and negative facts, the ILP system is supposed to find rules from which the positive facts, but not the negative facts, can be deduced. There are many variants of ILP systems, and we refer the reader to Muggleton, De Raedt, and Džeroski for an overview. One prominent system is the First-Order Inductive Learner (FOIL; Quinlan), a greedy algorithm that induces one rule at a time by constructing a body that satisfies a maximum number of positive and a minimum number of negative facts. Later in this thesis we construct neural networks for proving facts and introduce a method for inducing logic programs using gradient descent while learning vector representations of symbols.

Automated Knowledge Base Completion. Automated knowledge base completion is the task of inferring facts from information contained in a knowledge base and other resources, such as text. It is an important task because knowledge bases are usually incomplete; for instance, the placeOfBirth predicate is missing for a large fraction of the people in Freebase (Dong et al.). Prominent recent approaches to automated knowledge base completion learn vector representations of symbols via neural link prediction models. The appeal of learning such subsymbolic representations lies in their ability to capture similarity, and even implicature, directly in a vector space. Compared to ILP systems, neural link prediction models do not search the combinatorial space of logic programs; instead they learn a local scoring function based on subsymbolic representations using continuous optimization. However, this comes at the cost of uninterpretable models and of no straightforward way to incorporate logical background knowledge, drawbacks that we seek to address in this thesis. Another benefit of neural link prediction models over ILP is that inferring whether a fact is true often amounts to efficient algebraic operations, i.e. forward passes in shallow neural networks, which
makes inference scalable. In addition, the representations of symbols can be compositional; for instance, the representation of a natural language predicate can be composed from a sequence of word representations (Verga et al.). In recent years many models for automated knowledge base completion have been proposed; in the next sections we discuss the most prominent approaches. At a high level, these methods can be categorized into neural link prediction models, which define a local scoring function for the truth of a fact based on estimated symbol representations, and path-based models, which predict new relations between two entities from the paths connecting them.

Matrix Factorization. In this section we describe the matrix factorization relation extraction model of Riedel et al. It is an instance of a simple neural link prediction model, and we discuss it in detail as it is the basis of the rule injection methods developed later. Assume a set of observed entity-pair symbols and a set of predicate symbols. Predicates can either represent structured binary relations from Freebase, a large collaborative knowledge base (Bollacker et al.), or unstructured Open Information Extraction (OpenIE; Etzioni et al.) textual surface patterns collected from news articles; these serve as examples of structured and unstructured relations, respectively. A textual surface pattern is a pattern with placeholders for the entities. For instance, the relationship between ELON MUSK and TESLA expressed in the sentence "Elon Musk, CEO of Tesla and SpaceX, cites the Foundation trilogy by Isaac Asimov as a major influence on his thinking" could be represented by a ground atom whose predicate corresponds to the textual pattern and whose arguments are TESLA and ELON MUSK. Since ELON MUSK appeared first in the textual pattern, the pattern's predicate marks that this constant is used as its second argument; this way we can later introduce rules without changing the order of the variables in the body of a rule.

The model of Riedel et al. maps all symbols in the knowledge base to subsymbolic representations, i.e. it learns a dense k-dimensional vector representation for every relation and every entity pair. Thus a training fact is represented by two vectors, v_s for the relation and v_ij for the entity pair. We refer to such representations as embeddings, subsymbolic representations, vector representations, neural representations, or simply symbol representations when clear from context. The truth estimate of a fact is modeled via the sigmoid of the dot product of the two symbol representations:

    p_{s,i,j} = sigm(v_s⊤ v_ij).

For a fact from the training set, this expression corresponds to the computation graph shown earlier in the discussion of computation graphs. The score p_{s,i,j} measures the compatibility of the relation and entity-pair representations and can be
interpreted as the probability of the fact being true conditioned on the parameters of the model. We would like to train the symbol representations such that true ground atoms receive a score close to one and false ground atoms a score close to zero. This results in a matrix factorization, corresponding to a generalized principal component analysis (GPCA; Collins et al.), into matrices of relation and entity-pair representations. The factorization is depicted in the figure below: known facts are shown in green, and the task is to complete the cells marked with a question mark. The scoring function above generalizes to unseen facts because every relation and every entity pair is represented in a low-dimensional space. This information bottleneck leads to similar entity pairs being represented close to one another in the vector space, and likewise for similar predicates. The distributional hypothesis, that one shall know a word by the company it keeps (Firth), has been used for learning the meaning of words from large collections of text (Lowe and McDonald; Lapata); applied to automated knowledge base completion, one could say that the meaning of a relation can be estimated from the entity pairs it appears with, and the meaning of an entity pair from the relations it appears with. If we observe for many entity pairs that both fatherOf and dadOf hold, for example, we may assume the two relations to be similar.

(Figure: Knowledge base inference as matrix completion over textual patterns and Freebase relations, with true training facts in green, unobserved facts marked by question marks, relation representations in red, and entity-pair representations in orange.)

Bayesian Personalized Ranking. A common problem in automated knowledge base completion is that no negative facts are observed. In the recommendation literature this is called the implicit feedback problem (Rendle et al.). Applied to knowledge base completion, where we would like to infer, or "recommend", relations for unobserved facts, we know that the observed facts are true, but for the remaining facts we do not know whether they are false or merely missing: unobserved facts can be either true or false. One way to address this issue is to formulate the problem in terms of a ranking loss and to sample unobserved facts as negative facts during training. Given a known fact, Bayesian Personalized Ranking (BPR; Rendle et al.) samples another entity pair (e_m, e_n) for which the same relation is not known to hold and adds the soft constraint v_s⊤ v_ij ≥ v_s⊤ v_mn. This is a relaxation of the local closed-world assumption (CWA; Dong et al.), under which the knowledge base of true facts of a
relation is assumed to be complete, so that a sampled unobserved fact would consequently be assumed to be negative. Under the BPR assumption, unseen facts are not necessarily false; their probability of being true is merely lower than that of known facts, and thus sampled unobserved facts should receive a lower score than known facts. Instead of working with a fixed set of samples, we resample negative facts for every known fact in every epoch, where an epoch is a full iteration through all known facts. Denoting the sampled set of entity pairs by C, this leads to the overall approximate loss

    L = Σ over known facts r_s(e_i, e_j), with (e_m, e_n) sampled from C, of
        −log sigm(v_s⊤ v_ij − v_s⊤ v_mn) + λ_p ‖v_s‖² + λ_c ‖v_ij‖² + λ_c ‖v_mn‖²,

where the hyperparameters λ_p and λ_c determine the strength of the ℓ2 regularization of the relation and entity-pair representations, respectively. Furthermore, there is an implicit weighting that accounts for how often facts are sampled at training time: as we resample an unobserved fact every time we visit an observed fact during training, unobserved facts for relations with many observed facts are more likely to be sampled as negative facts than unobserved facts for relations with few observed facts. The complete computation graph of a single training example for matrix factorization with BPR, without regularization, is shown in the figure below.

(Figure: The complete computation graph for a single training example under Bayesian Personalized Ranking, combining embedding lookups (matmul), dot products, a sigmoid, and the loss.)

Neural Link Prediction Models. The matrix factorization approach of the previous section is one example of a simple neural link prediction model. Alternative methods define the score p_{s,i,j} in different ways, use different loss functions, or parameterize the relation and entity (or entity-pair) representations differently. For instance, Bordes et al. train two projection matrices per relation, one for the left and one for the right argument position; the score of a fact is then defined via the norm of the difference of the two projected entity embeddings:

    p_{s,i,j} = −‖M_s^left v_i − M_s^right v_j‖.

Note that, compared to the matrix factorization model of Riedel et al., which embeds entity pairs, this model learns individual entity embeddings. Similarly, TransE (Bordes et al.) models the score via the norm of the difference between the right entity embedding and a translation of the left entity embedding by the relation embedding:

    p_{s,i,j} = −‖v_i + v_s − v_j‖,

with the embeddings constrained in norm. RESCAL (Nickel et al.) represents relations as matrices and defines the score of a fact as the bilinear form

    p_{s,i,j} = v_i⊤ M_s v_j.

In contrast to the models mentioned so far, RESCAL is optimized not with SGD but using alternating least squares (Nickel et al.). TRESCAL (Chang et al.) extends RESCAL with entity type constraints for Freebase relations.
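The matrix factorization score and the BPR training loop described above can be sketched as follows. A toy example with made-up dimensions, hand-derived SGD gradients, and invented facts; not the thesis implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigm(z):
    return 1.0 / (1.0 + np.exp(-z))

n_rel, n_pairs, k = 3, 5, 8
V_rel = 0.1 * rng.normal(size=(n_rel, k))     # relation embeddings v_s
V_pair = 0.1 * rng.normal(size=(n_pairs, k))  # entity-pair embeddings v_ij

# Observed facts as (relation index, entity-pair index); invented toy data.
facts = [(0, 0), (0, 1), (1, 1), (2, 3)]
lam, eta = 0.01, 0.1

for epoch in range(500):
    for s, ij in facts:
        # Resample a negative entity pair (m, n) with s(m, n) unobserved.
        mn = int(rng.integers(n_pairs))
        while (s, mn) in facts:
            mn = int(rng.integers(n_pairs))
        vs, vij, vmn = V_rel[s].copy(), V_pair[ij].copy(), V_pair[mn].copy()
        # BPR loss: -log sigm(v_s.v_ij - v_s.v_mn) + l2 terms (factor in lam).
        g = sigm(vs @ (vij - vmn)) - 1.0       # d loss / d (score difference)
        V_rel[s] -= eta * (g * (vij - vmn) + lam * vs)
        V_pair[ij] -= eta * (g * vs + lam * vij)
        V_pair[mn] -= eta * (-g * vs + lam * vmn)

def score(s, ij):
    return sigm(V_rel[s] @ V_pair[ij])
```

After training, known facts should rank above unobserved facts for the same relation, which is exactly the soft constraint v_s⊤v_ij ≥ v_s⊤v_mn.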
Neural Tensor Networks (Socher et al.) add further compatibility scores between the relation representation and the individual entity arguments to the bilinear RESCAL score, and are optimized with L-BFGS (Byrd et al.). DistMult (Yang et al.) models the score as a trilinear dot product,

    p_{s,i,j} = Σ_k v_sk v_ik v_jk.

This multiplicative model is a special case of RESCAL in which M_s is constrained to be diagonal. ComplEx (Trouillon et al.) uses vectors of complex numbers for representing relations and entities. Let real(v) denote the real part and imag(v) the imaginary part of a complex vector v; the scoring function is defined as

    p_{s,i,j} = Σ_k [ real(v_s)_k real(v_i)_k real(v_j)_k
                    + real(v_s)_k imag(v_i)_k imag(v_j)_k
                    + imag(v_s)_k real(v_i)_k imag(v_j)_k
                    − imag(v_s)_k imag(v_i)_k real(v_j)_k ].

The benefit of ComplEx over RESCAL and DistMult is that, by using complex vectors, it can capture asymmetric as well as symmetric relations. Building upon Riedel et al., Verga et al. developed a factorization approach that encodes the surface form of patterns using long short-term memories (LSTMs; Hochreiter and Schmidhuber) instead of learning a non-compositional representation per pattern; similarly, Toutanova et al. use convolutional neural networks (CNNs) to encode surface-form patterns. In a follow-up study, Verga et al. propose a method in which entity-pair representations are not learned but instead computed from the observed relations, thereby generalizing to new entity pairs at test time.

Path-based Models. The methods presented in the previous sections model the truth of a fact with a local scoring function over the representations of the relation and the entities or entity pairs involved. The models in this section instead score facts based on paths between two entities, either via random walks in knowledge bases (path ranking) or by encoding entire paths in vector space (path encoding).

Path Ranking. The Path Ranking Algorithm (PRA; Lao and Cohen; Lao et al.) learns to predict a relation between two entities using logistic regression over features collected from random walks between the entities up to a predefined length. Lao et al. extend PRA to inference over OpenIE surface patterns in addition to the structured relations contained in knowledge bases. A related approach is Programming with Personalized PageRank (ProPPR; Wang et al.), a probabilistic logic programming language that uses Prolog's Selective Linear Definite clause resolution (SLD; Kowalski and Kuehner) as the search strategy for theorem proving, constructing a graph of proofs. Instead of returning deterministic proofs for a given query, ProPPR defines a stochastic process on the graph of proofs using PageRank (Page et al.); furthermore, one can attach features to the heads of rules, whose weights are learned from data to guide the stochastic proofs. Experiments with ProPPR have been conducted on comparably small knowledge bases.
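Returning to the scoring functions defined in this section, DistMult and ComplEx can be written down directly; the sketch below also illustrates the symmetry point, using randomly chosen embeddings (all names and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
k = 4

def distmult(vs, vi, vj):
    # p_{s,i,j} = sum_k vs_k vi_k vj_k (a trilinear dot product)
    return float(np.sum(vs * vi * vj))

def complex_score(vs, vi, vj):
    # Re(<vs, vi, conj(vj)>) for complex-valued embeddings; expanding the
    # real part yields the four-term formula given in the text.
    return float(np.real(np.sum(vs * vi * np.conj(vj))))

vs, vi, vj = (rng.normal(size=k) for _ in range(3))

# DistMult is symmetric in its entity arguments ...
sym = abs(distmult(vs, vi, vj) - distmult(vs, vj, vi)) < 1e-9

# ... whereas ComplEx can score (i, j) and (j, i) differently.
cs, ci, cj = (rng.normal(size=k) + 1j * rng.normal(size=k) for _ in range(3))
asym = abs(complex_score(cs, ci, cj) - complex_score(cs, cj, ci)) > 1e-6
```

The symmetry of DistMult is why it cannot model asymmetric relations such as fatherOf, which motivates the complex-valued formulation.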
In contrast to neural link prediction models, ProPPR and extensions of PRA have not yet been scaled to large knowledge bases (Gardner et al.). A shortcoming of PRA and ProPPR is that they operate on symbols instead of on vector representations of symbols; this limits their ability to generalize and results in an explosion of the number of paths to consider as the path length increases. To overcome this limitation, Gardner et al. extend PRA to include vector representations of verbs, obtained via PCA of a matrix of verbs and entity tuples collected from a large corpus. These representations are then used for clustering relations, thus avoiding the explosion of path features of prior PRA work and improving generalization. Gardner et al. take this approach further by introducing vector space similarity into the random walk inference itself, dealing with paths that contain unseen surface forms by measuring their similarity to surface forms seen during training and following relations proportionally to that similarity.

Path Encoding. While the approaches above introduce vector representations into PRA, those representations are pretrained on an external corpus rather than trained for the task from data, so the relation representations cannot adapt during training. Neelakantan et al. instead propose RNNs for learning embeddings of entire paths, where the inputs to the RNNs are trainable relation representations. Given a known relation between two entities and a path connecting the two entities, an RNN for the target relation is trained so that the dot product between its output encoding of the path and the representation of the target relation is maximal (Das et al.). Das et al. note three limitations of the work by Neelakantan et al.: first, there is no parameter sharing between the RNNs that encode paths for different target relations; second, there is no aggregation of information from the multiple path encodings between an entity pair; and lastly, no entity information along the path besides the relation representations is used. Das et al. address the first issue by using a single RNN whose parameters are shared across all paths, and the second by training an aggregation function over the encodings of the multiple paths connecting two entities; finally, they obtain entity representations, fed to the RNN alongside the relation representations, as the sum of learned vector representations of an entity's annotated Freebase types.

Regularizing Representations with Logic Rules. In this chapter we introduce a paradigm for combining neural link prediction models for automated knowledge base completion with background knowledge in the form of first-order logic
rules. We investigate simple baselines that enforce rules through symbolic inference, and as our main contribution propose a novel joint model that learns vector representations of relations and entity pairs using both distant supervision and first-order logic rules, such that the rules are captured directly in the vector space of the symbol representations. To this end, we map symbolic rules to differentiable computation graphs representing losses that are added to the training objective of an existing neural link prediction model. At test time, inference remains as efficient as before, since only the local scoring function over symbol representations is used and no logical inference is needed. We present an empirical evaluation in which we incorporate automatically mined rules into matrix factorization as the neural link prediction model. Our experiments demonstrate the benefits of incorporating logical knowledge for Freebase relation extraction; specifically, we find that the joint factorization of distant and logic supervision is efficient, accurate, and robust to noise. In particular, by incorporating logical rules we are able to train relation extractors for relations with no observed training facts at all.

Matrix Factorization Embeds Ground Literals. Earlier we introduced matrix factorization as a method for learning representations of predicates and constant pairs for automated knowledge base completion. In this section we elaborate on the view that matrix factorization indeed embeds ground atoms in a
vector space, and thereby lay the foundation for developing a method that embeds logic rules. Let F denote a rule; for instance, F could be a ground rule without a body, i.e. a fact such as parentOf(HOMER, BART). Furthermore, let [F] denote the probability that the rule F is true conditioned on the parameters of the model. For now we restrict F to ground atoms, and discuss ground literals and propositional rules later. With a slight abuse of notation, we also let [·] denote the mapping from predicate symbols and pairs of constant symbols to the subsymbolic representations assigned by the model. Note that this mapping depends on the neural link prediction model: for matrix factorization it is a function from symbols for constant pairs and predicates to dense vector representations. Using this notation, matrix factorization decomposes the probability of a fact F = r_s(e_i, e_j) as

    [F] = [r_s(e_i, e_j)] = sigm([r_s]⊤ [(e_i, e_j)]) = sigm(v_s⊤ v_ij).

The training objective used by Riedel et al. was Bayesian Personalized Ranking, which encourages the scores of known true facts to be higher than those of unknown facts. Here, however, we will model the probability of a rule from the probabilities of ground atoms scored by the neural link prediction model, and therefore need scores that lie in the interval [0, 1]. Instead of BPR we thus use the negative log-likelihood, which directly maximizes the probability of rules, including ground atoms (we omit the ℓ2 regularizer for brevity):

    L = Σ over rules F of −log [F].

Therefore, instead of learning to rank facts, we optimize the representations to assign a score close to one to all rules, including the facts. The model can thus be seen as a generalization of a neural link prediction model to rules beyond ground atoms; with matrix factorization as the neural link prediction model, it embeds ground atoms in a vector space of predicate and constant-pair representations. Next we extend this to ground literals, and afterwards to propositional logic rules.

Negation. Let F = ¬a be the negation of a ground atom a. We model the probability of F as

    [F] = [¬a] = 1 − [a].

Using ground literals in the training objective above, we say that matrix factorization embeds ground literals by learning predicate and constant-pair representations: given known ground literals, i.e. negated and non-negated facts, it embeds the symbols in a vector space such that the scoring function assigns a high probability to these literals. The method can generalize to unknown facts, i.e. predict their probability at test time, by placing similar symbols close to one another in the embedding space. Note that so far we have not gained anything over the matrix factorization model described earlier, apart from notation that will make it easier later to embed more complex rules, such as rules with negated atoms.

Embedding Propositional Logic. From negation and conjunction we can model any other Boolean operator, and hence any propositional rule. Above, negation was effectively turned from a symbolic logical operation into a differentiable operation that can be used when learning subsymbolic representations for automated knowledge base completion. If we can find such a differentiable operation for conjunction as well, we can backpropagate through any propositional logical expression and learn vector representations of symbols that encode given background knowledge stated in propositional logic.

Conjunction. In product fuzzy logic, conjunction is modeled using the product
t-norm. Let F = a ∧ b be the conjunction of two propositional expressions a and b. The probability of F is then defined as

    [F] = [a ∧ b] = [a] [b].

In other words, we have replaced conjunction, a symbolic logical operation, with multiplication, a differentiable operation. Note that alternatives for modeling conjunction exist; for instance, one could take min([a], [b]). Given the probabilities of ground atoms, product fuzzy logic lets us calculate the probability of a conjunction of atoms. In a further step, we assume that we know the ground-truth probability of the conjunction of two atoms and use the negative log-likelihood to measure the discrepancy between the predicted probability of the conjunction and this ground truth. Our contribution is to backpropagate this discrepancy through the propositional rule and through the neural link prediction model that scores the ground atoms: we calculate the gradient with respect to the vector representations of the symbols and subsequently update these representations using gradient descent, thereby encoding the ground-truth propositional rule directly in the vector representations of the symbols. At test time, predicting the score of an unobserved ground atom is still done efficiently by calculating sigm(v_s⊤ v_ij).

Disjunction. Let F = a ∨ b be the disjunction of two propositional expressions a and b. Using De Morgan's law, a ∨ b = ¬(¬a ∧ ¬b), together with the rules for negation and conjunction, we model the probability of F as

    [F] = [a ∨ b] = 1 − (1 − [a])(1 − [b]) = [a] + [b] − [a] [b].

Note that a and b do not have to be ground atoms; they can themselves be propositional logical expressions. Furthermore, any propositional logical expression can be normalized to conjunctive normal form; the rules above thus give us a way to construct a differentiable computation graph, and hence a loss term, for any symbolic expression in propositional logic.

Implication. A particular class of logical expressions that we care about in practice are propositional implication rules h :– b, where the body b is a possibly empty conjunction of atoms, represented as a list, and the head h is an atom. Since h :– b is equivalent to h ∨ ¬b, we model its probability as

    [F] = [h :– b] = [h ∨ ¬b] = [h] + (1 − [b]) − [h](1 − [b]) = [b] [h] + 1 − [b].

Say we want to ensure that fatherOf(HOMER, BART), parentOf(HOMER, BART) and motherOf(HOMER, BART) obey such an implication, for example that parentOf(HOMER, BART) follows from fatherOf(HOMER, BART) or from motherOf(HOMER, BART).

(Figure: The computation graph for such a rule: at the bottom, the embeddings of the predicates and of the constant pair (HOMER, BART) are scored by the neural link predictor via sigm and dot; in the middle, the propositional structure of the rule combines the resulting atom probabilities; at the top, the loss is computed.)

This gives us a way to map the rule to a differentiable expression that can be used alongside facts.
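The product fuzzy-logic operators introduced in this section can be collected into a few one-line functions. A minimal sketch; the function names are ours, not from the thesis:

```python
# Product fuzzy-logic operators over probabilities in [0, 1], used to turn
# propositional rules into differentiable expressions.
def neg(a):
    return 1.0 - a               # [not a] = 1 - [a]

def conj(a, b):
    return a * b                 # [a and b] = [a][b]  (product t-norm)

def disj(a, b):
    return a + b - a * b         # [a or b] = [a] + [b] - [a][b]

def implies(body, head):
    return disj(head, neg(body)) # [h :- b] = [b][h] + 1 - [b]

a, b = 0.9, 0.2
```

The disjunction really is the De Morgan dual of the product conjunction, and the implication reduces to the closed form given above; both identities can be checked numerically.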
We optimize the symbol representations using gradient descent, as previously done with matrix factorization alone. The computation graph in the figure above allows us to calculate the gradient of a rule with respect to the symbol representations: the structure of its bottom part, which scores the atoms, is determined by the neural link prediction model, while the middle part is determined by the propositional rule. Note that any neural link predictor can be used instead of matrix factorization for obtaining the probabilities of ground atoms. The only requirement is that the ground atom scores lie in the interval [0, 1]; for models where this is not the case, we can always apply a transformation such as the sigmoid.

Independence Assumption. The product used for conjunction rests on a strong assumption, namely that the probabilities of the arguments of a conjunction are conditionally independent given the symbol embeddings. We already get a violation of this assumption in the simple case F = a ∧ a, which yields [F] = [a][a] = [a]² instead of [a]. However, even for dependent arguments, where we only obtain an approximation to the probability of the conjunction, the result can still be used for gradient updates of the symbol representations, and we demonstrate empirically that conjunction modeled this way is useful for improving automated knowledge base completion. In the next chapter we present a way to avoid the independence assumption for implications.

Embedding First-order Logic via Grounding. As we can backpropagate through propositional Boolean expressions, we now turn to embedding first-order logic rules in the vector space of symbol representations. Note that at no point in this process do we chain rules to perform logical inference, neither at training nor at test time. Instead, every provided rule is used to construct a loss term for optimizing the vector representations of symbols. When training the model using gradient descent, we attempt to find a minimum of the global loss at which the probability of all rules, including the facts, is high. Such a minimum might be attained when the predictions of the neural link prediction model scoring the ground atoms agree with the individual rules, thereby predicting ground atoms as if rules had been chained.

(Figure: Given a set of training ground atoms, i.e. a sparse training matrix, matrix factorization learns predicate and constant-pair representations, i.e. regularized embeddings; when additional logic rules (red) are considered, we seek to learn symbol representations such that the predictions, i.e. the completed matrix, comply with the given rules.)

An overview of this process, with matrix factorization as the neural link prediction model scoring the ground atoms, is shown in the figure above.
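To illustrate how backpropagating through a rule shapes the embeddings, the toy sketch below scores a fact and an implication with the sigmoid-of-dot-product model and minimizes the negative log-likelihood of both by plain gradient descent. It uses a finite-difference gradient for brevity; backpropagation would compute the same gradient analytically. All embeddings and sizes are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigm(z):
    return 1.0 / (1.0 + np.exp(-z))

k = 4
# theta = [v_fatherOf, v_parentOf, v_(HOMER,BART)], flattened.
theta = 0.1 * rng.normal(size=3 * k)

def rule_loss(theta):
    v_f, v_p, v_hb = theta[:k], theta[k:2 * k], theta[2 * k:]
    father = sigm(v_f @ v_hb)                    # [fatherOf(HOMER, BART)]
    parent = sigm(v_p @ v_hb)                    # [parentOf(HOMER, BART)]
    implication = father * parent + 1.0 - father # [h :- b]
    # -log of the training fact plus -log of the rule.
    return -np.log(father) - np.log(implication)

eps, eta = 1e-6, 0.5
for step in range(800):
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        tp = theta.copy(); tp[i] += eps
        tm = theta.copy(); tm[i] -= eps
        grad[i] = (rule_loss(tp) - rule_loss(tm)) / (2 * eps)
    theta -= eta * grad

v_f, v_p, v_hb = theta[:k], theta[k:2 * k], theta[2 * k:]
p_father = sigm(v_f @ v_hb)
p_parent = sigm(v_p @ v_hb)
```

Minimizing the joint loss drives the fact probability toward one, and because the implication term then reduces to the head probability, the head fact is pulled up as well, even though it was never observed; this is the mechanism by which rules are encoded in the vector space.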
Grounding. Assuming a finite set of constants, we can ground a rule by replacing its free variables with constants from the domain. Let F denote a universally quantified rule with two free variables and let C be the set of entity pairs. From the operators above we obtain the ground loss L(F) = −Σ_{(i,j)∈C} log [F(i, j)], i.e. the negative log-probability of the grounded propositional rule, summed over entity pairs.

Stochastic grounding. For large domains of constants, this loss becomes costly to optimize, as it would result in many expressions being added to the training objective of the underlying neural link prediction model. For rules over pairs of constants we can reduce the number of terms drastically by only considering pairs of constants that appeared together in training facts. Training on all such pairs might still be prohibitively expensive for a large set of constant pairs, so we resort to a heuristic similar to BPR and sample constant pairs. Given a rule, we obtain a ground propositional rule for every constant pair for which at least one atom of the rule becomes a known training fact when substituting the free variables with the constants. In addition, we sample as many constant pairs for which all atoms of the rule remain unknown when substituting the free variables with the sampled constant pairs.

Pre-factorization inference baseline. Background knowledge in the form of rules can be seen as hints that are used to generate additional training data. In this baseline we first perform symbolic logical inference on the training data using the provided rules and add the inferred facts as additional training data. For example, for a rule B ⇒ H we add H(i, j) as an additional observed training fact for every pair of constants (i, j) for which B(i, j) is a true fact in the training data. This process is repeated until no new facts can be inferred, and subsequently we run matrix factorization on the extended set of observed facts. The intuition is that the additional training data generated from the rules provide evidence of the logical dependencies between relations to the matrix factorization model, while at the same time allowing the factorization to generalize to unobserved facts and to deal with ambiguity and noise in the data. No logical inference is performed during or after training the factorization model, as we expect the learned embeddings to encode the provided rules. One drawback of pre-factorization inference is that the rules are only enforced on observed atoms; dependencies on predicted facts are ignored. In contrast, our loss adds terms for the rules directly to the matrix factorization objective, thus jointly optimizing the embeddings to reconstruct known facts as well as to obey the provided logical background knowledge. However, as we only stochastically ground the rules, there is no guarantee that a given rule will indeed hold for all possible entity pairs at test time.
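The stochastic grounding heuristic described above can be sketched as follows; `stochastic_grounding` and its arguments are hypothetical names, assuming facts are stored as `(predicate, pair)` tuples:

```python
import random

def stochastic_grounding(rule_atoms, known_facts, all_pairs, seed=0):
    """For a universally quantified rule over entity pairs, collect every pair
    for which at least one atom of the rule is a known training fact, and
    sample (up to) an equal number of pairs for which no atom is known."""
    rng = random.Random(seed)
    positive = [p for p in all_pairs
                if any((pred, p) in known_facts for pred in rule_atoms)]
    unknown = [p for p in all_pairs
               if not any((pred, p) in known_facts for pred in rule_atoms)]
    sampled = rng.sample(unknown, min(len(positive), len(unknown)))
    return positive + sampled
```

Each returned pair yields one ground propositional rule, which is then turned into a loss term via the differentiable operators above.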
Despite this limitation, we next demonstrate that our approach is still useful for knowledge base completion; in the next chapter we introduce a method that overcomes the limitation for simple implication rules.

Experiments. There are two orthogonal questions when evaluating this method. First, can regularizing symbol embeddings with logic rules indeed capture logical knowledge in the vector space, and can this be used to improve completion? Second, where can we obtain background knowledge in the form of rules that is useful for a particular completion task? The latter is a rule-mining problem in its own right (Hipp et al., Schoenmackers et al., Niepert et al.); we therefore focus our experiments on evaluating the ability of the various approaches to benefit from rules, which we extract directly from the training data using a simple method described below.

Distant supervision evaluation. We follow the procedure of Riedel et al. for evaluating knowledge base completion of Freebase (Bollacker et al.) from textual data in the NYT corpus (Sandhaus). The training matrix consists of columns representing Freebase relations and textual patterns, and of rows representing constant pairs. For training facts belonging to Freebase relations, the constant pairs are divided into train and test, and we remove the Freebase facts of test pairs from the training data. The primary evaluation measure is weighted mean average precision (wMAP) (Manning et al., Riedel et al.). Let R be the set of test relations and F_j the set of test facts of relation j ∈ R. Furthermore, let r_{kj} be the rank in the list of facts for relation j, scored by the model, at which the test fact f_{kj} is reached. MAP is then defined as

MAP = (1 / |R|) Σ_{j=1}^{|R|} (1 / |F_j|) Σ_{k=1}^{|F_j|} precision(r_{kj}),

where precision(r_{kj}) is the fraction of correctly predicted test facts among all predicted facts up to rank r_{kj}. For weighted MAP, the average precision of every relation is weighted by the number of true facts of the respective relation (Riedel et al.). Note that the MAP metric operates on the ranking of facts predicted by a model and does not take the absolute predicted scores into account.

Rule extraction and annotation. We use a simple technique for extracting rules from a matrix factorization model, based on Sanchez et al. First, we run matrix factorization over the complete training data to learn symbol representations. After training, we iterate over pairs of relations (r_s, r_t) where r_t is a Freebase relation. For every such relation pair we iterate over the training atoms r_s(e_i, e_j), evaluate the score [r_t(e_i, e_j)], and calculate the average of these scores to arrive at a value that serves as a proxy for the coverage of the rule r_s ⇒ r_t. Finally, we rank the rules by this score and manually inspect and filter the top ones, which resulted in the set of annotated rules (see the table for the top rules of five different Freebase target relations and the appendix for the full list of annotated rules). Note that with this rule extraction approach no information about the test constant pairs is used, and all extracted rules are simple logic expressions.
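The wMAP metric defined above can be sketched as follows; `average_precision` and `weighted_map` are hypothetical helper names, assuming each relation's ranked predictions are marked as relevant or not:

```python
def average_precision(ranked_relevance):
    """ranked_relevance: list of booleans, the model's ranking for one
    relation's predicted facts (True = correct test fact)."""
    hits, precisions = 0, []
    for rank, relevant in enumerate(ranked_relevance, start=1):
        if relevant:
            hits += 1
            precisions.append(hits / rank)  # precision at this rank
    return sum(precisions) / len(precisions) if precisions else 0.0

def weighted_map(relations):
    """relations: dict mapping relation -> (ranked_relevance, n_true_facts).
    Weights each relation's average precision by its number of true facts."""
    num = sum(n * average_precision(r) for r, n in relations.values())
    den = sum(n for _, n in relations.values())
    return num / den
```

Setting every weight to 1 recovers the unweighted MAP of the formula above.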
Models. In the experiments, all models have access to the extracted rules except for the matrix factorization baseline. The proposed methods for injecting logic into symbol embeddings are pre-factorization inference (Pre), the baseline described above that performs regular matrix factorization after propagating the logic rules in a deterministic manner, and joint optimization (Joint), which maximizes an objective that combines terms for facts and logic rules. Additionally, we evaluate three baselines. Matrix factorization (F) uses only the ground atoms to learn relation and constant representations and has no access to the rules. Furthermore, we consider pure symbolic logical inference (Inf); since we restrict ourselves to a set of consistent, simple rules, this inference can be performed efficiently. The final approach, post-factorization inference (Post), first runs matrix factorization and then performs logical inference on the known and predicted facts. This is computationally expensive, since for the premises of the rules we have to iterate over all rows (constant pairs) in the matrix to assess whether a premise is predicted to be true.

[Table: top rules for five different Freebase target relations, i.e. implications extracted from the matrix factorization model and manually annotated, together with their scores. The premises of the implications are shortest paths between entity arguments in a dependency tree; a simplified version is presented to make the patterns readable. See the appendix for the full list of annotated rules.]

Training details. Since we have no negative training facts, we follow Riedel et al. in sampling unobserved ground atoms which we assume to be false. For the rules we use the stochastic grounding described above. Thus, in addition to the loss over the scores of training facts, we have a loss over sampled unobserved ground atoms that are assumed to be negative, as well as loss terms for the ground rules. In other words, we learn symbol embeddings by minimizing a global loss that includes known and unobserved atoms as well as the ground propositional rules sampled via stochastic grounding. For minimizing the training loss we use AdaGrad (Duchi et al.). Remember that at test time, predicting the score of an unobserved ground atom is done efficiently by calculating [r(e_s, e_t)]; none of the results below involve explicit logical inference. Instead, we expect the vector space of symbol embeddings to incorporate the given rules.
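The composition of the global training loss described above — known facts assumed true, sampled unobserved atoms assumed false, and stochastically grounded rules — can be sketched as follows; `global_loss` is a hypothetical name and the exact loss used in the thesis may differ in weighting:

```python
import math

def global_loss(fact_probs, negative_probs, rule_probs):
    """Negative log-likelihood over known facts (assumed true), sampled
    unobserved ground atoms (assumed false), and stochastically grounded
    propositional rules (whose probabilities come from the fuzzy operators)."""
    loss = 0.0
    loss -= sum(math.log(p) for p in fact_probs)          # known facts
    loss -= sum(math.log(1.0 - p) for p in negative_probs)  # sampled negatives
    loss -= sum(math.log(p) for p in rule_probs)          # ground rules
    return loss
```

Minimizing this loss by gradient descent pushes fact probabilities toward 1, sampled negatives toward 0, and grounded rule probabilities toward 1, all through the shared symbol representations.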
Hyperparameters are based on Riedel et al.: all models use the same dimension for symbol representations, the same regularization parameter, and the same initial learning rate for AdaGrad, which is run for the same number of epochs.

Runtime. An AdaGrad update is defined over a single cell of the matrix; thus the training data can be provided one ground atom at a time. For matrix factorization, an AdaGrad epoch touches all observed ground atoms and, per epoch, as many sampled negative ground atoms. With provided rules it additionally revisits all observed ground atoms that appear as an atom in a rule, and again as many sampled negative ground atoms. Training with general rules is thus more expensive. However, the updates for ground atoms are performed independently, so only little data needs to be stored in memory, and all presented models train within minutes on a single Intel Core machine.

Results and discussion. To assess the utility of injecting logic rules into symbol representations, we present a comparison on a variety of benchmarks. First, we study a zero-shot scenario of learning extractors for relations whose number of Freebase alignments (entity pairs that appear both in textual patterns and in structured Freebase relations) is zero, which measures how well the different models are able to generalize from logic rules and textual patterns alone. We then describe an experiment in which the number of Freebase alignments is varied, in order to assess the effect of combining distant supervision and background knowledge on the accuracy of predictions. Although our methods target relations with insufficient alignments, we also provide a comparison on the complete distant supervision dataset.

[Figure: weighted MAP scores for zero-shot relation learning of the models F, Inf, Post, Pre and Joint.] [Table: weighted MAP per test relation; relations that do not appear in the annotated rules are omitted from this evaluation. The difference between Pre and Joint is significant.]

Zero-shot relation learning. We start with the scenario of learning extractors for relations that do not appear in any textual alignments. This scenario occurs in practice whenever a new relation is added to a knowledge base without facts connecting it to existing relations or textual surface forms. Accurate extraction of such relations must rely on background domain knowledge, e.g. in the form of rules, to identify relevant textual alignments; at the same time, correlations between textual patterns can be utilized for improved generalization. To simulate this setup, we remove all alignments between entities and Freebase relations from the distant supervision data.
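The per-cell AdaGrad update mentioned above can be sketched as a generic AdaGrad step over the parameters touched by one ground atom; `adagrad_update` is a hypothetical name, not the exact implementation:

```python
import math

def adagrad_update(theta, grad_sq_sum, grad, lr=0.1, eps=1e-8):
    """One AdaGrad step for a single ground-atom update: accumulate squared
    gradients per component and scale the learning rate accordingly, so that
    frequently updated components receive smaller steps."""
    for i, g in enumerate(grad):
        grad_sq_sum[i] += g * g
        theta[i] -= lr * g / (math.sqrt(grad_sq_sum[i]) + eps)
    return theta, grad_sq_sum
```

Because each update touches only the representations involved in one atom (or one ground rule), updates can be streamed and very little state needs to be held in memory.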
We use the extracted logic rules as background knowledge to assess the ability of the different methods to recover the lost alignments. The figure provides detailed results. Unsurprisingly, plain matrix factorization performs poorly, since no predicate representations can be learned for the Freebase relations.

[Figure: weighted mean average precision of the models F, Inf, Post, Pre and Joint as the fraction of Freebase training facts is varied.]

Without any Freebase training facts we get zero-shot relation learning; the results are presented in the table. For relations without observed facts, matrix factorization yields a low MAP score due to what are essentially random predictions. Symbolic logical inference (Inf) is limited by the number of known facts that appear in the premise of one of the implications, and thus also performs poorly. Although post-factorization inference (Post) achieves a large improvement over logical inference, explicitly injecting the logic rules into symbol representations, using either pre-factorization inference (Pre) or joint optimization (Joint), gives superior results. Finally, we observe that jointly optimizing the probability of facts and rules best combines logic rules and textual patterns for accurately learning relation extractors: the table shows that on the detailed results for the Freebase test relations, on all except two (the works-written and place-of-death relations), jointly optimizing the probability of facts and rules yields superior results.
Relations with few distant labels. In this section we study the scenario of learning relations for which some distant supervision alignments between structured Freebase relations and textual patterns are observed. In particular, we observe the behavior of the various methods as the amount of distant supervision is varied: we run all methods on training data containing different fractions of Freebase training facts, and therefore different degrees of pattern alignment, while keeping all textual patterns and the set of annotated rules. The figure summarizes the results. The performance of symbolic logical inference does not depend on the amount of distant supervision data, since it cannot take advantage of correlations in the data. Matrix factorization cannot make use of the logical rules and thus serves as a baseline for performance using distant supervision alone. For the factorization-based methods, only a small fraction of the training data is needed to achieve a high weighted MAP, demonstrating that they efficiently exploit correlations between predicates and generalize to unobserved facts. Pre-factorization inference does not, however, outperform post-factorization inference and is on par with matrix factorization; the curve suggests that it is not an effective way of injecting logic into symbol representations once ground atoms are also available. In contrast, the joint model learns symbol representations that outperform all other methods over a wide range of available Freebase training data. Beyond a certain fraction, there seem to be sufficient Freebase facts for matrix factorization to encode the rules on its own, yielding diminishing returns for rule injection.

Comparison on complete data. Although the focus of this work is injecting logical rules for relations without sufficient alignments in the knowledge base, we also present an evaluation on the complete distant supervision data of Riedel et al. Compared with the matrix factorization model of Riedel et al., our reimplementation achieves a lower wMAP but a higher MAP; we attribute this difference to a different loss function (BPR with sampled negatives). The precision-recall curve in the figure demonstrates that joint optimization provides benefits over existing factorization and distant supervision techniques even on the complete dataset, obtaining improvements in weighted MAP and MAP, respectively, over our matrix factorization model. This improvement can be explained by noting that the joint model reinforces the annotated rules.

[Figure: precision-recall curve demonstrating that the joint method, which incorporates annotated rules derived from the data, outperforms existing factorization approaches.]

Related work. Embeddings for knowledge base completion: many methods for embedding predicates and constants (or pairs of constants) based on training facts have been proposed in the past for knowledge base completion. Our work goes further in that we learn embeddings that follow not only factual but also logic knowledge. Note that the method for regularizing symbol embeddings with rules described in this chapter is in general compatible with any existing neural link prediction model that provides scores for ground atoms; in our experiments we worked with the matrix factorization neural link prediction model. Based on our work, Guo et al. were able to incorporate transitivity rules into TransE (Bordes et al.), which models entities separately instead of learning a representation for every entity pair.

Logical inference. A common alternative to adding logic knowledge is to perform symbolic logical inference directly; for the simple, consistent rules considered here this is trivial (Bos and Markert, Baader et al., Bos).
However, purely symbolic approaches cannot deal with the uncertainty inherent in natural language and generalize poorly.

Probabilistic inference. To ameliorate the drawbacks of symbolic logical inference, approaches based on probabilistic logic have been proposed (Schoenmackers et al., Garrette et al., Beltagy et al.). Since the logical connections between relations are modeled explicitly, such approaches are generally hard to scale to large KBs. Specifically, approaches based on Markov logic networks (MLNs) (Richardson and Domingos) encode logical knowledge in dense, loopy graphical models, making structure learning, parameter estimation and inference hard to scale to large amounts of data. In contrast, in our model the logical knowledge is captured directly in the symbol representations, leading to efficient inference at test time, as we only calculate a forward pass of the neural link prediction model. Furthermore, as symbols are embedded in a vector space, we obtain a natural way of dealing with the linguistic ambiguities and label errors that appear once OpenIE textual patterns are included as predicates for automated completion (Riedel et al.). Our stochastic grounding is related to locally grounding a query in programming with personalized PageRank (ProPPR) (Wang et al.). One difference is that we use stochastically grounded rules as differentiable terms in a representation learning training objective, whereas in ProPPR the grounded rules are used for stochastic inference without learning symbol representations.

Weakly supervised learning. Our work is also inspired by weakly supervised approaches such as that of Ganchev et al., which use structural constraints as a source of indirect supervision; such methods have been applied to several NLP tasks (Chang et al., Mann and McCallum, Druck et al., Singh et al.). In information extraction, the work by Carlson et al. is similar in spirit to our goal of using commonsense constraints to jointly train multiple information extractors. The main difference is that by learning symbol representations we allow arbitrarily complex logical rules to be used as regularizers of those representations.

Combining symbolic and distributed representations. A number of recent approaches combine trainable subsymbolic representations with symbolic knowledge. Grefenstette describes an isomorphism between first-order logic and tensor calculus, using matrices that exactly memorize the facts. Based on this isomorphism, an earlier workshop version of this work combined logic with matrix factorization by learning symbol embeddings that approximately satisfy the given rules and generalize to unobserved facts, on toy data.
Our work extends that workshop paper, which proposed a simpler formalism without logical connectives, by presenting results on a large real-world task and demonstrating the utility of the approach for learning relations with few or no textual alignments. Chang et al. use Freebase entity types as hard constraints in a tensor factorization objective for universal-schema relation extraction; in contrast, our approach imposes soft constraints formulated as universally quantified rules. Lacalle and Lapata combine logic knowledge with a topic model to improve surface-pattern clustering for relation extraction. Since their rules only specify which relations should be clustered, they cannot capture the variety of dependencies that embeddings can model, such as asymmetry. Lewis and Steedman use distributed representations to cluster predicates before logical inference. Our approach is more expressive, as learning subsymbolic representations of predicates goes beyond clustering and can deal with asymmetric logical relationships between predicates. Several studies have investigated the use of symbolic representations such as dependency trees to guide the composition of symbol representations (Clark and Pulman, Mitchell and Lapata, Coecke et al., Hermann and Blunsom). Instead of guiding composition, we use logic rules encoding prior domain knowledge as regularizers, to directly learn better symbol representations.

Combining symbolic information with neural networks has a long tradition. Towell and Shavlik introduce artificial neural networks whose topology is isomorphic to a set of facts and inference rules: facts are input units, intermediate conclusions hidden units, and final conclusions (inferred facts) output units. Unlike in our work, no symbol representations are learned. Hitzler et al. prove that for every logic program there exists a recurrent neural network that approximates the semantics of the program; unfortunately, this theoretical insight does not provide a practical way of constructing such a neural network. Recently, Bowman et al. demonstrated that neural tensor networks (NTNs) (Socher et al.) can accurately learn natural-logic reasoning. The method presented in this chapter is also related to the recently introduced neural equivalence networks (EqNets) (Allamanis et al.). EqNets recursively construct neural representations of symbolic expressions and learn equivalence classes, whereas our approach recursively constructs neural networks for evaluating Boolean expressions, which we use as regularizers to learn better symbol representations for automated completion.
Summary. In this chapter we introduced a method for mapping symbolic logic rules to differentiable terms that can be used to regularize the symbol representations learned by neural link prediction models for automated knowledge base completion. Specifically, we proposed a joint training objective that maximizes the probability of known training facts as well as of propositional rules, made continuous by replacing the logical operators with differentiable functions inspired by fuzzy logic. Our contribution is backpropagating the gradient of the negative loss of a propositional rule through the neural link prediction model that scores the ground atoms, calculating the gradient with respect to the vector representations of symbols, and subsequently updating those representations using gradient descent, thereby encoding the ground truth of a propositional rule directly in the vector representations of symbols. This leads to efficient predictions at test time, as we only calculate a forward pass of the neural link prediction model. We described a stochastic grounding process for incorporating logic rules, and our experiments on automated completion show that the proposed method can be used to learn extractors for relations with little or no observed textual alignments, while at the same time benefiting from correlations between textual surface-form patterns.

Chapter: Lifted Regularization of Predicate Representations for Implications. The method for incorporating logic rules into symbol representations introduced in the previous chapter relies on stochastic grounding. Moreover, not only the vector representations of predicates but also the representations of pairs of constants are optimized to maximize the probability of the provided rules. This is problematic for the following reasons.

Scalability. Even with stochastic grounding, incorporating logic rules with the method described so far depends on the size of the domain of constants. For example, take the simple rule isHuman(x) ⇒ isMortal(x) and assume we observe isHuman for seven billion constants. This single rule would already add seven billion loss terms to the training objective.

Generalizability. Since we backpropagate upstream gradients of a rule not only into the predicate representations but also into the representations of pairs of constants, there is no theoretical guarantee that the rule will indeed hold for constant pairs not observed during training.
Flexibility of the training loss. The previous method is not compatible with losses such as Bayesian personalized ranking (BPR); instead it uses a negative log-likelihood loss, which results in lower performance compared with BPR for automated knowledge base completion.

Independence assumption. As explained earlier, we had to assume that the probabilities of ground atoms are conditionally independent given the symbol representations, an assumption that is already violated by a simple Boolean expression such as [F ∧ F] = [F][F] ≠ [F].

Ideally, we would like a way of incorporating logic rules into symbol representations that (i) is independent of the size of the domain of constants, (ii) generalizes to unseen constant pairs, and (iii) can be used with a broader class of training objectives and does not assume that the probabilities of ground atoms are conditionally independent given the symbol representations. In this chapter we present a method that satisfies these desiderata, albeit for simple implication rules instead of general logic rules and for the matrix factorization neural link prediction model. Note, however, that simple implications are commonly used to improve automated completion. The method we propose incorporates implications into the vector representations of predicates while maintaining the computational efficiency of modeling the training facts. This is achieved by enforcing a partial order on predicate representations in the vector space that is entirely independent of the number of constants in the domain: it involves only the representations of the predicates mentioned in the rules, together with a general constraint on the embedding space of constant pairs. For the example above, we require every component of the vector representation [isHuman] to be smaller than or equal to the corresponding component of the predicate representation [isMortal], and we show that the implication then holds for any representation of a constant. Hence, our method avoids the need for separate loss terms for every ground atom resulting from grounding the rules. In statistical relational learning, this type of approach is often referred to as lifted inference or learning (Poole, de Salvo Braz et al.), since it deals with groups of random variables at a first-order level; in this sense, our approach is a lifted form of rule injection. It allows us to impose a large number of rules while learning distributed representations of predicates and constant pairs. Furthermore, the constraints that are satisfied for the injected rules always hold, even for unseen or inferred ground atoms.
In addition, we do not rely on the assumption of conditional independence of ground atoms.

Method. We want to incorporate implications of the form B ⇒ H, where B and H are predicates. Consider the matrix factorization neural link prediction model, which scores an atom via the dot product of the predicate representation and the constant-pair representation. A necessary condition for the implication to hold is that for every possible assignment of constants, the score of the head atom is at least as large as the score of the body atom: in the discrete case this means that whenever the body is true (within a small epsilon), the head needs to be true as well, but not vice versa. Since the sigmoid is monotonic, we can rewrite this in terms of grounded rules as the condition ⟨[H], [e]⟩ ≥ ⟨[B], [e]⟩ for every constant-pair representation [e].

[Figure: example of implications directly captured in the vector space, illustrated with the constant-pair representations [Homer, Bart] and [Marge, Bart] and the predicates fatherOf, motherOf and parentOf.]

Ordering symbol representations. In the vector space, implications can be directly captured by ordering the predicate representations, inspired by order embeddings (Vendrov et al.). The example illustrated in the figure holds: fatherOf and motherOf imply parentOf, since every component of [parentOf] is larger than the corresponding components of [fatherOf] and [motherOf]. The areas of predicate representations implied by fatherOf, motherOf and parentOf are shown in blue, red and purple, respectively, together with the constant-pair representations [Homer, Bart] and [Marge, Bart]. Thus, the score of fatherOf(Homer, Bart) cannot exceed the score of parentOf(Homer, Bart), but not vice versa. Note that this holds for any representation of constant pairs [e], as long as it is placed in the non-negative quadrant. This is the main insight that makes the condition independent of the constants: if we make sure that all constant representations are non-negative, the condition reduces to the component-wise comparison [B] ≤ [H]. In other words, for ensuring that B implies H for every pair of constants, we need only one loss term per rule, which makes sure that every component of [H] is at least as large as the corresponding component of [B], plus one general restriction on the representations of constant pairs.

Representation of constant pairs. There are many choices for ensuring that constant-pair representations are non-negative. One option is to initialize them to non-negative vectors and project gradient updates so that they stay non-negative. Another option is to apply a transformation to the constant-pair representations before they are used in the neural link prediction model for scoring atoms; for instance, one could use exp, ReLU, or a max with zero. However, we choose to restrict the constant representations even more than required and use a transformation that yields approximately Boolean embeddings (Kruszewski et al.): from every constant-pair representation v_e we obtain a non-negative representation [e] = σ(v_e) by applying the sigmoid function element-wise.
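The lifted sufficient condition can be illustrated directly; `implication_holds` and `atom_score` are hypothetical names for illustration:

```python
def implication_holds(v_body, v_head):
    """Sufficient lifted condition for body => head: a component-wise order on
    the predicate representations. With non-negative constant-pair
    representations e, v_body <= v_head (component-wise) implies
    <v_body, e> <= <v_head, e> for every pair e."""
    return all(b <= h for b, h in zip(v_body, v_head))

def atom_score(v_pred, e):
    # dot-product score of a ground atom; e is assumed non-negative
    return sum(p * x for p, x in zip(v_pred, e))
```

Because the check involves only the two predicate vectors, a single constraint covers all constant pairs at once, which is exactly what makes the approach lifted.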
The matrix factorization objective with BPR over facts thus becomes an approximate loss in which every atom score ⟨v_r, v_e⟩ is replaced by ⟨v_r, σ(v_e)⟩, with negative constant pairs sampled as before. We denote this extension of the matrix factorization model of Riedel et al. with sigmoidal constant-pair representations as FS.

Implication loss. There are various ways of modeling the component-wise condition [B] ≤ [H] to incorporate an implication. We propose the loss L(B ⇒ H) = Σ_i max(0, [B]_i − [H]_i + ε) with a small positive margin ε, which ensures that the gradient does not disappear before the inequality is actually satisfied with a margin. A nice property of this loss, compared with the method presented in the previous chapter, is that once the implication holds, i.e. every component of [H] is sufficiently larger than the corresponding component of [B], the gradient is zero and the predicate representations are no longer updated with respect to the rule. For every given implication rule we add a corresponding loss term to the fact loss, yielding a global approximate loss over the facts and the rule set. We denote the resulting model as FSL (where L stands for logic), and we use a small margin in all experiments of this chapter. At test time we predict the probability of an atom via σ(⟨v_r, σ(v_e)⟩); no explicit logical inference is needed.

Experiments. We follow the experimental setup of the previous chapter and evaluate on the NYT corpus (Riedel et al.). We test how well the presented models incorporate rules when no alignment between textual surface forms and Freebase relations is given (zero-shot relation learning) and when the number of Freebase training facts is increased (relations with few distant labels). In addition, we experiment with rules automatically extracted from WordNet (Miller) for improving automated completion on the full dataset.

Incorporating background knowledge from WordNet. We use WordNet hypernyms to generate rules for the NYT dataset. To this end, we iterate over all surface-form patterns in the dataset and attempt to replace words in a pattern by their hypernyms. If the resulting surface form is also contained in the dataset, we generate the corresponding rule; for instance, we generate a rule implying the pattern containing "official" from the corresponding pattern containing "diplomat", since both patterns are contained in the dataset and we know from WordNet that official is a hypernym of diplomat. The generated rules were subsequently annotated manually, yielding the rules listed in the appendix. Note that these rules relate surface-form patterns, so none of them has a Freebase relation as head predicate. Although the test relations originate from Freebase, we still hope to see improvements through transitive effects: better surface-form representations in turn help to predict Freebase facts.

Training details. All models were implemented in TensorFlow (Abadi et al.).
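The lifted implication loss described above can be sketched as follows; `lifted_implication_loss` is a hypothetical name and the margin value is illustrative:

```python
def lifted_implication_loss(v_body, v_head, margin=0.01):
    """Hinge penalty enforcing the component-wise partial order
    v_body <= v_head. The margin keeps a non-zero gradient until the
    inequality holds with some slack; once it does, the gradient is zero and
    the predicate representations are no longer updated by this rule."""
    return sum(max(0.0, b - h + margin) for b, h in zip(v_body, v_head))
```

One such term per rule is added to the fact loss, independently of how many constant pairs exist in the domain.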
We use the hyperparameters of Riedel et al. for the size of the symbol representations and the weight of the regularization, and use Adam (Kingma and Ba) for optimization with its default initial learning rate and a fixed batch size. Embeddings are initialized by sampling values uniformly.

[Table: weighted MAP per test relation (including relations such as works written, place of birth, place of death, sports team, company owned and stadium served) for our reimplementation F of the matrix factorization model, compared with restricting the constant embedding space (FS) and additionally injecting WordNet rules (FSL); the original matrix factorization model of Riedel et al. is included for reference.]

Results and discussion. Before turning to the injection of rules, we compare model F with model FS to show that restricting the constant embedding space has a regularization effect rather than limiting the expressiveness of the model. We then demonstrate that the model FSL is capable of zero-shot relation learning and that it can take advantage of alignments between textual surface forms and Freebase relations alongside the rules. We show that injecting WordNet rules leads to improved predictions on the full dataset, provide details on the computational efficiency of the lifted rule injection method, and finally demonstrate that it correctly captures the asymmetry of implication rules.

Restricted embedding space for constants. Before incorporating external commonsense knowledge into relation representations, we were curious how much we lose by restricting the embedding space of constant symbols to approximately Boolean embeddings. Surprisingly, we find that the expressiveness of the model does not suffer from this strong restriction. In the table we see that restricting the constant embedding space (FS) yields a higher weighted mean average precision than the unrestricted constant embedding space (F). This result suggests that the restriction has a regularization effect that improves generalization. We also provide the original results of the matrix factorization model by Riedel et al. for comparison; due to a different implementation and optimization procedure, the results of our model F are slightly worse in wMAP.

Zero-shot relation learning. In the previous chapter we observed that by injecting implications whose head is a Freebase relation when no Freebase training facts are available, models can infer Freebase facts based on rules and correlations between textual surface patterns. We repeat this experiment with the lifted rule injection model FSL, which reaches a weighted MAP comparable to that of the method presented in the last chapter.
For this experiment we initialized the predicate representations of the Freebase relations implied by rules with negative random vectors sampled uniformly. The reason is that, without negative training facts for these relations, their components can only increase due to the lifted implication loss; consequently, starting from high values would impede the freedom with which representations can be ordered in the embedding space. This demonstrates that, although FSL performs a bit worse than the joint model of the previous chapter, it can still be used for zero-shot relation learning.

Relations with few distant labels. The figure shows how relation extraction performance improves as Freebase facts are added. As in the last chapter, this measures how well the proposed models — matrix factorization, propositionalized rule injection (Joint) and the lifted rule injection model FSL — make use of the provided implication rules as well as of correlations between textual surface-form patterns and increasing numbers of Freebase facts. Although FSL starts at a lower performance than Joint when no Freebase training facts are present, it outperforms both Joint and the plain matrix factorization model by a substantial margin once some Freebase facts are provided. This indicates that FSL, in addition to being much faster than Joint, is better able to make use of the provided rules together with training facts. We attribute this to being able to use the BPR loss over ground atoms and to the regularization effect of restricting the embedding space of constant pairs; the former is not compatible with the approach of maximizing the expectation of propositional rules presented in the previous chapter.

[Figure: weighted mean average precision of Joint and FSL when injecting rules, as a function of the fraction of Freebase training facts.]

Incorporating background knowledge from WordNet. In the column FSL of the table we show the results obtained by injecting WordNet rules. We obtain an increase in weighted MAP compared with FS, as well as compared with our reimplementation of the matrix factorization model. This demonstrates that imposing a partial order based on implication rules can be used to incorporate logical commonsense knowledge and to increase the quality of information extraction and automated completion systems. Note that our evaluation setting guarantees that only indirect effects of the rules are measured, as we do not use any rules directly implying Freebase test relations. Consequently, the increase in prediction performance is due to an improved predicate embedding space beyond the predicates explicitly stated in the provided rules.
For example, injecting rules between surface patterns can contribute to improved predictions for a Freebase test relation via shared entity pairs.

[Table: average score of facts whose constants appear in body facts of a rule, evaluated for the head and the body of the rule, shown for FS and FSL over sampled WordNet rules.]

Computational efficiency of lifted rule injection. To assess the computational efficiency of the proposed method, we measure the time needed per training epoch using a single CPU core. We measure the average time per epoch without rules for model FS, and with the filtered and unfiltered rule sets for model FSL, respectively. Increasing the number of rules leads only to a small increase in computation time, and using rules adds only little overhead to the computation time needed for learning from the ground atoms. This demonstrates that lifted rule injection scales well with the number of rules.

Asymmetry. One concern with incorporating implications into a vector space is that the vector representations of the head and body predicates might simply move closer together, which would violate the asymmetry of implications. In our experiments we might not observe this problem, as we test how well the model predicts facts for Freebase relations rather than how well it predicts textual surface-form patterns. We therefore perform the following experiment. After incorporating WordNet rules of the form B ⇒ H, we select constant pairs e for which we observe B(e) in the training set; since the implication holds, [H(e)] should yield a high score. Conversely, if we select constant pairs based on known facts H(e), and the model were to treat head and body as equivalent, we would expect a high score for [B(e)] as well, although B(e) might not be true. The table lists these scores for five sampled WordNet rules, as well as the average over all WordNet rules, both when injecting rules with model FSL and when using model FS. We see that the score of the head atom is on average much higher than the score of the body atom when we incorporate the rules into the vector space with FSL.
high score head vice versa reason also get high score body ground atoms fourth rule newspaper daily synonymously used training texts related work recent research combining rules learned vector representations important new developments field automated completion wang demonstrated different types rules incorporated using integer linear programming approach wang cohen learned embeddings facts logic rules using matrix factorization yet approaches method presented previous chapter ground rules constants domain limits scalability towards large rule sets kbs large domains constants formed important motivation lifted rule injection model construction suffer limitation wei proposed alternative strategy tackle scalability problem reasoning filtered subset ground atoms proposed use path ranking algorithm pra section capturing interactions entities conjunction modeling pairwise relations model differs substantially approach consider pairs constants instead separate constants inject provided set rules yet creating partial order relation embeddings result injecting implication rules model fsl also capture interactions beyond predicates directly mentioned rules demonstrated section injecting rules surface patterns measuring improvement predictions structured freebase relations combining logic distributed representations also active field research outside automated completion recent advances include work faruqui injected ontological knowledge wordnet word embeddings improve performance downstream nlp tasks furthermore vendrov proposed enforce partial order embeddings space images phrases method related order embeddings since summary define partial order relation embeddings extend work automated completion ensure implications hold pairs constants introducing restriction embedding space constant pairs another important contribution recent work proposed framework injecting rules general neural network architectures jointly training target outputs predictions provided teacher network 
Although quite different at first sight, recent work on injecting rules into general neural network architectures by jointly training on target outputs and on predictions provided by a teacher network could offer a way to use our model with various neural network architectures, by integrating the proposed lifted loss into the teacher network.

Summary. We presented a fast approach for incorporating implication rules into distributed representations of predicates for automated knowledge base completion, and termed this approach lifted rule injection. Its main contribution over the previous chapter is that it avoids the costly grounding of implication rules and is thus independent of the size of the domain of constants. By construction, the rules are satisfied for any observed or unobserved ground atom. The presented approach requires a restriction on the embedding space of constant pairs; however, our experiments show that this does not impair the expressiveness of the learned representations — on the contrary, it appears to have a beneficial regularization effect. By incorporating rules generated from WordNet hypernyms, our model improved over the matrix factorization baseline for completion. Especially for domains in which annotation is costly and only small amounts of training facts are available, our approach provides a way of leveraging external knowledge sources efficiently for inferring facts. A downside of the lifted rule injection method presented here is that it is only applicable to implication rules and only to the matrix factorization neural link prediction model. Furthermore, it is unclear how far regularizing predicate representations can be pushed without constraining the embedding space too much; specifically, it is unclear how more complex rules such as transitivity could be incorporated in a lifted way. Hence, we explore a more direct synthesis of representation learning and logical inference in the next chapter.

Chapter: End-to-End Differentiable Proving. Current methods for automated knowledge base completion use neural link prediction models to learn distributed vector representations of symbols (i.e. subsymbolic representations) for scoring atoms (Nickel et al., Riedel et al., Socher et al., Chang et al., Yang et al., Toutanova et al., Trouillon et al.). Such subsymbolic representations enable these models to generalize to unseen facts by encoding similarities: if the vector of the predicate symbol grandfatherOf is similar to the vector of the symbol grandpaOf, both predicates likely express a similar relation. Likewise, if the vector of the constant symbol LISA is similar to that of MAGGIE, similar relations likely hold for both constants (e.g. they live in the same city and have the same parents). This simple form of reasoning based on similarities is remarkably effective for automatically completing large KBs.
automatically completing large KBs. However, in practice it is often important to capture more complex reasoning patterns that involve several inference steps. For example, if ABE is the father of HOMER and HOMER is a parent of BART, we would like to infer that ABE is a grandfather of BART. Such multi-hop reasoning is inherently hard for neural link prediction models, as they only learn to score facts locally. In contrast, symbolic theorem provers like Prolog (Gallaire and Minker) enable exactly this type of reasoning. Furthermore, Inductive Logic Programming (ILP) (Muggleton) builds upon such provers to learn interpretable rules from data and to exploit them for reasoning in KBs. However, symbolic provers lack the ability to learn subsymbolic representations and similarities between symbols in large KBs, which limits their ability to generalize to queries with similar but not identical symbols.

While the connection between logic and machine learning has been addressed by statistical relational learning approaches, these models traditionally do not support reasoning with subsymbolic representations (Kok and Domingos), or their subsymbolic representations are not trained end-to-end from training data (Gardner et al.; Beltagy et al.). Neural multi-hop reasoning models (Neelakantan et al.; Peng et al.; Das et al.; Weissenborn; Shen et al.) address the aforementioned limitations to some extent, by encoding reasoning chains in a vector space and by iteratively refining subsymbolic representations of a question before comparing them with answers. In many ways these models operate like basic theorem provers, but they lack two crucial ingredients: interpretability, and straightforward ways of incorporating knowledge in the form of rules.

Our approach to this problem is inspired by recent neural network architectures such as Neural Turing Machines (Graves et al.), Memory Networks (Weston et al.), neural stacks (Grefenstette et al.; Joulin and Mikolov), the Neural Programmer (Neelakantan et al.), Neural Programmer-Interpreters (Reed and de Freitas), Hierarchical Attentive Memory (Andrychowicz et al.), and the differentiable Forth interpreter (Bošnjak et al.). These architectures replace discrete algorithms and data structures by end-to-end differentiable counterparts that operate on real-valued vectors. At the heart of our approach is the idea of translating this concept to basic symbolic theorem provers, and hence combining their advantages (multi-hop reasoning, interpretability, easy integration of domain knowledge) with the ability to reason with vector representations of predicates and constants. Specifically, we keep variable binding symbolic but compare symbols using
their subsymbolic vector representations. In this chapter we introduce Neural Theorem Provers (NTPs): end-to-end differentiable provers for basic theorems formulated as queries to a KB. We use Prolog's backward chaining algorithm as a recipe for recursively constructing neural networks that are capable of proving queries to a KB using subsymbolic representations. The success score of such proofs is differentiable with respect to vector representations of symbols, which enables us to learn such representations for predicates and constants in ground atoms, as well as parameters of logic rules of predefined structure. NTPs learn to place representations of similar symbols in close proximity in a vector space, and to induce rules given prior assumptions about the structure of logical relationships in a KB, such as transitivity. Furthermore, NTPs can seamlessly reason with provided rules. As NTPs operate on distributed representations of symbols, a single rule can be leveraged for many proofs of queries with symbols that have a similar representation. Finally, NTPs demonstrate a high degree of interpretability, as the latent rules they induce can be decoded to symbolic rules.

[Figure: A module mapping an upstream proof state (left) to a list of new proof states (right), thereby extending the substitution set and adding nodes to the computation graph of the neural network representing the proof success.]

Our contributions are threefold: (i) we present the construction of NTPs inspired by Prolog's backward chaining algorithm, together with a differentiable unification operation using subsymbolic representations; (ii) we propose optimizations to this architecture: joint training with a neural link prediction model, batch proving, and approximate gradient calculation; and (iii) we experimentally show that NTPs can learn representations of symbols and rules of predefined structure, enabling them to learn to perform multi-hop reasoning on benchmark KBs and to outperform ComplEx (Trouillon et al.), a state-of-the-art neural link prediction model, on three out of four KBs.

Differentiable Prover
In the following we describe the recursive construction of NTPs: neural networks for differentiable proving that allow us to calculate the gradient of proof successes with respect to vector representations of symbols. We define the construction of NTPs in terms of modules, similar to dynamic neural module networks (Andreas et al.). A module takes as inputs discrete objects (atoms and rules) and a proof state, and returns a list of new proof states (see the figure above for a graphical
representation). A proof state S = (ψ, ρ) is a tuple consisting of the substitution set ψ constructed in the proof so far and a neural network ρ that outputs a real-valued success score of a (partial) proof. While the discrete objects and the substitution set are only used during the construction of the neural network, the network itself is used to calculate the continuous proof success score. Many different proofs can be attempted for a goal at training and test time. To summarize: modules are instantiated with discrete objects and a substitution set; they construct a neural network representing the (partial) proof success score, and recursively instantiate submodules to continue the proof.

The shared signature of modules is D × S → Sᴺ, where D is a domain that controls the construction of the network, S is the domain of proof states, and N is the number of output proof states. Furthermore, let S_ψ denote the substitution set of a proof state S and let S_ρ denote the neural network for calculating its proof success. Akin to the pseudocode for backward chaining given earlier, we use a pseudocode style reminiscent of a functional programming language to define the behavior of modules and auxiliary functions.

Unification Module
Unification of two atoms, for instance a goal that we want to prove and a rule head, is a central operation in backward chaining. Two non-variable symbols (predicates or constants) are checked for equality, and the proof is aborted if this check fails. However, we want to be able to apply rules even if the symbols in the goal and head are not equal but have similar meaning (e.g., grandfatherOf and grandpaOf). Thus, we replace the symbolic comparison with a computation that measures the similarity of the symbols in a vector space.

The module unify updates a substitution set and creates a neural network for comparing the vector representations of symbols in two sequences of terms. Its signature is L × L × S → S, where L is the domain of lists of terms. unify takes two atoms, each represented as a list of terms, and an upstream proof state, and maps these to a new proof state (substitution set and proof success). To this end, unify iterates through the term lists of the two atoms and compares their symbols. If one of the symbols is a variable, a substitution is added to the substitution set; otherwise, the vector representations of the two non-variable symbols are compared using a radial basis function (RBF) kernel (Broomhead and Lowe) with a bandwidth hyperparameter μ that we fix in our experiments. The following pseudocode implements unify. Note that "_" matches every argument and that the order of lines matters: if the arguments match a line, subsequent lines are not evaluated.

unify_θ([], [], S) = S
unify_θ([], _, _) = FAIL
unify_θ(_, [], _) = FAIL
unify_θ(h:H, g:G, S) = unify_θ(H, G, S′) where S′ = (S′_ψ, S′_ρ) with
  S′_ψ = S_ψ ∪ {h/g}   if h ∈ V
         S_ψ ∪ {g/h}   if g ∈ V, h ∉ V
         S_ψ           otherwise
  S′_ρ = min(S_ρ, exp(−‖θ_h − θ_g‖₂ / (2μ²)))   if h ∉ V and g ∉ V
         S_ρ                                     otherwise

Here, S′ refers to the new proof state, V refers to the set of variable symbols, h/g is a
substitution of the variable symbol h by the symbol g, and θ_g denotes the embedding lookup of a non-variable symbol with index g. unify is parameterized by an embedding matrix θ whose rows are the k-dimensional vector representations of the non-variable symbols. Furthermore, FAIL represents a unification failure due to mismatching arity of the two atoms. Once a failure is reached, we abort the creation of the neural network for this branch of proving. In addition, we constrain proofs by checking whether a variable is already bound. Note that this is a simple heuristic that prohibits applying the same rule twice; there are more sophisticated ways of finding and avoiding cycles in a proof graph such that a rule can still be applied multiple times (van Gelder), which we leave to future work.

Example. Assume we unify the atom [grandpaOf, ABE, BART] with an atom [s, Q, i], whose predicate s and second constant i are placeholders, given an upstream proof state S = (∅, ρ) whose neural network ρ would output 0.7 when evaluated. Further assume that grandpaOf, ABE and BART represent the indices of the respective symbols in a global symbol vocabulary. The new proof state constructed by unify is then

unify_θ([grandpaOf, ABE, BART], [s, Q, i], (∅, ρ)) =
  ({Q/ABE}, min(ρ, exp(−‖θ_grandpaOf − θ_s‖₂ / (2μ²)), exp(−‖θ_BART − θ_i‖₂ / (2μ²))))

Thus, the output score of the new neural network will be high if the subsymbolic representation of the input s is close to that of grandpaOf and the input i is close to BART. However, the score can never be higher than 0.7, due to the upstream proof success score in the forward pass of the neural network ρ. Note that, in addition to extending the neural network, this module also outputs the substitution set {Q/ABE} at graph creation time, which is used to instantiate submodules. Furthermore, note that unification is applied multiple times in a proof that involves more than one step, resulting in chained applications of the RBF kernel and the min operation. The choice of min stems from the property that all unifications in a successful proof need to be successful (a conjunction). This could also be realized by multiplying unification scores along the proof, but that would likely result in unstable optimization for longer proofs due to exploding gradients.

OR Module
Based on unify, we now define the module or, which attempts to apply rules in a KB. Its signature is L × N × S → Sᴺ, where L is the domain of goal atoms and N is the domain of integers used for specifying the maximum proof depth of the neural network; N is the number of possible output proof states for a goal of a given structure and a provided KB K. We implement or as

or_θ^K(G, d, S) = [S′ | S′ ∈ and_θ^K(B, d, unify_θ(H, G, S)) for (H :– B) ∈ K]

where H :– B denotes a rule in K with head atom H and a list of body atoms B. In contrast to the symbolic OR method, this module is able to use a rule with the head predicate grandfatherOf for a query involving grandpaOf, provided that the subsymbolic representations of the two predicates are similar as measured by the RBF kernel in unify.
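The soft unification underlying both unify and the or module can be sketched in a few lines of Python. This is a toy illustration under stated assumptions, not the thesis implementation: embeddings are random placeholder vectors, variables are detected by a crude vocabulary-membership check, and the bandwidth μ is set arbitrarily.

```python
import numpy as np

# Hypothetical embeddings for non-variable symbols.
rng = np.random.default_rng(0)
k, mu = 4, 1.0
emb = {s: rng.normal(size=k) for s in ["grandpaOf", "grandfatherOf", "ABE", "BART"]}

def rbf(a, b, mu=1.0):
    # exp(-||a - b||_2 / (2 mu^2)), the similarity used in unify
    return float(np.exp(-np.linalg.norm(a - b) / (2 * mu**2)))

def unify(atom1, atom2, subst, score):
    """Soft-unify two atoms (lists of symbols); any symbol without an
    embedding is treated as a variable here (a crude stand-in for V)."""
    if len(atom1) != len(atom2):
        return None  # FAIL: mismatching arity
    for h, g in zip(atom1, atom2):
        if h not in emb:                     # h is a variable
            subst = {**subst, h: g}
        elif g not in emb:                   # g is a variable
            subst = {**subst, g: h}
        else:                                # both non-variable: soft compare
            score = min(score, rbf(emb[h], emb[g], mu))
    return subst, score

# Mirrors the example above, with an upstream success score of 0.7.
subst, score = unify(["grandpaOf", "ABE", "BART"], ["grandfatherOf", "Q", "BART"], {}, 0.7)
```

As in the example, the resulting score is capped by the upstream score 0.7, and the substitution set binds the variable Q to ABE.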
Example. For a goal [s, Q, i], or would instantiate a submodule based on the rule [grandfatherOf, X, Y] :– [[fatherOf, X, Z], [parentOf, Z, Y]] as follows:

and_θ^K([[fatherOf, X, Z], [parentOf, Z, Y]], d, unify_θ([grandfatherOf, X, Y], [s, Q, i], S))

AND Module
To implement and, we first define the auxiliary function substitute, which applies substitutions to variables in an atom where possible:

substitute([], ψ) = []
substitute(g:G, ψ) = (x if g/x ∈ ψ, else g) : substitute(G, ψ)

For example, substitute([fatherOf, X, Z], {X/Q}) = [fatherOf, Q, Z]. Note that the creation of the neural network also depends on the structure of the goal; for instance, a goal of a different structure would result in a different neural network and hence a different number of output proof states.

The signature of and is L × N × S → Sᴺ, where L is the domain of lists of atoms and N is the number of possible output proof states for a list of atoms with a known structure and a provided KB. The module is implemented as

and_θ^K(_, _, FAIL) = FAIL
and_θ^K(_, 0, _) = FAIL
and_θ^K([], _, S) = S
and_θ^K(G:G̅, d, S) = [S″ | S″ ∈ and_θ^K(G̅, d, S′) for S′ ∈ or_θ^K(substitute(G, S_ψ), d − 1, S)]

The first two lines define the failure of a proof, either because an upstream unification failure has been passed down from the or module, or because the maximum proof depth has been reached. The third line specifies a proof success: the list of subgoals is empty before the maximum proof depth has been reached. Lastly, the fourth line defines the recursion: the first subgoal G is proven by instantiating an or module after substitutions have been applied, and every resulting proof state S′ is used for proving the remaining subgoals G̅ by again instantiating and modules.

Example. Continuing the example above, the and module would instantiate submodules as follows:

and_θ^K([[fatherOf, X, Z], [parentOf, Z, Y]], d, S) =
  [S″ | S″ ∈ and_θ^K([[parentOf, Z, Y]], d, S′)
       for S′ ∈ or_θ^K(substitute([fatherOf, X, Z], S_ψ), d − 1, S)]

Proof Aggregation
Finally, we define the overall success score of proving a goal G using a KB K with parameters θ as

ntp_θ^K(G, d) = arg max_{S ∈ or_θ^K(G, d, (∅, 1)), S ≠ FAIL} S_ρ

where d is a predefined maximum proof depth and the initial proof state is set to an empty substitution set and a proof success score of 1.

[Figure: Exemplary construction of an NTP computation graph for a toy knowledge base containing the facts [fatherOf, ABE, HOMER] and [parentOf, HOMER, BART] and the rule [grandfatherOf, X, Y] :– [[fatherOf, X, Z], [parentOf, Z, Y]]. Indices on the arrows correspond to the application of the respective KB rule. Proof states (blue) are subscripted with the sequence of indices of the rules that were applied; underlined proof states are aggregated to obtain the final proof success. Boxes visualize instantiations of modules (unify omitted). Proofs that fail because a rule was applied twice for the same goal are marked accordingly.]
The or operation outputs neural networks representing all possible proofs up to a predefined depth. The figure above illustrates an exemplary NTP computation graph constructed for a toy KB. Note that the NTP is constructed once before training and can then be used for proving goals of the given structure at training and test time, where i is the index of the input predicate and j and k are the indices of the input constants. The final proof states used in proof aggregation are underlined.

Neural Inductive Logic Programming
We can use NTPs for ILP by gradient descent, instead of the combinatorial search over the space of rules performed, for example, by the First-Order Inductive Learner (FOIL) (Quinlan). Specifically, we use the concept of learning from entailment (Muggleton) to induce rules that let us prove known ground atoms (i.e., give them a high proof success score) but that do not give high proof success scores to sampled unknown ground atoms. Let the representations of three unknown predicates be learnable parameters with indices p, q and r, respectively. Prior knowledge, such as the transitivity of the relationship between these unknown predicates, can then be specified via a parameterized rule [r, X, Y] :– [[p, X, Z], [q, Z, Y]], whose predicate representations are unknown and learned from data. Such a rule can be used for proofs at training and test time in the same way as any other given rule: during training, the predicate representations of parameterized rules are optimized jointly with all other subsymbolic representations. Thus, the model can adapt parameterized rules such that proofs for known facts succeed while proofs for sampled unknown ground atoms fail, thereby inducing rules of predefined structures like the one above. Inspired by Wang and Cohen, we use rule templates for conveniently defining the structure of multiple parameterized rules, by specifying how many parameterized rules of a given structure should be instantiated (see the training details below for examples).

Rule Decoding and Implicit Rule Confidence
After training, we can inspect an induced parameterized rule by decoding it: for every parameterized predicate representation in the rule, we search for the closest representation of a known predicate. Formally, we decode a representation θ_r to a predicate symbol from the set of known predicates P using

decode(θ_r) = arg max_{s ∈ P} exp(−‖θ_r − θ_s‖₂ / (2μ²))

In addition, we provide users with a rule confidence, obtained by taking the minimum similarity between the unknown and the decoded predicate representations using the RBF kernel in unify.
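Decoding and the confidence computation can be sketched as follows; the predicate names and embeddings are hypothetical placeholders, and μ is set arbitrarily.

```python
import numpy as np

# Hypothetical known-predicate embeddings.
rng = np.random.default_rng(1)
k, mu = 4, 1.0
known = {p: rng.normal(size=k) for p in ["fatherOf", "parentOf", "grandfatherOf"]}

def similarity(a, b, mu=1.0):
    # RBF kernel as in unify
    return float(np.exp(-np.linalg.norm(a - b) / (2 * mu**2)))

def decode(theta_r):
    # arg max over known predicate symbols of the RBF similarity
    return max(known, key=lambda s: similarity(theta_r, known[s]))

def confidence(rule_params):
    # min over the rule's predicate parameters of the best similarity
    # to any known predicate: an upper bound on the rule's proof success
    return min(max(similarity(t, known[s]) for s in known) for t in rule_params)

theta = known["fatherOf"] + 0.01   # a parameterized predicate close to fatherOf
best = decode(theta)               # decodes to "fatherOf"
```

A parameterized predicate that has converged exactly onto a known predicate representation yields a confidence of 1.0 for that slot.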
Let θ_{r1}, …, θ_{rm} be the list of predicate representations of a parameterized rule. The confidence γ of that rule is calculated as

γ = min_{i ∈ {1,…,m}} max_{s ∈ P} exp(−‖θ_{ri} − θ_s‖₂ / (2μ²))

This confidence score is an upper bound on the proof success score that can be achieved when the induced rule is used in proofs.

Optimization
In this section we present the basic training loss that we use for NTPs, a variant in which a neural link prediction model is used as an auxiliary task, as well as various computational optimizations.

Training Objective
Let K be the set of known facts in a given KB. As we usually do not observe negative facts, we resort to sampling corrupted ground atoms, as done in previous work (Bordes et al.). Specifically, for every fact [s, i, j] ∈ K we obtain corrupted ground atoms by resampling the first or second constant argument such that the corrupted atom is not contained in K. These corrupted ground atoms are resampled in every iteration of training, and we denote the set of known and corrupted ground atoms, together with their target scores (1 for known ground atoms and 0 for corrupted ones), by T. We use the negative log-likelihood of the proof success score as the loss function for an NTP with parameters θ on a KB K:

L_ntp^K(θ) = Σ_{([s,i,j], y) ∈ T} −y log(ntp_θ^K([s, i, j], d)_ρ) − (1 − y) log(1 − ntp_θ^K([s, i, j], d)_ρ)

where [s, i, j] is a training ground atom and y its target proof success score. Note that, as all training facts are ground atoms, we only make use of the proof success score ρ and not of the substitution list of the resulting proof state. Any known fact can be proven trivially by unification with itself, which would result in no parameter updates during training and hence no generalization. Therefore, during training we mask the calculation of the unification success of the known ground atom that we want to prove: we temporarily set its unification score to 0 to hide that training fact, so that it has to be proven from other facts and rules in the KB.

Neural Link Prediction as Auxiliary Loss
At the beginning of training, all subsymbolic representations are initialized randomly. Unifying a goal with facts in a KB consequently yields very noisy success scores in early stages of training. Moreover, as only the maximum success score results in gradient updates for the subsymbolic representations along the maximum proof path, it can take a long time until NTPs learn to place similar symbols close to each other in the vector space and to make effective use of rules. To speed up the learning of subsymbolic representations, we train NTPs jointly with ComplEx (Trouillon et al.). ComplEx and the NTP share the same subsymbolic representations, which is feasible as the RBF kernel in unify is also defined for complex vectors. While the NTP is responsible for multi-hop reasoning, the neural link prediction model learns to score ground atoms locally; at test time, only the NTP is used for predictions. The training loss for ComplEx can thus be seen as an auxiliary loss for the subsymbolic representations learned by the NTP.
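The basic training objective with corrupted-atom sampling can be sketched as below. The `prove` function is a hypothetical stand-in for the NTP proof success score (which would come from the differentiable prover), and the KB is a toy example.

```python
import random, math

random.seed(0)
constants = ["ABE", "HOMER", "BART", "LISA"]
known = {("fatherOf", "ABE", "HOMER"), ("parentOf", "HOMER", "BART")}

def prove(atom):
    # Placeholder success score standing in for ntp([s, i, j], d)
    return 0.9 if atom in known else 0.1

def corrupt(atom):
    # Resample first or second constant argument; skip atoms already in K
    s, i, j = atom
    out = []
    for _ in range(2):
        i2, j2 = random.choice(constants), random.choice(constants)
        if (s, i2, j) not in known:
            out.append((s, i2, j))
        if (s, i, j2) not in known:
            out.append((s, i, j2))
    return out

def loss():
    # Negative log-likelihood over known (y = 1) and corrupted (y = 0) atoms
    total = 0.0
    for fact in known:
        total -= math.log(prove(fact))
        for neg in corrupt(fact):
            total -= math.log(1.0 - prove(neg))
    return total

L = loss()
```

In the actual model this loss is minimized with respect to the symbol embeddings and rule parameters; here `prove` is fixed, so the sketch only illustrates the shape of the objective.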
We term the resulting model NTPλ. Its joint training loss is defined as

L_ntpλ^K(θ) = L_ntp^K(θ) + Σ_{([s,i,j], y) ∈ T} −y log(complex_θ(s, i, j)) − (1 − y) log(1 − complex_θ(s, i, j))

where [s, i, j] is a training atom, y its ground-truth target, and complex_θ(s, i, j) denotes the prediction of ComplEx as defined earlier.

Computational Optimizations
NTPs as described above suffer from severe computational limitations, since the neural network represents all possible proofs up to some predefined depth. In contrast to symbolic backward chaining, where a proof can be aborted as soon as unification fails, in differentiable proving we only get a unification failure for atoms whose arity does not match, or when we detect cyclic rule application. We propose two optimizations to speed up NTPs. First, we make use of modern GPUs by batch processing many proofs in parallel. Second, we exploit the sparseness of the gradients caused by the min and max operations used in unification and proof aggregation, respectively, to derive a heuristic with a truncated forward and backward pass that drastically reduces the number of proofs considered for calculating gradients.

Batch Proving
Let A be a matrix of subsymbolic representations that are to be unified with the representations in a second matrix B. We can adapt the unification module to calculate the unification successes between all pairs of rows in a batched way: the pairwise squared Euclidean distances are expanded as ‖a − b‖² = ‖a‖² − 2a·b + ‖b‖², which involves only matrix products and multiplications with vectors of ones; the element-wise square root is then taken, and the RBF kernel exp(−‖a − b‖₂ / (2μ²)) is applied element-wise. In practice, we partition the rules in a KB by their structure and batch all goals and rule heads per partition at the same time on a Graphics Processing Unit (GPU). Furthermore, substitution sets then bind variables to vectors of symbol indices instead of single symbol indices, and the min and max operations are taken per goal.

K-max Gradient Approximation
NTPs allow us to calculate the gradient of proof success scores with respect to subsymbolic representations and rule parameters by backpropagating through a large computation graph. This gives the exact gradient, but is computationally infeasible. Consider a parameterized rule [p, X, Z] :– [[q, X, Y], [r, Y, Z]] and assume the given KB contains a large number of facts with binary predicates. While X and Z are bound to the respective representations in the goal, Y is substituted with every possible second argument of these facts when proving the first atom of the body. Moreover, for each of these substitutions we need to compare against all facts again when proving the second atom of the body, so the number of proof success scores grows quadratically in the number of facts. However, note that since we use the max operator for aggregating the success of different proofs, only the subsymbolic representations in one of these proofs will receive gradients.
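The batched unification scores from the batch proving section can be sketched with NumPy. The expansion of the pairwise squared distances avoids materializing all pairwise difference vectors (the sanity check below does materialize them); sizes and μ are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k, mu = 3, 5, 4, 1.0
A = rng.normal(size=(m, k))   # m goal representations
B = rng.normal(size=(n, k))   # n fact representations

# ||a - b||^2 = ||a||^2 - 2 a.b + ||b||^2, for all pairs at once
sq = (A * A).sum(1)[:, None] - 2 * A @ B.T + (B * B).sum(1)[None, :]
dist = np.sqrt(np.maximum(sq, 0.0))      # element-wise square root
scores = np.exp(-dist / (2 * mu**2))     # m x n unification successes

# Sanity check against the naive pairwise computation
naive = np.exp(-np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1) / (2 * mu**2))
```

The `np.maximum(sq, 0.0)` guards against tiny negative values caused by floating-point cancellation in the expansion.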
To overcome this computational limitation, we propose the following heuristic. We assume that when unifying the first atom of a rule body with facts in the KB, it is unlikely that unification successes outside the top K successes could still attain the maximum proof success once the remaining atoms of the body have been unified with facts in the KB. We therefore keep only the top K substitutions and their success scores after unifying the first atom, and continue proving only with those. This means that the remaining partial proofs do not contribute to the forward pass at this stage, and consequently receive no gradients in the backward pass of backpropagation. We term this heuristic K-max. Note that with K-max, the gradient of the proof success is no longer guaranteed to be the exact gradient, but for a large enough K we obtain a close approximation of the true gradient.

Experiments
Consistent with previous work, we carry out experiments on four benchmark KBs and compare ComplEx, NTP and NTPλ in terms of area under the precision–recall curve on the Countries KB, and in terms of Mean Reciprocal Rank (MRR) and HITS@m (Bordes et al.) on the other KBs, all described below. Training details, including hyperparameters and rule templates, are given further below.

Countries
The Countries dataset was introduced by Bouchard et al. for testing the reasoning capabilities of neural link prediction models. It consists of countries, regions (e.g., EUROPE) and subregions (e.g., WESTERN EUROPE, NORTHERN AMERICA) as constants, together with facts about the location of countries and the neighborhood of countries.

[Figure: Overview of the different tasks in the Countries dataset, visualized as in Nickel et al. The left part shows which atoms are removed for each task (dotted lines); the right part illustrates the rules that can be used to infer the location of test countries. For the first task, the facts corresponding to the blue dotted line are removed from the training set; for the second task, additionally the facts corresponding to the green dashed line are removed; for the third task, the facts corresponding to the red line are removed as well.]

The dataset relates countries, subregions and regions via locatedIn and neighborOf facts. We follow Nickel et al. and split the countries randomly into a training set (train), a development set (dev) and a test set (test), such that every dev and test country has at least one neighbor in the training set. Subsequently, three different task datasets are created. For all tasks, the goal is to predict locatedIn(c, r) for every test country c and all five regions r, but the access to training atoms in the KB varies (see the figure above). For the first task, all ground atoms locatedIn(c, r), where c is a test country and r is a region, are removed. Since the information about the subregion of test countries is still contained in the KB, this task can be solved using
the transitivity rule locatedIn(X, Y) :– locatedIn(X, Z), locatedIn(Z, Y). For the second task, the ground atoms locatedIn(c, s), where c is a test country and s is a subregion, are removed in addition. The location of test countries then needs to be inferred from the location of their neighboring countries, via locatedIn(X, Y) :– neighborOf(X, Z), locatedIn(Z, Y). This task is more difficult, as neighboring countries might not be in the same region, so the rule does not always hold. For the third task, the ground atoms locatedIn(c, r), where r is a region and c is a training country that has a test or dev country as a neighbor, are also removed. The location of test countries can then, for instance, be inferred using the rule locatedIn(X, Y) :– neighborOf(X, Z), neighborOf(Z, W), locatedIn(W, Y).

Kinship, Nations and UMLS
We use the Nations, Alyawarra kinship (Kinship) and Unified Medical Language System (UMLS) KBs from Kok and Domingos. We left out the Animals dataset, as it only contains unary predicates and can thus not be used for evaluating multi-hop reasoning. Nations contains both binary and unary predicates, whereas Kinship and UMLS contain only binary predicates; since the baseline ComplEx cannot deal with unary predicates, we remove the unary atoms from Nations. We split every KB into training, development and test facts. For evaluation, we take a test fact and corrupt its first and second argument in all possible ways such that the corrupted fact is not in the original KB. Subsequently, we predict a ranking of every test fact and its corruptions to calculate MRR and HITS@m.

Training Details
We use ADAM (Kingma and Ba) for optimization, with an initial learning rate and a mini-batch size of known and corrupted atoms. We apply ℓ2 regularization to all model parameters and clip gradient values. Subsymbolic representations and rule parameters are initialized using Xavier initialization (Glorot and Bengio). We train all models for a fixed number of epochs and repeat every experiment on the Countries corpus ten times; statistical significance is tested using the independently trained models. NTP is implemented in TensorFlow (Abadi et al.). We use a fixed maximum proof depth and add rule templates, where a number in front of a rule template indicates how often a parameterized rule of the given structure is instantiated; note that a rule template can specify that the two predicate representations in a rule body are shared.

Results and Discussion
[Table: AUC-PR on Countries and MRR and HITS@m on Kinship, Nations and UMLS, per corpus and metric, for ComplEx, NTP and NTPλ, together with examples of rules induced by NTPλ and their confidences. Induced rules include locatedIn(X, Y) :– locatedIn(X, Z), locatedIn(Z, Y); locatedIn(X, Y) :– neighborOf(X, Z), locatedIn(Z, Y); and locatedIn(X, Y) :– neighborOf(X, Z), neighborOf(Z, W), locatedIn(W, Y) for Countries; rules over predicates such as blockPositionIndex, expelDiplomats, negativeBehavior, negativeComm and interGovOrgs for Nations; and transitivity rules such as interacts(X, Y) :– interacts(X, Z), interacts(Z, Y); isa(X, Y) :– isa(X, Z), isa(Z, Y); and derivative(X, Y) :– derivative(X, Z), derivative(Z, Y) for UMLS.]

The results for the different model variants on the benchmark KBs are shown in the table above. Another method for inducing rules in a differentiable way for automated KB completion was introduced recently by Yang et al., and our evaluation setup is equivalent to their protocol. However, our neural link prediction baseline, ComplEx, already achieves much higher HITS results on UMLS and Kinship, so we focus on the comparison of NTPs with ComplEx.

First, note that vanilla NTPs alone do not work particularly well compared to ComplEx: while they outperform ComplEx on Countries and Nations, they are outperformed on Kinship and UMLS. This demonstrates the difficulty of learning subsymbolic representations in a differentiable prover from unification alone, and the need for auxiliary losses. NTPλ, which uses ComplEx as an auxiliary loss, outperforms the other models in the majority of tasks; the difference to ComplEx is significant for the Countries tasks.

A major advantage of NTPs is that we can inspect the induced rules, which provide an interpretable representation of what the model has learned. The right column of the table shows examples of induced rules and their confidences (note that the predicates on Kinship are anonymized). For Countries, the model recovered those rules that are needed for solving the three different tasks. On UMLS, it induced transitivity rules for several relationships. Such relationships are particularly hard to encode with neural link prediction models like ComplEx, as these are optimized to locally predict the score of a fact.

Related Work
Combining neural and symbolic approaches to relational learning and reasoning has a long tradition and has led to various proposed architectures over the past decades (see d'Avila Garcez et al. for a review). Early proposals for neural-symbolic networks were limited to propositional rules (e.g., Shavlik and Towell; KBANN by Towell and Shavlik; d'Avila Garcez and Zaverucha). Other approaches focus on first-order inference but do not learn subsymbolic vector representations from training facts in a KB (e.g., SHRUTI by Shastri; Neural Prolog by Ding; Lifted Relational Neural Networks by Šourek et al.; TensorLog by Cohen). Logic Tensor Networks (Serafini and d'Avila Garcez) are in spirit similar to NTPs, but they need to fully
ground first-order rules. However, they support function terms, whereas NTPs currently only support function-free terms. More recent architectures (Peng et al.; Weissenborn; Shen et al.) translate query representations implicitly in a vector space, without explicit rule representations, and can thus not easily incorporate domain-specific knowledge.

In addition, NTPs are related to random walk models (Lao et al.; Gardner et al.) and path encoding models (Neelakantan et al.; Das et al.). However, instead of aggregating paths from random walks or encoding paths to predict a target predicate, reasoning steps in NTPs are explicit, and since unification uses subsymbolic representations, NTPs can induce interpretable rules as well as incorporate prior knowledge, either in the form of rules or in the form of rule templates that define the structure of the logical relationships we expect to hold in a KB. Another line of work (Vendrov et al.; Demeester et al.) regularizes distributed representations via rules, but does not learn rules from data and supports only a restricted subset of first-order logic. As NTPs are constructed from Prolog's backward chaining, they are furthermore related to unification neural networks (Komendantskaya); however, NTPs operate on vector representations of symbols instead of scalar values, which is more expressive, and they can learn such representations from data.

NTPs are also related to ILP systems such as FOIL (Quinlan), Sherlock (Schoenmackers et al.) and Metagol (Muggleton et al.) for learning rules such as dyadic Datalog clauses. These ILP systems operate on symbols and search a discrete space of logical rules, whereas NTPs work with subsymbolic representations and induce rules using gradient descent. Recently, Yang et al. introduced a differentiable rule learning system based on TensorLog and a neural network controller similar to LSTMs (Hochreiter and Schmidhuber). Their method is more scalable than the NTPs introduced here; however, on UMLS and Kinship our baseline already achieved stronger generalization by learning subsymbolic representations. Still, scaling NTPs to larger KBs so that they can compete with more scalable relational learning methods is an open problem that we seek to address in future work.

Summary
We proposed an end-to-end differentiable prover for automated KB completion that operates on subsymbolic representations. To this end, we used Prolog's backward chaining algorithm as a recipe for recursively constructing neural networks that can be used to prove queries to a KB. Specifically, our contribution is the use of a differentiable unification operation between vector representations of symbols to construct such neural networks. This allowed us to compute
the gradient of proof successes with respect to vector representations of symbols, which enabled us to train subsymbolic representations of symbols in facts end-to-end, and to induce logic rules using gradient descent. On benchmark KBs, our model outperformed ComplEx, a state-of-the-art neural link prediction model, on three out of four KBs, while at the same time inducing interpretable rules.

Chapter: Recognizing Textual Entailment with Recurrent Neural Networks

"You can't cram the meaning of a whole sentence into a single vector!" — Raymond Mooney

The ability to determine the logical relationship between two natural language sentences is an integral part of machines that are supposed to understand and reason with language. In the previous chapters we discussed ways of combining symbolic logical knowledge with subsymbolic representations trained via neural networks. As first steps towards models that reason with natural language, we used textual surface-form patterns as predicates for automated knowledge base completion. However, for automated KB completion we assumed that surface patterns are atomic, i.e., we cannot generalize to unseen patterns from their compositional representation. In this chapter we use recurrent neural networks (RNNs) for learning compositional representations of natural language sentences. Specifically, we tackle the task of Recognizing Textual Entailment (RTE), i.e., determining (i) whether two natural language sentences are contradicting each other, (ii) whether they are unrelated, or (iii) whether the first sentence (called the premise) entails the second sentence (called the hypothesis). For instance, the sentence "Two girls and a guy are involved in a pie eating contest" entails "Three people are stuffing their faces", but contradicts "Three people are drinking beer on a boat". This task is important since many natural language processing (NLP) tasks, such as information extraction, relation extraction, text summarization and machine translation, rely on it explicitly or implicitly and could benefit from more accurate RTE systems (Dagan et al.).

Systems for RTE have so far relied heavily on engineered NLP pipelines, extensive manual creation of features, various external resources, and specialized subcomponents such as negation detection (e.g., Lai and Hockenmaier; Zhao et al.; Beltagy et al.). Despite the success of neural networks for paraphrase detection (e.g., Socher et al.; Yin et al.), end-to-end differentiable neural architectures have so far failed to reach acceptable
performance on RTE, due to the lack of large datasets. An end-to-end differentiable solution to RTE is desirable, since it avoids specific assumptions about the underlying language. In particular, there is no need for language features like part-of-speech tags or dependency parses. Furthermore, a generic differentiable solution allows extending the concept of capturing entailment across any sequential data, not only natural language.

Recently, Bowman et al. published the Stanford Natural Language Inference (SNLI) corpus, accompanied by a Long Short-Term Memory (LSTM) baseline (Hochreiter and Schmidhuber) that achieves a respectable accuracy for RTE on this dataset. It is the first instance of a generic neural model that, without hand-crafted features, got close to the accuracy of a simple lexicalized classifier with engineered features for RTE. This can be explained by the high quality and size of SNLI compared to the two orders of magnitude smaller and partly synthetic datasets used to evaluate RTE systems so far. Bowman et al.'s LSTM encodes the premise and hypothesis independently as dense vectors, whose concatenation is subsequently used in a multi-layer perceptron (MLP) for classification. In contrast, we propose a neural network that is capable of a fine-grained comparison of pairs of words and phrases, by processing the hypothesis conditioned on the premise and by using a neural attention mechanism.

Our contributions are threefold: (i) we present a neural model based on two LSTMs that read the two sentences in one go to determine entailment, as opposed to mapping each sentence independently into a vector space; (ii) we extend this model with a neural attention mechanism to encourage the comparison of pairs of words and phrases; and (iii) we provide a detailed qualitative analysis of neural attention for RTE. Our benchmark LSTM achieves a higher accuracy on SNLI than a simple lexicalized classifier tailored to RTE, and the extension with neural word-by-word attention surpasses this strong benchmark LSTM result by a further margin, achieving, at the time of this work, the best reported accuracy for recognizing entailment on SNLI.

Background

[Figure: Computation graph of an RNN cell.]

In this section we describe RNNs and LSTMs for sequence modeling, and explain how LSTMs are used in the independent sentence encoding model for RTE by Bowman et al.

Recurrent Neural Networks
An RNN is parameterized by a differentiable cell function f_θ that maps an input vector x_t and a previous state s_{t−1} to an output vector h_t and a next state s_t. For simplicity, we assume that the input size and the output size coincide. By applying the cell function at time step t, we obtain the output vector
and the next state: (h_t, s_t) = f_θ(x_t, s_{t−1}). Given a sequence of input representations x_{1:T} = [x_1, …, x_T] and a start state s_0, the output of the RNN for the entire input sequence is obtained by recursively applying the cell function:

RNN(f_θ, x_{1:T}, s_0) = [(h_1, s_1), …, (h_T, s_T)]

Usually, this output is separated into the list of output vectors [h_1, …, h_T] and the list of states [s_1, …, s_T]. Note that an RNN can be applied to input sequences of varying length, such as sequences of word representations.

Fully-Connected Recurrent Neural Network
A basic RNN is parameterized by the following cell function:

f_θ(x_t, s_{t−1}) = (h_t, h_t)
h_t = tanh(W [x_t ; s_{t−1}] + b)

where W is a trainable transition matrix, b is a trainable bias, tanh is the element-wise application of the hyperbolic tangent function, and [· ; ·] denotes concatenation. We call this a fully-connected RNN, as the cell function is modeled by a dense layer. The computation graph of a single application of this RNN cell is illustrated in the figure above. Note that for the recurrent application of the cell function to a sequence of inputs, the parameters (the transition matrix and the bias) are shared over all time steps.

Long Short-Term Memory
RNNs with LSTM units (Hochreiter and Schmidhuber) have been successfully applied to a wide range of NLP tasks, such as machine translation (Sutskever et al.), constituency parsing (Vinyals et al.), language modeling (Zaremba et al.) and, recently, RTE (Bowman et al.). LSTMs encompass memory cells that can store information for a long period of time, as well as three types of gates that control the flow of information into and out of these cells: input gates, forget gates, and output gates.
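Before turning to LSTMs, the fully-connected RNN cell and its recursive application can be sketched as follows; the weights are random placeholders rather than trained parameters.

```python
import numpy as np

# h_t = tanh(W [x_t ; s_{t-1}] + b), with the output equal to the next state.
rng = np.random.default_rng(0)
k = 3                                   # input, output and state size
W = rng.normal(scale=0.1, size=(k, 2 * k))
b = np.zeros(k)

def cell(x_t, s_prev):
    h_t = np.tanh(W @ np.concatenate([x_t, s_prev]) + b)
    return h_t, h_t                     # (output, next state)

def rnn(xs, s0):
    # Parameters W and b are shared across all time steps
    outputs, s = [], s0
    for x_t in xs:
        h, s = cell(x_t, s)
        outputs.append(h)
    return outputs, s

xs = [rng.normal(size=k) for _ in range(4)]
outs, s_T = rnn(xs, np.zeros(k))
```

Note that the returned list of outputs and the final state correspond to separating the RNN output into output vectors and states as described above.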
premise let represent hypothesis premise hypothesis encoded vectors taking last output vector applying rnn function lstm cell function subsequently prediction three rte classes obtained mlp eqs followed softmax rnn lstm rnn tanh tanh hpn tanh softmax methods furthermore trainable start state hpn denotes last element list premise output vectors similarly independent sentence encoding eqs visualized fig given predicted output distribution three rte classes ntailment eutral ontradiction target vector encoding correct class loss commonly used log independent sentence encoding straightforward model rte however questionable efficiently entire sentence represented single vector hence next section investigate various neural architectures tailored towards comparison premise hypothesis thus require represent entire sentences vectors embedding space methods first propose encode hypothesis conditioned representation premise section subsequently introduce extension lstm rte neural attention section attention section finally show attentive models easily used attending ways premise conditioned hypothesis hypothesis conditioned premise section models trained using loss predict probability rte class using differ way calculated conditional encoding contrast learning sentence representations interested neural models read sentences determine entailment thereby comparing pairs words phrases figure shows structure model premise left read lstm second lstm different parameters reading delimiter hypothesis right memory state initialized last state previous lstm example processes hypothesis chapter recognizing textual entailment recurrent neural networks wedding party taking pictures someone got premise married hypothesis figure conditional encoding two lstms first lstm encodes premise second lstm processes hypothesis conditioned representation premise conditioned representation first lstm built premise formally replace eqs rnn lstm rnn spn denotes last state lstm encoded premise hpm 
denotes last element list hypothesis output vectors model illustrated rte example fig note premise still encoded vector lstm processes hypothesis keep track whether incoming words contradict premise whether entailed whether unrelated inspired automata proposed natural logic inference angeli manning attention attentive neural networks recently demonstrated success wide range tasks ranging handwriting synthesis graves digit classification mnih machine translation bahdanau image captioning speech recognition chorowski sentence summarization rush code summarization allamanis geometric reasoning vinyals idea allow model attend past output vectors lstms mitigates cell state bottleneck fact methods wedding party taking pictures premise someone got married hypothesis figure attention model rte compared fig model represent entire premise cell state instead output context representations informally visualized red mapping later queried attention mechanism blue also note used standard lstm store relevant information future time steps internal memory see fig compare fig fig lstm attention rte capture entire content premise cell state instead sufficient output vectors reading premise populating differentiable memory premise accumulating representation cell state informs second lstm output vectors premise needs attend determine rte class formally let matrix consisting output vectors first lstm produced reading words premise furthermore let vector ones last output vector premise hypothesis processed two lstms attention mechanism produce vector attention weights weighted representation premise via tanh softmax trainable projection matrices trainable parameter vector note outer product repeating linearly chapter recognizing textual entailment recurrent neural networks transformed many times words premise times hence intermediate attention representation ith column vector ith word premise obtained combination premise output vector ith column vector transformed attention weight ith 
The final sentence-pair representation is obtained from a non-linear combination of the attention-weighted representation r of the premise and the last output vector h_N:

  h* = tanh(W^p r + W^x h_N)        h* ∈ R^k

where W^p, W^x ∈ R^{k×k} are trainable projection matrices. The attention model is illustrated in the figure above. Note that it does not have to represent the entire premise in its cell state; instead, it outputs context representations that are later queried by the attention mechanism (informally illustrated as a red mapping from input phrases to output context representations).

Word-by-word Attention. For determining whether one sentence entails another, it is desirable to check for entailment or contradiction of individual word and phrase pairs. To encourage such behavior, we employ neural attention similar to Bahdanau et al., Hermann et al. and Rush et al. The difference is that we do not use attention to generate words, but to obtain a sentence-pair encoding from a comparison of word and phrase pairs of the premise and the hypothesis. In our case, this amounts to attending over the first LSTM's output vectors of the premise while the second LSTM processes the hypothesis one word at a time. Consequently, we obtain attention weights over all premise output vectors for every word in the hypothesis. This can be modeled as follows:

  M_t = tanh(W^y Y + (W^h h_t + W^r r_{t-1}) ⊗ e_L)        M_t ∈ R^{k×L}
  α_t = softmax(w^T M_t)
  r_t = Y α_t^T + tanh(W^t r_{t-1})                         r_t ∈ R^k

where W^r, W^t ∈ R^{k×k} are additional trainable projection matrices. Note that r_t depends on the previous attention representation r_{t-1}, which informs the model about what was attended over in previous steps.

[Figure: Word-by-word attention model for RTE. Compared to the previous figure, querying the memory multiple times allows the model to store more information in its output vectors while processing the premise; this is informally illustrated by fewer words contributing to each output context representation (red mapping). The number of word representations contributing to an output context representation is not known; for illustration purposes, the case where information from three words contributes is depicted.]

As in the previous section, the final sentence-pair representation is obtained from a non-linear combination of the last attention-weighted representation of the premise (here conditioned on the last word of the hypothesis) and the last output vector:

  h* = tanh(W^p r_N + W^x h_N)

Compared to the attention model introduced earlier, querying the memory multiple times allows the model to store more information in its output vectors while processing the premise.
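The recurrence over hypothesis words can be sketched as a short loop. As before, Y, the hypothesis outputs H, and all weights are random illustrative stand-ins; the loop structure mirrors the equations above.

```python
import numpy as np

rng = np.random.default_rng(2)
k, L, N = 4, 5, 3  # hidden size, premise length, hypothesis length (illustrative)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

Y = rng.normal(size=(k, L))  # premise output vectors (columns)
H = rng.normal(size=(k, N))  # hypothesis output vectors h_1..h_N (columns)

p = {n: rng.normal(scale=0.1, size=(k, k))
     for n in ("W_y", "W_h", "W_r", "W_t", "W_p", "W_x")}
w = rng.normal(scale=0.1, size=k)
e_L = np.ones(L)

r = np.zeros(k)  # r_0
alphas = []
for t in range(N):  # one attention step per hypothesis word
    h_t = H[:, t]
    M_t = np.tanh(p["W_y"] @ Y + np.outer(p["W_h"] @ h_t + p["W_r"] @ r, e_L))
    alpha_t = softmax(w @ M_t)
    r = Y @ alpha_t + np.tanh(p["W_t"] @ r)  # recurrent attention representation
    alphas.append(alpha_t)

# Final sentence-pair representation from the last r and the last output vector.
h_star = np.tanh(p["W_p"] @ r + p["W_x"] @ H[:, -1])
print(len(alphas), h_star.shape)
```

Each `alpha_t` is a distribution over the L premise words, so stacking the `alphas` gives exactly the N×L attention matrix visualized in the qualitative analysis.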
Two-way Attention. Inspired by bidirectional LSTMs that read a sequence and its reverse for improved encoding (Graves and Schmidhuber), we experiment with two-way attention for RTE. The idea is to use the same model, i.e. the same structure and weights, to attend over the premise conditioned on the hypothesis as well as over the hypothesis conditioned on the premise, by simply swapping the two sequences. This produces two sentence-pair representations, which we concatenate for classification.

Experiments. We conduct experiments on the Stanford Natural Language Inference corpus (SNLI; Bowman et al.). This corpus is two orders of magnitude larger than existing RTE corpora such as Sentences Involving Compositional Knowledge (SICK; Marelli et al.). Furthermore, a large part of the training examples in SICK were generated heuristically from other examples. In contrast, all sentence pairs in SNLI stem from human annotators. The size and quality of SNLI make it a suitable resource for training neural architectures such as the ones proposed in this chapter.

Training Details. We use pretrained word2vec vectors (Mikolov et al.) as word representations, which we keep fixed during training. Out-of-vocabulary words in the training set are randomly initialized by uniformly sampling values from a small interval and are optimized during training. Out-of-vocabulary words encountered at inference time on the validation and test corpus are set to fixed random vectors. By not tuning representations of words for which pretrained vectors are available, we ensure that at test time their representation stays close to unseen similar words that are contained in word2vec. We use Adam (Kingma and Ba) for optimization, with its standard first and second momentum coefficients. For every model we perform a small grid search over combinations of the initial learning rate and the regularization strength. Subsequently, we take the best configuration based on performance on the validation set, and evaluate only that configuration on the test set.

Results and Discussion. Results on the SNLI corpus are summarized in the table below; the total number of model parameters is reported both including and excluding tunable word representations. To ensure a comparable number of parameters to Bowman et al.'s model, which encodes the premise and hypothesis independently, hidden sizes are chosen accordingly. For words in SNLI for which we could not obtain word2vec embeddings, the representations are tunable parameters. We use the standard Adam configuration recommended by Kingma and Ba and, following Zaremba et al., apply dropout only to the inputs and outputs of the network.

[Table: Results on the SNLI corpus. Columns: Model, Train, Dev and Test accuracy. Rows: Lexicalized classifier (Bowman et al.); LSTM (Bowman et al.); Conditional encoding, shared; Conditional encoding, shared; Conditional encoding; Attention; Attention, two-way; Word-by-word attention; Word-by-word attention, two-way.]
In addition to using two independent LSTMs, we also run experiments with conditional encoding in which the parameters of both LSTMs are shared ("Conditional encoding, shared"). Moreover, we compare our attentive models with two benchmark LSTMs whose hidden sizes were chosen such that they have at least as many parameters as the attentive models. Since we do not tune the vectors of words that have word2vec embeddings, the total number of tunable parameters of our models is considerably smaller. We also compare our models against the benchmark lexicalized classifier used by Bowman et al., which uses features based on the BLEU score between premise and hypothesis, length difference, word overlap, unigrams and bigrams, part-of-speech tags, as well as cross-unigrams and cross-bigrams.

Conditional Encoding. We found that processing the hypothesis conditioned on the premise, instead of encoding the two sentences independently, gives an improvement of several percentage points in accuracy over Bowman et al.'s LSTM. We argue that this is due to information being able to flow from the part of the model that processes the premise to the part that processes the hypothesis. Specifically, the model does not waste capacity on encoding the hypothesis (in fact, it does not need to encode the hypothesis at all); instead, it can read the hypothesis in a more focused way, checking words and phrases for contradiction or entailment based on the semantic representation of the premise. One interpretation is that the LSTM is approximating a finite-state automaton for RTE (Angeli and Manning). Another difference to Bowman et al.'s model is that we use word2vec instead of GloVe for word representations and, more importantly, that we do not fine-tune these word embeddings. The drop in accuracy from the train to the test set is less severe for our models, which suggests that fine-tuning word embeddings could be a cause of overfitting.

Our LSTM also outperforms the simple lexicalized classifier by several percentage points. To the best of our knowledge, at the time of publication this was the first instance of a neural end-to-end differentiable model outperforming a hand-crafted NLP pipeline on a textual entailment dataset.

Attention. By incorporating an attention mechanism, we observe a further improvement over a single LSTM, as well as an increase over the benchmark model that uses two LSTMs for conditional encoding (one over the premise and one over the hypothesis conditioned on the representation of the premise). The attention model produces output vectors summarizing contextual information of the premise that is useful to attend over later when reading the hypothesis. Therefore, when reading the premise, the model does not have to build up a semantic representation of the whole premise; instead, a representation that helps attending over the premise's output vectors while processing the hypothesis is sufficient.
In contrast, the output vectors of the premise are not used by the baseline conditional model, so those models have to build up a representation of the entire premise and carry it through the cell state to the part of the model that processes the hypothesis; this bottleneck is overcome to a degree by using attention.

Word-by-word Attention. Enabling the model to attend over output vectors of the premise for every word in the hypothesis yields a further improvement compared to attending only once. We argue that this can be explained by the model being able to check for entailment or contradiction of individual word and phrase pairs, and we demonstrate this effect in the qualitative analysis below.

Two-way Attention. Allowing the model to also attend over the hypothesis based on the premise did not improve performance on RTE in our experiments. We suspect that this is due to entailment being an asymmetric relation; hence, using the same LSTM to encode the hypothesis (in one direction) and the premise (in the other direction) might lead to noise in the training signal. This could be addressed by training different LSTMs, at the cost of doubling the number of model parameters.

Qualitative Analysis. It is instructive to analyze over which output representations the model is attending when deciding the class of an RTE example. Note that interpretations based on attention weights have to be taken with care, since the model is not forced to rely solely on representations obtained from attention. In the following, we visualize and discuss the attention patterns of the presented attentive models, using examples from ten sentence pairs randomly drawn from the development set.

[Figure: Attention visualizations.]

Attention. The figure shows to what extent the attentive model focuses on contextual representations of the premise after the LSTMs have processed the premise and hypothesis, respectively. Note how the model pays attention to output vectors of words that are semantically coherent with the premise ("riding" and "rides", "animal" and "camel"), as well as those where a contradiction is caused by a single word ("blue" vs. "pink") or by multiple words ("swim" and "lake", "frolicking" and "grass"). Interestingly, the model shows sensitivity to context by not attending over "yellow", the color of the toy, but over "pink", the color of the coat. However, for more involved examples with longer premises, we found that attention is distributed more uniformly. This suggests that conditioning attention only on the last output representation has limitations when multiple words need to be considered for deciding the RTE class.

Word-by-word Attention. Visualizations of word-by-word attention are depicted in the corresponding figure. We found that word-by-word attention can easily detect if the hypothesis is simply a reordering of words in the premise.
Furthermore, it is able to resolve synonyms ("airplane" and "aircraft") and is capable of matching multi-word expressions to single words ("garbage" to "trashcan"). It is also noteworthy that irrelevant parts of the premise, such as words capturing little meaning or whole uninformative relative clauses (for instance, a clause mentioning a rope leading out of something), are correctly neglected for determining entailment. Word-by-word attention also seems to work well when words in the premise and hypothesis are connected via deeper semantics ("snow" and "outside", "mother" and "adult"). Furthermore, the model is able to resolve one-to-many relationships ("kids" to "boy" and "girl"). Attention can fail, for example, when the two sentences and their words are entirely unrelated. In such cases, the model seems to fall back to attending over function words, and the sentence-pair representation is likely dominated by the last output vector instead of the attention-weighted representation.

Related Work. Since the methods in this chapter were published, many new models have been proposed. They can roughly be classified into sentence encoding models that extend the independent encoding LSTM of Bowman et al., and models related to the conditional encoding architecture presented in this chapter. Results of these works are collected in an online leaderboard (last checked in April). The current best result is held by a bidirectional LSTM with matching and aggregation layers introduced by Wang et al., which outperforms the best independent encoding model by several percentage points in test accuracy. In fact, most independent encoding models (Vendrov et al., Mou et al., Bowman et al., Munkhdalai and Yu) do not reach the performance of our conditional model with attention; exceptions are the recently introduced bidirectional LSTM model of Liu et al. and the neural semantic encoder of Munkhdalai and Yu.

Bidirectional Conditional Encoding. The models presented in this chapter make few assumptions about the input data and can hence be applied in other domains. In Augenstein et al., we introduced a conditional encoding model for determining the stance of a tweet (e.g., one arguing about foetus rights) with respect to a target (e.g., the legalization of abortion). For this task, we extended the conditional encoding model with bidirectional LSTMs (Graves and Schmidhuber), thus replacing the earlier equations with

  h→, s→ = RNN_LSTM(x→, h→_0, s→_0)
  h←, s← = RNN_LSTM(x←, h←_0, s←_0)
  h = tanh(W→ h→_N + W← h←_N)

where ·→ and ·← denote the forward and the reversed sequence, W→ and W← are trainable projection matrices, and h→_0 and h←_0 are the trainable forward and reverse start states, respectively. The architecture is illustrated in the figure below. It achieved the second best result on the SemEval Twitter stance detection corpus (Mohammad et al.).
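A compact sketch of bidirectional conditional encoding follows. To keep it short, a plain tanh RNN stands in for the LSTM, and all sizes and weights are illustrative assumptions; the structure (target encoded first, tweet conditioned on it, in both reading directions) is the point.

```python
import numpy as np

rng = np.random.default_rng(4)
k, d = 4, 3  # hidden size and embedding size (illustrative)

def make_rnn(k, d):
    # A plain tanh RNN stands in for an LSTM to keep the sketch short.
    return (rng.normal(scale=0.3, size=(k, k)), rng.normal(scale=0.3, size=(k, d)))

def run(rnn, words, h0):
    W, U = rnn
    h = h0
    for x in words:
        h = np.tanh(W @ h + U @ x)
    return h

target = [rng.normal(size=d) for _ in range(4)]  # e.g. "legalization of abortion"
tweet = [rng.normal(size=d) for _ in range(7)]   # the tweet to classify

fwd_target, fwd_tweet = make_rnn(k, d), make_rnn(k, d)
bwd_target, bwd_tweet = make_rnn(k, d), make_rnn(k, d)
h0 = np.zeros(k)

# Forward pass: read the target, then the tweet conditioned on it.
h_fwd = run(fwd_tweet, tweet, run(fwd_target, target, h0))
# Reverse pass: the same with both sequences reversed.
h_bwd = run(bwd_tweet, tweet[::-1], run(bwd_target, target[::-1], h0))

# Stance is predicted from the last forward and last reversed outputs.
W_out = rng.normal(scale=0.3, size=(3, 2 * k))
logits = W_out @ np.concatenate([h_fwd, h_bwd])
stance = np.exp(logits - logits.max())
stance /= stance.sum()
print(stance.shape)
```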
Generating Entailing Sentences. Instead of predicting the logical relationship between two sentences, Kolesnyk et al. used entailment pairs from the SNLI corpus to learn to generate an entailed sentence given a premise. Their model uses attention as in neural machine translation (Bahdanau et al.). On a manually annotated test corpus of generated sentences, the model generated correct entailed sentences in the majority of cases. By recursively applying the model to its produced outputs, it is able to generate natural language inference chains (e.g., from a wedding party looking happy on a picture, to a bride and groom smiling, to a couple smiling, to people smiling).

Summary. In this chapter, we demonstrated that LSTMs that read pairs of sequences to produce a final representation, from which a simple classifier predicts entailment, outperform both an LSTM baseline encoding the two sequences independently and a classifier with hand-engineered features. Besides contributing this conditional model for RTE, our main contribution is the extension of this model with attention over the premise, which provides further improvements to the predictive abilities of the system. In a qualitative analysis, we showed that the attention model is able to compare word and phrase pairs for deciding the logical relationship between two sentences. At the time of publication, this model held the highest accuracy on a large RTE corpus.

[Figure: Word-by-word attention visualizations.]

[Figure: Bidirectional conditional encoding of a tweet (mentioning foetus rights) conditioned on the bidirectional encoding of a target (legalization of abortion). The stance is predicted using the last forward and reversed output representations.]

Chapter: Conclusions

Summary of Contributions. In this thesis, we presented various combinations of representation learning models with logic. First, we proposed a way to calculate the gradient of propositional logic rules with respect to parameters of a neural link prediction model. By stochastically grounding logic rules, we were able to use these rules as regularizers in a matrix factorization neural link prediction model for automated knowledge base completion. This allowed us to embed background knowledge, in the form of logical rules, in the vector space of predicate and entity-pair representations. Using this method, we were able to train relation extractors for predicates for which rules, but few or no known training facts, were provided.
We then identified various shortcomings of stochastic grounding and proposed a model in which implication rules are used to directly regularize predicate representations. This method has two advantages: it becomes independent of the size of the domain of entity pairs, and we obtain the guarantee that the provided rules hold for any test entity pair. By restricting the entity-pair embedding space, we were able to impose implications as a partial order on the predicate representation space, similar to order embeddings (Vendrov et al.). We showed empirically that with a restricted entity-pair embedding space the model generalizes better when predicting facts in the test set, which we attribute to a regularization effect. Furthermore, we showed that by incorporating implication rules this way, the method scales well with the number of rules.

After investigating these two ways of regularizing symbol representations based on rules, we proposed a differentiable prover that performs inference with symbol representations in a more explicit way. To this end, we used Prolog's backward chaining algorithm as a recipe for recursively constructing neural networks that can be used to prove facts in a knowledge base. Specifically, we proposed a differentiable unification operation between symbol representations. The constructed neural network allows us to compute the gradient of a proof success with respect to symbol representations, and thus to train symbol representations from the desired proof outcome. Furthermore, given templates for unknown rules of predefined structure, we can induce logic rules using gradient descent. We proposed three optimizations for this model: we implemented unification of multiple symbol representations in a batched way, which allows us to make use of modern graphics processing units (GPUs) for efficient proving; we proposed an approximation of the gradient that only follows the max proofs; and we used neural link prediction models as regularizers of the prover to learn better symbol representations more quickly. On three out of four benchmark knowledge bases, our method outperforms a complex neural link prediction model, while at the same time inducing interpretable rules.

Lastly, we developed neural models for recognizing textual entailment (RTE), i.e. for determining the logical relationship between two natural language sentences. We used one long short-term memory (LSTM) network to encode the first sentence and, conditioned on that representation, encoded the second sentence using a second LSTM for deciding the label of the sentence pair. Furthermore, we extended this model with a neural attention mechanism.
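The differentiable unification at the heart of the prover summarized above can be sketched in a few lines. The embeddings, the toy facts, and the RBF-style similarity kernel below are assumptions of this sketch (one simple choice for a soft unification score in [0, 1]); the min-along-a-proof and max-over-proofs aggregation mirrors the gradient approximation described in the text.

```python
import numpy as np

rng = np.random.default_rng(3)
dim = 4

# Hypothetical symbol vocabulary with learned embeddings (random here).
symbols = ["grandfatherOf", "fatherOf", "parentOf", "ABE", "HOMER", "BART"]
emb = {s: rng.normal(size=dim) for s in symbols}

def unify(a, b, mu=1.0):
    # Soft unification: similarity of two symbol embeddings in (0, 1].
    # An RBF-style kernel is one simple choice (an assumption of this sketch).
    return float(np.exp(-np.linalg.norm(emb[a] - emb[b]) ** 2 / (2 * mu ** 2)))

facts = [("fatherOf", "ABE", "HOMER"), ("parentOf", "HOMER", "BART")]

def prove_atom(goal):
    # Score of proving `goal` against the facts: the minimum of the pairwise
    # unification scores along a proof, maximized over all candidate proofs.
    scores = [min(unify(g, f) for g, f in zip(goal, fact)) for fact in facts]
    return max(scores)

exact = prove_atom(("fatherOf", "ABE", "HOMER"))   # identical symbols unify perfectly
fuzzy = prove_atom(("parentOf", "ABE", "HOMER"))   # succeeds only softly
print(exact, fuzzy)
```

Because the score is built from differentiable operations on the embeddings, gradient descent on a proof outcome can pull, say, "parentOf" and "fatherOf" together, which is exactly what allows the prover to train symbol representations.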
This attention mechanism enables the comparison of word and phrase pairs of the two sentences. On a large RTE corpus, our models outperform a classifier with hand-engineered features as well as a strong LSTM baseline. In addition, we qualitatively analyzed which words in the first sentence the attention model pays attention to, and we were able to confirm the presence of fine-grained reasoning patterns.

Limitations and Future Work. The integration of neural representations with symbolic logic and reasoning remains an exciting and open research area, and we expect to see many more systems that improve representation learning models by taking inspiration from formal logic in the future. We demonstrated the benefit of regularizing symbol representations via logical rules for automated knowledge base completion, but we were only able to do this efficiently for simple implication rules. In future work, it would be interesting to use more general logic rules as regularizers of predicate representations in a lifted way, for instance via informed grounding of rules. However, it is likely that any approach regularizing predicate representations using rules will have theoretical limitations that need to be investigated.

An interesting alternative direction is synthesizing symbolic reasoning and neural representations in more explicit ways. The end-to-end differentiable neural theorem prover (NTP) introduced in this thesis is a first proposal towards a tight integration of symbolic reasoning systems with trainable rules and symbol representations. A major obstacle that we encountered is the computational complexity of making proof success differentiable so that we can calculate the gradient with respect to symbol representations. While it is possible to approximate the gradient by maintaining only the max proofs for a given query, at some point the unification of the query with all facts is necessary. For knowledge bases that contain millions of facts, this becomes impossible to do efficiently without applying heuristics, even when using modern GPUs. A possible future direction could be the use of hierarchical attention (Andrychowicz et al.), or of recent methods from reinforcement learning such as Monte Carlo tree search (Coulom; Kocsis and Szepesvári), which have been used, for instance, for learning to play Go (Silver et al.) and for chemical synthesis planning (Segler et al.). Specifically, the idea would be to train a model that learns to select promising rules instead of trying all rules when proving a goal. Orthogonal to that, more flexible individual components of differentiable provers are conceivable. For instance, unification, rule selection and rule application could be modeled as parameterized functions.
Such components could then be used to learn an optimal behavior from data, instead of specifying the behavior by closely following the backward chaining algorithm. Furthermore, as the NTP is constructed from Prolog's backward chaining, it currently only supports Datalog logic programs; it is an open question how to enable support for function terms in differentiable provers. Another open research direction is the extension of automated provers to handle natural language sentences and questions, i.e. to perform reasoning directly with natural language sentences. A starting point could be a combination of the models proposed for determining the logical relationship between two natural language sentences with a differentiable prover. The differentiable prover introduced in this thesis is used to calculate the gradient of a proof success with respect to symbol representations; these symbol representations could be composed, for example, by an RNN encoder trained jointly with the prover. Our vision is a prover that directly operates on natural language statements and explanations, avoiding the need for semantic parsing (Zettlemoyer and Collins), i.e. parsing text into logical form. As NTPs decompose inference in an explicit way, it would be worthwhile to investigate whether we can obtain interpretable natural language proofs. Furthermore, it would be interesting to scale the methods presented here to larger units of text, such as entire documents; this needs model extensions, such as hierarchical attention, to ensure computational efficiency. In addition, it would be worthwhile to explore more structured forms of attention (Graves et al.; Sukhbaatar et al.) and other forms of differentiable memory (Grefenstette et al.; Joulin and Mikolov), which could help improve the performance of neural networks for RTE and differentiable proving. Lastly, we are interested in applying NTPs to the automated proving of mathematical theorems, either in logical or natural language form, similar to recent work by Kaliszyk et al. and Loos et al.

Appendix: Annotated Rules. Manually filtered rules, with scores, from the experiments in the corresponding chapter. [Rule listings not recoverable from the extraction.]

Appendix: Annotated WordNet Rules. Manually filtered rules derived from WordNet, from the experiments in the corresponding chapter. [Rule listings not recoverable from the extraction.]

Bibliography

Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Gregory Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian
goodfellow andrew harp geoffrey irving michael isard yangqing jia rafal lukasz kaiser manjunath kudlur josh levenberg dan rajat monga sherry moore derek gordon murray chris olah mike schuster jonathon shlens benoit steiner ilya sutskever kunal talwar paul tucker vincent vanhoucke vijay vasudevan fernanda oriol vinyals pete warden martin wattenberg martin wicke yuan xiaoqiang zheng tensorflow machine learning heterogeneous distributed systems corr url http yaser learning hints neural networks complexity doi url http rami guillaume alain amjad almahairi christof dzmitry bahdanau nicolas ballas bastien justin bayer anatoly belikov alexander belopolsky yoshua bengio arnaud bergeron james bergstra valentin bisson josh bleecher snyder nicolas bouchard nicolas boulangerlewandowski xavier bouthillier alexandre olivier breuleux pierre luc carrier kyunghyun cho jan chorowski paul christiano tim cooijmans myriam aaron courville yann dauphin olivier delalleau julien demouth guillaume desjardins sander dieleman laurent dinh melanie ducoffe vincent dumoulin samira ebrahimi kahou dumitru erhan ziye fan orhan firat mathieu germain xavier glorot ian goodfellow matthew graham philippe hamel iban harlouchet heng hidasi sina honari arjun jain jean kai jia mikhail korobov vivek kulkarni alex lamb pascal lamblin eric larsen laurent sean lee simon simon lemieux nicholas bibliography zhouhan lin jesse livezey cory lorenz jeremiah lowin qianli pierreantoine manzagol olivier mastropietro robert mcgibbon roland memisevic bart van vincent michalski mehdi mirza alberto orlandi christopher joseph pal razvan pascanu mohammad pezeshki colin raffel daniel renshaw matthew rocklin adriana romero markus roth peter sadowski john salvatier savard jan john schulman gabriel schwartz iulian vlad serban dmitriy serdyuk samira shabanian simon sigurd spieckermann ramana subramanyam jakub sygnowski tanguay gijs van tulder joseph turian sebastian urban pascal vincent francesco visin harm vries david dustin 
webb matthew willson kelvin lijun xue yao saizheng zhang ying zhang theano python framework fast computation mathematical expressions corr url http miltiadis allamanis hao peng charles sutton convolutional attention network extreme summarization source code proceedings international conference machine learning icml new york city usa june pages url http miltiadis allamanis pankajan chanthirasegaran pushmeet kohli charles sutton learning continuous semantic representations symbolic expressions proceedings international conference machine learning icml sydney nsw australia august pages url http jacob andreas marcus rohrbach trevor darrell dan klein learning compose neural networks question answering naacl hlt conference north american chapter association computational linguistics human language technologies san diego california usa june pages url http marcin andrychowicz misha denil sergio gomez colmenarejo matthew hoffman david pfau tom schaul nando freitas learning learn gradient descent gradient descent advances neural information processing systems annual conference neural information processing systems december barcelona spain pages bibliography url http gabor angeli christopher manning naturalli natural logic inference common sense reasoning proceedings conference empirical methods natural language processing emnlp october doha qatar meeting sigdat special interest group acl pages url http isabelle augenstein tim andreas vlachos kalina bontcheva stance detection bidirectional conditional encoding proceedings conference empirical methods natural language processing emnlp austin texas usa november pages url http franz baader bernhard ganter baris sertkaya ulrike sattler completing description logic knowledge bases using formal concept analysis ijcai proceedings international joint conference artificial intelligence hyderabad india january pages url http dzmitry bahdanau kyunghyun cho yoshua bengio neural machine translation jointly learning align translate corr 
url http islam beltagy cuong chau gemma boleda dan garrette katrin erk raymond mooney montague meets markov deep semantics probabilistic logical form proceedings second joint conference lexical computational semantics sem june atlanta georgia pages url http islam beltagy katrin erk raymond mooney probabilistic soft logic semantic textual similarity proceedings annual meeting association computational linguistics acl june baltimore usa volume long papers pages url http bibliography islam beltagy stephen roller pengxiang cheng katrin erk raymond mooney representing meaning combination logical form vectors corr url http islam beltagy stephen roller pengxiang cheng katrin erk raymond mooney representing meaning combination logical distributional models computational linguistics kurt bollacker colin evans praveen paritosh tim sturge jamie taylor freebase collaboratively created graph database structuring human knowledge proceedings acm sigmod international conference management data sigmod vancouver canada june pages doi url http antoine bordes jason weston ronan collobert yoshua bengio learning structured embeddings knowledge bases proceedings aaai conference artificial intelligence aaai san francisco california usa august url http antoine bordes nicolas usunier alberto jason weston oksana yakhnenko translating embeddings modeling data advances neural information processing systems annual conference neural information processing systems proceedings meeting held december lake tahoe nevada united pages url http johan bos semantic analysis boxer semantics text processing step conference proceedings volume research computational semantics pages college publications johan bos katja markert recognising textual entailment logical inference human language technology conference conference empirical methods natural language processing proceedings conference october vancouver british columbia canada pages url http bibliography matko bosnjak tim jason naradowsky sebastian riedel 
programming differentiable forth interpreter international conference machine learning icml url http guillaume bouchard sameer singh theo trouillon approximate reasoning capabilities vector spaces proceedings aaai spring symposium knowledge representation reasoning krr integrating symbolic neural approaches samuel bowman recursive neural tensor networks learn logical reasoning volume url http samuel bowman gabor angeli christopher potts christopher manning large annotated corpus learning natural language inference proceedings conference empirical methods natural language processing emnlp lisbon portugal september pages url http samuel bowman jon gauthier abhinav rastogi raghav gupta christopher manning christopher potts fast unified model parsing sentence understanding proceedings annual meeting association computational linguistics acl august berlin germany volume long papers url http rodrigo salvo braz lifted probabilistic inference phd thesis champaign usa david broomhead david lowe radial basis functions functional interpolation adaptive networks technical report dtic document richard byrd peihuang jorge nocedal ciyou zhu limited memory algorithm bound constrained optimization siam scientific computing doi url http andrew carlson justin betteridge richard wang estevam hruschka tom mitchell coupled learning information extraction bibliography proceedings third international conference web search web data mining wsdm new york usa february pages doi url http chang yih bishan yang christopher meek typed tensor decomposition knowledge bases relation extraction proceedings conference empirical methods natural language processing emnlp october doha qatar meeting sigdat special interest group acl pages url http chang ratinov dan roth guiding learning acl proceedings annual meeting association computational linguistics june prague czech republic url http jan chorowski dzmitry bahdanau dmitriy serdyuk kyunghyun cho yoshua bengio models speech recognition advances neural 
information processing systems annual conference neural information processing systems december montreal quebec canada pages url http stephen clark stephen pulman combining symbolic distributional models meaning quantum interaction papers aaai spring symposium technical report stanford california usa march pages url http bob coecke mehrnoosh sadrzadeh stephen clark mathematical foundations compositional distributional model meaning corr url http william cohen tensorlog differentiable deductive database corr url http michael collins sanjoy dasgupta robert schapire generalization principal components analysis exponential family bibliography advances neural information processing systems neural information processing systems natural synthetic nips december vancouver british columbia canada pages url http ronan collobert koray kavukcuoglu farabet matlablike environment machine learning biglearn nips workshop number alain colmerauer introduction prolog iii commun acm doi url http coulom efficient selectivity backup operators tree search computers games international conference turin italy may revised papers pages doi url http council european union position council general data protection regulation http april ido dagan oren glickman bernardo magnini pascal recognising textual entailment challenge machine learning challenges evaluating predictive uncertainty visual object classification recognizing textual entailment first pascal machine learning challenges workshop mlcw southampton april revised selected papers pages doi url http rajarshi das arvind neelakantan david belanger andrew mccallum chains reasoning entities relations text using recurrent neural networks conference european chapter association computational linguistics eacl url http bibliography artur avila garcez gerson zaverucha connectionist inductive learning logic programming system appl doi url http artur avila garcez krysia broda dov gabbay learning systems foundations applications springer science 
business media oier lopez lacalle mirella lapata unsupervised relation extraction general domain knowledge proceedings conference empirical methods natural language processing emnlp october grand hyatt seattle seattle washington usa meeting sigdat special interest group acl pages url http thomas demeester tim sebastian riedel lifted rule injection relation embeddings proceedings conference empirical methods natural language processing emnlp austin texas usa november pages url http liya ding neural concepts construction mechanism systems man cybernetics intelligent systems ieee international conference volume pages ieee xin dong evgeniy gabrilovich geremy heitz wilko horn lao kevin murphy thomas strohmann shaohua sun wei zhang knowledge vault approach probabilistic knowledge fusion acm sigkdd international conference knowledge discovery data mining kdd new york usa august pages doi url http gregory druck gideon mann andrew mccallum learning dependency parsers using generalized expectation criteria acl proceedings annual meeting association computational linguistics international joint conference natural language processing afnlp august singapore pages url http bibliography john duchi elad hazan yoram singer adaptive subgradient methods online learning stochastic optimization journal machine learning research url http cfm saso dzeroski inductive logic programming nutshell lise getoor ben taskar editors introduction statistical relational learning chapter pages oren etzioni michele banko stephen soderland daniel weld open information extraction web commun acm doi url http manaal faruqui jesse dodge sujay kumar jauhar chris dyer eduard hovy noah smith retrofitting word vectors semantic lexicons naacl hlt conference north american chapter association computational linguistics human language technologies denver colorado usa may june pages url http john firth synopsis linguistic theory manoel gerson zaverucha artur avila garcez fast relational learning using bottom clause 
Road Damage Detection Using Deep Neural Networks with Images Captured Through a Smartphone

Hiroya Maeda, Yoshihide Sekimoto, Toshikazu Seto, Takehiro Kashiyama, Hiroshi Omata
University of Tokyo, Komaba, Tokyo, Japan

Abstract

Research on damage detection of road surfaces using image processing techniques has been actively conducted, achieving considerably high detection accuracies. However, many studies focus only on the detection of the presence or absence of damage. In a real-world scenario, when the road managers of a governing body need to repair the damage, they need to clearly understand the type of damage in order to take effective action. In addition, in many previous studies, researchers acquire data using different methods; hence, no uniform road damage dataset is openly available, leading to the absence of a benchmark for road damage detection. This study makes three contributions to address these issues. First, to the best of our knowledge, this is the first time that a large-scale road damage dataset has been prepared. The dataset is composed of road damage images captured with a smartphone installed on a car, and the instances of road surface damage in these images are annotated. In order to generate this dataset, we cooperated with seven municipalities in Japan and acquired road images over many hours of driving; the images were captured under a wide variety of weather and illuminance conditions, and each image is annotated with bounding boxes representing the location and type of damage. Next, we used a state-of-the-art object detection method based on convolutional neural networks to train a damage detection model with our dataset, and compared its accuracy and runtime speed on a GPU server and on a smartphone. Finally, we demonstrate that the type of damage can be classified into eight types with high accuracy by applying the proposed object detection method to the road damage dataset. The experimental results and the smartphone application developed in this study are publicly available online.

Introduction

During Japan's period of high economic growth, infrastructure such as roads, bridges, and tunnels was constructed extensively. However, much of it was constructed decades ago and has now aged (MLIT), and the number of structures that must be inspected is expected to increase rapidly over the next decades. In addition, the discovery of aged and affected parts of infrastructure has thus far depended solely on the expertise of veteran field engineers. However, the increasing demand for inspections and the shortage of field technicians, experts, and financial resources mean that in many areas, and in particular in a growing number of municipalities, appropriate inspections have been neglected owing to the lack of resources and experts (Kazuya et al.). The United States also has similar infrastructure aging problems (AASHTO), and the prevailing problems of infrastructure maintenance and management are likely experienced by countries all over the world. Considering this negative trend, it is evident that efficient and sophisticated infrastructure maintenance methods are urgently required.

In response to the abovementioned problem, many methods to efficiently inspect infrastructure, especially road conditions, have been studied, including methods using laser technology and image processing. Moreover, quite a few studies have applied neural networks to civil engineering problems over the years (Adeli). Furthermore, computer vision and machine learning techniques have recently been successfully applied to automate road surface inspection (Chun et al.; Zalama et al.; Ryu et al.). However, thus far, we believe that inspection methods using image processing (e.g., Maeda et al.) suffer from three major disadvantages:

1. There is no common dataset for the comparison of results: in each piece of research, the proposed method is evaluated using its own dataset of road damage images. Motivated by the field of general object recognition, in which large common datasets such as ImageNet (Deng et al.) and PASCAL VOC (Everingham et al.) exist, we believe there is a need for a common dataset for road damage.
2. Although current object detection methods use deep learning techniques, no such method exists for road damage detection.
3. Though road surface damage is distinguished into several categories (in Japan, eight categories according to the Road Maintenance and Repair Guidebook (JRA)), many studies are limited to the detection or classification of damage in the longitudinal and lateral directions (Chun et al.; Zalama et al.; Zhang et al.; Akarsu et al.; Maeda et al.). It is therefore difficult for road managers to apply these research results directly in practical scenarios.

Considering the abovementioned disadvantages, in this study we develop a new road damage dataset and then train and evaluate a damage detection model based on a convolutional neural network (CNN). The contributions of this study are as follows:

- We created and released a dataset of road damage images. The dataset contains, for each image, a bounding box and one of eight damage-type classes for every instance of damage. The image set was created by capturing pictures of a large number of roads
obtained using a smartphone. The images in the dataset cover a wide variety of weather and illuminance conditions, and in assessing the type of damage, the expertise of a professional road administrator was employed, rendering the dataset considerably reliable.
- Using the developed dataset, we evaluated a state-of-the-art object detection method based on deep learning and made the results available as a benchmark. The trained models are also publicly available.
- Furthermore, we showed that each type of damage among the eight types can be identified with high accuracy.

The rest of this paper is organized as follows. The next section discusses related works. The new dataset, the experimental settings, and the results are then presented in turn, and the final section concludes the paper.

Related works

Road damage detection. Road surface inspection is primarily based on visual observation by humans and quantitative analysis using expensive machines. The visual inspection approach not only requires experienced road managers but is also time-consuming and expensive; furthermore, it tends to be inconsistent and unsustainable, which increases the risk associated with aging road infrastructure. Considering these issues, municipalities lacking the required resources cannot conduct infrastructure inspections appropriately and frequently, increasing the risk posed by deteriorating structures. In contrast, quantitative determination based on inspection using a mobile measurement system (MMS) (Kokusai Kogyo) or related methods (Salari et al.) is also widely conducted. An MMS obtains highly accurate geospatial information from a moving vehicle; the system comprises a global positioning system (GPS) unit, an internal measurement unit, digital measurable images, a digital camera, a laser scanner, and an omnidirectional video recorder. Though quantitative inspection is highly accurate, it is considerably expensive to conduct comprehensive inspections in this way, especially for small municipalities that lack the required financial resources.

Therefore, several attempts have been made to develop methods for analyzing road properties using a combination of camera recordings and image processing technology, so as to inspect the road surface efficiently. For example, a previous study proposed an automated asphalt pavement crack detection method using image processing techniques (Chun et al.), and a system using a commercial camera has also been proposed (Ryu et al.). More recently, it has become possible to analyze damage on road surfaces quite accurately using deep neural networks (Zhang et al.; Maeda et al.); for instance, Zhang et al. introduced CrackNet, which predicts class scores for all pixels. However, these road damage detection methods focus on determining the existence of damage. Some studies do classify damage by type: for example, Zalama et al. classified damage vertically and horizontally, and Akarsu et al. categorized damage into three types, namely vertical, horizontal, and alligator cracks. These studies primarily focus on classifying damage into a few types, but for a practical damage detection model that municipalities can use, it is necessary to clearly distinguish and detect different types of road damage, because the road administrator needs to follow a different approach to rectify each type of damage.

Furthermore, the application of deep learning to road surface damage identification has been proposed in some studies (Maeda et al.; Zhang et al.). However, the method proposed by Maeda et al. uses fixed-size images to identify damaged road surfaces but does not classify the different types of damage. The method of Zhang et al. identifies whether damage has occurred using only a patch obtained from the full image; the patch-level damage classifier is applied over the image using a sliding window approach (Felzenszwalb et al.). Similarly, in the study by Cha et al., which detects cracks on a concrete surface, classification methods are applied to input images before damage is detected. Recently, it has been reported that object detection using deep learning is accurate and has faster processing speed than such combinations of classification methods; for example, white line detection based on deep learning using OverFeat (Sermanet et al.) outperformed a previously proposed empirical method (Huval et al.). However, to the best of our knowledge, there is no example of the application of a deep learning object detection method to road damage detection. It is important to note that classification refers to labeling an image as a whole, whereas detection means assigning a label and identifying the coordinates of each object, as exemplified by the ImageNet competition (Deng et al.).

Therefore, in this study we apply object detection methods based on deep learning to the road surface damage detection problem and verify their detection accuracy and processing speed. In particular, we examine whether the eight classes of road damage can be detected by applying the object detection methods discussed above to the newly created road damage dataset explained later. Although many excellent methods have been proposed for the segmentation of cracks on concrete surfaces (Byrne et al.; Nishikawa et al.), no research uses an object detection method with an image dataset of road surface damage. An image dataset of road scenes does exist, namely the KITTI dataset (Geiger et al.), but it is primarily used for applications related to automatic driving, and to the best of our knowledge, no dataset tagged with road damage exists in this field. Moreover, in the studies on road damage detection described above, researchers independently propose unique methods using their own acquired road images, so a comparison between the methods is difficult. Furthermore, according to the survey by Mohan and Poobal, while most studies construct damage detection models using real data, many use road images taken from directly above the road. In fact, it is difficult to reproduce road images taken from directly above the road, as this involves installing a camera outside the car body, which in many countries is a violation of the law; it is also costly to maintain a dedicated car solely for capturing road images. Therefore, we developed a dataset of road damage images captured with a smartphone on the dashboard of a general passenger car, and we made this dataset publicly available. Moreover, we show that road surface damage can be detected with considerably high accuracy even with images acquired by such a simple method.

Object detection systems. For general object detection, methods that apply an image classifier to the object detection task have long been mainstream. Such methods entail varying the size and position of a candidate object in the test image and using the classifier to identify it, as in the sliding window approach (e.g., Felzenszwalb et al.). In the past few years, an approach that extracts multiple candidate regions using region proposals, typified by R-CNN, and makes a classification decision for each candidate region has also been reported (Girshick et al.). However, this approach is time consuming because it requires many crops, leading to significant duplicated computation.
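The duplicated computation of the sliding-window scheme can be seen in a minimal sketch. The classifier, window size, stride, and threshold below are all illustrative stand-ins, not details from any of the cited methods:

```python
import numpy as np

def sliding_window_detect(image, classify_patch, win=32, stride=16, thresh=0.3):
    """Run a patch classifier at every window position.

    Each call re-extracts and re-processes an overlapping crop, which is
    exactly the duplicated computation that shared feature extraction in
    later detectors avoids.
    """
    h, w = image.shape[:2]
    detections = []
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            patch = image[y:y + win, x:x + win]
            score = classify_patch(patch)
            if score > thresh:
                detections.append((x, y, win, win, score))
    return detections

# Toy stand-in classifier: "damage" = a patch whose mean intensity is low,
# mimicking a dark crack region on a bright road surface.
img = np.ones((64, 64))
img[20:40, 20:40] = 0.0  # dark square standing in for a crack
dets = sliding_window_detect(img, lambda p: 1.0 - p.mean())
```

With a 32-pixel window and 16-pixel stride, adjacent windows share half their pixels, so each image region is classified several times over.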
This redundancy from overlapping crops was solved by Fast R-CNN (Girshick), which inputs the entire image to the feature extractor once, so that overlapping crops share the computation load of feature extraction. As described, image processing methods have historically developed at a considerable pace. In this study, we primarily focus on four recent object detection systems: Faster R-CNN (Ren et al.), You Only Look Once (YOLO) (Redmon et al.; Redmon and Farhadi), Region-based Fully Convolutional Networks (R-FCN) (Dai et al.), and the Single Shot Multibox Detector (SSD) (Liu et al.).

YOLO. YOLO is considerably fast because it solves detection as a single regression problem while taking background information into account. Upon receiving an image as input, the YOLO algorithm outputs the coordinates of the bounding box of each object candidate together with the confidence of the inference.

R-FCN. R-FCN (Dai et al.) is another object detection framework whose architecture is a fully convolutional network for accurate and efficient detection. Although Faster R-CNN is several times faster than Fast R-CNN, its region-specific component must still be applied several hundred times per image. Instead of cropping features from the same layer where region proposals are predicted, as in the case of Faster R-CNN, in R-FCN crops are taken from the last layer of features prior to prediction. This approach of pushing cropping to the last layer minimizes the amount of per-region computation that must be performed. Dai et al. showed that an R-FCN model using ResNet could achieve accuracy comparable to Faster R-CNN, often at faster running speeds.

Faster R-CNN. In Faster R-CNN (Ren et al.), detection happens in two stages. In the first stage, the image is processed by a feature extractor (e.g., VGG or MobileNet), and in a subnetwork called the region proposal network (RPN), intermediate-level layers are simultaneously used to predict class-agnostic box proposals. In the second stage, the box proposals are used to crop features from the intermediate feature map, and these are input to the remainder of the feature extractor in order to predict a class label and a bounding box refinement for each proposal. It is important to note that Faster R-CNN does not crop proposals directly from the image; rerunning the feature extractor on image crops would lead to duplicated computation.

SSD. The SSD framework (Liu et al.) uses a single convolutional network to directly predict classes and anchor offsets, without requiring a second-stage per-proposal classification operation. The key feature of this framework is the use of convolutional bounding box outputs attached to multiple feature maps at the top of the network.
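SSD-style detectors regress boxes as offsets relative to a fixed set of anchors. A minimal sketch of the usual center-offset decoding follows; this particular parameterization is the common convention, assumed here rather than taken from the text:

```python
import math

def decode_anchor(anchor, offsets):
    """Decode predicted (tx, ty, tw, th) offsets against one anchor box.

    The anchor and the returned box are (cx, cy, w, h). Centers are
    shifted proportionally to the anchor size; width and height are
    scaled in log space, the standard convention for this family of
    detectors (implementations often add fixed variance factors).
    """
    acx, acy, aw, ah = anchor
    tx, ty, tw, th = offsets
    cx = acx + tx * aw
    cy = acy + ty * ah
    w = aw * math.exp(tw)
    h = ah * math.exp(th)
    return (cx, cy, w, h)

# Zero offsets reproduce the anchor itself.
print(decode_anchor((0.5, 0.5, 0.2, 0.2), (0.0, 0.0, 0.0, 0.0)))  # → (0.5, 0.5, 0.2, 0.2)
```

At training time the inverse transform turns matched ground-truth boxes into regression targets; at inference time this decoding runs once per anchor, which is what lets a single-shot network emit boxes without a second stage.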
convolutional feature extractor base network applied input image order obtain features selection feature extractor considerably important number parameters layers type layers properties directly affect performance detector selected seven representative base networks explained three base networks evaluate results section six feature extractors selected yolo yolo object detection framework achieve high mean average precision map speed redmon redmon farhadi addition yolo predict region class objects single cnn advantageous feature yolo processing speed widely used field computer vision blocks depthwise separable convolutions factorize standard convolution depthwise convolution convolution effectively reducing computational cost number parameters redmon farhadi base model yolo framework model convolutional layers maxpooling layers section describe proposed new dataset including data obtained annotated contents issues related privacy vgg simonyan zisserman cnn total layers consisting convolution layers fully connected layers proposed imagenet large scale visual recognition challenge ilsvrc model achieved good results ilsvrc coco classification detection segmentation considering depth layers resnet inception inception ioffe szegedy inception szegedy enable one increase depth breadth network without increasing number parameters computational complexity introducing inception units inception resnet inception resnet szegedy improves recognition accuracy combining residual connections inception units effectively data collection thus far study damage detection road surface images either captured road surface using cameras vehicles models trained images captured situations applied practice limited considering difficulty capturing images contrast model constructed images captured vehicle camera easy apply images train model practical situations example using readily available camera like smartphones general passenger cars individual easily detect road damages running model 
smartphone transferring images external server processing server selected seven local governments cooperated road administrators local government collect road seven municipalities snowy areas urban areas diverse terms regional characteristics weather fiscal constraints installed smartphone nexus dashboard car shown figure photographed images pixels per second reason select photographing interval possible photograph images traveling road without leakage duplication average speed car approximately approximately purpose created smartphone application capture images roads record location information per second application also publicly available website resnet refers deep residual learning structure deep learning particularly cnns enables learning deep network released microsoft research accuracy beyond human ability obtained learning images layers resnet achieved error rate imagenet test set first place ilsvrc classification task proposed dataset mobilenet mobilenet howard shown achieve accuracy comparable imagenet computational cost model size mobilenet designed efficient inference various mobile vision applications building ichihara city chiba city sumida ward nagakute city adachi city muroran city numazu city traveled every municipality covering approximately total data annotation collected images annotated manually illustrate annotation pipeline figure dataset format designed manner similar pascal voc everingham easy apply many existing methods used field image processing figure installation setup smartphone car mounted dashboard general passenger car developed application capture photograph road surface approximately ahead indicates application photograph images traveling road without leakage duplication care moves average speed photographing every second addition detect road damages high accuracy see section data statistics dataset composed labeled road damage images images bounding boxes damage annotated figure shows number instances per label collected 
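The claim that the photographing interval was chosen so consecutive images cover the road "without leakage or duplication" reduces to a distance-per-frame calculation: the distance travelled between shots should roughly match the stretch of road each image covers. The helper below is illustrative, and the speed and frame-rate arguments are placeholders, since the exact figures are stripped from this text:

```python
def metres_per_frame(speed_kmh, fps):
    """Road distance covered between consecutive photographs, given an
    average vehicle speed in km/h and a capture rate in frames per
    second.  Example inputs only; not values taken from the paper."""
    return speed_kmh * 1000.0 / 3600.0 / fps
```

At a hypothetical 36 km/h and one photograph per second, the car advances 10 m per frame, so each image would need to cover about 10 m of road ahead.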
municipality photographed number road images various regions japan could avoid biasing data example damages pose significant danger therefore road managers repair damages soon occur thus many instances reality many studies blurring white lines considered damage however study white line blur also considered damage summary new dataset includes damage images damage bounding boxes resolution images pixels area weather area diverse thus dataset closely resembles real world used dataset evaluate damage detection model data category table list different damage types definition paper damage type represented class name type damage illustrated examples figure seen table damage types divided eight categories first damage classified cracks corruptions cracks divided linear cracks alligator cracks crocodile cracks corruptions include pot holes rutting also road damage blurring white lines privacy matters dataset openly accessible public therefore considering issues privacy based visual inspection person face car license plate clearly reflected image blurred best knowledge previous research covers wide variety road damages especially case image processing example method proposed ryu detects potholes zalama zalama classifies damage types exclusively longitudinal lateral whereas method proposed akarsu akarsu categorizes damage types longitudinal lateral alligator cracks previous study using deep learning zhang maeda detects presence absence damage experimental setup based previous study many neural networks object detection methods compared detail huang among object detection methods ssd using inception ssd using mobilenet relatively small cpu loads low memory consumption even maintaining high accuracy however important note results abovementioned research obtained using coco dataset lin class name liner crack longitudinal wheel mark part liner crack longitudinal construction joint part liner crack lateral equal interval liner crack lateral construction joint part alligator crack 
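Since the dataset format is described as similar to Pascal VOC, annotations can be read with standard XML tooling. A hedged sketch using the standard VOC field names (`object`, `name`, `bndbox`, `xmin`, ...); a specific annotation file for this dataset may differ in detail, and the class string in the example is illustrative:

```python
import xml.etree.ElementTree as ET

def parse_voc_annotation(xml_string):
    """Return a list of (class_name, (xmin, ymin, xmax, ymax)) pairs
    from a Pascal VOC-style annotation document."""
    root = ET.fromstring(xml_string)
    objects = []
    for obj in root.iter('object'):
        name = obj.findtext('name')
        box = obj.find('bndbox')
        coords = tuple(int(box.findtext(tag))
                       for tag in ('xmin', 'ymin', 'xmax', 'ymax'))
        objects.append((name, coords))
    return objects
```

Keeping to the VOC layout is what lets the dataset plug into the many existing detection codebases the text mentions.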
rutting bump pothole separation white line blur cross walk blur figure sample images dataset correspond one eight categories shows legend benchmark contains road images images include cracks total images annotated class labels bounding boxes images captured using smartphone camera realistic scenarios table road damage types dataset definitions damage type longitudinal linear crack lateral detail class name wheel mark part crack construction joint part equal interval construction joint part alligator crack partial pavement overall pavement rutting bump pothole separation corruption white line blur cross walk blur source road maintenance repair guidebook jra japan note reality rutting bump pothole separation different types road damage difficult distinguish four types using images therefore classified one class previous case followed methodology mentioned original paper liu well similar huang huang initialize weights truncated normal distribution standard deviation initial learning rate learning rate decay every iterations input image size case pixels well figure annotation pipeline first draw bounding box class label attached believe object detection method executed smartphone small computational resource desirable study trained model using ssd inception ssd mobilenet frameworks analyze cases applying ssd using inception ssd using mobilenet dataset detail ssd using mobilenet training evaluation conducted training evaluation using dataset experiment dataset randomly divided ratio former part set training data latter evaluation data thus training data included images evaluation data images parameter settings object detection algorithm using deep learning parameters learned data enormous addition number hyper parameters set humans large parameter setting case algorithm described results experiment training performed running ubuntu operating system nvidia grid gpu ram memory evaluation intersection union iou threshold set detected samples illustrated figures compared 
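The experiment randomly divides the dataset into a training part and an evaluation part. A minimal reproducible split might look like the following; the actual ratio used in the paper is not legible in this text, so it is passed in as a parameter rather than guessed:

```python
import random

def split_dataset(items, train_fraction, seed=0):
    """Shuffle `items` with a fixed seed and cut them into
    (training, evaluation) lists at `train_fraction`."""
    rng = random.Random(seed)
    shuffled = list(items)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]
```

Fixing the seed makes the split reproducible across runs, which matters when comparing detectors trained on the same data.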
results provided ssd ssd using inception tion ssd mobilenet results listed followed methodology mentioned table although detected inal paper liu initial learning rate relatively high recall precision value recall reduced learning rate decay low case attributed per iterations input image size number training data see figure pixels indicates original images contrary detected high recall preciare resized sion even though number training data small damages liner crack longitudinal wheel mark part liner crack longitudinal construction joint part liner crack lateral equal interval liner crack lateral construction joint part alligator crack rutting bump pothole separation white line blur cross walk blur total ichihara city chiba city sumida ward nagakute city adachi ward muroran city numazu city total figure number damage instances class municipality see distribution damage type differs local government example muroran city many damages damages compared municipalities muroran city snowy zone therefore alligator cracks tend occur thaw snow blur pedestrian crossing occupies large proportion image feature clear stripped pattern overall ssd mobilenet yields better results next inference speed model described table speed tested specifications previous case nexus smartphone cpu ram memory case ssd inception two times slower ssd mobilenet consistent result huang huang addition smartphone processed data installed moving car damage road surface detected real time accuracy table smartphone application used detect road damage using trained model dataset ssd mobilenet see figure publicly available website road damage visually confirmed classified eight classes images annotated released training dataset best knowledge dataset first one road damage detection strongly believe dataset provides new avenue road damage detection addition trained evaluated damage detection model using dataset based results category achieved recalls precisions greater inference time smartphone believe simple 
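The evaluation described above matches detections to ground-truth damage instances at an intersection-over-union (IoU) threshold. A minimal implementation of the standard box-overlap measure:

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2).

    A detection is typically counted as correct when its IoU with a
    ground-truth box exceeds a chosen threshold (0.5 is common)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0
```

Recall and precision per class then follow by counting, over all images, the matched detections against the ground-truth and detection totals.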
road inspection method using smartphone useful regions experts financial resources lacking support research field made dataset trained models source code smartphone application publicly available future plan develop methods detect rare types damage uncommon dataset acknowledgement conclusions study developed new dataset road damage detection classification collaboration seven local governments japan collected road images images research supported national institute information communication technology nict contract research social big data utilization basic technology issue knowledge table detection classification results class using ssd inception ssd mobilenet sir ssd inception recall sip ssd inception precision sia ssd inception accuracy smr ssd recall smp ssd precision sma ssd accuracy class sir sip sia smr smp sma table inference speed model image resolution image model details inference speed ssd using inception gpu ssd using mobilenet gpu ssd using mobilenet smartphone site citizen knowledge organically development citizen collaborative platform additionally would like express gratitude ichihara city chiba city sumida ward nagakute city adachi city muroran city numazu city cooperation experiment civil engineers ser pavement engineering dai sun object detection via fully convolutional networks advances neural information processing systems pages deng dong socher imagenet hierarchical image database computer vision pattern recognition cvpr ieee conference pages ieee references aaoshat bridging rebuilding nations bridges washington american association state highway transportation officials adeli neural networks civil engineering civil infrastructure engineering akarsu parlak erhan sarimaden fast adaptive road defect detection approach using computer vision real time implementation everingham eslami van gool williams winn zisserman pascal visual object classes challenge retrospective international journal computer vision everingham van gool williams winn 
zisserman pascal visual object classes voc challenge international journal computer vision cha choi deep crack damage detection using convolutional neural networks civil infrastructure engineering chun hashimoto kataoka kuramoto ohga asphalt pavement crack detection using image processing bayes based machine learning approach journal japan society felzenszwalb girshick mcallester ramanan object detection discriminatively trained models ieee transactions pattern analysis machine intelligence geiger lenz stiller urtasun vision meets robotics kitti dataset huval wang tandon kiske song pazhayampallil andriluka rajpurkar migimatsu empirical evaluation deep learning highway driving arxiv preprint start detection ioffe szegedy batch normalization accelerating deep network training reducing internal covariate shift international conference machine learning pages stop detection ryu pothole detection system using camera sensors figure operating screen smartphone application supposed mounted dashboard general passenger car see figure detection road surface damage initiated pressing start detection button image damaged part position information transmitted external server damage found using ssd mobilenet application detect road damages eight types accuracy shown table jra maintenance repair guide book pavement japan road association edition kazuya akira shun takeki effective surface inspection method urban roads according pavement management situation local governments japan scienc technology information aggregator national journal robotics research kokusai kogyo mms mobile surement system girshick fast proceedings ieee international conference computer vision lin maire belongie hays perona ramanan zitnick pages microsoft coco common objects context eugirshick donahue darrell malik ropean conference computer vision pages rich feature hierarchies accurate object springer detection semantic segmentation proceedings ieee conference computer vision liu anguelov erhan szegedy reed 
tern recognition pages berg ssd single shot multibox detector european conference zhang ren sun deep computer vision pages springer residual learning image recognition proceedings ieee conference computer vision maeda sekimoto seto pattern recognition pages lightweight road manager autohoward zhu chen kalenichenko wang weyand andreetto adam mobilenets efficient convolutional neural networks mobile vision applications arxiv preprint matic determination road damage status deep neural network proceedings acm sigspatial international workshop mobile geographic information systems pages acm mlit present state future social capital huang rathod sun zhu korattikara aging infrastructure maintenance information fathi fischer wojna song guadarrama mohan poobal crack detection usfor modern convolutional object detectors arxiv ing image processing critical review analysis preprint alexandria engineering journal nishikawa yoshida sugiyama fujino concrete crack detection multiple sequential image filtering civil infrastructure engineering civil infrastructure engineering zhang wang yang dai peng fei liu chen automated pavement crack detection asphalt surfaces using network civil infrastructure engineering byrne ghosh schoefs pakrashi regionally enhanced multiphase segmentation technique damaged surfaces civil infrastructure engineering redmon divvala girshick farhadi look unified object detection proceedings ieee conference computer vision pattern recognition pages redmon farhadi better faster stronger arxiv preprint ren girshick sun faster towards object detection region proposal networks advances neural information processing systems pages sermanet eigen zhang mathieu fergus lecun overfeat integrated recognition localization detection using convolutional networks arxiv preprint simonyan zisserman deep convolutional networks image recognition arxiv preprint szegedy ioffe vanhoucke alemi impact residual connections learning aaai pages szegedy vanhoucke ioffe shlens wojna 
rethinking inception architecture computer vision proceedings ieee conference computer vision pattern recognition pages salari pavement pothole detection severity measurement using laser imaging technology eit ieee international conference pages ieee zalama medina llamas road crack detection using visual features extracted gabor filters zhang yang zhang zhu road crack detection using deep convolutional neural network image processing icip ieee international conference pages ieee class name liner crack longitudinal wheel mark part liner crack longitudinal construction joint part liner crack lateral equal interval liner crack lateral construction joint part alligator crack rutting bump pothole separation white line blur cross walk blur figure detected samples using ssd mobilenet class name liner crack longitudinal wheel mark part liner crack longitudinal construction joint part liner crack lateral equal interval liner crack lateral construction joint part alligator crack rutting bump pothole separation white line blur cross walk blur figure detected samples using ssd inception
Dec

Tensor Product Multiplicities via Upper Cluster Algebras

Jiarui Fei

Abstract. For a valued quiver Q of Dynkin type, we construct a valued ice quiver. Let G be the simply connected Lie group whose Dynkin diagram is the valued graph underlying Q. The upper cluster algebra of this ice quiver is graded by triples of dominant weights. We prove that when Q is trivially valued, the dimension of each graded component counts the corresponding tensor multiplicity. We conjecture that this is also true in general, and sketch a possible approach using the construction. The construction improves the Berenstein-Zelevinsky model, in the sense that it generalizes the hive model of type A.

Contents. Introduction. Part 1, Construction of iArt QPs: graded upper cluster algebras; presentations; iArt quivers; cluster characters from quivers with potentials; iArt QPs; appendices: a list of iArt quivers. Part 2, The isomorphism with Conf: cluster structure of maximal unipotent groups; maps relating unipotent groups; cluster structure of Conf; epilogue; appendices: the twisted cyclic shift via mutations; acknowledgement; references.

Mathematics Subject Classification: primary ..., secondary .... Key words and phrases: graded upper cluster algebra, iArt quiver, quiver representation, quiver with potential, mutation, cluster character, configuration space, tensor multiplicity, ... coefficient, cone, lattice point.

Introduction. Finding a polyhedral model for tensor multiplicities in Lie theory is a longstanding problem. By tensor multiplicities we mean the multiplicities of the irreducible summands in the tensor product of two irreducible representations of a simply connected Lie group. The problem asks us to express such a multiplicity as the number of lattice points of a convex polytope. Accumulating the works of Gelfand, Berenstein and Zelevinsky since ..., the first quite satisfying model for type A was invented. Finally, around ..., building upon this work, Knutson and Tao invented the hive model, which led to a solution of the saturation conjecture; in fact, the reduction of the Horn problem to the saturation conjecture was an important driving force in the evolution of these models. Outside type A, the Berenstein-Zelevinsky models are still the only known polyhedral models, but these models lose some nice features of the Knutson-Tao hive model (see the short discussion in Section ...). Despite a lot of effort to improve the model, to the author's best knowledge there are no satisfying results in this direction. Recently, an interesting link between the hive model and cluster algebra theory was established via a quiver
potential model cluster algebras similar different link polyhedral models tropical geometry established goncharov shen fact work berenstein fomin zelevinsky links may big surprise two goals current paper first want generalize work types specifically hope prove algebras regular functions certain configuration spaces upper cluster algebras second want improve model spirit fact shall see accomplish two goals almost simultaneously namely use conjectural models establish cluster algebra structures cluster structures established conjectural models proved well key making conjectural models construction iart quivers let valued quiver dynkin type let category projective presentations associate category quiver translation art quiver short ice art quiver iart quiver short obtained freezing three sets vertices correspond negative positive neutral presentations put quite canonical potential iart quiver quiver potential short related upper cluster algebras cluster characters evaluating introduced cluster character considered paper generic one replaced fancier ones seen many different situations set given lattice points rational polyhedral cone also case iart qps whole part devoted construction iart polyhedral cone turns cone neat hyperplane presentation columns matrix tensor product multiplicities via upper cluster algebras given dimension vectors subrepresentations representations representations bijection frozen vertices also simple nice description see theorem main result part following theorem theorem set exactly upper cluster algebra natural grading weight vectors presentations grading extended grading grading slices cone polytopes let simply connected simple lie group dynkin diagram underlying valued graph conjectural model lattice points counts tensor multiplicity multiplicity irreducible representation highest weight tensor product often identify dominant weight integral vector prove model follow similar line however quiver setting work general replace rings quiver 
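The recurring statement that a tensor multiplicity is "counted by lattice points" of a convex polytope can be illustrated with a brute-force enumerator. The inequality data in the example below is a toy, not the actual polytope constructed in the paper:

```python
from itertools import product

def count_lattice_points(inequalities, bounds):
    """Count integer points x in a box satisfying row . x >= rhs for
    every (row, rhs) pair.  `bounds` gives an inclusive integer range
    per coordinate.  This is just the definition, not an efficient
    lattice-point counting algorithm."""
    count = 0
    for x in product(*[range(lo, hi + 1) for lo, hi in bounds]):
        if all(sum(a * v for a, v in zip(row, x)) >= rhs
               for row, rhs in inequalities):
            count += 1
    return count
```

For the triangle x >= 0, y >= 0, x + y <= 2 this counts the six integer points (0,0), (1,0), (2,0), (0,1), (1,1), (0,2); in the paper's setting the polytope is a slice of the cone cut out by the hyperplane presentation, and its lattice-point count is the graded dimension.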
representations ring regular functions certain configuration space introduced fix opposite pair maximal unipotent subgroups quotient space called base affine space quotient space called dual configuration space conf definition acts ring regular functions conf invariant ring ring conf multigraded triple weights graded component conf given space dimension counts tensor multiplicity main result part theorem theorem suppose trivially valued ring regular functions conf graded upper cluster algebra moreover generic character maps lattice points onto basis algebra particular counted lattice points show example upper cluster algebra strictly contains corresponding cluster algebra general conjecture trivially valued assumption dropped theorem theorem pointed end missing ingredient proving conjecture analogue lemma species potentials fock goncharov studied similar spaces conf cluster varieties however author best knowledge clear discussion initial seed type moreover equality established theorem seem fellow result fact fock goncharov later conjectured tropical points cluster parametrize bases corresponding upper cluster algebras result viewed considered generic part quotient stack work categorical quotient partial compactification jiarui fei algebraic analog conjecture space conf instead working tropical points work sketch ideas first observe forget frozen vertices corresponding positive neutral presentations get valued ice quiver denoted whose cluster algebra isomorphic coordinate ring roughly speaking procedure corresponds open embedding conf precisely corollary define cluster theorem pullback map hard show conf contains upper cluster algebra graded subalgebra detail given section far graded inclusions conf span finish proof suffices show containment conf span come back cluster structure turns analog theorem rather easy prove set contains exactly lattice points polytope defined one three sets relations hand two embeddings conf map followed twisted cyclic shift conf another 
crucial ingredient paper interpretation twisted cyclic shift terms sequence mutations applying get two qps wql wqr analogous polytopes defined two sets relations finally showing good behavior pullback three embeddings required inclusion follow fact conf laurent polynomial ring cluster detail given section except two main results side result base affine spaces author would like thank leclerc yakimov confirming following theorem open problem turns cluster structure lies conf let valued ice quiver obtained deleting frozen vertices corresponding neutral presentations theorem theorem suppose trivially valued ring regular functions graded upper cluster algebra moreover generic character maps lattice points onto basis particular weight multiplicity dim counted lattice points models knutson tao invented remarkable polyhedral model called hives honeycomb author personally thinks least three advantages model first hive polytopes nice presentation second cyclic symmetry tensor multiplicity lucid hive model actually tensor product multiplicities via upper cluster algebras symmetries also follow hive model last importantly operation called overlaying honeycombs appropriate sense models share nice properties first one clear result even entries however readers prefer inequalities hives one transform model totally unimodular map however inequalities always neat ones type discuss transformation analogous overlaying elsewhere although cyclic symmetry immediately clear understand construction proofs hidden believe probably best outside type general context algebras tensor multiplicity problem solved littelmann path model pointed model transformed polyhedral ones work however general involves union several convex polytopes relation work groundbreaking work berenstein zelevinsky invented polyhedral model dynkin types main tools lusztig canonical basis tropical relations double bruhat cells polytopes defined explicitly terms author feels hard compute especially type contrast 
subrepresentations defining rather easy list cases difficult cases type provide algorithm suitable computers recently goncharov shen made progress using tropical geometry geometric satake proved symmetric polyhedral model see theorem however explicit description polytopes equality intermediate byproduct proof similar result loosely speaking work independent results though author benefit lot reading papers construction iart quivers new believe construction results especially ideas behind beyond solving tensor multiplicity problem simple lie groups proofs part similar part heavily rely cluster structure mutation interpretation twisted cyclic shift throughout quiver potential model cluster algebras important outline paper section recall basics valued quivers representations define graded upper cluster algebra attached valued quiver section section recall theory functorial point view specialize theory category presentations mostly hereditary algebras section section define iart quivers general consider hereditary cases detail section proposition compares art quivers presentations familiar art quivers representations section review generic cluster character setting quivers potentials section study iart qps prove two main results part theorem appendix provide examples iart quivers jiarui fei section review rings regular functions base affine spaces maximal unipotent groups especially cluster structure latter theorem proposition section study maps relating configuration spaces corresponding unipotent groups almost technical work required proving main result section prove main result theorem section prove side result theorem end make remark possible generalization laced cases appendix prove mutation interpretation twisted cyclic shift theorem consequence produce algorithm computing cones notations conventions vectors exclusively row vectors modules right modules arrows composed left right path unless otherwise stated unadorned hom base field superscript trivial dual vector 
spaces direct sum copies write instead traditional part construction iart qps graded upper cluster algebras valued quivers representations familiar usual quiver representations care results simply laced cases skip subsection readers may find detailed treatment valued quivers definition valued quiver triple set vertices usually labelled natural numbers set arrows subset called valuation called symmetrizable every valued quiver pair called ordinary quiver throughout paper valued quivers assumed loops oriented ordinary quivers every called equally valued draw valued quiver first draw ordinary quiver put valuations arrows omit valuation trivially valued valued quivers paper symmetrizable always fix choice readers may view part defining data let gcd let finite field write algebraic closure positive integer denote degree extension note largest subfield contained fgcd fix basis thus freely identify vector space representation assignment fdi space arrow fdi map dimension definition slightly different classical reference adapted cluster algebra theory tensor product multiplicities via upper cluster algebras vector dimm integer vector dimfdi similar usual quiver representations define morphism set homfdi category rep representations abelian category kernels cokernels taken category rep also every representation written uniquely direct sum indecomposable representations usual quivers useful consider equivalent category modules path algebra analog lfor valued quivers notion define fdi fdi notice fdi contains fdi fdj thus structure define tensor algebra clear indecomposable projective resp injective modules precisely resp identity element fdi category rep enough projective injective objects top simple representation supported vertex also socle algebra hereditary global dimension rep dimf homq dimf bilinear form depending dimension vectors called form denote matrix form also define matrix elj eri eli eri matrices related der diagonal matrix diagonal entries example consider 
valued quiver type module category six indecomposable objects simple injective projective cover simple projective injective hull module presented module presented paper encounter two kinds valued quivers one valued quivers dynkin type bigger valued quivers constructed see section define upper cluster algebras attached latter upper cluster algebras mostly follow define upper cluster algebra need introduce notion quiver mutation mutation valued quivers defined mutation associated matrix jiarui fei every symmetrizable valued quiver corresponds skew symmetrizable integer matrix entries given otherwise matrix skew symmetrizable diagonal matrix conversely given skew symmetrizable matrix unique valued quiver easily defined definition mutation skew symmetrizable matrix direction given sign max otherwise denote induced operation valued quiver also cluster algebras consider paper cluster algebras coefficients combinatorial data defining cluster algebra encoded symmetrizable valued quiver frozen vertices frozen vertices forbidden mutated remaining vertices mutable valued quiver called valued ice quiver viq short mutable part full subquiver consisting mutable vertices general define upper cluster algebra required symmetrizable however paper viqs happen globally symmetrizable usually label mutable vertices first vertices restricted first rows let field necessarily related sense finite field base field rest part definition let field containing seed pair consisting viq together collection called extended cluster consisting algebraically independent elements one vertex elements associated mutable vertices called cluster variables form cluster elements associated frozen vertices called frozen variables coefficient variables seed mutation mutable vertex transforms seed defined follows new viq new extended cluster new cluster variable replacing determined exchange relation xcwu xcvv note mutated seed contains coefficient variables original seed easy check one recover performing seed 
mutation two seeds obtained sequence mutations called denoted tensor product multiplicities via upper cluster algebras definition cluster algebra associated seed defined subring generated elements extended clusters seeds note construction depends natural isomorphism mutation equivalence class initial viq fact depends mutation equivalence class restricted may drop simply write amazing property cluster algebras laurent phenomenon theorem element cluster algebra expressed terms extended cluster laurent polynomial polynomial coefficient variables since generated cluster variables seeds mutation equivalent theorem rephrased note definition slightly different original one replaced laurent polynomial definition upper cluster algebra seed upper cluster algebra subring field integral domain conventions conversely given domain one may interested identifying upper cluster algebra following useful lemma specialization proposition case unique factorization domain lemma let finitely generated ufd suppose seed contained adjacent cluster variable also moreover pair pair relatively prime gradings let extended cluster vector write monomial set row matrix let suppose element written rational polynomial divisible assume matrix full rank elements algebraically independent vector uniquely determined call vector extended respect pair definition implies two elements set subalgebra forms lemma lemma matrix full rank subset distinct linearly independent jiarui fei definition weight configuration lattice viq assignment vertex weight vector mutable vertex mutation also transforms weight configuration mutated quiver defined otherwise slight abuse notation view matrix whose row weight vector matrix notation condition equivalent zero matrix call cokernel grading space weight configuration called full corank equal rank easy see weight configuration mutation iterated given weight configuration assign multidegree weight upper cluster algebra setting deg mutation preserves multihomogeneousity say 
upper cluster algebra denoted refer graded seed note variables zero degrees homogeneous degree presentations review categories functors briefly review theory emphasizing functorial point view following theory developed originally module categories artin algebras without much difficulty almost generalized categories let field category two objects let rada space morphisms define consist morphisms form rada rada denote irra rada indecomposable enda local thus enda rad enda division let ind full subcategory indecomposable objects definition ind irreducible morphism element irra let object indecomposable pairwise homa indecomposable write called left minimal almost split residual classes irra form indecomposable irra tensor product multiplicities via upper cluster algebras similarly define right minimal almost split exact sequence called almost split left minimal almost split right minimal almost split case called translation denoted definition left right minimal almost split original one see definition follows proposition also recall basic fact irr dimemi irra dimem let finite dimensional mod category finite dimensional right proposition theorem translation defined every indecomposable given trivial dual auslander transpose functor see denote aop opposite category mod aop category additive contravariant functors mod object functor homa object mod aop yoneda embedding hommod aop homa mod aop conclude corollary homa projective object mod aop fact every finitely generated projective object form lemma ask readers formulate dual statements injective object homa object associate unique simple functor characterized let come back category mod consider subcategory proj projective yoneda embedding mod proj equivalent mod lemma theorem let indecomposable exact sequence almost split induced sequence functors homa homa homa minimal projective resolution mod mod aop projective right minimal almost split induced sequence functors homa homa minimal projective resolution mod mod aop 
presentations let finite dimensional valued quiver see know valued quiver associated take defined section let proj category projective presentations precise proj objects complexes fixed degrees morphisms commutative diagrams note category abelian let indecomposable projective module denote corresponding jiarui fei weight vector reduced weight vector difference presentations forms called negative positive neutral also denoted idp respectively called negative positive neutral presentation denoted idi respectively lemma presentation decomposes fid positive fid neutral minimal presentation coker corollary indecomposable presentation one following four kinds negative positive neutral presentations minimal presentations indecomposable representations following lemma easy verify mod lemma homc homa homc homa homc idp homa homc idp homa let two presentations representations namely coker coker morphism homc get induced morphism homa conversely homa lifts morphism homc obtain surjection homc homa coker coker maps zero morphism image contained image case lifts map homa projective hence kernel image map homa homc moreover injective injective well record terms functors lemma exact sequence homc homc homa coker coker moreover injective leftmost map injective well let assume denote representation write minimal presentation following two lemmas write hom homc results also proved slightly different setting tensor product multiplicities via upper cluster algebras lemma suppose almost split sequence rep projective resolution simple mod hom hom hom unless simple resolution becomes hom hom hom idi hom proof suppose simple splice minimal presentations together form presentation construction exact sequence claim minimal equivalent dim homq dim homq dim homq dim dim dim since almost split follows homq homq homq exact using lemma expand double complex exact columns sits middle row bottom row homq coker homq coker homq coker exact lemma top row hom hom hom clearly exact easy see holds 
evaluating evaluating outside exactness follows lemma case formula theorem homq homq exact sequence homq homq hence homq implies idi exact argument shows holds jiarui fei lemma following projective resolutions simple objects mod hom hom hom hom hom hom hom hom idi hom hom source hom source hom hom idi sidi proof lemma hom homq coker exactness follows lemma prove rest evaluating sequences indecomposable presentations clear idi idi equivalent morphism hom uniquely determines morphism hom since right minimal almost split proposition unique lift unique morphism homa map hom proof resolution similar lemma contains two cases prove thel first one second similar exact sequence similar argument lemma shows idi exact source simple assume minimally presented construction exact sequence contains direct summand particular see easy verify assume moreover presents using lemma expand double complex tensor product multiplicities via upper cluster algebras exact columns homq homq homq homq hom hom hom hom homq homq due write homq homq reflecting top row find homq homq homq hom homq homq top row reduces last two rows nothing double complex obtained taking homq presentations know complex cohomology homq left right hand map kernel cokernel homq well finally easily chase diagram conclude exactness middle row according two lemmas makes sense least hereditary cases extend classical translation representations presentations define define iart quivers iart quivers slightly upgrade classical quiver adding translation arrows following definition basically taken jiarui fei let category section recall ind enda rad enda division definition art quiver art valued quiver defined follows vertex isomorphism classes objects ind morphism arrow irra assign valuation arrow dimem irra irr dimen translation arrow trivial valuation defined vertex art quiver called transitive translation inverse defined note number valuation alternatively interpreted direct sum multiplicity minimal right almost split 
similarly multiplicity minimal left almost split moreover ind morphism arrows equally valued definition iart quivers iart quiver obtained art quiver freezing vertices whose translations defined iart quiver obtained art quiver freezing vertices remark module category algebra frozen vertices precisely indecomposable projective modules however clear vertices frozen author think interesting doable problem determine vertices see algebra hereditary precisely negative positive neutral presentations use notation morphism arrow translation arrow arrow art quiver lemma let additive function abelian group exact sequence transitive vertex proof two almost split sequences tensor product multiplicities via upper cluster algebras additivity typical additive function weight vector special cases including examples indecomposable presentations uniquely determined weight vectors label iart quiver weight vectors use exponential form shorthand example vector written example let jacobian algebra quiver potential see section difference two oriented triangles iart quiver drawn always put frozen vertices boxes two vertices weight label identified translation arrow going ends hereditary cases particular take rep valued quiver get two art quivers rep denote corresponding iart quivers proposition art quiver obtained follows add vertices corresponding another vertices corresponding idi draw morphism arrows add translation arrows draw morphism arrows idi idi particular vertices precisely negative positive neutral presentations jiarui fei proof vertices identified vertices via minimal presentations corollary need add vertices step due note equal source otherwise equal step due need anything else due proposition freely identify subquiver let valued quiver dynkin type case indecomposable presentation uniquely determined weight vector quiver already consider authors associated ice quiver reduced expression longest element weyl group iart quiver corresponds reduced expressions adapted example iart 
quiver type ice hive quiver constructed arrows frozen vertices example iart quiver readers find iart quivers appendix remark one natural question whether iart quivers answer positive least trivially valued cases conjecture also true general pointed remark trivially valued tits lemma every two reduced words obtained form sequence elementary see section theorem every move either leaves seed unchanged replaces adjacent seed finally similar proof corollary result extended however quivers replaced jacobian algebras see section jacobian algbera example obtained path algebra mutating vertex according iart quiver example finite mutation type quiver one example mutation equivalent wild acyclic quiver infinite mutation type follows fact symmetrizable lemma tensor product multiplicities via upper cluster algebras following lemma easy exercise linear algebra lemma restricted thus full ranks lemma assignment weight configuration however full see section want extend full one useful second half paper since finite representation type ind translated unique indecomposable positive presentation ind assign vector follows definition translated triple weights given unit vector supported idi set also define another weight vector attached corollary assignment resp defines full weight configuration iart quiver resp proof due lemma suffices show mutable call vertex regular transitive defined defined clear equation holds regular vertex description proposition see transitive vertices regular except problem vertices may morphism arrows neutral frozen vertices whose translation defined first component triple weights idi zero vector equality still holds vertices case enough observe weight vector zero positive neutral frozen vertices obtained deleting vertices shall consider graded upper cluster algebra graded cluster algebra later cluster character quivers potentials quivers potentials mutation quivers potentials invented model cluster algebras next section appendix switch back usual quiver 
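Mutation equivalence of the quivers above can be checked mechanically by iterating Fomin–Zelevinsky matrix mutation. A minimal sketch follows; the function name `mutate` and the list-of-lists matrix encoding are illustrative choices, not from the paper (for extended exchange matrices, index the mutable vertices first so that `k` is valid as both a row and a column index).

```python
def mutate(B, k):
    """Fomin-Zelevinsky mutation of an integer exchange matrix B at a
    mutable index k (0-based).  Returns a new matrix; B is unchanged."""
    rows, cols = len(B), len(B[0])
    Bp = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            if i == k or j == k:
                Bp[i][j] = -B[i][j]  # arrows through k are reversed
            else:
                # b'_ij = b_ij + (|b_ik| b_kj + b_ik |b_kj|) / 2
                Bp[i][j] = B[i][j] + (abs(B[i][k]) * B[k][j]
                                      + B[i][k] * abs(B[k][j])) // 2
    return Bp

# Mutation is an involution: mutating twice at the same vertex restores the seed.
B_A2 = [[0, 1], [-1, 0]]  # type A2 quiver 1 -> 2
assert mutate(mutate(B_A2, 0), 0) == B_A2
```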
notation quiver quadruple maps map arrow head tail following define potential ice quiver possibly infinite linear combination oriented cycles precisely potential element trace space completion path algebra closure commutator pair ice quiver potential iqp short subspace defined linear arrow cyclic derivative extension jiarui fei potential jacobian ideal closed ideal generated jacobian algebra quotient algebra polynomial completion unnecessary define case throughout paper key notion introduced mutation quivers potentials decorated representations ice quiver nondegenerate potential see mutation certain sense lifts mutation definition short review appendix definition decorated representation jacobian algebra pair rep let rep set decorated representations isomorphism let homotopy category bijection two additive categories rep mapping representation minimal presentation rep simple representation suppose corresponds projective presentation definition decorated representation reduced weight vector definition potential called rigid quiver every potential cyclically equivalent element jacobian ideal also called rigid potential called ice quiver restriction mutable part rigid known proposition corollary every rigid rigidity preserved mutations particular rigid nondegenerate definition two qps vertex set call isomorphism cyclically equivalent two iqps call restricted mutable parts definition representation called supporting vertices mutable denote category decorated representations remark equivalent indeed write restriction mutable part find cyclic derivative sum paths passing frozen vertices sum gives rise trivial relation representations generic cluster character definition associate reduced presentation space phomj homj vector satisfying max denote coker cokernel general presentation phomj definition need include ideal even arrow frozen vertices tensor product multiplicities via upper cluster algebras reader aware coker notation rather specific representation write coker 
simply means take presentation general enough according context phomj let cokernel definition called coker let set turns large class iqps set given lattice points rational polyhedral cone class includes iqps introduced ones introduced section definition define generic character gre coker variety parametrizing quotient representations denotes topological theorem corollary theorem suppose iqp full rank generic character maps bijectively set linearly independent elements containing cluster monomials definition say iqp models algebra generic cluster character maps bijectively onto basis upper cluster algebra simply say cluster model remark suppose remark equivalent via isomorphism abuse notation denote equivalence also since isomorphic see cluster model iart qps iart time let assume trivially valued dynkin quiver translation triangle iart quiver oriented cycle form iart quiver define potential alternating sum translation triangles make precise follows also label pair arrows thus classified three classes type arrows type arrows type arrows idi idi jiarui fei let resp denote sum type arrows potential defined thus jacobian ideal generated elements let jacobian algebra rest section denote single arrow lowercase letter type superscript observe vertex exactly one incoming arrow one outgoing arrow type moreover connected arrow type relations say acev cbev general relations similar implication trivalent vertex type baev trivalent right sum one summand negative resp positive translation arrows undefined relations reduce following acev resp caev cbev similarly resp neutral bcev resp caev lemma iqp rigid proof show suffices observe nonzero path uniquely identified element homc indeed suppose path make avoid neutral vertex move arrows type left remove arrows type truncated path interpreted morphism due relations cycle jacobian algebra equivalent sum composition cycles acb mutable suffices show fact zero jacobian algebra applying relation twice negative see equivalent connected 
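The generic character referred to above is a Caldero–Chapoton-type formula; in generic notation (the exponent convention $x^{Be}$ versus $\hat{y}^{\,e}$ varies between references, so the display below is a hedged restatement rather than the paper's exact convention) it reads:

```latex
% Generic cluster character attached to a (reduced) weight vector g:
% Coker(g) is the cokernel of a general presentation of weight g, Gr^e the
% variety of quotient representations of dimension vector e, and chi the
% topological Euler characteristic.
\[
  C(g) \;=\; x^{g} \sum_{e}
     \chi\!\left( \operatorname{Gr}^{e}\!\big(\operatorname{Coker}(g)\big) \right)
     x^{B e} .
\]
```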
arrow type mutable negative vertex connected arrows type equivalent zero delete translation arrows obtain subquiver denoted let direct sum presentations ind auslander algebra endc equal modulo mesh relations auslander algebra quotient jacobian algebra ideal generated translation arrows let presentation ind denote resp indecomposable projective resp injective representation corresponding vertex tensor product multiplicities via upper cluster algebras lemma following module homq homq idi homq proof recall proof lemma homc know homc unless negative implies homq conversely identify element homq path consisting solely arrows type adjoining arrows type get path easily seen nonzero observe path containing translation arrow must equivalent zero due relations restricted auslander algebra result follows lemma similar path idi containing translation arrow must equivalent zero result follows lemma cone consider following set representations idj idi maps canonical map given morphism arrow map idi idj let recall lemma hom idi idj idj idi homq take map irreducible map homq follow proof theorem rightmost maps surjective let possibly trivial involution involution depend orientation formula listed section map projective modules top restriction induced map top top theorem following description modules module indecomposable module supported vertices translated dimension vector defining linear map given top restriction defining linear map tidi given top restriction irreducible morphism jiarui fei particular dimension vector given idi proof lemma identify homq definition exact sequence homq homq homq conclude homq lemma identify homq definition exact sequence homq homq homq conclude homq description maps follows naturality lemma identify idi homq definition tidi exact sequence homq homq homq conclude tidi homq description maps follows naturality definition vertex called maximal representation dim strict subrepresentations supported note every contains maximal vertex another frozen 
vertex example resp idi resp idi rest article write view linear functional via usual dot product lemma lemma suppose representation contains maximal vertex let coker homj dims subrepresentations lemma let coker homj frozen homj frozen proof since subrepresentation also subrepresentation one direction clear conversely let assume homj frozen vertex prove homj induction first notice idi sink general injective presentation homj homj equivalent homj perform induction sink appropriate order using conclude homj frozen tensor product multiplicities via upper cluster algebras definition define cone dims strict subrepresentations frozen theorem set lattice points exactly proof due lemma suffices show defined dims subrepresentations frozen notice conditions union defining conditions dimtv latter conditions redundant dimtv dimtv dimtv dimension vector strict subrepresentation maximal frozen vertex example example continued case almost trivial list strict subrepresentations readers easily find subrepresentations subrepresentations respectively subrepresentations respectively subrepresentations needed define inequalities readers find extended version example example easily generalized type similar orientation remark valued cases deal general valued quiver could worked species analogue require lengthy preparation avoid define analogous jacobian algebra algebra auslander algebra even define without introducing jacobian algebra without jacobian algebra unable define module injective presentations still makes perfect sense define via theorem defined define cones definition remark also applies cone defined general defining conditions may redundant shown following example example complicated type see example strict subappendix dimension vector representations distinct dimension vectors however need define readers find full list inequalities come back iart quiver define potential formula defining iart nothing restriction subquiver define representation injective presentation similar 
theorem module indecomposable module supported vertices translated dimension vector jiarui fei define cone dims subrepresentations note ask subrepresentations strict ones observe defining conditions related follows group defining conditions three sets ghu ghl ghr arise subrepresentations negative neutral positive respectively defining conditions exactly ghu similar theorem following proposition proposition set lattice points exactly definition given weight configuration quiver convex polyhedral cone define necessarily bounded convex polytope cut hyperplane sections conjectural model tensor multiplicity multiplicity counted weight configuration prove model type ade part list iart quivers type draw iart quiver running example reader difficulty draw general ones cases hard large draw label vertices way software lie check things conveniently tensor product multiplicities via upper cluster algebras jiarui fei part isomorphism conf cluster structure maximal unipotent groups basic notation simple lie group let algebraically closed field characteristic zero always simply connected linear algebraic group lie algebra assume dynkin diagram underlying valued graph lie algebra cartan decomposition let chevalley generators simple coroots form basis cartan subalgebra simple roots form basis dual space structure uniquely determined cartan matrix given let closed subgroups lie algebras thus maximal torus two opposite maximal unipotent subgroups let simple root subgroup abuse notation let simple coroot corresponding root isomorphisms maps provide homomorphisms denote transpose antiautomorphism defined let simple reflections generating weyl group set tensor product multiplicities via upper cluster algebras elements satisfy braid relations associate representative way reduced decomposition one denote longest element weyl group general identity order two central element involution section weight lattice consists thus fundamental weights given write thus identify weight integral vector 
notation stress throughout paper identification widely used weight dominant base affine space natural induces left right translation algebraic theorem theorem says decomposes irreducible dual quotienting left translation get decomposition ring regular functions base affine space realized subspace similarly dual base affine space keep mind via right left translations respectively fix additive character coset resp determines point resp refer readers detail fundamental representation dual choose fixed vector vector normalized hui following define fundamental weight pair generalized minor concretely bruhat decomposition hwu implies written hwu suppose lies open set jiarui fei regular function restricted given type principal minor matrix definition extended whole proposition proposition hwu otherwise function given follows lemma weight cluster structure ready recall cluster algebra structure associate indecomposable presentation generalized minor follows suppose translated moreover put note positive principal minor take iart quiver let known upper cluster algebra standard seed identify subquiver moreover cluster algebra equal upper cluster algebra recall weight configuration definition vertex set lemma degree conjugation action exactly let matrix rows also indexed positive root corresponding dim coker note generalized gabriel theorem contains exactly positive roots kostant partition function definition counts lattice points polytope consider labelling dual one section let number note lemma proof let coker equality equivalent dim recall cartan matrix equal follows definition matrix transforms reduced weight vector transforms dimm reduced weight vector minimal injective presentation equivalently tensor product multiplicities via upper cluster algebras righthand side equal cancellations two terms survive weight configuration polyhedral cone consider polytope definition lemma function partition function proof recall defined introduce new variables satisfying second equality 
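The generalized minors used here are those of Fomin–Zelevinsky's theory of double Bruhat cells; their standard definition (in generic notation) is:

```latex
% For x in the open cell G_0 = N_- H N, write x = [x]_- [x]_0 [x]_+ and set
% Delta_{omega_i}(x) = ([x]_0)^{omega_i} (the principal minor).  For Weyl
% group elements u, v with fixed representatives \bar u, \bar v, the
% generalized minor is
\[
  \Delta_{u\omega_i,\, v\omega_i}(x)
  \;=\; \Delta_{\omega_i}\!\left( \overline{u}^{\,-1} \, x \, \overline{v} \right),
\]
% and it depends only on the weights u(omega_i) and v(omega_i), not on the
% choice of u and v.  In type A these restrict to ordinary matrix minors.
```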
established easy bijection last one due lemma defining condition equivalent polytope finally observe transformation totally unimodular particular transformation inverse preserve lattice points theorem coordinate ring equal graded cluster algebra moreover trivially valued cluster model definition proof need prove second statement recall universal enveloping algebra standard grading deg classical fact partition function counts dimension homogeneous component algebra graded dual see seen degree second statement follows theorem lemma fact another two seeds cluster structure one called left standard called right standard obtained standard seed sequence mutations sequences mutations defined studied appendix let follows section always represents let resp ice quiver obtained deleting negative positive resp equal negative neutral frozen vertices corollary arrows frozen vertices since rigid let potential restricted proposition jiarui fei also rigid particular equivalent definition rigidity definition remark proposition implies cluster model also cluster model view remark follows replace proposition mlq mrq another two graded seeds cluster algebra moreover trivially valued wql wqr cluster models remark later also need concrete description wql wqr given lattice points rational polyhedral cones recall three sets defining conditions polyhedral cone ghu ghl ghr define polyhedral cones relations ghl ghr respectively almost proof theorem show wql wqr maps relating unipotent groups standard maps let conf categorical quotient category varieties lemma ring regular functions conf unique factorization domain proof ufd since group multiplicative characters theorem conf also ufd bruhat decomposition pair conf representative pair conf called generic chosen identity let conf open subset conf pair generic impose condition among definition isomorphism conf let conf open embedding define regular function conf clear definition depend representatives follows proposition lemma exactly 
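The lattice-point count above is an instance of Kostant's partition function. A small illustrative implementation for checking low-rank cases follows; the type-$A_2$ root list and the function names are mine, not the paper's (in type $A_2$ one has $K(m\alpha_1 + n\alpha_2) = \min(m,n) + 1$, since a partition is determined by how many copies of $\alpha_1 + \alpha_2$ it uses).

```python
from functools import lru_cache

# Positive roots of type A2, in the basis of simple roots (alpha1, alpha2).
POS_ROOTS_A2 = [(1, 0), (0, 1), (1, 1)]

def kostant_partition(target, roots=POS_ROOTS_A2):
    """Number of ways to write `target` as a nonnegative integer
    combination of the given positive roots (Kostant's partition function)."""
    @lru_cache(maxsize=None)
    def count(i, rest):
        if all(c == 0 for c in rest):
            return 1          # the empty combination
        if i == len(roots):
            return 0          # no roots left but rest is nonzero
        total, cur = 0, rest
        # use roots[i] zero, one, two, ... times, then recurse on the rest
        while all(c >= 0 for c in cur):
            total += count(i + 1, cur)
            cur = tuple(c - r for c, r in zip(cur, roots[i]))
        return total
    return count(0, tuple(target))
```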
localization conf focus space conf ring regular functions decomposition conf write often graded component conf clear dim purpose enough work categorical quotient ring regular functions corresponding quotient stack tensor product multiplicities via upper cluster algebras recall several rational maps defined let conf open embedding embedding stabilizer generic pair clear image exactly conf restriction get embedding conf view two subgroups natural way write restriction first second respectively function uniquely determined restriction fact embeds disjoint embedding easy calculation note recall classical interpretation multiplicity let subspace irreducible denote lemma dim lemma ind generalized minor spans space particular proof recall chevalley generator acts gxi coefficient equal eni homogeneous degree nonzero follows contains column cartan matrix since never hence proved generalized minor spans space suppose dim dim recall principal minor evaluates set midi ind unique function conf moreover satisfies sidi jiarui fei proof first statement follows lemma last statement suffices prove part degree clear consider tensoring highest weight vectors also degree remark type analogous map also considered quiverinvariant theory setting see example setting map comes decomposition module category triple flag quiver corollary ind irreducible conf proof clear component conf positive neutral degree obviously indecomposable argument lemma shows irreducible remain consider case minimal presentation general representation known generalized minors irreducible suppose factors one say unit according lemma polynomial sidi homogenous component degree since minimal presentation general representation must nonzero must weight equal embeds since positive neutral contradiction let rational inverse regular conf let composition natural projection first second respectively corollary open subset conf conf sidi indecomposable presentation proof follow straightforward calculation statement rather 
obvious example corollary localization conf positive neutral exactly proof let natural projections conf conf given conf tersection conf suffices show conf tensor product multiplicities via upper cluster algebras sidi map agrees pei dense subset conf sidi corollary set functions algebraically independent base field proof since map conf birational pullbacks two copies algebraically independent seen factor monomial sidi exactly functions claim follows suppose mutable let function exchange relation let lemma function regular conf satisfying exchange relation moreover relatively prime conf proof since regular section regular conf pull back multiply sides get corollary remain show fact regular whole conf locus indeterminacy contained zero locus since intersection conf nonempty words contained complement conf since irreducible corollary conclude regular outside codimension subvariety conf since conf factorial lemma thus normal algebraic hartogs regular function conf last statement suppose contrary since irreducible factor clearly impossible comparing first component twisted maps let written exponentially morphism given let twisted cyclic shift jiarui fei identity universal property categorical quotients induces endomorphism conf automorphism generic part conf define liu conf write similarly denote riu conf conf tttit conf conf turns twisted cyclic shift also related sequence mutations considered section see theorem mutable moreover relabelling frozen vertices lemma following mutable sidi sidi proof first consider case mutable given sequence mutations pullback clearly commutes sequence mutations lemma frozen lemma sidi argument similar set muq define three linear natural projections resp forgets coordinates corresponding neutral positive resp positive negative negative neutral frozen vertices recall definition respect pair section let laurent polynomial ring note basis parameterized possible respect identified lemma conf respect respect equal tensor product multiplicities 
via upper cluster algebras proof prove statement argument two similar suppose section seen lemma mlq negative positive negative positive recall quiver obtained deleting negative positive frozen vertices according description resp blf row resp corresponding hence get lemma polytope lattice points less proof corollary conf since regular trivially conf previous lemma equal lemma proposition remark due suffices show lattice points identified points weight follows description cones theorem proposition remark cluster structure conf theorem suppose trivially valued ring regular tions conf graded upper cluster algebra moreover cluster model particular counted lattice points proof lemma corollary form graded seed due lemma corollary apply lemma conclude graded subalgebra conf theorem linearly independent set triple weights least lemma says cardinality actually span vector space conclude equal conf cluster model follows fact equality proof shows graded subalgebra conf matter trivially valued conjecture first last statement theorem true even trivially valued illustrate example general upper cluster algebra strictly contains cluster algebra jiarui fei example let type example checked inequalities example weight labelling triple weights moreover clearly extremal weight suffices show cluster variable need result says cluster variable rigid presentation jacobian algebra rigidity characterized vanishing introduced hard show homk general presentation homj epilogue cluster structure base affine spaces denote ice quiver obtained deleting neutral frozen vertices let define weight configuration recall lemma degree side result show ring regular functions base affine space graded upper cluster algebra proof similar easier conf treatment may little sketchy recall open set open embedding localization positive exactly particular contained laurent polynomial ring consider open embedding conf note map idh conf map defined section let rational inverse map followed second component projection 
define birational map composition map viewed variation fominzelevinsky twist automorphism big cell readers check differ fibrewise rescaling along toric fibre need fact let following commutative diagram map fact regular regular image conf conf finish proof need three maps analogous theorem pullback related sequence mutations precisely mutable tensor product multiplicities via upper cluster algebras analogous lemma mutable obviously also analogous maps map resp forgets coordinates corresponding positive resp negative frozen vertices following analog lemma respect respect mrq equal respectively restrict iart subquiver denote restricted potential almost proof theorem shows set given lattice points polyhedral cone cone defined two sets defining conditions namely ghu ghr almost argument get following analog lemma weight configuration define polytope definition lemma polytope lattice points less dim theorem suppose trivially valued ring regular functions graded upper cluster algebra moreover cluster model particular weight multiplicity dim counted lattice points proof know seed satisfying condition lemma lemma graded subalgebra theorem linearly independent set lemma implies actually span vector space conclude equal cluster model remark type theorem proved example using technique projection map induced decomposition module category triple flag quiver general cluster algebra also strictly contained upper cluster algebra example still given type take example remark let proj homotopy category ice quiver obtained art quiver freezing negative positive vertices two seeds mutation equivalent seed via quivers obtained deleting positive negative frozen vertices respectively remark laced cases suppose simply laced equivalently trivially valued prove conjecture let exam arguments proving theorem find two missing jiarui fei ingredients one straightforward generalization theorem analogous cluster character species potentials although proof theorem depends result involving preprojective 
algebras author much longer proof cases using conjectural generalization lemma species potentials missing cluster character proved usual qps existence map upper cluster algebra also equivalent lemma argument generalized upshot missing part conjecture certain generalization lemma expect result appear subsequent paper twisted cyclic shift via mutations mutation representations review material let section mutation vertex defined follows first step define put union following new three different kinds arrows incident composite arrow opposite arrow resp incoming arrow resp outgoing arrow result first two steps definition new potential note given obtained substituting words occurring finally define reduced part definition last step refer readers section details start define mutation decorated representations consider resolution simple module indecomposable injective representation corresponding vertex thus triangle linear maps tensor product multiplicities via upper cluster algebras first define decorated representation set ker ker ker ker arrows incident set making representation defined choice linear maps refer readers section details finally define reduced part definition recall form definition also define dual using injective presentations let representation minimal injective presentation dual reduced weight vector definition also extended decorated representations similar recall section seed mutation definition induces mutation recall mutation rule let definition define dual representation gre gre variety parameterizing subrepresentations remark gre need dual version dual dual key lemma lemma let arbitrary representation nondegenerate let related related remark min vertices equivalently reads jiarui fei known corresponds general presentation gcoherent obtained positive simple via sequence mutations corresponds indecomposable rigid presentation particular general twisted cyclic shift via mutations recall twisted cyclic shift conf induced understand map introduce half 
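The mutation of quivers with potentials sketched here is the Derksen–Weyman–Zelevinsky premutation followed by reduction; up to the path-composition and ordering conventions (which vary between references), it takes the form:

```latex
% Premutation of a QP (Q, W) at a mutable vertex k: for each pair of arrows
% a: i -> k and b: k -> j, add a composite arrow [ba]: i -> j, and replace
% every arrow c incident to k by an opposite arrow c*.  The new potential is
\[
  \tilde{\mu}_k(W) \;=\; [W]
  \;+\; \sum_{\substack{a:\, i \to k \\ b:\, k \to j}} [ba]\, a^{*} b^{*},
\]
% where [W] is W with every occurrence of a factor ba through k replaced by
% the new arrow [ba].  The mutation mu_k(Q, W) is the reduced part of the
% premutation, i.e. the QP obtained by splitting off the trivial part.
```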
map induced denote map clear let recall sequence mutations constructed section sequence originally defined ice quiver terms reduced expression translate setting recall label vertices pair lemma let max first assume vertices ordered totally order mutable vertices relation starting minimal vertex ascending order defined perform sequence mutations mutable vertex vertex sequence mutations defined whole sequence mutations let permutation defined mutable idi idi clear involution set mutable vertices applying quiver view relabelling vertices trivially valued dynkin quiver let associated preprojective algebra recall module category mod frobenius category let stable category recall stable category naturally triangulated shift functor given relative inverse syzygy functor readers find standard terminology example write authors defined tilting module mod denoted proved quiver endomorphism algebra exactly frozen vertices corresponding objects mod moreover known stable endomorphism algebra jacobian algebra see remark rigid associate function function turns cluster variable standard seed mutation maximal rigid modules defined compatible seed mutation particular compatible quiver mutation quiver exactly proposition proposition sequence mutations takes modules mutable tensor product multiplicities via upper cluster algebras corollary identifying subquiver restricted proof since autoequivalence mutable part let section authors described mutated quiver applying follows easily description forgetting arrows frozen vertices subquiver whose frozen vertex identified frozen vertex let restriction mutable part next compute cluster variables applying seed initial cluster variable vertex unit vector theorem nothing positive simple representation view lemma simple representation proof equivalent show let study mutated description mutated quiver section formula find play role words suffices compute full linear subquiver vertices using formula easily prove induction otherwise conclude let set 
given eidi let weight configuration defined involution induced corollary mutable view simple representation proof lemma obtained principal part going recover extended part principal part recall proposition mutation invariant general presentation equal corresponds indecomposable projective representation lemma homq make supported jiarui fei must add least similarly must add least eidi well hand easy check adding additional positive component frozen vertices make general presentation decomposable case hence last statement recall hence corollary restricted proof corollary mutable part well remains show blocks restricted corresponding positive neutral frozen vertices equal follows easy linear algebra consideration indeed let restricted denote blocks write contains weights corresponding positive neutral frozen vertices since full rank linear equation overdetermined solving seen clear satisfies block definition let define corollary restricted mutable note fixes mutable vertices shuffles frozen vertices idi corollary proof since fact follows corollary pointed functor also proposition says mutation maximal rigid modules compatible mutation corresponding see remarks proposition proposition positive simples invariant invariance obviously extends desired result follows experiment believe following conjecture conjecture corollary thus corollary theorem hold valued well tensor product multiplicities via upper cluster algebras induced define rational mutable sidi since main theorem theorem use following theorem assume conf proof theorem conf pullback defined easy see fact conf containment holds moreover obtained base change cluster mutation twisted cyclic shift clearly defined well theorem map equal pullback twisted cyclic shift proof need show suffices prove statement extended cluster definition upper cluster algebra cluster shall take corollary statement clearly true coefficient variables let assume mutable ind need show argue multidegrees corollary degree according degree 
also degree lemma must show consider relevant constructions remark note clearly commutes must algorithm section present algorithm find subrepresentations defined section said introduction type need algorithm little effort one show corollary strengthened orientation type orientations type particular case whether holds checked computer first observe description theorem subrepresentations known particular dual even better mutated positive simple unfrozen via bold defined beginning appendix indeed easily checked pby mutated representation exactly since cokernel general presentation weight claim follows remark idea algorithm generate dual resp sequence mutations resp defined proposition view representations original via automorphisms jiarui fei proof clear unchanged corollary mutated view original via automorphism note ifp clearly cokernel general presentation weight hence argument similar positivity cluster variables coefficients cluster variables positive compute resp applying resp subrepresentations using formula way find acknowledgement author would like thank ncts national center theoretical sciences shanghai jiao tong university financial support would also like thank professor bernard leclerc interesting discussion taipei conference representation theory jan references auslander reiten modules functors representation theory algebras cocoyoc cms conf vol amer math providence auslander reiten representation theory artin algebras corrected reprint original cambridge studies advanced mathematics cambridge university press cambridge assem simson elements representation theory associative algebras london mathematical society student texts cambridge university press buan iyama reiten smith mutation objects potentials amer math berenstein zelevinsky triple multiplicities spectrum exterior algebra adjoint representation alg berenstein zelevinsky tensor product multiplicities canonical bases totally positive varieties invent math berenstein fomin zelevinsky cluster 
algebras III: Upper bounds and double Bruhat cells, Duke Math. J.
Cerulli Irelli, Feigin, Reineke, Desingularization of quiver Grassmannians for Dynkin quivers, Adv. Math.
Dlab, Ringel, Indecomposable representations of graphs and algebras, Mem. Amer. Math. Soc.
Derksen, Fei, General presentations of algebras, Adv. Math.
Derksen, Weyman, Semi-invariants of quivers and saturation for Littlewood-Richardson coefficients, J. Amer. Math. Soc.
Derksen, Weyman, Zelevinsky, Quivers with potentials and their representations I: Mutations, Selecta Math.
Derksen, Weyman, Zelevinsky, Quivers with potentials and their representations II: Applications to cluster algebras, J. Amer. Math. Soc.
Fei, Cluster algebras and semi-invariant rings I (triple flags), Proc. Lond. Math. Soc.
Fei, Cluster algebras and semi-invariant rings II (projections), Math.
Fei, Cluster algebras, invariant theory and Kronecker coefficients, Adv. Math.
Fei, Polyhedral models for tensor product multiplicities, to appear in Contemp. Math.
Fock, Goncharov, Moduli spaces of local systems and higher Teichmüller theory, Publ. Math. Inst. Hautes Études Sci.
Fock, Goncharov, Cluster ensembles, quantization and the dilogarithm, Ann. Sci. Éc. Norm. Supér.
Fomin, Pylyavskyy, Tensor diagrams and cluster algebras.
Fomin, Zelevinsky, Double Bruhat cells and total positivity, J. Amer. Math. Soc.
Fomin, Zelevinsky, Cluster algebras I: Foundations, J. Amer. Math. Soc.
Fomin, Zelevinsky, Cluster algebras IV: Coefficients, Compos. Math.
Geiss, Leclerc, Schröer, Rigid modules over preprojective algebras, Invent. Math.
Geiss, Leclerc, Schröer, Auslander algebras and initial seeds for cluster algebras, J. Lond. Math. Soc.
Geiss, Leclerc, Schröer, Kac-Moody groups and cluster algebras, Adv. Math.
Geiss, Leclerc, Schröer, Generic bases for cluster algebras and the Chamber Ansatz, J. Amer. Math. Soc.
Goncharov, Shen, Geometry of canonical bases and mirror symmetry, Invent. Math.
Goodman, Wallach, Symmetry, Representations, and Invariants, Graduate Texts in Mathematics, Springer, Dordrecht.
Gross, Hacking, Keel, Kontsevich, Canonical bases for cluster algebras.
Keller, Quiver mutation in Java, http
Kostant, A formula for the multiplicity of a weight, Trans. Amer. Math. Soc.
Knop, Kraft, Vust, The Picard group of a G-variety, in Algebraische Transformationsgruppen und Invariantentheorie, DMV Sem., Basel.
Knutson, Tao, The honeycomb model of GL_n tensor products I: Proof of the saturation conjecture, J. Amer. Math. Soc.
Lee, Schiffler, Positivity for cluster algebras, Ann. of Math.
van Leeuwen, Cohen, Lisser, LiE, a package for Lie group computations, http
Labardini-Fragoso, Zelevinsky, Strongly primitive species with potentials I: Mutations, Boletín de la Sociedad Matemática Mexicana.
Littelmann, A Littlewood-Richardson rule for symmetrizable Kac-Moody algebras, Invent. Math.
Lusztig, Canonical bases arising from quantized enveloping algebras, J. Amer. Math. Soc.
Parthasarathy, Ranga Rao, Varadarajan, Representations of complex semi-simple Lie groups and Lie algebras, Ann. of Math.
Plamondon, Generic bases for cluster algebras from the cluster category, Int. Math. Res. Not.
Popov, Vinberg, Invariant theory, in Algebraic Geometry IV, Encyclopaedia of Mathematical Sciences, Springer, Berlin.
Shapiro, Shapiro, Vainshtein, Zelevinsky, Simply laced Coxeter groups and groups generated by symplectic transvections, Michigan Math. J.
Zelevinsky, From Littlewood-Richardson coefficients to cluster algebras in three lectures, in Symmetric Functions 2001: Surveys of Developments and Perspectives, NATO Sci. Ser. II Math. Phys. Chem., Kluwer Acad. Publ., Dordrecht.
Shanghai Jiao Tong University, Shanghai, China
E-mail address: jiarui
List of Figures: auxiliary support function; circuit realization of the Chebyshev filter; circuit realization of the passband filter; possible locations of the roots of the equation, calculated by the formula, in case 1; possible locations of the roots in case 2; possible locations of the roots in case 3; flowcharts of the algorithms (start/stop decision diagrams with the calculation and index-update steps of the main cycle); plot of the function for determining the cutoff frequency of the Chebyshev filter; plot of the function for determining the cutoff frequency of the passband filter.
List of Tables: characteristics of the functions utilized in the numerical experiments (for each test function, built from sin, cos and exp terms: number of roots and number of local extrema); comparison of the grid technique with the methods using smooth auxiliary functions; results of the two methods for solving optimization problems arising in electronic measurements and electrical engineering, reported per test function and grid.
The algebra of interpolatory cubature formulae for generic nodes
Claudia Fassino, Giovanni Pistone, Eva Riccomagno
Submitted November
Abstract. We consider the classical problem of computing the expected value of a real function of a random variable using cubature. We use a synergy of tools from commutative algebra, cubature, elementary orthogonal polynomial theory and probability.
Keywords: design of experiments; cubature; algebraic statistics; orthogonal polynomials; evaluation of expectations.
1. Introduction
consider classical problem computing expected value real function random variable linear combination values finite set points general cubature problem determine classes functions finite set nodes positive weights probability distribution random vector univariate case set set zeros node polynomial orthogonal polynomial see sec much known multivariate case unless set nodes product sets similar setting appears statistical design experiment doe one considers finite set treatments experimental outputs function treatment set treatments set nodes described efficiently zeros systems polynomial equations called variety commutative algebra framework systematic algebraic statistics tools modern computational commutative algebra used address problems statistical inference modelling see doe set called design affine structure ring real functions analyzed detail represents set real responses treatments however algebraic setting euclidean structure computation mean values missing algebraic design experiment computation mean values obtained considering special sets called factorial designs see note zero set polynomial purpose present paper discuss comes together considering orthogonal polynomials particular consider algorithms commutative algebra cubature problem mixing tools elementary orthogonal polynomial theory probability vice versa formula provides interesting
Affiliations: Claudia Fassino, Eva Riccomagno, Dipartimento di Matematica, Università di Genova, Via Dodecaneso, Genova, Italy (fassino, riccomagno). Giovanni Pistone, Collegio Carlo Alberto, Via Real Collegio, Moncalieri, Italy.
interpretation rhs term expected value proceed steps increasing degree generality section consider univariate case take admit orthogonal system polynomials let univariate division given polynomial exist unique degree smaller number points degree furthermore written lagrange polynomial show expected values coincide fourier expansion respect orthogonal polynomial system zero weights expected values lagrange polynomials case design proper subset zero set orthogonal polynomial developed section section standard gaussian probability law zero set hermite polynomial applying theory give representation hermite polynomials including degree sum element polynomial ideal generated reminder see theorem following discussion particular equation unsurprisingly reminiscent iterated integrals point describe ring structure space generated hermite polynomials certain order ring structure essentially aliasing functions induced limiting observations particular form recurrence relationship hermite polynomials makes possible suspect study ring structure systems orthogonal polynomials require different tools use result implies system equations theorem extended multidimensional case section gives implicit description design weights via two polynomial equations envisage applicability choice suitable classes functions developed section contains general restrict product probability measures consider set distinct points type algorithm provided works exclusively generated orthogonal polynomials suitable degree gives generating set vanishing ideal expressed terms orthogonal polynomials used determine sufficient necessary conditions polynomial function holds suitably defined weights furthermore exploiting fourier expansion basis vanishing ideal results exactness cubature shown course interest determine generalisations results cases product measure still admits orthogonal system polynomials basic commutative algebra start notation polynomials ring polynomials real coefficients indeterminate fassino 
dimensional vector integer entries indicates monomial indicates monomials one term ordering case designs product form share commonalities one dimension case term orders much used standard quadrature theory see refining division partial order proper actually relevant multivariate cases total degree monomials symbol indicates set polynomials total degree vector space polynomials total degree let finite set distinct points probability measure random vector probability distribution expected value random variable given term ordering let form basis respect see ideal polynomials vanishing exist unique largest term divisible largest term note necessarily unique polynomial referred reminder normal form often indicated symbol shorter version indicates polynomial ideal generated moreover monomials divisible largest terms form vector basis monomial functions vector space real functions polynomials fundamental applications algebraic geometry finite spaces various general purpose softwares including maple mathematica matlab computer algebra softwares like cocoa macaulay singular allow manipulation polynomial ideals particular compute reminders monomial bases polynomial written uniquely indicator polynomial point equation follows fact space basis expected value random polynomial function respect algebra cubature linearity paper discuss classes polynomials design points equivalently one dimension polynomial vanishing degree forms basis indicates number elements set furthermore satisfies three main properties polynomial degree less equal suitable section consider algebra orthogonal polynomials one variable theorem favard theorem let sequences real numbers let defined recurrently form system orthogonal polynomials system monic orthogonal polynomials monic case section let zero set polynomial orthogonal constant functions respect next recall basics orthogonal polynomials use see let finite infinite interval posir tive measure moments exist finite particular polynomial function square 
integrable scalar product defined related inner product defiwe consider whose nite positive case unique infinite sequence monic orthogonal polynomials respect denote furthermore form real vector space basis orthogonal polynomials total degree smaller exists unique called fourier coefficient finite number zero since inner product satisfies shift property hxp corresponding orthogonal polynomial system satisfies recurrence relationship precisely orthogonal polynomial systems real line satisfy recurrence relationships conversely favard theorem holds hold true therefore norm computed orthonormal polynomials christoffeldarboux hold orthogonal polynomials algebra example inner products sobolev type namely vis positive measures possibly different support satisfy shift condition neither complex hermitian inner products theorem let zero set orthogonal polynomial respect distribution real random variable consider division giving exist weights expected value fourier coefficient polynomial remark theorem version well known result see sec include proof underline particular form error quadrature formula used theorem section applying theorem proof set contains distinct points univariate polynomial write uniquely deg deg max deg furthermore indicator functions expression lagrange polynomials namely fassino proof hence particular case theorem occurs degree less case degree shows quadrature rule nodes given zeros weights gaussian quadrature rule exact polynomial functions degree smaller equal notes quadrature rules see example example identification polynomial degree write constant term given particular deg coefficients fourier expansion computed evaluation general polynomial degree possibly larger theorem gives fourier expansion reminder indeed theorem generalises theorem generic finite set distinct points say dicator function let unique monic polynomial vanishing degree write polynomial uniquely consider fourier expansions theorem notation proves theorem condition theorem linear 
fourier coefficients found easily polynomial division first fourier coefficients appearing conditions theorem determined solving system linear equations otherwise matrix first orthogonal polynomials theorem used two ways least known condition theorem checked verify expected value determined gaussian quadrature rule nodes weights symmetric function polynomial fourier coefficients computed analogously adapting equation unknown polynomial finite number unknown real coefficients theorem characterizes polynomials gaussian quadrature rule exact namely furthermore characterization linear expression unknown equation linear combinations coefficients section shall specialise study hermite polynomials section shall generalise theorem higher dimension conclude section discuss remainder orthogonal projection remark let write degree less algebra cubature unique polynomial orthogonal rephrasing characteristic property remainder belongs orthogonal exist two would space hence null deg orthogonal projection fact leading coefficient therefore multiple indeed orthogonal deg orthogonal projection differs unless projection zero example substituting fourier expansions division find coefficient fourier expansion written hermite polynomials simplified using product formula theorem section hermite polynomials another way look algebra orthogonal polynomials discuss case hermite polynomials reference measure tribution real valued differentiable function define consider following identity holds dxn rodrigues formula identity operator relationn ships dhn recurrence relationship xhn deduced hermite polynomials orthogonal respect standard normal distribution indeed equation already mentioned spans orthogonal polynomial degree different ring structure space generated hermite polynomials described theorem theorem fourier expansion product hhk first hermite polynomials proof note scalar product obvious space let square integrable functions identity holds operators standard normal distribution holds 
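As a concrete check of the exactness statement, here is a minimal numerical sketch for the standard Gaussian case: the nodes are the zeros of the probabilists' Hermite polynomial He_3(x) = x^3 - 3x, the weights are the classical Gauss-Hermite ones, and the rule reproduces E[p(X)] for every polynomial p of degree at most 2n - 1 = 5. The test polynomial is an arbitrary choice.

```python
import math

# Gauss-Hermite (probabilists') rule with n = 3 nodes:
# zeros of He_3(x) = x^3 - 3x; weights n! / (n^2 * He_{n-1}(x_i)^2) -> 1/6, 2/3, 1/6
nodes = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]
weights = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]

def cubature(f):
    """Approximate E[f(X)] for X ~ N(0,1) by the 3-point rule."""
    return sum(w * f(x) for w, x in zip(weights, nodes))

# Exact for every polynomial of degree <= 2n - 1 = 5:
p = lambda x: x**5 - 2 * x**4 + x**2 + 7   # E[p(X)] = 0 - 2*3 + 1 + 7 = 2
print(cubature(p))                          # -> 2.0 (up to rounding)
# Degree 6 breaks exactness: E[X^6] = 15, but the 3-point rule gives 9
print(cubature(lambda x: x**6))             # -> 9.0 (up to rounding)
```

The degree-6 failure illustrates that the error of the rule is controlled by the Fourier coefficient of the remainder with respect to the node polynomial, as stated above.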
constant one mapped hermite polynomial defined direct computation using proves following wellknown facts square integrable see lemma proposition polynomials satisfy conditions also called operator standard normal distribution linear operator namely fassino example aliasing application theorem observe recurrence relation hermite polynomials equation corollary let let degree smaller let xhn evaluated zeros say becomes indicates equality holds general let fourier expansion normal form simplified notation fourier coefficients substitution product formula theorem gives formula write terms fourier coefficients lower order hermite polynomials proof equation steps followed proof theorem conclude equating coefficients gives closed formula table normal form respect written terms hermite polynomials degree smaller example values algebraic characterisation weights theorem gives two polynomial equations whose zeros design points weights particular case formula provide proof highlight algebraic nature result proof theorem let table aliasing example observe degree equivalently coefficients degree clean give another proof theorem hermite polynomials exists one polynomial degree furthermore equivalently polynomial satisfies proof univariate polynomial interpolation polynomial values distinct points hence exists unique degree observe polynomials substituh tion evaluation give hek hek hek matrix form equations becomes htn diag algebra cubature square matrix diag indicates diagonal matrix invertible provided section polynomial system conis sidered observe diag let polynomial degree typical remainder division write furthermore note diag htn diag expected value substitution equation gives holds system equations rewriting previous parts theorem first equation states values considered second equation proven item theorem states weights strictly positive theorem applied constant polynomial shows sum one words mapping associates discrete probability density theorem states expected value 
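The three-term recurrence He_{k+1}(x) = x He_k(x) - k He_{k-1}(x) and the Fourier expansion of a product of Hermite polynomials can be verified symbolically on coefficient lists. The linearization formula He_m He_n = sum_k C(m,k) C(n,k) k! He_{m+n-2k} used below is the classical closed form of the product expansion discussed here.

```python
from math import comb, factorial

def hermite(n):
    """Coefficients (low -> high) of He_n via He_{k+1} = x He_k - k He_{k-1}."""
    a, b = [1], [0, 1]                 # He_0, He_1
    if n == 0:
        return a
    for k in range(1, n):
        c = [0] + b                    # x * He_k
        for i, ai in enumerate(a):     # - k * He_{k-1}
            c[i] -= k * ai
        a, b = b, c
    return b

def poly_mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

def poly_add(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0) for i in range(n)]

def linearize(m, n):
    """Fourier expansion of the product: He_m He_n = sum_k C(m,k)C(n,k)k! He_{m+n-2k}."""
    out = [0]
    for k in range(min(m, n) + 1):
        coef = comb(m, k) * comb(n, k) * factorial(k)
        out = poly_add(out, [coef * c for c in hermite(m + n - 2 * k)])
    return out

# He_2 * He_2 = He_4 + 4 He_2 + 2, i.e. (x^2-1)^2 = (x^4-6x^2+3) + 4(x^2-1) + 2
assert poly_mul(hermite(2), hermite(2)) == linearize(2, 2)
```

Only finitely many Fourier coefficients appear, and they involve Hermite polynomials of strictly lower order whenever k > 0, which is what drives the aliasing tables computed in this section.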
polynomial functions equal expected value discrete random variables given coefficients equation equated give apply lagrange polynomial whose fourier expansion using equation obtain degree reduced using example polynomial theorem determined larger values algorithm situations design experimental plan gaussian quadrature rule exact computation weights might necessary need explicit values weights required computation done outside symbolic computation setting need solve get evaluate find example let positive integer given let positive integers theorem issue evaluation zeros recurrence relationship used values tabulated code weighing polynomial polynomial theorem called weighing polynomial table gives code written specialised software symbolic computation called cocoa compute fourier expansion based theorem line specifies number nodes line establishes working environment polynomial ring whose variables first polynomials plus extra variable encodes weighing polynomial convenient work elimination termordering called elim variable appear least possible lines construct hermite polynomials using recurrence relationships specifically provide expansion line states giving nodes quadrature line polynomial second equation system item theorem gives weights equations collected ideal whose basis computed line application interesting bases contains polynomial appears alone term degree one element basis relates explicitly desired weighing polynomial first polynomials use elim eqs append eqs endfor append eqs append eqs eqs last table computation fourier expansion weighing polynomial using theorem fassino polynomial degree theorem random variable probability law discrete random variable taking value probability first equality follows fact zero last equality definition conditional expectation another approach consider polynomial whose zeros elements say consider lagrange polynomials namely lzf line table gives polynomial obtained set line namely nodes values weights showing nodes weights 
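For a generic set of distinct nodes, the weights lambda_z = E[l_z(X)] can be obtained by solving the Vandermonde moment system, in the spirit of the algebraic characterization above. This is a sketch in exact rational arithmetic; the five nodes are an arbitrary choice, the measure is standard Gaussian, and for generic nodes the weights need not be positive.

```python
import math
from fractions import Fraction

def normal_moment(k):
    """E[X^k] for X ~ N(0,1): 0 for odd k, (k-1)!! for even k."""
    if k % 2 == 1:
        return 0
    return math.prod(range(1, k, 2))   # empty product = 1 for k = 0

def weights_for(nodes):
    """Solve sum_j w_j z_j^k = E[X^k], k = 0..n-1, for w_j = E[l_{z_j}(X)]."""
    n = len(nodes)
    A = [[Fraction(z) ** k for z in nodes] for k in range(n)]
    b = [Fraction(normal_moment(k)) for k in range(n)]
    # Gauss-Jordan elimination over the rationals
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv], b[col], b[piv] = A[piv], A[col], b[piv], b[col]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * c for a, c in zip(A[r], A[col])]
                b[r] = b[r] - f * b[col]
    return [b[i] / A[i][i] for i in range(n)]

nodes = [-2, -1, 0, 1, 2]
w = weights_for(nodes)
# exact for any polynomial of degree < 5, e.g. E[(X-1)^2] = 1 - 0 + 1 = 2
approx = sum(wi * (z - 1) ** 2 for wi, z in zip(w, nodes))
print(approx)   # -> 2
```

By construction the weights sum to one (the k = 0 condition), so they define a signed discrete measure supported on the design.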
algebraic numbers rational numbers mac intel core duo processor ghz using cocoa release result obtained cpu time user time cpu time user time cpu time user time cpu time user time observe computations done results stored observe furthermore line cocoa commands last could substituted improve computational cost requires computation basis reduction minor point observe symbol would appear line lemma let lagrange polynomial remainder lagrange polynomial respect namely lzf proof exists unique polynomial degree small furthermore lzf two polynomials laf degree smaller coincide points interpolation must equal polynomial degree write lzf let degree fractional design section return case general orthogonal polynomials positive measure assume nodes proper subset points work within two different settings one ambient design considered one consider indicator function subset namely represented polynomial degree function defined let polynomial degree product note error gaussian quadrature rule linear fourier coefficients also fourier coefficients node polynomial generalised section fraction coincides ambient design hence contains points polynomial algebra cubature degree obtain well known result zero error fourier coefficient node polynomial order general one try determine pairs sets polynomials absolute value errors minimal theorem holds higher dimension zero set orthogonal polynomials design support section return higher dimensional section restrict consider product measure independent random variables one distributed according probability law design take product grid zeros orthogonal polynomials respect precisely design points interpolation nodes orthogonal polynomial respect degree lagrange polynomial point defined lykk apex indicates univariate lagrange polynoyk mial dnk span equal linear space generated monomials whose exponents lie integer grid polynomial written unique degree variable smaller belongs span coefficients fourier expansion respect variable functions let denote 
vector obtained removing component write finite number zero independence expected value lagrange polynomial lynkk lyk expected value univariate random lagrange polynomial previous sections proof proof similar theorem simpler notation design grid given dnm independent random variables distributed according polynomial decomposed lan lbm lan lbm taking expectation using independence orthogonality note proof sufficient condition zero degree smaller similarly retrieve results degree smaller gaussian theorem applied variable weights nodes satisfy polynomial system hnd hnd grid section gaussian case analogy example fourier coefficients polynomials low enough degree determined exactly values polynomials grid points shown example fassino table indicator functions example example consider square grid size dnn polynomial degrees smaller hermite polynomials standard normal distribution write degree degree smaller chk ckh indicator function belongs note indicator function fraction dnn independent normally distributed random variables write linear combination indicator functions points ckh span shown table expected values given furthermore linearity example deals general design introduces general theory section example let zero set conclude namely given five points write polynomial span span furthermore key points example determine class polynomial functions determine indicator functions points section give algorithms fraction algebra cubature theorem notation equation higher dimension general design support previous sections considered particular designs whose sample points zeros orthogonal polynomials gaussian case exploited ring structure set functions defined design order obtain recurrence formula write fourier coefficients higher order hermite polynomials terms lower order hermite polynomials example also deduced system polynomial equations whose solution gives weights quadrature formula mathematical tools allowed equation particular structure implies hermite polynomials 
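A two-dimensional product-grid illustration of the theorem: with the 3-point probabilists' Gauss-Hermite rule on each axis, the tensor rule is exact for polynomials whose degree in each variable is at most 5. The grid size and the test functions are illustrative choices.

```python
import math
import itertools

# 3-point probabilists' Gauss-Hermite rule per axis (zeros of He_3, classical weights)
nodes1d = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]
wts1d = [1 / 6, 2 / 3, 1 / 6]

grid = list(itertools.product(nodes1d, nodes1d))
wts = [wx * wy for wx, wy in itertools.product(wts1d, wts1d)]

def cubature2d(f):
    """E[f(X,Y)] for independent X, Y ~ N(0,1) on the 3x3 product grid."""
    return sum(w * f(x, y) for w, (x, y) in zip(wts, grid))

# exact when each variable appears with degree <= 5:
print(cubature2d(lambda x, y: x**2 * y**2))       # -> 1.0 = E[X^2] E[Y^2] (up to rounding)
print(cubature2d(lambda x, y: x**4 * y**2 + y))   # -> 3.0 (up to rounding)
```

The product structure is what makes the weights factor node-by-node; for a design that is not a grid, the weights must be computed from the evaluation matrix as in the next section.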
recurrence relation general orthogonal polynomials theorem section switch focus consider generic set points design nodes cubature formula generic set orthogonal polynomials gain something lose something essential computations linear computation basis finite set distinct points buchberger type algorithm table based finding solutions linear systems equations section give characterisation polynomials expected values linear expression fourier coefficients square free polynomial degree two larger set fourier coefficients see equation given set points algorithm table returns reduced basis design ideal expressed linear combination orthogonal polynomial low enough degree directly computes basis working space orthogonal polynomials lose equivalent theorem hermite polynomials particular know yet impose ring structure span generic orthogonal polynomials miss general formula write product linear combination fundamental aliasing structure discussed hermite polynomials multivariate cubature refer together basic references section clarity repeat basics notation let probability measure associated orthogonal polynomial system associate monomial product polynomials note system orthogonal polynomials product measure theorem describes correspondence monomial linear combination component wise vice versa holds component wise proof proof item induction item follows rearranging coefficients product given appendix example hermite polynomial item theorem gives well known result odd even direct application theorem cumbersome need characterise polynomial functions cubature formula exact proceed another way finite set distinct points associated vanishing polynomial ideal let denote largest term polynomial respect let evaluation vector polynomial finite set polynomials let evaluation matrix whose columns evaluation vectors polynomials doe often matrix called mentioned end section space real valued functions defined linear space particularly important vector space bases constructed follows let 
basis define two interesting vector fassino space bases let define example since sets depend well known divides follows component wise also belongs note induces total ordering also orthogonal polynomials analogously used symbol indicate related orderings given componentwise since divides since given uniquely written leading term tail linear combination terms preceding theorem provides alternative classical method rewriting polynomial terms orthogonal polynomials applying theorem substituting monomial theorem gives linear rules write elements remainder polynomial divided linear combinations orthogonal polynomials low enough order proof appendix theorem span span let reduced basis uniquely written solves linear system words coefficient matrix evaluation matrix orthogonal polynomials tail vector constant terms evaluation vector let polynomial evaluation vector polynomial defined step include include elements multiples element return step include polynomial values components solutions linear system step delete multiples table algorithm using orthogonal polynomials solves linear system input set distinct points vector norm output reduced basis linear combination orthogonal polynomials set step let step stop else set delete solve overdetermined linear system compute residual theorem provides compute basis interpolating polynomial terms orthogonal polynomials low order directly table gives algorithm variation algorithm starts finite set distinct points returns expressions reduced basis performing linear operations real vector assigned expression found using item theorem permits rewrite every polynomial linear combination orthogonal polynomials algorithm table returns basis linear combination orthogonal polynomials performs operations orthogonal polynomials particular involve step monomials computationally faster first computing classical basis next substituting furthermore working one vector space basis switching conceptually appealing unique polynomial belonging span 
summarising given function finite set distinct points probability product measure system product orthogonal polynomials random vector probability distribution expected value respect approximated algebra cubature computing algorithm table determining solving linear system unique polynomial polynomial expressed linear combination orthogonal polynomials coefficient required approximation recall linear combination indicator functions points lagrange polynomials hence particular computed applying notice however product measure obtained ones noticed theorem would interesting generalise section measures algorithm provided approximate expected value polynomials next set polynomials whose expected value coincides value cubature formula characterised section provide characterisation full set via linear relationships fourier coefficients suitable polynomials satisfy section possibly proper subset characterised via simple condition total degree polynomials characterisation polynomial functions zero expectation section characterise set polynomials whose expected value coincides value cubature formula mentioned section given vanishing ideal basis respect polynomial written two ways first study fourier expansion elements next present results degree elements belonging elements characterized theorem note span linearity independence theorem let product probability measure product orthogonal polynomials let random vector distribution let set distinct points basis whose elements linear combinations orthogonal polynomials thus write stands let suitable consider fourier expansion proof key observation linearity used proof found appendix importantly terms low enough fourier order equation matter computation expectation example consider two independent standard normal random variables hence hermite polynomials consider also five point design unique span written product lagrange polynomials section theorem states write hence study set monomials algorithm table gives equivalent characterize 
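The evaluation-matrix computation of E[f] for a design that is not a product grid can be sketched as follows. The five-point "star" design and the order-ideal basis {1, x, y, x^2, y^2} are hypothetical choices (note that xy vanishes on this design, so it lies in the vanishing ideal and is excluded from the basis), and the computed weights E[l_z] turn out not to be all positive.

```python
from fractions import Fraction

design = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1)]
basis = [lambda x, y: 1, lambda x, y: x, lambda x, y: y,
         lambda x, y: x * x, lambda x, y: y * y]        # order-ideal basis; xy vanishes on the design
moments = [Fraction(m) for m in (1, 0, 0, 1, 1)]        # E[1], E[X], E[Y], E[X^2], E[Y^2]

def solve(A, b):
    """Gauss-Jordan elimination over the rationals."""
    n = len(b)
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv], b[col], b[piv] = A[piv], A[col], b[piv], b[col]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * c for a, c in zip(A[r], A[col])]
                b[r] = b[r] - f * b[col]
    return [b[i] / A[i][i] for i in range(n)]

# solve sum_j w_j b_k(z_j) = E[b_k] for the weights w_j = E[l_{z_j}]
A = [[Fraction(bk(*z)) for z in design] for bk in basis]
w = solve(A, moments)
print([str(v) for v in w])   # -> ['-1', '1/2', '1/2', '1/2', '1/2']: a negative weight

rule = lambda f: sum(wi * f(*z) for wi, z in zip(w, design))
print(rule(lambda x, y: x * x + y * y))   # -> 2 = E[X^2 + Y^2], exact on span(basis)
print(rule(lambda x, y: x * x * y * y))   # -> 0, but E[X^2 Y^2] = 1: outside the exactness class
```

The last evaluation shows the point of the characterization: x^2 y^2 reduces to zero on the design while its true mean is one, so it does not belong to the class of polynomials on which the rule is exact.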
set fassino table solution theorem purpose computing expectation polynomial simplified form furthermore equation practice put coefficients two vectors multiply component wise sum result infinite polynomials satisfy equations one polynomial given table equation involves finite number fourier coefficients namely relevant equation smaller leading terms hence add polynomial form still obtain zero mean polynomial modify adding high enough terms without changing mean value example adding obtain following zero mean polynomial adopt another viewpoint characterise set instead studying fourier expansion polynomials focus attention degree given degree compatible term ordering show compute maximum degree degree cubature formula nodes strategy based definition polynomials definition polynomial maximum integer exactness cubature implies furthermore set polynomials theorem reformulates summarizes two theorems degree cubature formula presented considered theorem given set degree compatible term ordering following conditions equivalent basis proof let since hypothesis since let since gqg gqg gqg since degree compatible deg gqg deg follows since gqg linearity obtain remark maximum integer degree cubature formula nodes respect algebra cubature theorem shows maximum integer polynomials total degree coincides maximum hence focus attention elements theorem polynomial deg deg proof since deg identically zero polynomial deg implies always deg moreover deg fact belongs deg orthogonality polynomials identically zero deg following theorem shows detect polynomial analysing fourier coefficients theorem let fourier expansion polynomial polynomial deg polynomial deg proof let constant polynomial implies deg theorem conclude deg let deg let polynomial since deg deg fourier expansion orthogonality polynomials deg generality implies deg corollary given finite set distinct points degree compatible term ordering let reduced basis maximum integer min fourier expansion given forall deg proof theorem 
thesis follows true since theorem example given set points degree compatible term ordering algorithm table reduced basis written since deg follows min corollary follows maximum integer example example reduced basis since follows cubature formula exact polynomials degree nevertheless let remark cubature formula exact much larger class polynomials shown theorem example conclusion paper mixed tools computational commutative algebra orthogonal polynomial theory probability address recurrent statistical problem estimation mean values polynomial functions work shares great similarity applications computational algebra design analysis experiments inspired viewpoint cubature formulae obtained two main results gaussian case obtained system polynomial equations whose solution gives weights quadrature formula theorem finite product measure admits orthogonal system polynomials characterise set polynomials mean value depends substantially equation terms fourier coefficients particular polynomials obtained adapting basis theory fassino appendix proofs theorem proof proof induction monomial degree three terms recurrence formula three terms recurrence formula inductive step thesis holds prove three terms recurrence formula concludes proof first part theorem prove second part apply proved unfold multiplication given polynomial product univariate polynomials degree clearly divide furthermore component wise vice versa applying first part theorem xkj deduce linear combination power products divide power products commuting product sum shows linear combination products component wise divides theorem proof recall defined terms common set vectors integer entries satisfying property theorem follows since belongs span vice versa proved analogously matrix square matrix since many elements full rank linear independence columns matrix follows fact linear combination columns corresponds polynomial span coincides span polynomial written multiple element theorem polynomial appears first sum 
coefficient terms first sum observe also analogously second sum consider since since obvious notation since vector coefficients identity solves linear system furthermore since full rank matrix unique solution system let polynomial whose coefficients solution linear system algebra cubature polynomial obviously interpolates values since columns evaluation vectors elements belongs span conclude unique polynomial belonging span interpolates values theorem proof basis every exist since linearity thesis follows show holds equation substitute fourier expansion given theorem computing expectation use fact different expectation vanishes expectation gives pectation gives acknowledgments pistone supported castro statistics initiative collegio carlo alberto moncalieri italy riccomagno worked paper visiting department statistics university warwick faculty statistics daad grant financial support gratefully acknowledged authors thank wynn monegato politecnico torino hans michael technische dortmund useful suggestions references cocoateam cocoa system putations commutative algebra http comavailable david cox john little donal shea ideals varieties algorithms third undergraduate texts mathematics springer new york mathias drton bernd sturmfels seth sullivant lectures algebraic statistics basel roberto fontana giovanni pistone maria piera rogantin classification factorial fractions statist plann inference walter gautschi orthogonal polynomials computation approximation numerical mathematics scientific computation oxford university press new york paolo gibilisco eva riccomagno maria piera rogantin henry wynn eds algebraic geometric methods statistics cambridge university press cambridge paul malliavin integration probability graduate texts mathematics vol new york buchberger construction multivariate polynomials preassigned zeros computer algebra marseille giovanni peccati murad taqqu wiener chaos moments cumulants diagrams bocconi italia giovanni pistone eva riccomagno henry wynn 
algebraic statistics monographs statistics applied probability chapman boca raton wim schoutens orthogonal polynomials stein method math anal appl stroud secrest gaussian quadrature formulas englewood cliffs yuan polynomial interpolation several variables cubature ideals adv comput math michael construction cubature formulae nodes using groebner bases numerical integration halifax nato adv sci inst ser math phys vol reidel dordrecht
| 0 |
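As a numerical companion to the cubature discussion in the row above: for independent standard normal variables, expected values of polynomials can be checked against a probabilists' Gauss–Hermite rule. This sketch uses the classical 3-point rule (exact for polynomials of degree up to 5) and its tensor product for two variables; it is a standard quadrature illustration, not the Gröbner-basis construction of the paper:

```python
import math

# 3-point Gauss-Hermite rule in probabilists' form (standard normal weight):
# nodes -sqrt(3), 0, sqrt(3) with weights 1/6, 2/3, 1/6; exact through degree 5.
NODES = (-math.sqrt(3.0), 0.0, math.sqrt(3.0))
WEIGHTS = (1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0)

def expect_1d(p):
    """Approximate E[p(X)] for X ~ N(0, 1)."""
    return sum(w * p(x) for x, w in zip(NODES, WEIGHTS))

def expect_2d(p):
    """Approximate E[p(X, Y)] for independent standard normals,
    via the product (tensor) rule, matching the product measures above."""
    return sum(wx * wy * p(x, y)
               for x, wx in zip(NODES, WEIGHTS)
               for y, wy in zip(NODES, WEIGHTS))
```

The rule reproduces the Hermite moment identities exactly — e.g. E[X²] = 1, E[X⁴] = 3 and E[X²Y²] = 1 — which is the kind of check underlying the characterisation of zero-mean polynomials in the row above.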
Depression Scale Recognition from Audio, Visual and Text Analysis. Shubham Dham, Anirudh Sharma, Abhinav Dhall (Sep). Department of Computer Science and Engineering, Indian Institute of Technology (IIT) Ropar, India. Abstract — Depression is a major mental health disorder that is rapidly affecting lives worldwide. Depression impacts not only the emotional but also the physical and psychological state of a person. Its symptoms include lack of interest in daily activities, feeling low, anxiety, frustration, loss of weight and even feelings of self-hatred. This report describes work done towards the Audio/Visual Emotion Challenge (AVEC) during a second-year B.Tech. summer internship. With the increasing demand to detect depression automatically with the help of machine learning algorithms, we present a multimodal feature extraction and decision-level fusion approach. Features are extracted by processing the provided Distress Analysis Interview database: a Gaussian mixture model (GMM) clustering and Fisher vector approach is applied to the visual data, along with statistical descriptors on gaze and pose, low-level audio features, head pose and text features. Classification is done on fused as well as independent features using support vector machines (SVM) and neural networks. The results obtained were able to cross the provided baseline on the validation data set for audio features and video features with SVM, and for depression recognition with the neural network (in RMSE and MAE) after fusion. Keywords: fusion, speech processing. I. INTRODUCTION. Depression is a common mood disorder that affects the lives of the individuals suffering from it; it is a worldwide problem affecting innumerable lives. Depressed people are prone to anxiety, sadness, loneliness and hopelessness; they are frequently worried and disinterested, find it hard to concentrate on work and to communicate with people, and may even become introverted. They may suffer from insomnia, restlessness and loss of appetite, and may have suicidal thoughts. To be diagnosed with depression, the symptoms must be present for at least two weeks; hence the detection of depression is a major issue. Present techniques rely on clinicians reviewing the patient in person; such methods are subjective, are carried out over an interview, and depend on the reports of the patient. With the rise of depression, automatic and reliable means of depression recognition are required, and efforts have been made in this direction to assess and detect depression with computer vision and machine learning. Depression detection can be done from audio and video recordings of the patient. Depressed people
behave different normal people detected audio visual recordings patient studies shown depressed people tend avoid eye contact engage less verbal communication speak anxiously short phrases monotonously avec provides opportunity teams around globe come forth develop program identifying depression using dataset provided taking motivation avec team participated avec similar dataset avec hence paper present methods visual audio text feature extraction followed decision level fusion help identify depressed subjects appreciable accuracy evident results obtained elated ork recent researches prominent detecting depressed people previous year avec depression sub challenge brought papers decent accuracies depicted results published yang avec achieved quite plausible accuracy decision tree classification multimodal fusion text features asim jan avec competition used motion history histogram depression recognition extracting local binary pattern lbp edge orientation histogram features eof low proposed description depression recognition using acoustic speech analysis five acoustic feature categories used prosodic cepstral spectral glottal teager energy operator teo based features teo observed perform best paper introduced benefits gaussian mixture model gmm clustering depression recognition douglas sturim also brought forward gmm clustering classification speech data also features head pose blinking rate even textual features used depression recognition head pose facial movement discriminative feature demarcate depressed people depressed people show less nodding likeliness position head people avoid eye contact alghowinem used head pose movement features associated face perform classification svm depression recognition concluded head movements depressed people different normal person inspired present deep learning performance detecting emotion tzirakis avec presented deep learning based approach assessing emotion well depression state person used convolution neural network cnn 
audio deep residual network resnet layers visual data concluded deep learning approach achieved relatively better results methods time pampouchidou avec extracted features recorded verbal communication recorded transcript file provided data arousalvalence rating words negative emotion taken feature concluded removal negative impact overall accuracy obtained trained fused model also cummins avec presented multimodal approach fusing audio visual modalities different techniques detect depression hence recent researches emphasizing need depression recognition automatic fruitful way provide help clinicians patients therefore paper also present approach detect depression iii eature xtraction video features challenge organizers share raw video recordings however facial landmarks action units aus gaze pose features provided therefore performed preprocessing facial landmarks obtained features form head features distance blink rate statistics derived provided features emotion aus gaze pose also used complementary features head features head pose features provided organizers used depict temporal information features convey change respect time hence head motion judged horizontal vertical motion certain facial points points fig points selected minimally involved facial movements expressions blinking smiling facial expressions motion prominently represent head motion facial points change position calculated every consecutive frame change measured horizontal vertical direction well net magnitude statistical features calculated mean median mode displacement horizontal vertical direction magnitude displacement velocity horizontal vertical direction magnitude velocity regarding effective state person hence one may find correlation among implemented features extracted facial regions namely eyes mouth eyebrows head data changes two consecutive frames would negligible hence one every three frames used initially affine transformation applied facial points frames get rid unwanted scale 
translation rotational variance reference points used taken particular frame points frame plotted gave perfectly aligned face respect frame observed videos left eye distance points used horizontal distance vertical distance similarly right eye horizontal vertical distance respectively mouth points used vertical distance horizontal distance average distance two pair points used head horizontal distance average distance two pair vertical distance average since motion two eyebrows occur simultaneously hence horizontal vertical motion calculated together horizontal motion average distance two pair points used vertical motion eyebrows judged respect reference point hardly moves facial expressions nose tip hence vertical motion average distance pairs used hence distances calculated frames stored vector hence resulted total vectors per video vectors individually normalized sum account different type faces person may long face small eyes affect results way normalization removed unwanted effects characteristics person face may resultant vectors produced processed using gaussian mixture model gmm producing bag words fisher vectors extracted video clusters produced using gmm gmm model gives probability points belonging cluster according model probability normally distributed around cluster resultant vectors produced approach classified clusters using clusters formed used inputs initialization expectation maximization forming gmm fig facial landmarks distance head motion facial expressions combined give substantial information regarding behavior person temporal information may convey information emotions aus gaze pose statistical features namely minimum maximum mean mode median range mean deviation variance standard deviation skewness kurtosis calculated pre extracted gaze pose features provided given data set statistical features also used depression analysis blink rate blink rate calculated using facial landmarks first region points enclosing eye points left eye taken entire 
number frames given area polygon made points calculated way data eye area frame data obtained given data plotted obtain idea blink rate obtain approximate area open eye mode total number areas per frame obtained considered area open eye close eye area minimum area random frames taken blink considered area covered eye points less percent area eye opened way number blinks calculated entire number frames blink frequency calculated dividing number blinks corresponding duration interview sample graph eye area frames shown low level descriptors normalized naq qoq psp mdq peak slope conf mcep hmpdm hmpdd statistical features mean min skewness kurtosis standard deviation median ratio root mean square level interquartile range table statistical descriptors calculated audio features two sets features concatenated used analysis text features fig eye area frames features visual features motion history image converts motion grayscale image recent movement shown pixels highest grayscale value earliest pixels least grayscale value also computed facial landmarks motion depicted gradually increasing grayscale value recent activity hog lbp features extracted resulting mhi formed results obtained classification hog lbp features extracted vague inaccurate hence approach discussed detail audio features audio data provided avec consisted audio file entire interview participant features using covarep toolbox intervals entire recording naq qoq psp mdq peak slope conf mcep hmpdm hmpdd formants transcript file containing speaking times values participant virtual interviewer formant file given audio file processed obtain voice participant hence audio file voice participant isolated using speaking times giveb transcript file two sets features calculated audio modality first set consists statistical features low level descriptors shown following table features calculated using inbuilt functions matlab low level descriptors pre extracted provided data set covarep file second set audio features 
consisted discrete cosine transform dct coefficients descriptor first column table first values dct retained reduce complexity processing data finally features also extracted verbal responses participant given transcript file total number sentences words spoken participant average words spoken sentence ratio laughter count total number words spoken participant ratio depression related words total number words spoken participant text features extracted total number sentences words spoken normalized duration video referring mentioned paper slow less amount speech longer speech pauses brief answers manifestations depression related words dictionary words constructed manually using online resources another seven features extracted using affective norms english words ratings anew mean standard deviation pleasure arousal dominance ratings word frequency word spoken participant stored eventually mean features taken words giving total seven features lassification svm neural network used classification algorithms svm classifier applied separately extracted features features eight models trained model giving score intensities nointerest depressed sleep tired appetite failure concentrating moving results eight models added obtain final predicted score neural network applied fisher vectors calculated dimension feature vectors video small hence application layers small dimension make sense also checked experimentally results good hence fisher vectors similar svm classification networks trained whose sum gave net score another regression neural network model also trained gave combined output labels model constructed nodes output last layer node giving value supposedly score nodes obtained first rounded nearest integer value hence sum nodes gave score used judge depression model interlink kind set different labels expected give better results network layers gave best rmse xperimental esults support vector machine svm initially default parameters used svm classifier training 
accuracy obtained default parameters less hence cross validation applied obtain optimum values cost gamma classifier features extracted cross validation performed fold applied using python script cross validation model able fit training data number values cost gamma eventually values cost gamma selected accuracy development set maximum best results along svm parameters obtained audio text head pose fisher vectors validation dataset displayed table results modalities listed even close baseline best rmse validation set obtained histogram oriented gradients hog mhi similar rsme obtained local binary pattern lbp statistical features gaze pose blink rate features audio text head fisher rmse mae kernel linear radical cyclic radical cost gamma table results obtained svm validation set results development set results obtained better baseline table iii development set test set one submission made till date whose result mentioned table accurate partition development development development test test test modality audio video audio video rmse mae table iii provided baseline results neural networks eight models total layers neural network used dropout added layers prevent overfitting range dropout kept number dropouts dimensions layers manually found keep rmse mae network minimum networks adam optimizer used sgd faster well efficient best rms mae optimizers given table adam sgd rmse mae table neural network classification result validation set results obtained features approaches svm neural networks decision level fusion applied results different modalities outputs audio text fused together table head pose fisher together table done dimensionality pose close since rmse nearly weights decided experimentally mentioned table table weight aud weight text rmse dev mae dev rmse test mae test table fusion audio text features weight head weight fisher rmse mae table fusion head pose fisher features validation set results development set training set models fit quite accurately gave 
less rmse mae cases eventually equal weights given modalities gave least rmse fused models also fusion four modalities equal weights given gave best rmse mae table vii audio text fisher head rmse mae table vii equal weight results four modalities validation set another fusion technique applied maximize outputs modalities output compared modalities maximum among taken technique applied across four modalities also table viii features rmse mae table viii fusion classification results validation set results given test training train development sets onclusion behavior depressed person shows relative change terms speech pattern facial expressions head movement compared person paper introduced depression recognition task visual audio text features using svm neural networks classifiers gmm clustering fisher vectors calculated relative distance facial regions facial regions used recording relative distance certain points ones mostly involved facial expressions like smiling laughing visible emotions head pose statistical descriptors gaze pose blinking rate also calculated verbal responses person coded form text sentences words negative words audio low level features hold information regarding behavior person features extracted trained svm results audio fisher vectors text features individually combined outperformed baseline results validation data set table fisher vector features also classified using neural networks also crossed baseline results validation data set table decision fusion form mean maximizing outputs purely experimental results obtained maximizing outputs four modalities trained svm taking mean outputs different modalities best better accuracy obtained may removed unwanted variance outputs giving desired results since fused outputs improved accuracies future work would devoted towards fusing outputs generic manner also models gave results overfit training data hence parameters search would prevent overfitting valstar michel avec depression mood emotion 
recognition workshop proceedings international workshop emotion challenge acm ringeval fabien avec depression affect recognition workshop jan asim automatic depression scale prediction using facial expression dynamics proceedings international workshop emotion challenge acm low detection clinical depression adolescents using acoustic speech sturim douglas automatic detection depression speech using gaussian mixture modeling factor twelfth annual conference international speech communication association alghowinem sharifa head pose movement analysis indicator affective computing intelligent interaction acii humaine association conference ieee tzirakis panagiotis multimodal emotion recognition using deep neural arxiv preprint pampouchidou anastasia depression assessment fusing high low level features audio video proceedings international workshop emotion challenge acm ptucha raymond andreas savakis towards usage optical flow temporal features facial expression advances visual computing ellgring heiner communication depression cambridge university press maple canada depression vocabulary depression word list ghosh sayan moitreya chatterjee morency multimodal approach distress proceedings international conference multimodal interaction acm bradley margaret peter lang affective norms english words anew instruction manual affective ratings technical report center research psychophysiology university florida katon wayne mark sullivan depression chronic medical clin psychiatry cummins nicholas diagnosis depression behavioural signals multimodal proceedings acm international workshop emotion challenge acm oneata dan jakob verbeek cordelia schmid action event recognition fisher vectors compact feature proceedings ieee international conference computer vision dhall abhinav roland goecke temporally fisher vector approach depression affective computing intelligent interaction acii international conference ieee kroenke kurt measure current depression general journal affective 
disorders yang decision tree based depression classification audio video language proceedings international workshop emotion challenge acm vii acknowledgement would like express gratitude teacher organizers avec gave golden opportunity participate project avec project helped inculcate better understanding computer vision machine learning even basic concepts deep learning eferences depression article https pedersen ethological description depression acta psychiatrica scandinavica vol fossi faravelli paoli ethological proach assessment depressive disorders nal nervous mental disease vol waxer nonverbal cues depression journal abnormal psychology vol
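The eye-area blink counting described earlier in this row (per-frame polygon area of the eye landmarks, the mode of the areas taken as the open-eye area, a blink counted when the area drops below a fraction of it) can be sketched in a few lines. The 0.2 closed-eye ratio and the landmark ordering below are illustrative assumptions, not the paper's exact values:

```python
from statistics import mode

def polygon_area(points):
    """Shoelace formula for the area enclosed by eye landmark points."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def count_blinks(areas, closed_ratio=0.2):
    """Count blinks in a sequence of per-frame eye areas.

    The open-eye area is estimated as the mode of the per-frame areas;
    a blink is a maximal run of frames whose area falls below
    closed_ratio * open_area.
    """
    open_area = mode(areas)
    threshold = closed_ratio * open_area
    blinks, closed = 0, False
    for a in areas:
        if a < threshold and not closed:
            blinks += 1
            closed = True
        elif a >= threshold:
            closed = False
    return blinks
```

Dividing the blink count by the interview duration then gives the blink-frequency feature used for classification.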
| 1 |
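The decision-level fusion in the depression-recognition row above combines per-modality score predictions by a weighted mean and evaluates them with RMSE and MAE. A minimal sketch of that step — the modality outputs, weights and ground-truth scores here are illustrative, not the paper's values:

```python
import math

def fuse(outputs, weights):
    """Decision-level fusion: weighted average of per-modality predictions.

    outputs -- list of per-modality score lists (one prediction per subject)
    weights -- one weight per modality, summing to 1
    """
    assert len(outputs) == len(weights)
    assert abs(sum(weights) - 1.0) < 1e-9
    n = len(outputs[0])
    return [sum(w * out[i] for out, w in zip(outputs, weights))
            for i in range(n)]

def rmse(pred, truth):
    """Root mean square error between predicted and true scores."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(pred))

def mae(pred, truth):
    """Mean absolute error between predicted and true scores."""
    return sum(abs(p - t) for p, t in zip(pred, truth)) / len(pred)
```

With equal weights over the modalities this reduces to the plain mean of the outputs, which is the configuration reported as giving the best fused RMSE and MAE on the validation set.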
Efficiently Decodable Threshold Group Testing. Thach V. Bui (SOKENDAI, The Graduate University for Advanced Studies, Hayama, Kanagawa, Japan), Minoru Kuribayashi (Okayama University, Okayama, Japan), Mahdi Cheraghchi (Department of Computing, Imperial College London), Isao Echizen (National Institute of Informatics, Tokyo, Japan) (Jan). Abstract — We consider threshold group testing: the identification of up to d defective items in a set of n items, where a test is positive if it contains at least u defective items and negative otherwise. We show that the defective items can be identified with certainty in one variant, and with probability at least 1 − ε in a randomized variant that uses fewer tests, in both cases using a number of tests that depends only logarithmically on n and with decoding time almost linear in the number of tests. This significantly improves the best known results on decoding threshold group testing, for both probabilistic and deterministic decoding. I. INTRODUCTION. The goal of combinatorial group testing is to identify up to d defective items among a population of n items, where d is usually much smaller than n. The problem dates back to the work of Dorfman, who proposed using a pooling strategy to identify defectives in a collection of blood samples: a test on a group of pooled items is positive if at least one item in the group is defective, and negative otherwise. Damaschke introduced a generalization of classical group testing known as threshold group testing. In this variation, the result is positive if the corresponding group contains at least u defective items for a parameter u, negative if the group contains at most a smaller threshold of defective items, and arbitrary otherwise; with u = 1, threshold group testing reduces to classical group testing. Note that u is always at most the number of defective items, since otherwise every test would yield a negative outcome and no information could be gained from the tests. There are two approaches to designing tests. The first is adaptive group testing, in which there are several testing stages and the design of each stage depends on the outcomes of the previous stages. The second is non-adaptive group testing (NAGT), in which all tests are designed in advance and can be performed in parallel. NAGT is appealing to researchers in application areas such as computational molecular biology, multiple-access communications and data streaming. The focus of this work is NAGT with a threshold. In classical group testing it is desirable to minimize the number of tests while efficiently identifying the set of defective items with an efficient decoding algorithm. It is known that one needs on the order of d log(n/d) tests to identify the d
defective items using adaptive schemes adaptive schemes decoding algorithm usually implicit test design number tests decoding time significantly different classical cnagt threshold group testing natgt cnagt porat rothschild first proposed explicit nonadaptive constructions using log tests however efficient decoding algorithm associated schemes exact identification explicit schemes allowing defective items identified using poly log tests time poly log number tests low log false positives allowed reconstruction achieve nearly optimal number tests adaptive group testing low decoding complexity cai proposed using probabilistic schemes need log log tests find defective items time log threshold group testing damaschke showed set positive items identified tests false positives false negatives gap parameter cheraghchi showed possible find defective items log log tests essentially optimal recently marco improved bound log tests extra assumption number defective items exactly rather restrictive application although number tests extensively studied reports focus decoding algorithm well chen proposed schemes based cnagt find defective items using log tests time log chan presented randomized algorithm log log tests find defective items time log log given cost decoding schemes increases objective find efficient decoding scheme identify defective items natgt contributions paper consider case call model first propose efficient scheme identifying defective items natgt time log poly log number tests main idea create least specified number rows test matrix corresponding test row contains exactly defective items defective items rows defective items identified map rows using special matrix constructed disjunct matrix defined later complementary matrix thereby converting outcome natgt outcome cnagt defective items row efficiently identified although cheraghchi marco proposed nearly optimal bounds number tests decoding algorithms associated schemes hand scheme chen requires exponential 
number tests much larger number tests scheme moreover decoding complexity scheme exponential number items impractical chan proposed probabilistic approach achieve small number tests combinatorially better scheme however scheme applicable threshold much smaller decoding complexity remains high namely log log precision parameter present divide conquer scheme instantiate via deterministic randomized decoding deterministic decoding deterministic scheme defective items found probability randomized decoding reduces number tests defective items found poly log probability least decoding complexity log comparison existing work given table reliminaries consistency use capital calligraphic letters matrices letters scalars bold letters vectors matrix vector entries binary notations used number items maximum number defective items binary representation items set defective items cardinality table omparison existing work cheraghchi marco chen chan deterministic decoding randomized decoding number tests log log log log log log log log log log log log decoding complexity decoding type log log log log log deterministic random poly log deterministic poly log random operation related cnagt defined later measurement matrix used identify defective items integer number tests gij matrix mij matrix used identify defective items defective items cnagt integer number tests mij complementary matrix mij mij row matrix row matrix row matrix column matrix respectively xin binary representation items set indices defective items row diag diag gin diagonal matrix constructed input vector problem definition index population items let defective set test defined subset items problem defective items among items test consisting subset items positive least defective items test test designed advance formally test outcome positive negative model follows binary matrix tij defined measurement matrix number items number tests binary representation vector items indicates item defective indicates otherwise jth 
item corresponds jth column matrix entry tij naturally means item belongs test tij means otherwise outcome tests test positive otherwise procedure get outcome vector called encoding procedure procedure used identify defective items called decoding procedure outcome vector def def notation test operation namely tij tij objective find efficient decoding scheme identify defective items disjunct matrices reduces cnagt distinguish cnagt change notation use measurement matrix instead matrix outcome vector equal def def def mkj boolean operator vector multiplication multiplication replaced operator addition replaced operator mij union columns defined follows mji mtji column said included another column exists row entry first column entry second column matrix satisfying property union columns include remaining column always recovered need matrix efficiently decoded identify defective items strongly explicit matrix matrix entries computed time poly state following theorem theorem theorem let exists strongly explicit matrix log input vector decoding procedure returns set defective items input vector union columns matrix poly time completely separating matrix introduce notion completely separating matrices used get efficient decoding algorithms separating matrix defined follows definition matrix gij called separating matrix pair subsets exists row glr gls row called singular row subsets called matrix definition slightly different one described lebedev easy verify matrix separating matrix also separating matrix present existence matrices theorem given integers exists separating matrix size log log base natural logarithm proof matrix gij generated randomly entry gij assigned probability probability pair subsets probability row singular probability singular row subsets using union bound probability pair subsets singular row probability matrix ensure exists matrix one needs find choose exp exp exp exp get exp exp log log got using choose log log exists separating matrix size 
Suppose we required that every submatrix constructed from a subset of the columns of a separating matrix again be a separating matrix. This property is strict and makes the number of rows high. To reduce the number of rows, we relax the property as follows: a submatrix constructed from a given subset of columns need only be a separating matrix with high probability. The following corollary describes this idea in detail.

Corollary 1. Given suitable positive integers and a failure bound, there exists a random matrix G such that the submatrix constructed from any fixed subset of its columns is a separating matrix with probability at least the stated bound, with a number of rows involving only logarithmic factors.

Proof. Let G = (g_ij) be generated randomly, with each entry assigned the value 1 with probability p. The task is to prove that the matrix constructed from the chosen columns is a separating matrix with probability at least the stated bound; specifically, one proves that a number of rows with logarithmic growth is sufficient. Similarly to the proof of Theorem 2, the probability that the submatrix is not a separating matrix is at most the number of pairs of subsets times exp(-t p^a (1 - p)^b); choosing t so that this bound is small enough completes the proof.

III. PROPOSED SCHEME

A. Basic idea

Our scheme uses a divide-and-conquer strategy. We create a matrix with at least one row selecting each relevant small group of defectives, and map each of these rows, using a special matrix, in a way that enables converting the outcome of NATGT into an outcome of CNAGT; the defective items selected by each row can then be efficiently identified. We present a particular matrix that achieves efficient decoding per row in the following section.

B. When the number of defective items equals the threshold

In this section we consider the special case in which the number of defective items equals the threshold u. Given the measurement matrix and the binary representation vector x of the defective items, observe that our objective is to recover x, and x can be recovered if we choose a matrix M as described in Theorem 1. To achieve this goal, we create a measurement matrix consisting of a matrix M as described in Theorem 1 stacked with its complement matrix, whose entries are 1 - m_ij. Note that an outcome of M can be decoded in time poly(t). Let the observed outcome vector consist of the outcomes of the rows of M and of its complement. The following lemma shows that the CNAGT outcome can always be obtained, so x can always be recovered.

Lemma 1. Given suitable integers, there exists a strongly explicit matrix such that, if there are exactly u defective items among the n items, the defective items can be identified in time polynomial in the number of tests.

Proof. Construct the measurement matrix as above and assume the observed vector is y. The task is to create the CNAGT outcome vector, which one can get from y using three rules relating the outcome of each row of M to the outcome of the corresponding row of the complement. We prove the correctness of the rules. If there are at least u defective items in a row of M, its test is positive, and the first rule is implied. If there are fewer than u defective items in a row of M, then, since the threshold equals the total number of defectives, there must be defective items outside the row; moreover, since the complement row must then contain a defective item, its test carries the remaining information; therefore the second rule is implied. If there are fewer than u defective items in a row of M, and similarly fewer than u defective items in
the complement row, then the number of defective items in the row would have to equal zero, since otherwise one of the two counts would reach the threshold; and since the number of defective items in the complement of an empty row equals the total, the corresponding test outcome is positive; the third rule is thus implied. Since we can get the CNAGT outcome vector, the defective items can be identified in time poly(t) by Theorem 1.

Example 1. The three rules in the proof of Lemma 1 can be demonstrated on a small instance: fixing the parameters, defining the matrix by its first columns, assuming a small defective set, and writing down the observed vector, one obtains the CNAGT outcome vector by the three rules; using the decoding algorithm (omitted in this example), one then identifies the defective items.

C. Encoding procedure

To implement the divide-and-conquer strategy, we need to divide the set of defective items into small subsets whose defectives can be effectively identified. Define a suitable integer g and create a matrix G containing g rows, denoted G_{1,*}, ..., G_{g,*}, such that, with high probability, for each relevant set of indices of defective items there is a row of G whose support selects exactly those defectives. These conditions guarantee that all defective items are included in the decoded set. To achieve this, the pruning matrix obtained from G by removing columns must be a separating matrix, and by Definition 1 it is then also a separating matrix for the smaller column set. The rows are chosen as follows: choose a collection of sets of defective indices satisfying the covering conditions, and pick the last set accordingly; by the definition, there exists a row, denoted G_{i,*}, equal to 1 on the chosen set and 0 on the remaining defective indices, i.e., a singular row for the pair of sets; the conditions thus hold.

After creating the matrix G, we generate the matrix of Lemma 1, and the final measurement matrix is created by stacking the blocks obtained by multiplying that matrix with diag(G_{1,*}), ..., diag(G_{g,*}); multiplying by diag(G_{i,*}) restricts the tests in a block to the items selected by row i of G. The vector y observed by performing the tests given this measurement matrix is the concatenation of the block outcomes. Note that the vector representing the defective items in the block corresponding to row i is diag(G_{i,*}) x.

D. Decoding procedure

The decoding procedure is summarized as Algorithm 1, and may be outlined as follows.

Algorithm 1: Decoding procedure (outline).
Input: the outcome vector y. Output: the set of defective items.
1. For each row i of G:
2.   If the block outcome indicates at least u defective items in row i:
3.     Convert the block outcome into a CNAGT outcome using the rules of Lemma 1.
4.     Decode it using the matrix M to get a candidate defective set.
5.     Validate the candidates and add the confirmed items to the output set.
6. Return the output set.

E. Correctness of the decoding procedure

Our objective is to recover x. The outer loop enumerates the rows of G; the indicator of whether there are at least u defective items in row i implies that rows with fewer than u defectives are skipped, since we focus on rows with
exactly u defective items. If there are at least u defective items in a row, and in fact exactly u, they can always be identified, as described in Lemma 1. The task is then to prevent accusing false defective items during decoding. The inner lines calculate the CNAGT outcome; this outcome is the union of many columns when there are many defective items in a row; the task is therefore to decode it using the matrix M to get a candidate defective set and to validate whether its items are truly defective. There exist rows with exactly u defective items each, and the defective items appearing in such rows are all the defective items we need to identify; we therefore only consider the case in which the number of items obtained from decoding equals u. To prevent identifying false defectives, note that two sets of u items are in play: the first is the true (unknown) set selected by the row, and the second is the candidate set produced by decoding. To be sure that we always identify true defectives, the validation condition must always hold for true candidates, by Lemma 1. We classify the case of exactly u defectives in a row into two categories. In the first case, all elements of the candidate set are defective; since we receive the true defective items, the validation condition holds, and we take them into the defective set. In the second case, we prove that the condition does not hold, so that none of the elements is added to the defective set: pick a candidate that is not defective; since G is a separating matrix, there exists a row separating it from the true defectives; on the other hand, that row contains fewer than u defective items, which contradicts what the validation condition would require; therefore the condition fails, the false candidates are eliminated, and the final line returns the defective set.

F. Decoding complexity

Since the measurement matrix is constructed using G, the probability of successful decoding depends on the choice of G: given the input vector y, we get the set of defective items with a decoding probability determined by G. Since there are rows satisfying the covering conditions with the stated probability, all defective items are identified in poly time, using the stated number of tests, with at least that probability. We summarize the divide-and-conquer strategy in the following theorem.

Theorem 3. Let suitable integers be fixed and let D be the defective set. Suppose the matrix G contains g rows, denoted G_{1,*}, ..., G_{g,*}, such that for each relevant index set of defective items there is a row G_{i,*} selecting exactly those defectives. Suppose M is a matrix that can be decoded in a given time. Then the measurement matrix defined from G and M as above can be used to identify the defective items, in time g times the decoding time of M, and the probability of successful decoding depends only on the event that the required rows exist: specifically, if that event happens with probability at least a stated bound, the probability of successful decoding is also at least that bound.

IV. COMPLEXITY OF THE PROPOSED SCHEME

By specifying the matrix G in Theorem 3, we get the desired number of tests and decoding complexity for identifying the defective items. Specifying G leads to two
approaches to decoding: deterministic and randomized. In the deterministic scheme, the defective items are found with probability 1; this is achievable when every submatrix constructed from subsets of the columns of G is a separating matrix. Randomized decoding reduces the number of tests; the defective items are then found with probability at least a stated bound, which is achievable when the submatrix constructed from a fixed subset of columns is a separating matrix with at least that probability.

A. Deterministic decoding

The following theorem states that there exists a deterministic algorithm for identifying the defective items, obtained by choosing the size of the separating matrix as in Theorem 2.

Theorem 4. There exists a measurement matrix with which the defective items can be identified deterministically, in time polynomial in the number of tests, with the number of tests as given in Table I.

Proof. The basis is Theorem 3. The measurement matrix is generated as follows: choose G to be a separating matrix as in Theorem 2, and choose M as in Theorem 1, with decoding time poly(t). Since G is a separating matrix, the pruning matrix created by removing columns is also a separating matrix, and by Definition 1 it is also a separating matrix for the smaller column set; hence there exist rows satisfying the conditions described in Section III. By Theorem 3, the defective items can be recovered using the stated number of tests, with probability 1, in the stated time.

B. Randomized decoding

For randomized decoding, G is chosen so that the pruning matrix created by removing columns is a separating matrix with the stated probability. This results in an improved number of tests and decoding time compared with Theorem 4.

Theorem 5. The defective items can be identified, using the number of tests given in Table I, with probability at least the stated bound, with decoding time polynomial in the number of tests.

Proof. The basis is again Theorem 3. The measurement matrix is generated as follows: choose G as in Corollary 1, and generate M using Theorem 1, with decoding time poly(t). The pruning matrix created by removing columns is a separating matrix with the stated probability; by Definition 1 it is also a separating matrix for the smaller column set, so there exist rows satisfying the conditions in Section III with that probability. By Theorem 3, the defective items can be recovered with at least that probability in the stated time.

V. CONCLUSION

We have introduced an efficient scheme for identifying defective items in NATGT. Our algorithm, however, works only in the regime considered here; extending the results is left for future work. Moreover, it would be interesting to consider noisy NATGT, as well as erroneous tests present in the test outcomes.

ACKNOWLEDGEMENT

The first author thanks SOKENDAI for supporting
him via its abroad program.

REFERENCES
[1] Dorfman, "The detection of defective members of large populations," Annals of Mathematical Statistics.
[2] Damaschke, "Threshold group testing," in General Theory of Information Transfer and Combinatorics, Springer.
[3] Farach, Kannan, Knill, and Muthukrishnan, "Group testing problems with sequences in experimental molecular biology," in Proceedings of Compression and Complexity of Sequences, IEEE.
[4] Wolf, "Born again group testing: multiaccess communications," IEEE Transactions on Information Theory.
[5] Cormode and Muthukrishnan, "What's hot and what's not: tracking most frequent items dynamically," ACM Transactions on Database Systems (TODS).
[6] Du and Hwang, Combinatorial Group Testing and Its Applications, World Scientific.
[7] Chen and De Bonis, "An almost optimal algorithm for generalized threshold group testing with inhibitors," Journal of Computational Biology.
[8] Chang, Chen, and Shi, "Reconstruction of hidden graphs and threshold group testing," Journal of Combinatorial Optimization.
[9] Porat and Rothschild, "Explicit combinatorial group testing schemes," in Automata, Languages and Programming.
[10] Ngo, Porat, and Rudra, "Efficiently decodable list disjunct matrices and applications," in International Colloquium on Automata, Languages and Programming, Springer.
[11] Cheraghchi, "Group testing: limitations and constructions," Discrete Applied Mathematics.
[12] Cai, Jahangoshahi, Bakshi, and Jaggi, "GROTESQUE: noisy group testing (quick and efficient)," in Annual Allerton Conference on Communication, Control, and Computing, IEEE.
[13] Cheraghchi, "Improved constructions for threshold group testing," Algorithmica.
[14] De Marco and Stachowiak, "Subquadratic threshold group testing," in International Symposium on Fundamentals of Computation Theory, Springer.
[15] Chen, "Nonadaptive algorithms for threshold group testing," Discrete Applied Mathematics.
[16] Chan, Cai, Bakshi, Jaggi, and Saligrama, "Stochastic threshold group testing," in IEEE Information Theory Workshop (ITW), IEEE.
[17] Lebedev, "Separating codes and a new combinatorial search model," Problems of Information Transmission.
LYUBEZNIK NUMBERS AND INJECTIVE DIMENSION IN MIXED CHARACTERISTIC

Daniel Hernández, Luis Núñez-Betancourt, Felipe Pérez, and Emily Witt

Abstract. We investigate the Lyubeznik numbers, and the injective dimension of local cohomology modules, of finitely generated Z-algebras. We prove that the mixed-characteristic Lyubeznik numbers and the standard ones agree locally for almost all reductions to positive characteristic. Additionally, we address an open question of Lyubeznik that asks whether the injective dimension of a local cohomology module over a regular ring is bounded above by the dimension of its support. Although we show that the answer is affirmative in several families of examples, we also exhibit an example in which this bound fails to hold. This example settles Lyubeznik's question, and illustrates one way in which the behavior of local cohomology modules over regular rings of equal characteristic and of mixed characteristic can differ.

1. Introduction

Since the introduction of Grothendieck's theory of local cohomology, it has been an active area of research in commutative algebra, algebraic geometry, and neighboring fields. This paper is especially motivated by the following observation: despite the fact that local cohomology modules over regular rings are typically very large (e.g., not finitely generated), they often satisfy strong structural conditions. More precisely, consider the following conditions on a module M over a ring R:
(I) the number of associated primes of M is finite;
(II) the Bass numbers of M are finite;
(III) inj dim_R M <= dim Supp_R M;
(IV) the local cohomology of M with support in a maximal ideal is injective.
Here inj dim_R M denotes the injective dimension of M (the length of a minimal injective resolution of M), and dim Supp_R M denotes the dimension of the support of M as a topological subspace of Spec R (the longest length of a chain of prime ideals whose terms are contained in the support).

It was shown by Huneke and Sharp in positive characteristic, and by Lyubeznik in equal characteristic zero and in mixed characteristic, that local cohomology modules of a regular local ring with support in an arbitrary ideal satisfy conditions of this type. More recently, it was shown by Bhatt, Blickle, Lyubeznik, Singh, and Zhang that local cohomology modules of a smooth Z-algebra with support in an arbitrary ideal also satisfy condition (I). The extent to which local cohomology modules satisfy these conditions is a subtle issue, and is an overarching theme of this paper; we recall results regarding these conditions due to Huneke and Sharp, Lyubeznik, and Zhou later in the introduction. Condition (II) for local cohomology is a crucial ingredient in the definition of the Lyubeznik numbers, a family of invariants associated to a local ring containing a field. In addition to encoding important properties of the ambient
ring, the Lyubeznik numbers also have interesting geometric and topological interpretations, including connections with singular cohomology and with simplicial complexes. These properties play an important role in the proofs of our results. Condition (II) is also used to define the mixed-characteristic Lyubeznik numbers, a variant of the standard Lyubeznik numbers introduced by the second and fourth authors. A major obstruction to understanding these invariants is the uncertainty surrounding the conditions above in mixed characteristic. This article has two major goals: first, to investigate the relationship between the standard Lyubeznik numbers and the mixed-characteristic Lyubeznik numbers in the case of local rings of positive characteristic; and second, to investigate when local cohomology modules satisfy conditions (III) and (IV).

1.1. Comparison of Lyubeznik numbers. Suppose that R is a local ring that contains a field; then it may be realized as a quotient of a regular local ring S containing a field. The standard Lyubeznik number of R with respect to integers i and j is the i-th Bass number, dim_k Ext_S^i(k, H_I^{n-j}(S)), of the local cohomology module H_I^{n-j}(S), where R is realized as S/I, n = dim S, and k is the residue field. In an attempt to study local rings of mixed characteristic, the second and fourth authors proposed the following analogue: if R has positive residue characteristic, it may be realized as a quotient of a regular local ring T of mixed characteristic, and the mixed-characteristic Lyubeznik number of R with respect to integers i and j is dim_k Ext_T^i(k, H_I^{n-j}(T)), with n = dim T. One may extend these definitions to an arbitrary local ring that can be realized in these ways; this follows from the Cohen structure theorems. In fact, in either case, the Lyubeznik numbers depend only on the indices and on the quotient ring, not on the choices made in realizing it. It is important to note that, although the mixed-characteristic Lyubeznik numbers were originally introduced to study rings of mixed characteristic, both the standard and the mixed-characteristic Lyubeznik numbers are defined for rings of equal positive characteristic. This observation motivates the following question.

Question 1. For rings of positive characteristic, is it true that the standard Lyubeznik numbers and the mixed-characteristic Lyubeznik numbers agree?

It is known that Question 1 has a positive answer for certain rings, e.g., rings of small dimension. Interestingly enough, however, the two types of Lyubeznik numbers need not always agree, as exhibited by a certain quotient of a power series ring of characteristic two by a squarefree monomial ideal. In this article we consider Question 1 from the perspective of reduction to positive
characteristic. Recall that, for R a finitely generated Z-algebra and p a prime integer, R/pR is called a reduction of R to characteristic p. In this context, one may specialize Question 1 and instead ask whether, for the reductions of a given finitely generated Z-algebra, the standard invariants agree with the ones referred to as the mixed-characteristic Lyubeznik numbers. This terminology is not intended to imply that the ring must be of mixed characteristic in order to define them, which would be false; these invariants arise by considering the chosen ring as a quotient of a regular unramified ring of mixed characteristic.

Before stating the general result in this direction, we present an illustrative example: for certain rings and their reductions, the theorem below, in light of the results above, shows that the relevant localizations of the local cohomology modules vanish at the relevant prime ideals, which implies that these invariants are zero unless i = j = dim. As suggested by this example, the standard and the mixed-characteristic Lyubeznik numbers agree locally for almost all reductions to positive characteristic, as given in the following theorem.

Theorem A. If R is a finitely generated Z-algebra, then there exists a finite set of prime integers B with the following property: for every prime integer p not contained in B and every prime ideal of R/pR, the standard and the mixed-characteristic Lyubeznik numbers of the localization of R/pR at that prime ideal agree.

A fundamental tool in the proof of Theorem A is the fact that local cohomology modules of polynomial rings over Z have finitely many associated primes, together with control of their injective dimension. In light of the fact that local cohomology over regular rings containing a field satisfies the structural conditions above, the following question of Lyubeznik is natural.

Question 2 (Lyubeznik). Given an ideal a of a regular local ring S, do conditions (III) and (IV) hold for every nonzero H_a^j(S)?

This question has been unresolved in the mixed-characteristic case for more than two decades. To the best of our knowledge, the first step toward answering it is a result of Zhou, which says the following: if S is regular of mixed characteristic and H_a^j(S) is nonzero for an ideal a and an integer j, then the injective dimension of H_a^j(S) is bounded by dim Supp_S H_a^j(S) + 1; furthermore, for the maximal ideal m, the injective dimension of the iterated local cohomology module H_m^i(H_a^j(S)) is at most one.

In this article, we prove that the answer to part of Question 2 is affirmative in cases that descend from Theorem A.

Theorem B. Given an ideal I of a polynomial ring S over Z and a nonzero local cohomology module H_I^j(S), there exists a finite set of primes B with the following property: for every prime ideal q of Supp_S H_I^j(S) lying over a prime integer not in B, the injective dimension of the localization of H_I^j(S) at q is at most the dimension of its support.

We also show that local cohomology modules satisfy condition (IV) in several cases; see the relevant proposition for details. Moreover, although in many cases Question 2 has a positive answer, we also
construct an example in which these properties fail to hold.

Theorem C. There exists an ideal a of a regular local ring S of mixed characteristic, and nonnegative integers i and j, such that inj dim_S H_a^j(S) > dim Supp_S H_a^j(S), and such that the iterated local cohomology module H_m^i(H_a^j(S)) is not injective.

Theorem C establishes a negative answer to parts of Question 2, settling it. Furthermore, it exhibits a way in which regular rings of equal characteristic and of mixed characteristic behave differently with respect to these conditions. The finiteness statements imply that the relevant set of prime ideals contains an open dense subset of Supp_S H_a^j(S); see the corresponding subsection for details.

We conclude the introduction by mentioning one result that may be regarded as an extension of the recently noted work on smooth Z-algebras; see the corresponding section for the proof.

Theorem D. Given ideals I_1, ..., I_t of a polynomial ring S over Z and nonnegative integers j_1, ..., j_t, the iterated local cohomology module obtained by applying the functors H_{I_1}^{j_1}, ..., H_{I_t}^{j_t} to S has finitely many associated primes.

We note that Theorem D, and the ideas behind its proof, play a key role in the proofs of Theorem A, Theorem B, and the proposition mentioned above.

2. Local cohomology

In this section we recall basic properties of local cohomology modules; we refer the reader to the standard references for details.

2.1. Koszul and local cohomology. The cohomological Koszul complex on an element f of a commutative ring R is the complex 0 -> R -> R -> 0 given by the nontrivial map "multiplication by f", with one copy of R in degree zero and one copy in degree one. More generally, the Koszul complex on a sequence f_1, ..., f_s is the tensor product of the complexes on the individual elements; we regard a module M as a complex concentrated in degree zero, and the Koszul cohomology modules of M are the cohomology modules of the Koszul complex tensored with M. For each power t there is a commutative diagram, a map of complexes from the Koszul complex on the t-th powers to the one on the (t+1)-st powers; tensoring maps of this form together produces maps between the Koszul complexes on the sequences obtained by raising every term to higher powers. We denote by the limit complex the direct limit of this directed system, and recall that there is a canonical isomorphism between it and the complex usually used to define local cohomology. Taking cohomology produces a directed system and, since direct limits commute with cohomology, we obtain a functorial isomorphism between the local cohomology of M and the direct limit over t of the Koszul cohomology modules; this does not depend on the sequence, only on the ideal a generated by its terms. We refer to these isomorphic objects, denoted H_a^k(M), as the local cohomology modules of M with support in a.

2.2. Long exact sequences of local cohomology. Assume R is Noetherian. Given an ideal a and a short exact sequence of modules, there is a functorial long exact sequence of the modules H_a^k applied to its terms. Given also an element f of R, there is a long exact sequence relating H_{a+(f)}^k(M), H_a^k(M), and H_a^k(M_f), functorial in M, in which the map H_a^k(M) -> H_a^k(M_f) is induced by the natural localization map.

3. Finiteness of associated primes of iterated local cohomology modules

The goal of this section is to prove Theorem D, which extends a recent result of Bhatt, Blickle,
Lyubeznik, Singh, and Zhang on the associated primes of local cohomology. The proof of Theorem D appears in the last subsection; the earlier subsections survey results from the theory of F-modules, which is our main reference for this part.

3.1. F-modules. For a ring R of characteristic p > 0, the Frobenius gives, for each R-module, the abelian group considered as an R-module via the rule coming from raising scalars to the p-th power. For the remainder of this subsection assume that R is regular; then the Frobenius functor F is exact, and in particular left exact. An F-module consists of an R-module M together with an isomorphism from M to F(M), called the structure morphism; this article always regards such modules this way. A map of F-modules is a map of R-modules that respects the structure morphisms, i.e., the expected diagram commutes. Given a map from a module M_0 to F(M_0), consider the commutative diagram whose rows iterate F and whose direct limit is denoted M; when the vertical arrows induce an isomorphism in the direct limit, we say that the map generates M. Local cohomology modules and localizations can be regarded in this way, and the localization map is a map of F-modules; in particular, the relevant complexes are complexes of F-modules, and therefore iterated local cohomology modules are also F-modules. The proposition below, an adaptation of a known proposition, describes another, isomorphic, F-module structure on local cohomology. Before proceeding, we recall some basic facts.

Remark 1 (direct limits and tensor products). The natural numbers form a basic example of a filtered poset, and direct limits of systems over filtered posets satisfy certain desirable conditions; for example, for two directed systems, the direct limit of the termwise tensor products is the tensor product of the direct limits, and the analogous identity for complexes also holds. We use these identities without mention in the following remark.

Remark 2 (Koszul complexes and the Frobenius functor). The standard F-module structure allows us to canonically identify the Frobenius functor applied to a Koszul complex on a sequence of powers with the Koszul complex on the p-th powers of that sequence; this gives two ways of regarding the second row of the relevant commutative diagram. More generally, given a sequence, the Frobenius morphism allows us to tensor sequences of this form together to obtain a commutative diagram. Next, let M be generated by a map from M_0 to F(M_0); tensoring the vertical maps and considering the directed systems that result from iterating the maps, we see that, at the level of cohomology, the identification becomes an isomorphism between H_a^k(M) and the direct limit of the Koszul cohomology modules of the iterates, where the ideal a is generated by the terms of the sequence and the last directed system is obtained by taking cohomology. We summarize the content of this remark in the following proposition.

Proposition 1. Fix a finite sequence whose terms generate the ideal a, and fix an arbitrary homomorphism from M_0 to F(M_0); let M be the F-module it generates. Then there exists an isomorphism between H_a^k(M) and the direct limit described above, which commutes with the maps to the direct limit: the functorial map induced by the generating morphism on one side and the map of the directed system on the other, the vertical maps being the ones
given by the natural transformation from Koszul cohomology to local cohomology.

3.2. Iterated local cohomology. For the rest of this section we work in the following context.

Setup 1. Let S be a polynomial ring over Z. For every choice of sequences generating ideals, consider the compositions of the corresponding local cohomology functors, and of the corresponding Koszul cohomology functors, on the category of S-modules. Note that the natural transformation from Koszul cohomology to local cohomology induces a natural transformation between the iterated functors for every such choice.

Remark 3 (induced functors on reductions modulo a prime integer). Fix a prime integer p. The iterated local cohomology functors on S restrict to functors on modules over S/pS, and the analogous property also holds for Koszul cohomology; in fact, the restricted functor is isomorphic to the iterated Koszul cohomology computed over S/pS. It follows that, restricted to the category of modules over S/pS, the functors above may be regarded as iterated Koszul, respectively local, cohomology functors over S/pS. It is also worth pointing out that the natural transformation between the functors restricts to a natural transformation between the functors on modules over S/pS, compatible with the one given by considering iterated Koszul and local cohomology over S/pS. Finally, note that the Frobenius functor on the polynomial ring S/pS induces a functor on these iterated cohomology modules.

The following technical result concerns the behavior of iterated local and Koszul cohomology with respect to multiplication by a prime integer p. It plays an important role in the proof of Theorem D later in this section, in the sense that one needs to overcome the lack of general long exact sequences associated to iterated cohomology functors.

Lemma 2. Let Phi denote either an iterated Koszul or an iterated local cohomology functor. Fix a positive integer and a prime integer p, and consider the exact sequence given by multiplication by p on S, with quotient S/pS. If either the induced map given by multiplication by p is injective on every module in sight, or the induced map to the cohomology of S/pS is surjective on every module in sight, then the corresponding sequences of iterated cohomology modules are exact.

Proof. We proceed by induction on the number of iterations. First suppose there is a single functor; the key point is that this functor has an associated long exact sequence. Moreover, multiplication by p is injective on every module precisely when the maps to the cohomology of S/pS are surjective on every module, and in this case the long exact sequence induced by the short exact sequence splits into short exact sequences, which establishes the lemma in this case. Next, suppose the lemma is true for a given number of iterations. If either multiplication by p is injective on every module, or the maps are surjective on every module, the inductive hypothesis implies that the corresponding sequences are exact; thus, to complete the proof, we must show exactness after one more application of the functor. As in the base case, the long exact sequence, of either Koszul or local cohomology depending on which iterated functor Phi represents, induced by the given short exact sequence splits into short exact
context iterated cohomology proposition let denote reduction modulo prime integer fix let map obtained iterating one induced generated exists isomorphism commutes map direct limit obtained applying identification generated structure morphism vertical map one given natural transformation proof follows straightforward induction one repeatedly invokes proposition structure morphism let ring let denote ring operators throughout discussion term always refer left note always via restriction scalars maps complex sequence local cohomology inherits natural structure example also shows given map associated functorial maps local cohomology follows iterated local cohomology modules functorial maps determined iterated local cohomology another canonical structure iterated local cohomology positive characteristic indeed characteristic frobenius functor structure morphism allows one natural structure section fortunately two structures iterated local cohomology one coming complex induced structure isomorphic see example referring iterated local cohomology positive characteristic always mean either two isomorphic structures following result gives important criterion subset given generates plays crucial role proof proposition lemma corollary suppose regular finitely generated algebra regular local ring characteristic finitely generated map generates image direct limit generates specialize context setup proposition fix prime integer map induced map surjective induced map iterative process repeatedly apply fact frobenius functor exact lyubeznik numbers injective dimension proof set let via let map given iterating one induced generated proposition exists isomorphism commutes denotes map induced natural transformation lemma image generates however lower row isomorphism earlier discussion implies also isomorphism consequently image composition equals commutativity diagram generates next consider following commutative diagram obtained applying natural transformation canonical surjection 
By the earlier discussion of iterated local cohomology modules, the relevant map of modules is killed by p; therefore, by the preceding isomorphism, the claim follows from the discussion in the work of Bhatt, Blickle, Lyubeznik, Singh, and Zhang. We combine these observations to complete the proof: by hypothesis, the top row is surjective; however, as noted, the image generates, which allows us to conclude that the map in question is surjective.

Theorem 1. Fix a positive integer. If multiplication by a prime integer p is injective on every iterated Koszul cohomology module, then it is also injective on every iterated local cohomology module.

Proof. According to Lemma 2, the hypothesis that multiplication by p is injective implies that the maps to the cohomology of S/pS are surjective on every module. It follows from Proposition 3 that the corresponding maps on iterated local cohomology are also surjective on every module. Applying Lemma 2 again shows that multiplication by p is injective on every iterated local cohomology module.

Corollary 2. Given a positive integer, for all but finitely many prime integers p, multiplication by p is injective on the iterated local cohomology modules.

Proof. If multiplication by p is a zero divisor on an iterated local cohomology module, it follows from the theorem that p must be a zero divisor on an iterated Koszul cohomology module, and it is therefore contained in an associated prime of that module. The fact that there are finitely many such p follows from the fact that there are finitely many iterated Koszul cohomology modules of the given form, that each has finitely many associated primes since these modules are finitely generated, and that each such prime contains at most one positive prime integer.

3.4. Finiteness of associated primes of iterated local cohomology. We are now ready to prove Theorem D from the introduction. In the context of Setup 1, the theorem states that, for every choice of positive integers, the iterated local cohomology module has finitely many associated primes.

Proof of Theorem D. Fix the positive integers and consider the corresponding iterated local cohomology module. The unique map from Z to S induces a map from the set of associated primes of this module to Spec Z. To show that the set of associated primes is finite, we show that each fiber of this map is finite and that the image is finite; for the latter, we show that the image contains finitely many positive prime integers. Indeed, if a prime integer is contained in the image, it is contained in an associated prime and is therefore a zero divisor on the module, and Corollary 2 shows that there are finitely many such prime integers. For the fibers: the associated primes lying over the zero ideal are in correspondence with the associated primes of a localization, of which there are finitely many; on the other hand, the primes lying over a prime integer p are precisely the associated primes containing p, of which there are finitely many by the remark above.

4. Agreement of standard and mixed-characteristic Lyubeznik numbers

This section focuses on Question 1 from the introduction, concerned with the equality of standard and mixed-characteristic Lyubeznik numbers in their natural context, namely local rings of positive characteristic obtained from finitely generated Z-algebras. The first subsection is dedicated to establishing important preliminary results; the results established there are utilized in the current and the next section. In the second subsection we prove Theorem A; in fact, we prove a more precise statement. We refer the reader to the introduction for the definitions of the standard and the mixed-
characteristic Lyubeznik numbers.

4.1. Preliminary lemmas. Throughout this subsection, suppose that T is a regular local ring of mixed characteristic p, with residue field k; it follows that p is part of a regular system of parameters. We repeatedly use, without reference, the fact that local cohomology modules of T with support in arbitrary ideals have finite Bass numbers.

Lemma 4. If M is a T-module on which multiplication by p is zero, then the modules Ext_T^i(k, M) decompose in terms of the corresponding Ext modules over T/pT, for every integer i.

Proof. Let p, y_2, ..., y_n be a sequence that together forms a regular system of parameters of T. The Koszul complex on p with coefficients in M equals the complex given by multiplication by p on M; it therefore has a copy of M as a summand in degree zero and a summand in degree one and, by the hypothesis that multiplication by p is the zero map on M, the differential between them is zero. Given this, the differentials of the full Koszul complex preserve the summand decomposition; in fact, it is straightforward to verify that, up to sign, the induced maps on the summands agree with those of the Koszul complexes on y_2, ..., y_n; in other words, there is an isomorphism of complexes splitting the Koszul complex on the full system of parameters into two shifted copies of the Koszul complex on y_2, ..., y_n. With this choice we may compute the modules Ext_T^i(k, M) as Koszul cohomology. Moreover, the images of y_2, ..., y_n modulo p form a system of parameters of T/pT, and the Koszul complex modulo p, with M considered as a module over T/pT, is the complex used to compute the modules Ext over T/pT; the lemma follows.

We use Lemma 4 to relate the Bass numbers of certain local cohomology modules of T to those of certain local cohomology modules of the reduction modulo p.

Corollary 3. If a is an ideal of T such that multiplication by p is injective on H_a^j(T) for every j, then for every pair of nonnegative integers i and j, the Bass number dim_k Ext_T^i(k, H_a^j(T)) is determined by the corresponding Bass numbers, over T/pT, of the reduction of H_a^j(T) modulo p.

Proof. The hypothesis guarantees that the long exact sequence of local cohomology associated to the short exact sequence given by multiplication by p on T breaks into short exact sequences whose terms are H_a^j(T), H_a^j(T), and the reduction of H_a^j(T) modulo p. In turn, each such short exact sequence induces a long exact sequence of the modules Ext_T(k, -); since multiplication by p is zero on these modules, we obtain short exact sequences relating consecutive Ext modules of H_a^j(T) and the Ext modules of the reduction, for every pair of integers. We are now ready to prove the corollary by induction: the base case is automatic, as the relevant modules are zero, and the fact that the previous Ext module is also zero follows. For the inductive step, the short exact sequences show that the dimension of the middle term is the sum of the dimensions of the outer terms; by Lemma 4, the middle term also decomposes into two Ext modules over T/pT; substituting the second identity into the first, the inductive hypothesis implies that two of the terms are equal, leaving the identity for the next index, which allows us to conclude the proof.

We now shift attention to the study of certain local cohomology modules with support in ideals containing the prime integer p.
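One standard splitting of the type used in Lemma 4 can be written as follows; the precise indexing here is our reconstruction from the proof sketched above, under the stated hypotheses that T is regular local of mixed characteristic with p part of a regular system of parameters and that M is killed by p:

```latex
\operatorname{Ext}^{i}_{T}(k,\,M)\;\cong\;
\operatorname{Ext}^{i}_{T/pT}(k,\,M)\,\oplus\,\operatorname{Ext}^{i-1}_{T/pT}(k,\,M)
\qquad\text{for every integer } i,
```

which follows from the splitting of Koszul complexes
K(p, y_2, ..., y_n; M) into K(y_2, ..., y_n; M) plus a shifted copy of the same complex, since the differential coming from p is zero on M.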
cohomology induced short exact sequence implies integers establishing claim next let set since multiplication injective haj localization map haj haj injective therefore long exact sequence haj haj breaks short exact sequences haj haj unit multiplication haj surjective implies multiplication must also surjective quotient follows part lemma multiplication surjective hbj consequently long exact sequence local cohomology associated short exact sequence homomorphism sends class class hbj hbj induces short exact hbj sequences every finally expansions agree may replace module short exact sequence corollary ideal multiplication injective haj every every pair nonnegative integers dimk extit hbj dimk extit proof short exact sequence given lemma induces long exact sequence haj extt exti exti multiplication zero also zero every module sequence therefore breaks short exact sequences extt extt hand setting haj lemma also shows extit haj extit haj comparing two descriptions extit hbj inducing show dimk extit dimk extit lary follows isomorphism lemma lyubeznik numbers injective dimension agreement lyubeznik numbers theorem statement lyubeznik numbers rings obtained polynomial ring killing prime integer localizing prime ideal set notation needed precisely describe process notation also appear theorem let denote polynomial ring ideal set furthermore prime integer ideal containing prime integer ideal expands prime ideal canonical map rings abusing notation also use denote expansion rings also set regular local ring mixed characteristic fact localization polynomial ring particular regular local ring prime must part minimal generating set maximal ideal finally set isq psq prove theorem show exists set prime integers whenever establish slightly statement theorem theorem notation let denote set prime integers zero divisors hij finite set corollary proof exact sequence exact sequence set dim term short exact sequence agrees expansion sequences imply dimk dimk exti dimk extit hit 
Since multiplication by p is injective on H_I^j(S) for every j, it follows that multiplication by p is also injective on the localized modules for every j. In light of this, we may combine Corollary 3 with the description above; similarly, we may apply the analogous corollary to the mixed-characteristic Lyubeznik numbers, and conclude by comparing the two, which completes the proof.

5. Affirmative answers to Lyubeznik's question

In this section we consider Lyubeznik's question, Question 2 of the introduction, which asks the following: if a is an ideal of a regular local ring S with H_a^j(S) nonzero, is inj dim_S H_a^j(S) less than or equal to dim Supp_S H_a^j(S), and is the iterated local cohomology module H_m^i(H_a^j(S)) injective for every i?
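In symbols, the two parts of Lyubeznik's question may be transcribed as follows (our notation, following the statement above):

```latex
\operatorname{inj\,dim}_{S} H^{j}_{\mathfrak a}(S)\;\le\;
\dim \operatorname{Supp}_{S} H^{j}_{\mathfrak a}(S),
\qquad\text{and}\qquad
H^{i}_{\mathfrak m}\!\bigl(H^{j}_{\mathfrak a}(S)\bigr)\ \text{is injective for every } i,
```

for every nonzero local cohomology module H_a^j(S), where m denotes the maximal ideal of S.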
dimk extit dim suppt hbj assume momentarily suppt hbj suppt lyubeznik numbers injective dimension regard latter set subset spec spec given may rewrite equivalent condition dimk extit dim suppt condition holds corollary thus conclude proof justify towards notice localizing sequence given lemma prime ideal gives short exact sequence multiplication injective unless zero module vanishes words suppt suppt establish simply note side agrees suppt hbj lemma side agrees suppt theorem theorem let polynomial ring ideal let denote set prime integers zero divisors hij finite set corollary supps hij lie prime inj dims hij dim supps hij proof suppose prime ideal supps hij lying let let residue proof proposition reduce proving statement appeal stress however reduction step one used proof proposition since bass numbers hit respect prime ideals vanish beyond dimension support lemma enough show dimk extit hit dim suppt hit corollary may restated dimk hit dim suppt hit choice shows dim suppt hit dim suppt hit therefore condition holds whenever dimk dim suppt hit however last condition holds corollary condition local cohomology descends let polynomial ring ideal subsection consider condition hij appears statement theorem statement theorem first consider occurs condition fails fix integer set hij associated prime lies consequently prime supps contains associated prime preceding observation shows prime integer follows well however element associated prime zero divisor therefore lies prime contained set appearing statement theorem follows witt local cohomology modules theorem vacuous provide concrete example situation example let polynomial ring six indeterminates let denote monomial ideal corresponding reisner variety know nonzero example hand remark therefore proposition preceding discussion theorem empty statement case see section example next consider situation local cohomology descends corollary show condition guarantees theorem vacuous start lemma well known experts lemma finitely 
generated algebra finitely many prime integers units proof suppose exists sequence prime integers units generic freeness lemma exists integer free prime integer divide therefore however multiplication injective free also contradicts therefore assumption every corollary let domain contains finitely generated algebra let ideal hij finitely many prime integers exists prime suppa hij proof hypothesis hij implies exists associated prime hij set hypothesis also domain shows generated algebra lemma exists prime unit prime otherwise would imply unit prime ideal containing proper ideal contains construction suppa hij since contains positive answers question iterated local cohomology subsection address lyubeznik second question concerning injectivity certain iterated local cohomology modules start discussion recall result second fourth authors result plays important role current subsequent section remark criterion injectivity let power series ring complete noetherian discrete valuation ring mixed characteristic let denote ring operators iterated local cohomology module example bass numbers theorem consequently assume supported maximal ideal lemma implies injective multiplication surjective remark specialized criterion injectivity fix prime integer let either polynomial ring localization let completion maximal ideal generated variables note regardless choice power series ring integers fix iterated local cohomology module suppose supported maximal ideal condition lyubeznik numbers injective dimension implies regarded natural way isomorphic iterated local cohomology module thus remark multiplication surjective injective however supported implies injective well proposition let polynomial ring let ideal generated variables fix ideal let denote set prime integers zero divisors iterated local cohomology module form hij hni hij finite set iterated local cohomology modules hmi hij corollary hmi injective fact hmi proof consider long exact sequence hmi hij hni hij hij hij choice 
localization map sequence injective therefore map hni hij hij surjective therefore multiplication surjective hni hij also surjective quotient hmi hij iterated local cohomology module hmi hij supported follows remark hmi hij injective next consider long exact sequence localization maps sequence injective obtain short exact sequences induces hmi hmi since contained modules form hmi hij vanish thus hmi isomorphic latter already established injective negative answer lyubeznik question injective dimension section following conventions clarity notation set also let localization set let denote maximal ideal generated indeterminates let ideal generated following monomials remark goal section show used provide negative answer lyubeznik question question introduction begin important preliminary observation consider long exact sequence vanishes shown remark proof proposition unless furthermore known remark polynomial ring may apply proposition conclude hij dim dim therefore long exact sequence reduces witt lemma supps proof let denote ideal generated let direct computation shows therefore particular module vanishes localizing supps following computations imply hbi substituting vanishing long exact sequence shows therefore last equality holds since generated three monomials substituting vanishing long exact sequence shows surjects onto follows supps supp next construct another closed set containing supps using similar argument let ideal generated let shows hdij combining long exact sequence shows last equality holds since generated three monomials long exact sequence imply surjects onto isomorphisms analogous conclude supps contained supps supps notice interchanging set generators remains thus applying symmetry implies supps well containments allow conclude supps lyubeznik numbers injective dimension finally supps lemma multiplication surjective particular proof corollary shown multiplication surjective mind suppose multiplication surjective aim derive contradiction showing 
implies must also surjective consider following commutative diagram whose rows sequence supposing last column exact show via diagram chase third column also exact take exists assumption addition exists since diagram commutative ker thus exists since multiplication automorphism exist let arbitrary shows multiplication surjective earlier discussion contradiction results hand able show answer lyubeznik question question introduction always positive theorem theorem let denote completion injective one support particular dimension hit local cohomology module provides negative answer parts lyubeznik question proof supported lemma naturally module pletion moreover hit supported maximal ideal multiplication surjective lemma know module injective remark hit apply theorem see inj dimt hit dim suppt hit acknowledgments authors began work program commutative algebra mathematical sciences research institute msri thank msri hospitality support includes postdoctoral fellowships fourth authors authors also grateful national science foundation nsf national council science technology mexico conacyt support author supported nsf postdoctoral research fellowship second partially supported conacyt grant nsf grant third nsf grant last nsf grant witt references josep montaner characteristic cycles local cohomology modules monomial ideals pure appl algebra josep montaner manuel blickle gennady lyubeznik generators positive characteristic math res josep montaner ricardo santiago zarzuela armengou local cohomology arrangements subspaces monomial ideals adv montaner vahidi lyubeznik numbers monomial ideals trans amer math josep montaner kohji yanagawa lyubeznik numbers local rings linear strands graded ideals preprint arxiv manuel blickle raphael bondu local cohomology multiplicities terms cohomology ann inst fourier grenoble bhargav bhatt manuel blickle gennady lyubeznik anurag singh wenliang zhang local cohomology modules smooth finitely many associated primes invent markus brodmann rodney 
sharp local cohomology algebraic introduction geometric applications volume cambridge studies advanced mathematics cambridge university press cambridge cohen structure ideal theory complete local rings trans amer math sabbah topological computation local cohomology multiplicities collect dedicated memory fernando serrano melvin hochster craig huneke tight closure equal characteristic zero melvin hochster joel roberts rings invariants reductive groups acting regular rings advances craig huneke rodney sharp bass numbers local cohomology modules trans amer math srikanth iyengar graham leuschke anton leykin claudia miller ezra miller anurag singh uli walther hours local cohomology volume graduate studies mathematics american mathematical society providence kawasaki lyubeznik number local cohomology modules bull nara univ natur kawasaki highest lyubeznik number math proc cambridge philos gennady lyubeznik local cohomology modules hai ideals generated monomials complete intersections acireale volume lecture notes pages springer berlin gennady lyubeznik finiteness properties local cohomology modules application dmodules commutative algebra invent gennady lyubeznik applications local cohomology characteristic reine angew gennady lyubeznik finiteness properties local cohomology modules regular local rings mixed characteristic unramified case comm algebra special issue honor robin hartshorne gennady lyubeznik local cohomology invariants local rings math luis local cohomology modules polynomial power series rings rings small dimension illinois luis emily witt lyubeznik numbers mixed characteristic math res luis emily witt wenliang zhang survey lyubeznik numbers preprint peskine szpiro dimension projective finie cohomologie locale applications conjectures auslander bass grothendieck inst hautes sci publ gerald allen reisner quotients polynomial rings advances lyubeznik numbers injective dimension uli walther lyubeznik numbers local ring proc amer math electronic wenliang zhang 
highest lyubeznik number local ring compos caijun zhou higher derivations local cohomology modules algebra department mathematics university michigan ann arbor email address dhernan department mathematics university virginia charlottesville email address department mathematics statistics georgia state university atlanta email address jperezvallejo department mathematics kansas university lawrence email address witt
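As a recap of a recurring tool in the support arguments above: multiplication by p is studied through the long exact sequence of local cohomology. The following is a hedged sketch (notation assumed: T a ring of mixed characteristic p, a an ideal of T):

```latex
% Applying local cohomology to 0 \to T \xrightarrow{p} T \to T/pT \to 0 gives
\cdots \longrightarrow H^{j}_{\mathfrak a}(T) \xrightarrow{\;p\;}
H^{j}_{\mathfrak a}(T) \longrightarrow H^{j}_{\mathfrak a}(T/pT)
\longrightarrow H^{j+1}_{\mathfrak a}(T) \longrightarrow \cdots
% If p acts injectively on every H^{j}_{\mathfrak a}(T), the connecting maps
% vanish and the sequence breaks into short exact sequences
0 \longrightarrow H^{j}_{\mathfrak a}(T) \xrightarrow{\;p\;}
H^{j}_{\mathfrak a}(T) \longrightarrow
H^{j}_{\mathfrak a}(T)/\,p\,H^{j}_{\mathfrak a}(T) \longrightarrow 0,
% so that H^{j}_{\mathfrak a}(T/pT) \cong H^{j}_{\mathfrak a}(T)/pH^{j}_{\mathfrak a}(T)
% and in particular
% \operatorname{Supp}_T\bigl(H^{j}_{\mathfrak a}(T)/pH^{j}_{\mathfrak a}(T)\bigr)
% \subseteq \operatorname{Supp}_T\bigl(H^{j}_{\mathfrak a}(T)\bigr).
```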
tests weights global minimum variance portfolio setting taras bodnara nov solomiia dmytrivb nestor parolyac wolfgang schmidb department mathematics stockholm university stockholm sweden department statistics european university viadrina frankfurt oder germany institute statistics leibniz university hannover hannover germany department economics heidelberg university heidelberg germany abstract study construct two tests weights global minimum variance portfolio gmvp setting namely number assets depends sample size tends infinity considered tests based sample estimator shrinkage estimator gmvp weights derive asymptotic distributions test statistics null alternative hypotheses moreover provide simulation study power functions proposed tests compared existing approaches observe test performs well based shrinkage estimator even values close one keywords finance portfolio analysis global minimum variance portfolio statistical test shrinkage estimator random matrix theory introduction financial markets developed rapidly recent years amount money invested risky assets substantially increased due investor must knowledge optimal portfolio proportions order receive large expected return time reduce level risk associated investment decision since markowitz presented analysis many works optimal portfolio selection published however investors faced difficulties practical implementation investing theories since sampling error present unknown theoretical quantities estimated classical asymptotic analysis almost always assumed sample size increases size portfolio namely number included assets remains constant corresponding author taras bodnar tel fax jobson korkie okhrin schmid nowadays case often called standard asymptotics see cam yang traditional estimator optimal portfolio sample estimator consistent asymptotically normally distributed however many applications number assets portfolio large comparison sample size portfolio dimension sample size tend tends concentration ratio 
case infinity simultaneously faced asymptotics kolmogorov asymptotics see van geer bai shi cai shen whenever dimension data large classical limit theorems longer suitable traditional estimators result serious departure optimal estimators asymptotics bai silverstein methods fail provide consistent estimators unknown parameters asset returns mean vector covariance matrix generally greater concentration ratio worse sample estimators cases new test statistics must developed completely new asymptotic techniques must applied derivations several studies deal asymptotics portfolio theory using results random matrix theory see frahm jaekel laloux recently bodnar parolya schmid presented estimator global minimum variance portfolio gmvp weights bodnar okhrin parolya derived optimal shrinkage estimator portfolio testing efficiency portfolio classical problem finance former literature focuses case standard asymptotics considers exact tests fixed example gibbons ross shanken provided exact efficiency given portfolio derived inference procedures efficient portfolio weights based application linear regression recently bodnar schmid presented test general linear hypothesis portfolio weights case elliptically contoured distributions contribution study derivation statistical techniques testing efficiency portfolio asymptotics two statistical tests considered whereas first approach based asymptotic distribution test statistic suggested bodnar schmid highdimensional setting second test makes use shrinkage estimator gmvp weights provides powerful alternative existing methods best knowledge analysis first time shrinkage approach applied statistical test theory paper structured follows section discuss main results distributional properties optimal portfolio weights presented okhrin schmid section new test based shrinkage estimator gmvp weights derived highdimensional version test based test statistics given bodnar schmid proposed asymptotic distributions test statistics null hypothesis 
alternative hypothesis obtained corresponding power functions tests presented section power functions proposed tests compared different values comparison study test glombeck considered well conclude section proofs given appendix estimation optimal portfolio weights consider financial market consisting risky assets let denote vector returns risky assets time suppose cov covariance matrix assumed positive definite let consider single period investor invests gmvp one commonly used portfolios see example memmel kempf frahm memmel okhrin schmid bodnar schmid glombeck others portfolio exhibits smallest attainable portfolio variance constraint denotes vector ones stands vector portfolio weights weights gmvp given wgm practical implementation framework spirit markowitz relies estimating first two moments asset returns know true covariance matrix usually replaced sample estimator based sample historical asset returns given replacing sample estimator obtain estimator gmvp weights expressed note estimator gmvp weights exclusively function estimator covariance matrix assuming asset returns follow stationary gaussian process mean covariance matrix okhrin schmid proved vector estimated optimal portfolio weights asymptotically normal additional assumption independence derived exact distribution okhrin schmid showed distribution arbitrary components dimensional degrees freedom wgm cov consequently wgm obtained deleting last element wgm consist first elements degrees freedom eters wgm distribution denoted wgm since test theory gmvp high dimensions time point investor interested know whether portfolio holding coincides true gmvp reconstructed reason consider following testing problem wgm wgm known vector example weights holding portfolio thus problem analyses whether true gmvp weights equal given values bodnar schmid analysed general linear hypothesis gmvp portfolio weights introduced exact test assuming asset returns independent elliptically contoured distributed moreover derived 
exact distribution test statistic null hypothesis alternative hypothesis main focus study portfolios want consider testing problem environment assuming note case depend well thus would precise write following ignore fact order simplify notation moreover turns sample covariance matrix longer good estimator covariance matrix see bai silverstein bai shi reason unclear well test bodnar schmid behaves context first study behaviour asymptotics propose alternative test makes use shrinkage estimator portfolio weights bodnar parolya schmid recent years several studies dealt estimators unknown portfolio parameters asymptotics applications portfolio theory glombeck formulated tests portfolio weights variances excess returns sharpe ratios gmvp bodnar parolya schmid bodnar okhrin parolya derived shrinkage estimators gmvp portfolio respectively kolmogorov asymptotics test based mahalanobis distance bodnar schmid proposed test general linear hypothesis weights global minimum variance portfolio interested special case case test statistic given consists first elements number assets portfolio fixed shown null hypothesis moreover density alternative hypothesis equal ftn wgm wgm stands hypergeometric function see abramowitz stegun chap thus exact power function test known note result also valid elliptically contoured distributions see bodnar schmid hand several computational difficulties appear power function test calculated large values since involves hypergeometric function whose computation challenging large values order deal problem derive asymptotic distribution setting result given theorem proof appendix since depends write rest paper theorem let assume sequence independent normally distributed random vectors mean covariance matrix assumed positive definite let holds null hypothesis results theorem lead asymptotic expression power function given standard normal distribution figure plot power function function several values solid line addition empirical power test shown values 
dashed line equal relative number rejections null hypothesis obtained via simulation study remarkable following proof theorem considered simulation study considerably simplified instead generating random matrix asset returns simulation run simulate four independent random variables standard univariate distributions compute statistic given value following stochastic representation appendix namely simulation study performed following way generate four independent random variables fixed compute iii repeat steps calculate empirical power indicator function set figure observe good performance asymptotic approximation power function approximation works perfectly small large values test based shrinkage estimator cases unknown parameters asset return distribution replaced sample counterparts optimal portfolio constructed recent years however figure asymptotic power function empirical power function different values function various values significance level types estimators shrinkage estimators discussed well see okhrin schmid bodnar parolya schmid shrinkage methodology introduced stein results extended efron morris case covariance matrix unknown shrinkage methodology applied expected asset returns jorion covariance matrix bodnar gupta parolya applications appear successful reducing damaging influences portfolio selection shrinkage estimator applied directly portfolio weights golosnoy okhrin okhrin schmid showed shrinkage estimators portfolio weights lead decrease variance portfolio weights increase utility bodnar parolya schmid proposed new shrinkage estimator weights gmvp turns provide better results case existing estimators estimator based convex combination sample estimator gmvp weights arbitrary constant vector expressed gse index gse stands general shrinkage
estimator assumed vector constants uniformly bounded bodnar parolya schmid proposed determining optimal shrinkage intensity given target portfolio risk minimal gse wgm gse wgm minimized respect result leads authors showed optimal shrinkage intensity almost surely asymptotically equivalent quantity given rbn rbn rbn relative loss target portfolio variance target portfolio variance gmvp result provides estimator optimal shrinkage intensity given corresponding portfolio weights given using estimated shrinkage intensity esi theorem show estimated intensity asymptotically normally distributed proof theorem given appendix bodnar parolya schmid proved ratio theorem let assume sequence independent normally distributed random vectors mean covariance matrix assumed positive definite rbn rbn rbn rbn next introduce test based estimated shrinkage intensity motivation based following equivalences rbn result means variance portfolio based equal variance gmvp finding turn means min wgm wgm since gmvp weights uniquely determined result valid wgm choosing holds wgm thus possible obtain test structure gmvp using shrinkage intensity hypothesis given let testing use test statistic note theorem suppose conditions theorem satisfied nan given statement theorem null hypothesis proof theorem follows directly theorem result gives promising new approach detecting deviations true portfolio weights given quantities using theorem able make statement power function test since depend replace quantity holds note approximation given purely function rbn property main difference test discussed section power function function properties useful analyse performances tests simplify power analysis power power relative loss portfolio rbn relative loss portfolio rbn figure asymptotic power function test function rbn various values significance level number observations figure power test shown function rbn seen test performs better smaller values increasing values power test decreases determine power function 
two different sample numbers expected test shows better performance larger values since increases numerator expression cumulative distribution function becomes negative whole expression tends one comparison study aim section compare several tests weights gmvp preceding two subsections considered two tests weights gmvp test based empirical portfolio weights exact distribution test statistic known section asymptotic power function test proposed bodnar schmid derived setting section new test proposed asymptotic power function purely depends rbn determined fact tests depend different quantities complicates comparison tests note rbn wgm wgm section tests compared additionally include test presented glombeck theorem comparison study design comparison study let positive definite covariance matrix asset returns number samples structure covariance matrix chosen following way oneninth eigenvalues set equal set equal rest set equal ensure eigenvalues dispersed increases spectrum covariance matrix change behaviour covariance matrix determined follows diagonal matrix whose diagonal elements predefined eigenvalues matrix eigenvectors obtained spectral decomposition standard random matrix consider following scenario modelling changes alternative hypothesis covariance matrix defined denotes change standard deviation given diag order demonstrate influence build norm difference wgm wgm function difference relates proportional transaction costs moving new optimal portfolio weights wgm figure see largest influence portfolio composition observed larger influence decreases results obtained scenario present almost linear relationship norm difference vector size change worth mentioning differences zero changes occur difference difference change figure differences change changes main diagonal changes main diagonal change change changes main diagonal changes main diagonal function change difference difference comparison tests section present results simulation study compare powers three 
tests simulation based independent realizations significance level chosen concentration ratio takes value within set order illustrate performance tests based shrinkage approach test based statistic bodnar schmid test proposed glombeck empirical power functions general hypothesis evaluated figure figure empirical power functions three tests different values changes main diagonal according scenario given figure elements main diagonal covariance matrix contaminated observe slow increase power functions better behaviour smaller values case significant difference performance tests power curves glombeck test test bodnar schmid close whereas test presented glombeck shows slightly better performance one presented bodnar schmid test based shrinkage approach outperforms competitors figure illustrates behaviour tests detect improvement performances tests values case test bodnar schmid outperforms shrinkage approach glombeck test whereas test appears worst one situation occurs powers glombeck test test bodnar schmid almost coincide test based estimated shrinkage intensity lies rest competitors figure empirical power functions three tests different values changes main diagonal according scenario given
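The comparison-study design above — a covariance matrix with a prescribed eigenvalue spectrum (one-ninth of the eigenvalues large, the rest smaller) and eigenvectors taken from the spectral decomposition of a random matrix — together with the GMVP formula w = Σ⁻¹1 / (1'Σ⁻¹1) can be sketched as follows. The concrete eigenvalues (10, 3, 1), the Gaussian-QR construction of the eigenvector matrix, and all function names are illustrative assumptions, not the authors' exact setup.

```python
import numpy as np

def make_covariance(p, rng):
    # Prescribed spectrum: one-ninth of the eigenvalues large, a further
    # block moderate, the rest small (values 10, 3, 1 are assumptions).
    k = p // 9
    eigvals = np.concatenate([
        np.full(k, 10.0),
        np.full(3 * k, 3.0),
        np.full(p - 4 * k, 1.0),
    ])
    # Eigenvector matrix from a standard random matrix (here: QR of a
    # Gaussian matrix, so q is orthogonal).
    q, _ = np.linalg.qr(rng.standard_normal((p, p)))
    return (q * eigvals) @ q.T  # Q diag(eigvals) Q'

def gmvp_weights(sigma):
    # Global minimum variance portfolio: w = Sigma^{-1} 1 / (1' Sigma^{-1} 1).
    ones = np.ones(sigma.shape[0])
    x = np.linalg.solve(sigma, ones)
    return x / (ones @ x)

rng = np.random.default_rng(0)
p, n = 45, 90  # concentration ratio c = p/n = 0.5
sigma = make_covariance(p, rng)
w_true = gmvp_weights(sigma)

# Sample estimator: plug the sample covariance of n Gaussian returns
# into the same formula.
returns = rng.multivariate_normal(np.zeros(p), sigma, size=n)
w_hat = gmvp_weights(np.cov(returns, rowvar=False))
```

Both the true and the sample GMVP weights sum to one by construction, which is the constraint w'1 = 1 imposed throughout the paper.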
applied test theory first time asymptotic distributions test statistics obtained null alternative hypotheses setting finding great advantage respect approaches appear literature elaborate distribution alternative hypothesis glombeck order compare performances proposed procedures empirical power functions derived tests determined shown test based shrinkage approach performs uniformly better tests considered analysis moderate large values concentration ratio approach seems promising testing portfolio weights situation acknowledgement research partly supported german science foundation dfg via projects schm bayesian estimation optimal portfolio weights risk measures references abramowitz stegun pocketbook mathematical functions verlag harri deutsch frankfurt main bai shi estimating high dimensional covariance matrices applications annals economics finance bai silverstein spectral analysis large dimensional random matrices springer new york bodnar gupta parolya strong convergence optimal linear shrinkage estimator large dimensional covariance matrix journal multivariate analysis bodnar gupta parolya direct shrinkage estimation large dimensional precision matrix journal multivariate analysis bodnar okhrin parolya optimal portfolio selection high dimensions url http bodnar parolya schmid estimation global minimum variance portfolio high dimensions european journal operational research press url https bodnar schmid test weights global minimum variance portfolio elliptical model metrika sampling error estimates efficient portfolio weights journal finance van geer statistics data methods theory applications springer berlin heidelberg cai shen data analysis world scientific singapore cam yang asymptotics statistics basic concepts springer new york dasgupta asymptotic theory statistics probability springer new york efron morris families minimax estimators mean multivariate normal distribution annals statistics frahm jaekel tyler random matrix theory generalized elliptical 
distributions applications finance ssrn electronic journal url https frahm memmel dominating estimators portfolios journal econometrics gibbons ross shanken test efficiency given portfolio econometrica glombeck statistical inference global minimum variance portfolios scandinavian journal statistics golosnoy okhrin multivariate shrinkage optimal portfolio weights european journal finance jobson korkie performance hypothesis testing sharpe treynor measures journal finance jorion estimation portfolio analysis journal financial quantitative analysis laloux cizeau potters bouchaud random matrix theory financial correlations international journal theoretical applied finance markowitz portfolio selection journal finance memmel kempf estimating global minimum variance portfolio schmalenbach business review muirhead aspects multivariate statistical theory wiley new york okhrin schmid distributional properties portfolio weights journal econometrics okhrin schmid comparison different estimation techniques portfolio selection asta advances statistical analysis okhrin schmid estimation optimal portfolio weights international journal theoretical applied finance stein inadmissibility usual estimator mean multivariate normal distribution proceedings third berkeley symposium mathematical statistics probability volume contributions theory statistics university california press berkeley california appendix section proof theorems given proof theorem first derive stochastic representation let symbol denote equality distribution holds see proof theorem bodnar schmid wgm wgm independent moreover using muirhead theorems obtain wgm wgm wgm last equality shows conditional distribution given depends consequently conditional distribution coincides using distributional properties obtain following stochastic representation given hence stochastic representation expressed independent applying obtain using asymptotic properties infinite degrees freedom independence application slutsky lemma see 
example theorem dasgupta leads proof theorem order stress dependence use notation following using proposition glombeck rewrite using follows since ndn nen since follows iin furthermore iin consequently iin since min thus condition fulfilled holds
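The shrinkage test above is built on the estimated optimal intensity combining the sample GMVP weights with a fixed target b (b'1 = 1), and the empirical power curves are Monte Carlo rejection rates. A minimal numerical sketch follows; the exact relative-loss estimator used in the paper is only partially recoverable here, so the formula for r_b and psi below — based on our reading of Bodnar, Parolya and Schmid — should be treated as an assumption, as should all names and parameter choices.

```python
import numpy as np

def shrinkage_gmvp(returns, b):
    """Convex combination psi * w_hat + (1 - psi) * b of the sample GMVP
    weights w_hat and a fixed target b with b'1 = 1.  The intensity psi
    and the relative-loss estimator r_b are assumed forms, not verbatim
    from the paper."""
    n, p = returns.shape
    c = p / n
    s = np.cov(returns, rowvar=False)
    ones = np.ones(p)
    s_inv_1 = np.linalg.solve(s, ones)
    w_hat = s_inv_1 / (ones @ s_inv_1)
    # Estimated relative loss of the target portfolio variance over the
    # GMVP variance (assumed consistent estimator):
    r_b = (1 - c) * (b @ s @ b) * (ones @ s_inv_1) - 1
    psi = (1 - c) * r_b / ((1 - c) * r_b + c)
    return psi * w_hat + (1 - psi) * b, psi

def empirical_power(reject_fn, simulate_fn, n_rep=500, rng=None):
    # Monte Carlo rejection rate: the indicator-function average used for
    # the empirical power curves in the simulation studies above.
    rng = rng if rng is not None else np.random.default_rng(0)
    return sum(bool(reject_fn(simulate_fn(rng))) for _ in range(n_rep)) / n_rep

rng = np.random.default_rng(1)
p, n = 30, 120
sigma = 0.8 * np.eye(p) + 0.2   # equicorrelated, positive definite
ret = rng.multivariate_normal(np.zeros(p), sigma, size=n)
b = np.zeros(p); b[0] = 1.0     # target: hold the first asset only
w_shr, psi = shrinkage_gmvp(ret, b)
```

Since both w_hat and b sum to one, the shrinkage portfolio satisfies the budget constraint for any intensity; the test of Section 3 then amounts to checking whether the estimated intensity is significantly different from one.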
embeddings spherical homogeneous spaces characteristic feb rudolf tange summary let reductive group algebraically closed field characteristic study properties embeddings spherical homogeneous look frobenius splittings canonical power compatible certain subvarieties show existence rational resolutions toroidal embeddings give results cohomology vanishing surjectivity restriction maps global sections line bundles show class homogeneous spaces results hold contains symmetric homogeneous spaces characteristic closed parabolic induction introduction let algebraically closed field prime characteristic let connected reductive group let closed scheme homogeneous space see spherical interested properties embeddings always normal equivariant existence equivariant rational resolutions toroidal embeddings canonical frobenius splittings compatible splittings cohomology vanishing surjectivity restriction maps global sections line bundles idea show class spherical homogeneous spaces whose embeddings nice properties contains symmetric homogeneous spaces characteristic reductive group embeddings characteristic closed parabolic induction previously special cases studied wonderful compactification adjoint group large wonderful compactification adjoint symmetric homogeneous space arbitrary embeddings reductive groups embeddings homogeneous spaces induced reductive groups describe issues arise general approach let image neutral element canonical map let borel subgroup orbit open let maximal torus shown reducing toroidal case one splitting power compatibly splits prime divisors implies certain cohomology vanishing results general situations symmetric homogeneous spaces need longer true firstly one construct splitting wonderful compactification power since closed key words phrases equivariant embedding spherical homogeneous space frobenius splitting mathematics subject classification tange orbit necessarily borel subgroup therefore splitting general power since seems necessary splitting 
power get cohomology vanishing results using der kallen theorem thm splitting interest results good filtrations use types splittings furthermore one expect splitting compatibly splits prime divisors since schematic intersections need reduced see finally one expect usual cohomology vanishing results arbitrary spherical varieties characteristic see example example ample line bundle projective spherical variety nonzero need make assumptions homogeneous space paper organised follows section introduce notation state basic assumptions prove technical lemmas section prove three main results result requires one two extra assumptions turns implies see remark proposition show assuming every smooth toroidal embedding split power compatibly splits closed subvarieties theorem states assuming every equivariant rational resolution toroidal embedding leads results frobenius splitting cohomology vanishing surjectivity restriction maps global sections line bundles proofs proposition theorem use ideas together fact split power lemma fact certain expression canonical divisor contains every prime divisor strictly negative coefficient lemma theorem states assuming every split compatible closed subvarieties section show class homogeneous spaces satisfy contains symmetric spaces section show class stable parabolic induction preliminaries retain notation introduction recall canonical morphism hred reduced scheme associated homeomorphism see iii usually deal varieties unless explicitly use term scheme schemes always supposed algebraic use ordinary notation notions like inverse image intersection special notation schematic analogues frobenius splitting give brief sketch basics frobenius splitting introduced mehta ramanathan precise statements refer start affine case commutative ring characteristic put subring first assume reduced one define frobenius splitting direct complement example polynomial ring frobenius splitting given restricted monomials embeddings spherical homogeneous spaces 
exponents less than the characteristic; different choices give different splittings. Of course, a Frobenius splitting can also be seen as a projection of A onto A^p. From that point of view one can take p-th roots in the definition and obtain the usual definition of a Frobenius splitting: a homomorphism of abelian groups σ : A → A such that σ(a^p b) = a σ(b) for all a, b and σ(1) = 1. This definition also works for non-reduced rings, although it follows immediately that a Frobenius split ring is always reduced. Finally, one can sheafify the definition and obtain the definition of a Frobenius splitting of a scheme X, separated and of finite type over k. Let F : X → X be the absolute Frobenius morphism: the morphism of schemes which is the identity on the underlying topological space and the p-th power map on the structure sheaf. A Frobenius splitting of X is a morphism σ : F_*O_X → O_X which maps 1 to 1; therefore we may also consider it as a morphism of O_X-modules splitting the natural map O_X → F_*O_X. If σ is a Frobenius splitting of X, then a closed subscheme Y of X is called compatibly split if its ideal sheaf is stable under σ; a compatibly split subscheme is itself Frobenius split.

From the existence of a Frobenius splitting one can deduce cohomology vanishing results. For example, if X is a Frobenius split variety which is proper over an affine variety and L is an ample line bundle on X, then the higher cohomology of L vanishes. This is based on the fact that in the presence of a Frobenius splitting one can embed the cohomology of L in that of its p-th power. Using compatible splittings, or a splitting relative to a divisor, one can obtain more refined results. An important result is that Frobenius splittings of a smooth variety X correspond to certain nonzero global sections of the (1-p)-th power of the canonical line bundle of X; this result was first proved for X smooth and projective. Since this space of global sections may be zero, it also shows that not every smooth variety is Frobenius split. If a connected algebraic group G acts on a variety X, then it also acts on Hom_{O_X}(F_*O_X, O_X), which turns out to be a vector space on which G acts rationally; for X smooth this space is the space of global sections of the (1-p)-th power of the canonical bundle. For a G-variety X one can define a Frobenius splitting to be B-canonical if it is killed by the divided powers of the root vectors of the simple roots, relative to the action of Dist(G), the distribution algebra (hyperalgebra) of G. The notion of a B-canonical splitting was introduced by Mathieu, who proved the following result: if a G-variety X has a B-canonical splitting and L is a G-line bundle on X, then the space of global sections of L has a good filtration; for the notion of a good filtration we refer to the literature. We finish our discussion of Frobenius splittings with a lemma on B-canonical splittings. The lemma seems to be known, but we provide a proof for lack of a reference; it is needed in the proof of a proposition below. For P a parabolic subgroup of G containing B, denote by ρ_P the half sum of the roots of the unipotent radical of P, and denote by P^- the opposite parabolic. The character 2ρ_P, viewed as a character of T, is orthogonal to the
multiplication map surjective canonical bundle assertion follows spherical embeddings recall facts theory spherical embeddings detail refer normal embedding image open dense set nonzero rational functions denoted since open every rational function defined completely determined value group rational functions multiplicative group put homz discrete valuation function field determines function element particular prime divisor irreducible closed subvariety codimension determines element valuation completely determined image identify set valuations subset finitely generated convex cone spans complement open prime divisors irreducible components divide two classes ones called boundary divisors denoted others prime divisors intersect denoted possibly subscript called simple unique closed coloured cone simple embedding pair set colours set prime divisors whose closure contains closed orbit cone generated images boundary divisors image simple embedding completely determined coloured cone arbitrary embedding orbit closed orbit unique simple open coloured cones form coloured fan embedding embedding completely determined coloured fan generally coloured fans used describe morphisms embeddings different homogeneous spaces see sect called toroidal prime divisor intersects contains toroidal embeddings embeddings corresponding coloured fans without colours fans sense toric geometry whose support embeddings spherical homogeneous spaces characteristic contained toroidal denote complement union prime divisors intersect note every intersects denote closure say local structure theorem holds relative exists parabolic subgroup containing levi subgroup containing variety derived group acts trivially action induces isomorphism local structure theorem implies local structure theorem furthermore parabolic subgroup depends characterised stabiliser open stabiliser open subset bhred action left multiplication whenever closed scheme levi subgroup containing denote schematic intersection local 
structure theorem holds every rational function unique extension character group cocharacter group lemma let toroidal local structure theorem holds every intersects embedding whose fan furthermore closures normal local structure theorem also holds complete every closed isomorphic proof first assertion follows arguments proof thm sect normality closures follows toric case prop final assertion obvious let closed let corresponding simple open complete cone corresponding dimension dim rkx dim thm unique point must intersection let stabiliser bijective gequivariant morphism prop complete parabolic since contains every root least one root subgroups contained obvious local structure theorem isomorphism wonderful compactification smooth simple complete toroidal since simple toroidal determined cone rem also stated assumed local structure theorem holds tange since also complete cone see thm wonderful compactification unique exists furthermore since cones coloured fans always strictly convex contains line wonderful compactification local structure theorem holds therefore affine space also need following property smooth nonzero global section bundle weight divisor first sum prime divisors gstable strictly positive second sum boundary divisors toroidal smooth satisfies local structure nonzero global theorem easy see section weight unique scalar multiples divisor given formula integers crucial point coefficients strictly positive lemma proof prop let smooth assume local structure theorem holds toroidal embedding obtained removing orbits codimension assume map separable property proof since proof brief give bit detail reduce loc cit case smooth toroidal removing orbits codimension local structure theorem algebraic group acts automorphisms lie acts derivations put differently lie algebra homomorphism lie vect furthermore lie zariski tangent space dim dim dim dim dim let wedge product basis lie together lift lie basis lie lift basis lie elements lie exists separability assumption 
clearly weight applied vanishes complement open union prime divisors furthermore well known toric geometry using local structure theorem vanishes order one boundary divisors easy see nonzero global section bundle divisor zeros anypsmooth boundary divisors put like sheaf comes canonical embeddings spherical homogeneous spaces characteristic also toroidal log canonical sheaf note property let adjoint see section closed subgroup scheme containing let image canonical map next section make following assumptions see remark comments generated closed subgroup scheme normalises homogeneous space wonderful compactification local structure theorem see section holds wonderful compactification property characters acts top exterior powers txa follows easily local structure theorem holds toroidal one use fact every toroidal unique morphism maps see thm also observed specific situations prop prop concerning see acts easiest use functor points approach dual numbers commutative put indeterminate let smooth scheme define tangent bundle functor commutative sets follows furthermore obtain morphism functors given homomorphism maps let fiber see cor acts see sect also acts morphism fixes acts action clearly linear also act top exterior power see note central surjective morphism reductive groups closed subgroup schemes schematic inverse image satisfy relative denote boundary divisors note adjunction formula see prop restriction isomorphic lemma assume let smooth toroidal consider morphism extends canonical map isomorphic particular smooth property proof let unique scalar multiples nonzero rational section resp weight nonzero rational sections multiplied canonical sections boundary divisors resp let tange first show divisor equals local structure theorems clear involve boundary divisors enough show equal since equivariant picard group picg trivial pull back functions since enough show follows since equals isomorphism corresponds since two line bundle differ character weight clear 
isomorphism final assertion follows reducing case smooth toroidal removing orbits codimension note define canonical divisor divisor whose restriction smooth locus canonical divisor assuming canonical divisor given formula remarks assumptions important one example satisfied varieties unseparated flags essential local structure theorem isomorphism given action reduced deduce follows canonical isomorphisms lie txa lie obtain exact sequence lie txa used normal top exterior power tensor product top exterior powers txa lie action lie trivial since induced conjugation action trivial thus follows holds characteristic cartier theorem know examples satisfied see also prop coefficients determined least many special cases also done prime characteristic example symmetric space adjoint semisimple see also section amounts writing canonical divisor plus sum boundary divisors linear combination images pic prime divisors wonderful compactification form basis pic since pic embeds naturally pic closed orbit via restriction see line bundle corresponding divisor restricts canonical bundle pic amounts writing linear combination images notation prop images node satake diagram node satake diagram except split weights second list correspond exceptional simple roots see sect embeddings spherical homogeneous spaces characteristic two summands characteristic orbit closures normal see prop following example characteristic mentioned brion put let act frobenius morphism raises matrix entries power let symplectic form given form preserved define hfr frobenius morphism spherical zero locus orbit closure since singular locus codimension see cor prop result guarantees normality closures certain assumption frobenius splittings rational resolutions retain notation previous sections assume properties hold consider following assumptions restriction map surjective restriction map surjective kernel good filtration recall line bundle called positive power generated global sections proposition assume let 
smooth toroidal frobenius split power compatibly splits gstable closed subvarieties assume projective affine let line bundle let closed subvariety simple irreducible restriction map surjective proof local structure theorem holds toroidal arguing beginning sect toroidal smooth completion may assume complete well let lemma lemma restriction closed orbit isomorphism splitting lemma exists note assumption find lift lemma let canonical section splitting put compatibly splits boundary divisors therefore gstable closed subvarieties follows considering local expansion point closed orbit see thm thm thm details let restricted original put divisor prop reduced effective tange contains pnone splitting given compatible lem split injection linearly equivalent divisor see lemma since quasiprojective effective divisor ample proof cor write choose mad mad ample since generated global sections latter follows fact contains since toroidal see argument proof prop finally choose assertion follows thm case arbitrary follows observing restricts splitting smooth toroidal spherical variety projective affine furthermore local structure theorem clearly holds apply arguments first assume simple ample cor compatibly split therefore also thm rem furthermore contains since nonzero closed orbit closed subvarieties compatibly result follows thm assume irreducible ample line bundle cor global section nonzero take divisor section ample contain proceed proof whenever since compatibly split combine arguments lemmas form commutative square lpr lpr top horizontal row bottom horizontal row replaced vertical arrows restriction maps splittings horizontal arrows compatible form commutative square vertical arrows since right vertical arrow surjective follows holds left vertical arrow consequence fact homomorphism abelian groups surjective restrictions surjective embeddings spherical homogeneous spaces characteristic sect following kempf define morphism varieties rational direct image structure sheaf higher 
direct images zero recall resolution singularities irreducible variety smooth irreducible variety together proper birational morphism note normal rational resolution resolution rational morphism satisfies following lemma implicit proof cor lemma assume every projective resolution projective toroidal assume every resolution rational morphism every projective birational morphism rational every proof let let projective birational sumihiro theorem may assume embed open subsets projective let normalisation closure graph projective identifies open subset furthermore natural morphism extends denote note let resolution projective since rational morphism grothendieck spectral sequence collapses obtain rationality morphism follows theorem assume let rational resolution projective quasiprojective toroidal frobenius split compatible closed subvarieties iii assume proper affine let line bundle let irreducible closed subvariety restriction map surjective scheme projective affine proper morphism proof construct resolution projective smooth quasiprojective toroidal prop prop first consider normalisation closure graph language coloured fans means form toroidal covering removing colours fan construct desingularisation simplicial subdivision fan toric slice fan local structure theorem deduce smoothness toric slice local structure theorem also deduce toric slice prop since toric slice quasiprojective tange nonnegative combination boundary divisors toric slice ample form combination corresponding boundary divisors ample inverse image obtain divisor ample relative since projective follows see prop alternatively one deduce description ample divisors spherical variety see sect sect show rational arguments similar proof cor vanishing higher direct images use proposition rather arguments loc since proper birational normal let frobenius splitting given proposition vanishes exceptional locus since latter contained complement open union boundary divisors thm remains show lemma may assume 
projective kempf lemma lem lem suffices show exists ample line bundle note kempf lemma enough proper proper affine note furthermore proposition ample line bundle exists one since projective take resolution lem push splitting proposition forward since compatibly split closed subvarieties since holds splitting iii take resolution note proper affine since since also projective affine since rational see lemma choose irreducible map injective since assertions follow proposition follows iii kempf lemma see proof theorem assume let frobenius splitting compatibly splits closed subvarieties proof replacing connected centre times simply connected cover derived group may assume let resolution theorem recall first steinberg module irreducible also isomorphic induced module weyl module see let nonzero lowest highest weight vector embeddings spherical homogeneous spaces characteristic smooth lem image phism let splitting see thm since weyl module weyl filtration prop cor prop one easily deduces functor homg exact short exact sequences good filtration follows map homg homg surjective let homg lift homg put clearly restricts let lemma proof proposition one toroidal smooth completion show considering splitting compatibly splits gstable closed subvarieties clearly splitting finally push splitting means apply lem remarks existence splitting theorem implies existence good filtrations several situations see thm also implies normality closures see cor prop assumption fact equivalent existence splitting compatibly splits closed orbit indeed ing latter surjective since ample cor compatibly split see thm furthermore kernel map good filtration similarly equivalent existence splitting compatibly splits particular implies theorem hold equivariant embeddings spherical homogeneous spaces following example mentioned one referees let unseparated flags admitting ample line bundle examples varieties given examples section let structure map dual line bundle put spec affine cone let canonical map 
normal spherical varieties furthermore equivariant resolution see lem rational since first equality holds since affine second one since affine see prop tange symmetric spaces notice case group embeddings characteristic already treated remainder section assume char background symmetric spaces refer let involution adjoint group gad let gad canonical homomorphism let ordinary centre let zsch schematic centre also schematic kernel fixed point subgroup smooth reductive subgroup let ordinary inverse image zsch see notation schematic inverse image let tad maximal torus gad contains maximal torus let pad minimal parabolic subgroup containing tad let bad borel subgroup pad containing tad sect sect let corresponding maximal torus parabolic borel subgroup let closed subgroup scheme satisfied open example assume closed subgroup scheme involution since contains maximal torus arguments lem lem generated adding zsch gives satisfied case assume addition reduced check satisfied since property character given enough look top exterior power lie trivial top exterior power lie trivial top exterior power lie trivial similarly top exterior power lie gad trivial since maps follows check assumptions section depend satisfied prop thm gad wonderful compactification local structure theorem holds satisfied furthermore tad map separable satisfied lemma finally check assumptions results section follows ample cor thm satisfied indeed rem thm thm imply restriction map surjective similarly clear prop thm satisfied situation one look construction good filtration thm see contains good filtration kernel restriction map last property exists simply connected see embeddings spherical homogeneous spaces characteristic parabolic induction final section show assumptions used preserved apply parabolic induction retain notation assumptions section let connected reductive group let parabolic subgroup surjective homomorphism assume isogeny induced central let maximal torus whose image let borel subgroup 
containing let inverse image let opposites relative let levi subgroup containing denote natural map note also holds replace let schematic inverse images let subgroup generated simple factors lie kernel isogeny central note recall also induced variety comes fibration fiber base point lem spherical open moreover prime divisors unique prime divisors bdi intersect prime divisors prime divisors simple root corresponding reflection weyl group denote closures prime divisors letters recall valuation cone see sect prop easy determine images valuation cone prop furthermore toroidal embeddings obtained parabolically inducing toroidal embeddings see prop show assumptions hold etc closed subgroup scheme normalises generate schematic inverse image normalises together generates thus holds wonderful compactification local structure theorem holds prop wonderful compactification local structure theorem relative holds thus holds denoted tange lemma let irreducible let line bundle acting trivially let lift nonzero rational section weight global section global dominant relative global divisor contains coefficient every simple root proof since global sections form rational clear global must global dominant assume latter let module highest weight sect consider sub module generated highest weight vector fact sum weight spaces corresponding weights congruent modulo root lattice sum weight spaces quotient isomorphic acting trivially using projection universal property weyl module obtain homomorphism since acting right multiplication frobenius reciprocity gives homomorphism direct description without identification let nonzero locus let simple root prime divisor intersects open set product root subgroups root isomorphism given representative choose isomorphism additive group denote corresponding defined let nonzerop weight vector weight notation furthermore since nowhere zero follows contains coefficient corollary assumption holds proof canonical projection relative canonical bundle let let 
global section weight considered global section note consider character sum embeddings spherical homogeneous spaces characteristic roots since since holds enough show lifts weight vanishes along prime invariant global section divisors simple root result follows lemma noting simple root clearly equivalent isomorphic along proof corollary pra canonical projections follows holds show assumption preserved parabolic induction proposition satisfies proof remark assumption equivalent existence bcanonical splitting compatibly splits closed orbit assume latter holds splitting also prop therefore also consider via result mathieu see prop thm therefore also nius split compatible fact morphism locally trivial fibration fiber deduce easily structure sheaf push splitting obtain frobenius split compatible closed remark clear whether preserved parabolic induction recall remark implied end description picard group terms similar prop case induction reductive group since interested may assume since contain connected centres may assume simply connected pic picg restriction derived group simply connected cover easy see obtain diagram exact rows pic pic pic pic pic pic proof prop first occurrence replaced second occurrence omitted tange arrows associated maps one uses pic pic similar furthermore injective see prop pic pic isomorphic certain subgroups weight lattices first freely generated fundamental weights simple root simple root second freely generated fundamental weights simple root furthermore pic freely generated line bundles corresponding prime divisors similar pic using fact pic one describe similarly follows line bundle mapped weight unique scalar multiples rational section nonzero case line bundle corresponding prime divisor weight canonical section similar use borel subgroup corresponding describe right inverse natural restriction map sends map natural right inverse unique restricts simple roots map goes assume describe right inverse corresponding prime divisor line bundle 
image line bundle image indeed canonical section canonical section lemma must weight thus conclude mapped note right inverse line bundles corresponding boundary divisors general mapped line bundles natural images actually lie character group images obtained pulling back along case symmetric space one simply obtains pic replacing formulas end remark fundamental weights corresponding fundamental weights adding fundamental weights simple root images obtained pulling weights back along acknowledgement would like thank timashev brion helpful discussions referee helpful comments research funded epsrc grant references borel linear algebraic groups second edition graduate texts mathematics new york case symmetric spaces see thm embeddings spherical homogeneous spaces characteristic brion curves divisors spherical varieties algebraic groups lie groups volume papers honor late richardson lehrer austral math soc lect ser cambridge university press cambridge brion orbit closures spherical subgroups flag varieties comment math helv brion notes session hamiltoniennes groupes grenoble http brion inamdar frobenius splitting spherical varieties algebraic groups generalizations classical methods proc sympos pure part amer math providence brion kumar frobenius splitting methods geometry representation theory progress mathematics boston boston brion pauer valuations des espaces comment math helv concini normality non normality certain semigroups orbit closures algebraic transformation groups algebraic varieties encyclopaedia math springer berlin concini springer compactification symmetric varieties transform groups demazure gabriel groupes tome groupes commutatifs masson cie paris publishing amsterdam donkin tilting modules algebraic groups math fulton introduction toric varieties annals mathematics studies princeton university press grothendieck globale quelques classes morphismes inst hautes sci publ math haboush lauritzen unseparated flags linear algebraic groups representations 
(ed. Richard Elman), Contemp. Math., Vol., Amer. Math. Soc.
R. Hartshorne, Algebraic geometry, Graduate Texts in Mathematics, Springer-Verlag, New York.
J. C. Jantzen, Representations of algebraic groups, second edition, American Mathematical Society, Providence.
G. Kempf, Schubert methods with an application to algebraic curves, Publications of Mathematisch Centrum, Afdeling Zuivere Wiskunde, Amsterdam.
G. Kempf, F. Knudsen, D. Mumford, Toroidal embeddings, Lecture Notes in Mathematics, Vol., Springer-Verlag, New York.
F. Knop, The Luna–Vust theory of spherical embeddings, Proceedings of the Hyderabad Conference on Algebraic Groups, Hyderabad.
F. Knop, Über Bewertungen, welche unter einer reduktiven Gruppe invariant sind, Math. Ann.
D. Luna, Grosses cellules pour les variétés sphériques, in: Algebraic groups and Lie groups, a volume of papers in honour of the late R. W. Richardson (ed. G. Lehrer), Austral. Math. Soc. Lect. Ser., Cambridge University Press, Cambridge.
O. Mathieu, Tilting modules and their applications, in: Analysis on homogeneous spaces and representation theory of Lie groups, Adv. Stud. Pure Math., Math. Soc. Japan, Tokyo.
V. B. Mehta, A. Ramanathan, Frobenius splitting and cohomology vanishing for Schubert varieties, Ann. of Math.
R. W. Richardson, Orbits, invariants, and representations associated to involutions of reductive groups, Invent. Math.
A. Rittatore, Reductive embeddings, Proc. Amer. Math. Soc.
R. Steinberg, Endomorphisms of linear algebraic groups, Memoirs of the American Mathematical Society, Providence.
E. Strickland, A vanishing theorem for group compactifications, Math. Annalen.
H. Sumihiro, Equivariant completion, J. Math. Kyoto Univ.
H. Sumihiro, Equivariant completion II, J. Math. Kyoto Univ.
R. Tange, Embeddings of certain spherical homogeneous spaces in prime characteristic, Transform. Groups.
D. A. Timashev, Homogeneous spaces and equivariant embeddings, Encyclopaedia of Mathematical Sciences, Invariant Theory and Algebraic Transformation Groups, Springer, Heidelberg.
T. Vust, Plongements d'espaces symétriques des groupes algébriques: une classification, Ann. Scuola Norm. Sup. Pisa Cl. Sci.

School of Mathematics, University of Leeds, Leeds
E-mail address:
Coloring games and algebraic problems on matroids

A thesis in theoretical computer science
Theoretical Computer Science Department, Faculty of Mathematics and Computer Science, Jagiellonian University
Advised by Jarosław Grytczuk

Contents

Introduction
Chapter 1. Introduction to matroids: Terminology; Examples and operations; Matroid union theorem
Part 1. Coloring games on matroids
Chapter 2. Colorings of a matroid: Chromatic number; List chromatic number; Basis exchange properties
Chapter 3. Game colorings of a matroid: Game coloring; List coloring; Indicated coloring
Part 2. Algebraic problems on matroids
Chapter 4. Simplicial complexes: The inequality; Vertex decomposable simplicial complexes; Extremal simplicial complexes
Chapter 5. The conjecture of White: Introduction; White's conjecture for strongly base orderable matroids; White's conjecture up to saturation; Remarks
Part 3. Applications of matroids
Chapter 6. Obstacles for splitting necklaces: Introduction; Topological setting; Hyperplane cuts; Arbitrary hyperplane cuts; Colorings distinguishing cubes
Bibliography

Introduction

This thesis is basically devoted to matroids, a fundamental structure of combinatorial optimization, though some of our results concern simplicial complexes and Euclidean spaces. We study old and new problems about these structures — combinatorial, algebraic and topological. Therefore the thesis splits naturally into three parts, according to these three aspects.

In the first part we study several coloring games on matroids. Proper colorings of the ground set of a matroid are colorings in which every color class forms an independent set. They were already studied by Edmonds, who gave an explicit formula for the chromatic number of a matroid — the least number of colors in a proper coloring — in terms of its rank function: for a loopless matroid M on a ground set E with rank function r, it equals the maximum over nonempty A ⊆ E of ⌈|A|/r(A)⌉. We generalize this theorem, as well as the theorem of Seymour on the list chromatic number of a matroid. Another important result concerning colorings and connections between matroids and games goes back to the famous Shannon switching game, invented independently by Gale; its matroidal version was introduced by Lehman and solved by Edmonds. However, our games are of a slightly different nature. For instance, in one variant two players are coloring a matroid properly: one player is interested in the job being done with a given set of colors, while the second, bad one, tries to prevent this from happening. A natural question arises: how many colors are needed for the good guy to win, compared to the chromatic number of the
matroid? We show that this parameter is at most twice as big as the chromatic number; this improves, and extends to arbitrary matroids, a result of Bartnicki, Grytczuk and Kierstead, which concerns graphic matroids. Moreover, we show that our bound is in general almost optimal. In another game we consider coloring a matroid from lists. In this situation only part of the information about colors in the lists is known: the bad player assigns consecutive colors of the lists of elements arbitrarily, while the good one colors elements properly with a color from the list, immediately after that color is assigned. This is called the list coloring game. We prove, jointly with Wojciech Lubawski, that the chromatic parameter of this game version of list coloring equals the chromatic number, which generalizes the theorem of Seymour to the game setting. Games of this type were initially investigated for graphs, leading to lots of interesting results and fascinating conjectures. We hope that our results add a new aspect of structural type, showing that pathological phenomena appearing in graph coloring games are no longer present in the realm of matroids.

The second part of the thesis studies two problems of algebraic nature. One concerns simplicial complexes. f-vectors encode the numbers of faces of a given size of a complex, and they are characterized by a celebrated inequality. A similar characterization for matroids, or more generally for pure complexes (those whose maximal faces are of equal size), remains elusive. Our main result asserts that every extremal simplicial complex — a pure complex achieving equality in the inequality — is vertex decomposable. This extends the theorem of Herzog and Hibi asserting that extremal complexes have the Cohen–Macaulay property, by the simple fact that the last property is implied by vertex decomposability. Our argument is purely combinatorial, with the main inspiration coming from the proof of the inequality.

The second problem we consider in this part of the thesis is a long standing conjecture of White. The problem concerns the symmetric exchange phenomenon in matroids, and it has been solved only for some special classes of matroids. In its simplest form, White's conjecture says that if two families of bases of a matroid have equal unions as multisets, then one can pass from one to the other by a sequence of single element symmetric exchanges. In algebraic language this means that the toric ideal of a matroid is generated by quadratic binomials corresponding to symmetric exchanges. In joint work with Mateusz Michałek we prove White's conjecture up to saturation, that is, that the saturations of both ideals are equal. We believe this result is a step in the direction of proving the conjecture for arbitrary matroids. Additionally, we prove the full conjecture for a new large class of
strongly base orderable matroids.

The last part of the thesis concerns the famous necklace splitting problem. Suppose we are given a colored necklace — a segment of integers on the line, or a line segment — and we want to cut it so that the resulting pieces can be fairly split into two parts, which means that each part captures the same amount of every color. The theorem of Goldberg and West asserts that when the number of parts is two, every necklace with k colors has a fair splitting using at most k cuts; in fact there is a simple proof using the celebrated Borsuk–Ulam theorem. Alon extended this result to an arbitrary number of parts, showing that k(q-1) cuts suffice in the case of q parts. In a joint paper with Alon and Grytczuk we studied a kind of opposite question, motivated by the problem of strongly nonrepetitive sequences: we proved that there is a coloring of the real line such that no segment has a fair splitting into two parts with a bounded number of cuts. The main results of this part of the thesis generalize this theorem to arbitrary dimension and an arbitrary number of parts. We consider two versions, in which cuts are made by axis-aligned hyperplanes or by arbitrary hyperplanes. In the first case the upper bound we achieve almost matches the lower bound implied by the theorem of Alon and a general theorem of de Longueville and Živaljević. Our methods rely on topological tools, the Baire category theorem and several applications of algebraic matroids.

Acknowledgements

Firstly, I would like to thank my advisor Jarosław Grytczuk for his guidance during my entire studies. It was a great pleasure and opportunity to learn the taste of combinatorics from him. Throughout the work on this thesis he provided encouragement and a lot of interesting problems. In particular, he introduced me to the game chromatic number of a matroid, and he was also the person who asked whether the theorem of Seymour generalizes to the game setting. Finally, Jarek is a coauthor of one of my publications, for which I am sincerely grateful.

I would like to express my gratitude to Professor Herzog for pointing out a direction in which to generalize one of my results, and also for showing me the conjecture of White. I am especially grateful to Wojciech Lubawski and Mateusz Michałek, my coauthors of results presented in this thesis; I thank them for inspiration, motivation and hours of fruitful discussions.

I thank my friends — Farnik, Farnik, Adam, Andrzej Grzesik, Grzegorz Gutowski, Jakub Kozik, Tomasz Krawczyk, Piotr Micek, Arkadiusz Pawlik, Sułkowska and Bartosz Walczak — for interesting talks, questions and answers, and for a great working atmosphere.

Writing this thesis would not have been possible without the constant support of my family; thank you a lot, Mum, for your love and motivation.

The author was granted a scholarship within the project DOCTUS (fundusz stypendialny dla …), co-funded by the European Union within the European Social
Fund. The author was also supported by a grant of the Polish Ministry of Science and Higher Education.

CHAPTER 1

Introduction to matroids

Matroids are one of the basic mathematical structures, an abstract idea of independence appearing in various areas of mathematics ranging from algebra to combinatorics. The origins of matroid theory turn back to the 1930s, when matroids were introduced by Whitney. For an adequate introduction we refer the reader to the book of Oxley; ours, in this chapter, will be a sketchy one. We give a brief sketch of basic concepts of matroid theory, present examples of several classes of matroids, and also prove the matroid union theorem — a basic tool which already reveals the regularity of the structure of a matroid. Therefore this chapter should be treated as preliminaries; in particular, it does not contain our results.

Terminology

Matroids are cryptomorphic structures — there are many nontrivially equivalent ways to introduce them via axiom systems. It is important to show the correspondence between them; one will be considered basic, and for the others we refer to proofs in the literature. This chapter also provides the necessary notation. Usually, by a matroid M on a ground set E we mean a collection I of subsets of E satisfying the conditions of the following definition.

Definition (Independent sets). A matroid M consists of a finite set E, called the ground set, and a set I of independent subsets of E, satisfying the following conditions:
(1) the empty set is independent,
(2) every subset of an independent set is independent,
(3) if A and B are independent sets and |A| < |B|, then A can be extended by an element of B \ A, that is, A ∪ {b} is independent for some b in B \ A (the augmentation axiom).

Actually, as we shall see, a finite subset of a vector space together with the family of its linearly independent subsets constitutes the basic example of a matroid. From this connection comes a large part of the terminology used in matroid theory. For example, one may speak of bases of a matroid, of the rank of a subset, or of the closure operation, in the same way as for vector spaces.

Definition (Bases). A matroid M consists of a finite set E, called the ground set, and a set B of subsets of E, called bases, satisfying the following conditions:
(1) there is at least one basis,
(2) for any two bases A, B and any a in A \ B, there exists b in B \ A such that (A \ {a}) ∪ {b} is a basis.
Independent sets abstract, among others, linear independence and algebraic independence, while the rank function abstracts dimension and transcendence degree. Sometimes the rank of the whole matroid is denoted by r(M); in order to refer to the ground set of M we write E(M).

Definition (closure). A matroid M consists of a finite set E, called the ground set, and a closure operator cl on subsets of E, satisfying the standard closure axioms (extensivity, monotonicity, idempotence, and the exchange property). The closure of a set A is the set of elements e satisfying r(A + e) = r(A); in other words, it is the maximum set containing A with the same rank. The independent sets are exactly the sets not contained in the closure of any of their proper subsets.

Another source of matroid terminology comes from graph theory. As we will see, every graph can be turned into its graphic matroid, whose ground set is the set of edges and whose independent sets are the edge sets not containing a cycle. Therefore circuits of a matroid generalize cycles of a graphic matroid: they are the minimal sets that are not independent, thus a set is independent if and only if it does not contain a circuit.

Definition (circuits). A matroid M consists of a finite set E, called the ground set, and a set C of subsets of E, called circuits, satisfying the following conditions:
(1) the empty set is not a circuit,
(2) no circuit contains another circuit (circuits are incomparable with respect to inclusion),
(3) for two distinct circuits and an element of their intersection, their union minus that element contains a circuit.

There are also several other ways to axiomatize a matroid, for example via hyperplanes.

Examples and operations. We show basic examples of classes of matroids; the proofs that they satisfy the axioms of a matroid can be found in the book of Oxley. We also present examples produced using operations on matroids, described in the second part of this section. We begin with an almost trivial example.

Definition (uniform matroid). Let E be a finite set of n elements and let r <= n be a nonnegative integer. The sets of size at most r form a matroid, denoted U(r, n), whose rank function is r(A) = min(|A|, r).

The next two examples come from algebra; the second will play a big role in the last part of this thesis.

Definition (representable matroid). Let V be a vector space over a field K and let E be a finite subset of V. The linearly independent subsets of E form a matroid whose rank function is r(A) = dim_K span(A) and whose closure operator is the span intersected with E. A matroid of this form is called representable over K; a matroid representable over some field is called representable.

Definition (algebraic matroid). Let K contained in L be a field extension and let E be a finite subset of L. The subsets of E algebraically independent over K form a matroid whose rank function is r(A) = trdeg_K K(A).

The following example is probably the most natural one in combinatorics. Its big advantage is that it can be easily visualized, and we often refer to graphic matroids in examples to theorems.

Definition (graphic matroid). Let G be a multigraph. The sets of edges that do not contain a cycle form a matroid, denoted M(G), whose set of bases is equal to the set of spanning forests of G and whose set of circuits is equal to the set of cycles of G.

Also interesting are classes of matroids coming, unlike the previous two families from algebra, from a certain property rather than from a presentation.
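For the graphic matroid the independence oracle is just acyclicity, which can be tested with a union-find structure; the rank of an edge set is the size of a maximal forest inside it, and the greedy construction below is correct precisely because of the matroid axioms. A small sketch of my own, not taken from the thesis:

```python
def is_forest(edges):
    """Independence oracle for the graphic matroid M(G): a set of
    edges (pairs of vertices) is independent iff it is acyclic.
    Union-find with path halving; a repeated root means a cycle."""
    parent = {}
    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True

def rank(edges):
    """Rank of an edge set: greedily grow a maximal forest inside it
    (greedy suffices by the augmentation axiom)."""
    taken = []
    for e in edges:
        if is_forest(taken + [e]):
            taken.append(e)
    return len(taken)

triangle = [(0, 1), (1, 2), (0, 2)]
print(is_forest(triangle), rank(triangle))  # False 2
```

The triangle is the smallest circuit of a graphic matroid: any two of its edges form a basis of M(K3), while all three are dependent.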
These classes are defined by a property instead of a presentation, as in the previous examples.

Definition (base orderable matroid). A matroid M is base orderable if for any two bases B1, B2 there is a bijection p from B1 to B2 such that for every element e of B1 both (B1 \ e) + p(e) and (B2 \ p(e)) + e are bases.

Definition (strongly base orderable matroid). A matroid M is strongly base orderable if for any two bases B1, B2 there is a bijection p from B1 to B2 such that (B1 \ A) + p(A) is a basis for every subset A of B1.

The condition in the definition of a strongly base orderable matroid implies that (B2 \ p(A)) + A is also a basis; moreover, we may assume that p is the identity on B1 * B2. The class of strongly base orderable matroids is closed under taking minors (see the operations below); one of our results holds for all strongly base orderable matroids.

Definition (transversal matroid). Let E be a finite set and let A1, ..., An be a multiset of subsets of E. The following is a matroid: a set X is independent if there exists an injection assigning to every x in X an index i with x in Ai. In fact, for each transversal matroid one can choose a corresponding multiset of cardinality equal to the rank of the matroid.

Another similar class of matroids is the class of laminar matroids.

Definition (laminar matroid). A family of sets is laminar if any two of its sets are either disjoint or one is contained in the other. Let F be a laminar family of subsets of a finite set E and let c be a function from F to nonnegative integers. The following is a matroid: a set X is independent if |X * F| <= c(F) for every F in the family.

One can prove that transversal matroids and laminar matroids are strongly base orderable; moreover, none of these classes is contained in the other.

Definition (regular matroid). A matroid is regular if it is representable over every field.

One can easily show that graphic matroids are regular; for characterizations of regular matroids see the paper of Tutte. Seymour proved a very interesting structural result, which asserts that every regular matroid may be constructed by piecing together graphic and cographic matroids and copies of a certain matroid.

We now pass to operations on matroids. Firstly, we are going to describe two basic ones, restriction and contraction, which produce from a given matroid a matroid on a smaller ground set, with useful properties. Let M be a matroid on a ground set E, with the set of independent sets I, the set of bases B, and the rank function r, and let T be a subset of E.

Definition (restriction). The restriction of a matroid M to a set T, denoted M|T, is the matroid on the ground set T in which a set is independent if and only if it is independent in M. Its rank function equals the restriction of the rank function of M.

Definition (contraction). The contraction of a set T in a matroid M, denoted M/T, is the matroid on the ground set E \ T in which a set X is independent if and only if X together with every (equivalently, some) maximal independent subset of T is independent in M. Its rank function satisfies r'(X) = r(X + T) - r(T) for every X contained in E \ T.

A matroid is called a minor of another matroid if it can be obtained from it by a sequence of restrictions and contractions; one can show that in fact every minor can be obtained by a single application of each of the two operations.
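Independence in a transversal matroid is the existence of a system of distinct representatives, which is a bipartite matching problem. The classic augmenting-path algorithm gives a compact oracle; this sketch is my own illustration of the definition above:

```python
def is_partial_transversal(X, sets):
    """X is independent in the transversal matroid defined by `sets`
    iff there is an injection sending each x in X to the index of a
    set containing it.  Standard augmenting-path bipartite matching."""
    match = {}  # set index -> element currently matched to it

    def try_assign(x, seen):
        # Try to place x into some unseen set containing it,
        # recursively re-homing the element that occupied it.
        for i, A in enumerate(sets):
            if x in A and i not in seen:
                seen.add(i)
                if i not in match or try_assign(match[i], seen):
                    match[i] = x
                    return True
        return False

    return all(try_assign(x, set()) for x in X)

sets = [{1, 2}, {2, 3}]
print(is_partial_transversal({1, 3}, sets))     # True
print(is_partial_transversal({1, 2, 3}, sets))  # False
```

With two defining sets the rank is at most two, so no 3-element set can be independent, as the second call shows.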
A fundamental concept of matroid theory is duality.

Definition (dual matroid). The dual matroid of a matroid M, denoted M*, is the matroid on the same ground set E in which B is a basis if and only if E \ B is a basis of M. Its rank function satisfies r*(X) = |X| + r(E \ X) - r(E) for every X.

Of course the dual of the dual is the original matroid, and one can describe how duality interacts with the operations of restriction and contraction: they are interchanged under duality. Bases, circuits, and so on of the dual matroid are called cobases, cocircuits, and so on of the original matroid. It is also easy to define the direct sum of matroids.

Definition (direct sum). Suppose M1, M2 are matroids on disjoint ground sets. There exists a matroid on the union of the ground sets in which a set is independent if and only if its intersection with each ground set is independent in the corresponding matroid.

As we will see, it is possible to extend this to matroids on not necessarily disjoint ground sets. One can also blow up an element of a matroid by adding parallel elements; this operation extends the idea of adding parallel edges in graphic matroids.

Definition (blow-up). Let M be a matroid and let e be an element of its ground set. There exists a matroid in which e is replaced by a number of parallel copies: a set is independent if it contains at most one copy of e and the set obtained by identifying that copy with e is independent in M.

In each case it is a crucial fact that the matroid described by the given data (independent sets, bases, circuits, rank function) exists; for the proofs see the book of Oxley.

The matroid union theorem. We describe one of the possible formulations of the matroid union theorem, the one best suited for our applications.

Theorem (matroid union theorem). Let M1, ..., Mk be matroids on a ground set E, with rank functions r1, ..., rk respectively. The following conditions are equivalent:
(1) there exist sets I1, ..., Ik whose union is E, such that Ii is independent in Mi for each i;
(2) |A| <= r1(A) + ... + rk(A) for every subset A of E.

The theorem follows from its weighted version, which we are going to prove. Let w be a weight assignment of nonnegative integers to the elements of E. A collection of (not necessarily distinct) subsets of E is said to w-cover E if every element e belongs to exactly w(e) members of the collection; for a subset A we write w(A) for the sum of the weights of its elements, and by a k-covering we shortly mean a w-covering for the constant weight k.

Theorem (weighted matroid union theorem). Let M1, ..., Mk be matroids on a ground set E, with rank functions r1, ..., rk respectively. The following conditions are equivalent:
(1) there exist sets I1, ..., Ik which w-cover E, such that Ii is independent in Mi for each i;
(2) w(A) <= r1(A) + ... + rk(A) for every subset A of E.

Proof. The implication from (1) to (2) is easy: for every A we have w(A) equal to the sum over i of |Ii * A|, which is at most the sum of the ri(A). We focus on the opposite implication, which we prove by double induction: firstly on the size of the ground set, and secondly on the sum of the weights. For the basis of the induction the statement is clearly true. Consider the function f(A) = r1(A) + ... + rk(A) - w(A); notice that condition (2) guarantees that f is nonnegative. We distinguish two cases.

Case 1: f(A) = 0 for some proper nonempty subset A of E. Consider the matroids Mi restricted to A, with the weight function restricted to A. Since condition (2) of the theorem holds for them, by the inductive assumption there are independent sets in the restrictions which w-cover A. Consider also the matroids obtained by contracting the set A, on the ground set E \ A, with the weight function restricted to E \ A; they satisfy condition (2) of the theorem, so by the inductive assumption there are independent sets which w-cover E \ A. The unions of the corresponding pairs of sets are independent and w-cover E.

Case 2: f(A) > 0 for every proper nonempty subset A. Pick an arbitrary element e of positive weight, and pick an arbitrary index i for which adding e can keep independence (such an index clearly exists). Consider the same matroids with the weight of e decreased by one; condition (2) of the theorem still holds.
By the inductive assumption there are sets I1, ..., Ik, independent in the corresponding matroids, which cover the decreased weights; observe that e can be added to a suitable Ii while keeping it independent, which yields the required w-covering and completes the proof.

Another formulation of the matroid union theorem describes the rank function of the join of matroids; we derive it as a corollary of the previous formulation.

Theorem (join of matroids). Let M1, M2 be matroids on a ground set E. There exists a matroid, the join of M1 and M2, whose rank function is r(X) = min over subsets Y of X of (|X \ Y| + r1(Y) + r2(Y)).

Proof. We present the proof in the case of two matroids; the generalization to an arbitrary number of matroids is straightforward. According to the definition of a matroid via the rank function, it is enough to check that the conditions on a rank function hold for the above formula. Two of them are easy to check; we check the submodularity axiom. Assume the minima for two sets are realized by subsets Y1, Y2 respectively; testing their union and intersection, the required inequality follows from the submodularity of r1 and r2. Finally, a set X is independent in the join, meaning r(X) = |X|, if and only if, by the matroid union theorem, X is the sum of two sets independent in M1 and M2; the contrary would contradict the formula. Notice that a special case of the join is the direct sum: for matroids on disjoint ground sets, the join of their extensions to the sum of the ground sets is the direct sum.

Corollary. A matroid M can be covered by k independent sets if and only if |A| <= k r(A) for every subset A of E.

Proof. This is exactly the matroid union theorem applied to k matroids all equal to M.

One can also easily get this from the theorem on the join: a matroid is covered by k independent sets if and only if the k-fold join of M with itself has an independent set of full size, where the rank of the k-fold join equals the minimum over A of (|E \ A| + k r(A)).

Corollary. A matroid M has k pairwise disjoint bases if and only if |E \ A| + k r(A) >= k r(E) for every subset A of E.

Proof. A matroid has k disjoint bases if and only if the k-fold join has rank at least k r(E); due to the theorem above that rank equals the minimum over A of (|E \ A| + k r(A)).

The last theorem of this section gives an explicit formula, in terms of the rank functions, for the size of the largest common independent set of two matroids. Unfortunately, it has no easy generalization to a larger number of matroids.

Theorem (Edmonds' intersection theorem). Let M1, M2 be two matroids on a ground set E. Then the maximum size of a common independent set equals the minimum over subsets A of (r1(A) + r2(E \ A)).

Proof. The inequality "at most" is obvious: a common independent set I splits into the two disjoint sets I * A and I * (E \ A), which are independent in M1 and M2 respectively, so |I| <= r1(A) + r2(E \ A). For the opposite inequality, suppose the minimum is attained; one extends suitable independent sets to bases and verifies, using the matroid union theorem, that a common independent set with at least the required number of elements exists.

Part: Coloring games on matroids

Chapter: Colorings of a matroid

This chapter extends the concept of proper graph coloring to matroids: a coloring of a matroid is proper if the elements of each color form an independent set. In this way we get the notions of the chromatic number and the list chromatic number of a matroid; several game versions of these parameters are studied in the following chapter. We present an explicit formula of Edmonds for the chromatic number in terms of the rank function, and a theorem of Seymour asserting that the list chromatic number of a matroid equals its chromatic number.
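The min-max identities of the previous chapter lend themselves to brute-force verification on small ground sets. As an illustration (my own, with partition matroids chosen purely for simplicity), the following sketch checks both sides of Edmonds' intersection theorem on a four-element example:

```python
from itertools import combinations

def partition_rank(parts, caps):
    """Rank function of a partition matroid: at most caps[i] elements
    may be taken from parts[i]."""
    return lambda X: sum(min(len(X & P), c) for P, c in zip(parts, caps))

E = frozenset(range(4))
r1 = partition_rank([{0, 1}, {2, 3}], [1, 1])
r2 = partition_rank([{0, 2}, {1, 3}], [1, 1])

subsets = [frozenset(s) for k in range(5) for s in combinations(E, k)]
# Max size of a common independent set (r_i(I) == |I| in both matroids).
lhs = max(len(I) for I in subsets if r1(I) == len(I) == r2(I))
# Min over A of r1(A) + r2(E \ A).
rhs = min(r1(A) + r2(E - A) for A in subsets)
print(lhs, rhs)  # 2 2
```

Here {0, 3} is a maximum common independent set, and the minimum is attained for example at A equal to the empty set.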
The main result of this chapter generalizes Seymour's theorem from lists of constant size to the case of lists of arbitrary sizes: we prove that a matroid colorable from a particular list assignment of given sizes is colorable from any lists of those sizes. This theorem was later even further generalized. The last section shows applications of the theorem to several basis exchange properties. The results of this chapter come from a paper of the author.

The chromatic number. Let M be a matroid on a ground set E. A coloring is an assignment of colors (usually natural numbers) to the elements of E; it is proper if the elements of each color form an independent set. In the rest of the thesis, whenever we write "coloring" we always mean a proper coloring. An element of a matroid whose singleton is dependent is called a loop, in analogy to a loop in a graph, which is a loop in the corresponding graphic matroid. Notice that a matroid containing a loop does not admit any proper coloring; on the other hand, a matroid without loops (a loopless matroid) has at least one proper coloring. Thus, when considering colorings, we restrict to loopless matroids.

Definition (chromatic number). The chromatic number of a loopless matroid M, denoted by ch(M), is the minimum number of colors in a proper coloring of M.

For instance, the chromatic number of a graphic matroid M(G) is the least number of colors needed to color the edges of the graph G so that no cycle is monochromatic. This number is known as the arboricity of the graph (not to be confused with the usual chromatic number of a graph).

In analogy to graph theory one defines the fractional chromatic number. For positive integers a and b, an (a : b)-coloring of a matroid assigns to each element b colors from a set of a colors; such a coloring is still called proper if, for each color, the elements to which it is assigned form an independent set.

Definition (fractional chromatic number). The fractional chromatic number of a loopless matroid M is the infimum of the fractions a/b over proper (a : b)-colorings of M.

The chromatic number, as well as the fractional chromatic number, can be easily expressed in terms of the rank function. Extending a theorem on graph arboricity, Edmonds proved the following explicit formula for the chromatic number of a matroid; it can be observed that the proof gives also the analogous formula for the fractional chromatic number.

Theorem (Edmonds' formula for the chromatic number). For every loopless matroid M, the chromatic number equals the maximum, over nonempty subsets A of E, of |A| / r(A) rounded up; the fractional chromatic number equals the maximum of |A| / r(A) itself.

Proof. Denote the first maximum by k. Suppose E is covered by c independent sets; then for every nonempty A we have |A| at most c times r(A), hence c is at least |A| / r(A), and since c is an integer we get c >= k. This proves the lower bounds for both parameters. To prove the opposite inequalities, suppose the maximum defining the fractional chromatic number is reached on a subset A, with value at most a/b. Consider a matroids all equal to M on the ground set E and apply the weighted matroid union theorem with the constant weight function b; the second condition of the theorem, b |A| <= a r(A) for every A, holds.
Hence there exist a independent sets which b-cover E, which means that the fractional chromatic number is at most a/b. For the chromatic number, consider k matroids all equal to M on the ground set E and apply the matroid union theorem; the second condition of the theorem, |A| <= k r(A) for every A, holds. Hence there exists a proper coloring of M into k independent sets, which means ch(M) <= k.

In particular, the chromatic number of a matroid is less than one bigger than its fractional chromatic number. This is contrary to graphs, where the chromatic number is not bounded by any function of the fractional chromatic number (see for example Kneser graphs). Obviously, in graph theory there is no chance for an analogical formula for the chromatic number; however, there is one example of an almost exact formula: Vizing proved that the chromatic index (the chromatic number of the line graph, the incidence graph of the edges of a graph) always equals the maximum degree or the maximum degree plus one.

The list chromatic number. Suppose each element of the ground set of a matroid is assigned a set of colors, called its list. In analogy to list colorings of graphs, initiated by Vizing and independently by others, we want a proper coloring with the additional restriction that each element gets a color from its list. By a list assignment we simply mean a function L from E to sets of colors; by lists of size k we mean that each list has at least k colors. A matroid M is said to be colorable from lists L if there exists a proper coloring in which every element receives a color from its list.

Definition (list chromatic number). The list chromatic number of a loopless matroid M is the minimum number k such that M is colorable from any lists of size at least k.

Clearly the list chromatic number is at least the chromatic number. The theorem of Seymour asserts that there is actually equality. We provide a proof of this result for completeness; it is a simple application of the matroid union theorem.

Theorem (Seymour). For every loopless matroid the list chromatic number equals the chromatic number.

Proof. Suppose ch(M) = k and take a list assignment with lists of size at least k. For each color c, let Mc be the restriction of M to the set of elements whose lists contain c, extended by loops to the whole ground set. By the assumption on the chromatic number and since every element belongs to at least k of these sets, the result follows from the weighted matroid union theorem applied to the matroids Mc with a suitable weight assignment: there exist independent sets, one for each color, covering E, and they form the desired coloring.

Given a list assignment, we say that a matroid is (a : b)-colorable from lists if it is possible to choose a set of b colors from each list so that, for each color, the elements to which it is assigned form an independent set; in other words, each list contains a subset of size b such that choosing these colors results in a proper (a : b)-coloring of the matroid.

Definition (fractional list chromatic number). The fractional list chromatic number of a loopless matroid M, denoted chf(M), is the infimum of the fractions a/b such that M is (a : b)-colorable from any lists of size at least a.

For graphs the list chromatic number is not bounded by any function of the chromatic number: it can be arbitrarily large already for graphs with chromatic number two.
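Edmonds' formula makes the chromatic number of a small matroid computable by brute force from its rank function alone. A sketch of my own (exponential in |E|, for illustration only):

```python
from itertools import combinations
from math import ceil

def chromatic_number(E, rank):
    """Edmonds' formula: ch(M) is the maximum over nonempty A of
    |A| / r(A), rounded up.  Assumes M is loopless, so r(A) > 0
    for every nonempty A."""
    best = 1
    for k in range(1, len(E) + 1):
        for A in combinations(E, k):
            best = max(best, ceil(len(A) / rank(frozenset(A))))
    return best

# Uniform matroid U_{2,5}: the rank of A is min(|A|, 2).
print(chromatic_number(range(5), lambda A: min(len(A), 2)))  # 3
```

Indeed, five elements cannot be covered by two independent sets of size at most two, while three such sets suffice.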
Dinitz stated a conjecture that the chromatic number and the list chromatic number coincide for certain line graphs; Galvin proved the equality for line graphs of bipartite graphs, and Kahn proved it for arbitrary graphs in an asymptotic sense. Surprisingly, Alon, Tuza and Voigt proved that the fractional versions of the chromatic number and the list chromatic number coincide for all graphs.

We generalize Seymour's theorem to the setting where the sizes of the lists are still fixed, but not necessarily all equal.

Theorem (generalization of Seymour's theorem). Let M be a loopless matroid and let f be a list size function on the ground set. The following conditions are equivalent:
(1) M is colorable from any lists of sizes f;
(2) M is colorable from the particular lists in which the list of an element e consists of the first f(e) colors.

Proof. Let L be a list assignment of sizes f. For each color c, consider the matroid Mc equal to M restricted to the set of elements whose lists contain c, extended by loops. Observe that M is properly colorable from the lists L if and only if the ground set can be covered, with suitable multiplicities, by sets independent in these matroids; by the weighted matroid union theorem this is equivalent to an inequality, for every subset A, between the weight of A and a sum of ranks over the colors. The implication from (1) to (2) is obvious. To prove the converse it is enough to show, for every subset A, the inequality for L assuming the one for the initial lists; both list assignments have sizes f, so the corresponding multisets of sets over which we sum are balanced. Observe that whenever two of the sets are incomparable with respect to inclusion we may replace them by their intersection and union: by submodularity this does not increase the sum of ranks. A suitable parameter grows with each replacement while remaining bounded (the number of sets and their sizes are bounded), hence after finitely many steps the procedure stops with the sets linearly ordered by inclusion, and in this case the inequality is easy to verify, which proves the assertion.

Seymour's theorem follows from this result by taking the constant size function; with weight functions one also gets that the fractional list chromatic number equals the fractional chromatic number. The theorem implies as well the following stronger statement.

Corollary. Let the ground set of a matroid M be partitioned into independent sets, and prescribe list sizes constant on each part of the partition, subject to a suitable inequality between the sizes; then the matroid is colorable from any such lists.

In the next section we show applications of the generalization of Seymour's theorem to basis exchange properties.

Basis exchange properties. Let E be a set and B a family of its subsets. Consider the following conditions:
(1) for all B1, B2 in B and every e in B1 \ B2 there exists f in B2 \ B1 such that (B1 \ e) + f belongs to B;
(2) for all B1, B2 in B and every e in B1 \ B2 there exists f in B2 \ B1 such that both (B1 \ e) + f and (B2 \ f) + e belong to B;
(3) for all B1, B2 in B and every subset A1 of B1 there exists a subset A2 of B2 such that both (B1 \ A1) + A2 and (B2 \ A2) + A1 belong to B.

According to the axiomatization via bases, a nonempty family B satisfying condition (1) is the set of bases of a matroid. It is a nice exercise to show that in this case condition (2) also holds; it is called the symmetric exchange property and was discovered by Brualdi (we often refer to it in a later chapter). Surprisingly, even condition (3), known as the multiple symmetric exchange property, is true; for simple proofs see the references. We demonstrate the usefulness of the generalization of Seymour's theorem
by giving easy proofs of several basis exchange properties, in particular of the multiple symmetric exchange property. The idea of the proofs is to choose a suitable list assignment which guarantees the existence of the required coloring. The crucial point is that in the generalization of Seymour's theorem the lists may have distinct sizes; in particular, a list of an element may have size one, so that the color of this element is already determined by the list assignment.

Proposition (multiple symmetric exchange property). Let B1, B2 be bases of a matroid M. For every subset A1 of B1 there exists a subset A2 of B2 such that both (B1 \ A1) + A2 and (B2 \ A2) + A1 are bases.

Proof. Observe that we can restrict our attention to the case when the bases are disjoint: otherwise consider the matroid obtained by contracting B1 * B2, in which the remainders of the bases are disjoint bases, and a set A2 which exists for the appropriate subset there solves the original problem. So suppose the bases are disjoint and restrict M to their union. Assign to the elements of A1 the list consisting of color 1 only, to the elements of B1 \ A1 the list consisting of color 2 only, and to the elements of B2 the list of both colors. Observe that the condition of our theorem holds, so M is colorable from these lists. Denote by A2 the set of elements of B2 colored 2. Then the color classes, which contain (B2 \ A2) + A1 and (B1 \ A1) + A2, are independent, which gives a good choice of A2.

We formulate also a more general multiple symmetric exchange property for independent sets, which we use in a later chapter; exactly the same proof applies.

Proposition. Let I1, I2 be independent sets of a matroid M. For every subset A1 of I1 there exists a subset A2 of I2 such that both (I1 \ A1) + A2 and (I2 \ A2) + A1 are independent.

The multiple symmetric exchange property can be slightly generalized: instead of a partition of one of the bases into two pieces, we can take an arbitrary partition, and prove that for every partition of the first basis there exists a consistent partition of the second basis, in two ways.

Proposition. Let B1, B2 be bases of a matroid M. For every partition of B1 into sets X1, ..., Xm there exists a partition of B2 into sets Y1, ..., Ym such that (B1 \ Xi) + Yi is a basis for every i.

Proof. Analogously to the proof of the previous proposition we may assume that the bases are disjoint, and we restrict the matroid to their union. Assign to each element of Xi the list of all colors except i, with weight m - 1, and to the elements of B2 the list of all m colors, with weight 1. Observe that the condition of our theorem holds for these lists and weights, so there exists a weighted coloring from the lists. Denote by Yi the set of elements of B2 colored i; this is a good partition, since the color classes, which contain the sets (B1 \ Xi) + Yi, are independent.

Proposition. Let B1, B2 be bases of a matroid M. For every partition of B1 into sets X1, ..., Xm there exists a partition of B2 into sets Y1, ..., Ym such that (B2 \ Yi) + Xi is a basis for every i.

Proof. Analogously to the proofs of the previous propositions we may assume that the bases are disjoint, and we restrict the matroid to their union. Assign to each element of Xi the list consisting of color i only, with weight 1, and to the elements of B2 the list of all m colors, with weight m - 1. Observe that the condition of our theorem holds, so there is a weighted coloring from these lists. Denote by Yi the set of elements of B2 not appearing in the class of color i; this is a good partition, since the color classes, which contain the sets (B2 \ Yi) + Xi, are independent.
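Brualdi's symmetric exchange property, condition (2) above, can be checked exhaustively on any explicitly given family of bases. A small sketch of my own illustrating the condition:

```python
def has_symmetric_exchange(bases):
    """Check condition (2): for all bases B1, B2 and every e in
    B1 \\ B2 there is f in B2 \\ B1 such that both (B1 - e) + f
    and (B2 - f) + e are again bases."""
    B = {frozenset(b) for b in bases}
    for B1 in B:
        for B2 in B:
            for e in B1 - B2:
                if not any((B1 - {e}) | {f} in B and (B2 - {f}) | {e} in B
                           for f in B2 - B1):
                    return False
    return True

# Bases of the graphic matroid of a triangle: any 2 of its 3 edges.
print(has_symmetric_exchange([{0, 1}, {0, 2}, {1, 2}]))  # True
```

A family that is not the basis family of any matroid, such as two disjoint pairs, fails the check, which is consistent with condition (1) already forcing matroid structure.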
Chapter: Game colorings of a matroid

A typical problem of game coloring is to get an optimal coloring of a given combinatorial structure (a graph, a poset, a matroid, and so on) even when one of the players craftily aims to fail this task. The scheme of the games is the following. Two players, Alice and Bob, who both know the structure of a loopless matroid M, play by alternately coloring properly the elements of its ground set, obeying the rules of the game. The game ends when the whole matroid is colored, or when the players arrive at a partial coloring that cannot be extended by making any legal move. Alice wins in the first case; her goal is to color the whole matroid, and therefore she is called the good player, while Bob's goal is the opposite, so he is the bad guy. The chromatic parameter of a given kind of game, denoted with a subscript g, is the minimum number of colors available to each element (usually the size of a common set of colors) for which Alice has a winning strategy. It is always at least the corresponding non-game parameter, since a win of Alice results in a proper coloring. Our goal is a bound in terms of the chromatic number, with the pointwise smallest possible function, holding for every loopless matroid.

We consider three coloring games on matroids, each initially investigated for graphs. For graphs such a bounding function usually does not exist; in two of our three cases we get optimal functions, and in the third case an almost optimal one. In the first section we study the game coloring, introduced for planar graphs by Brams and independently by Bodlaender. We prove that the game chromatic number of every matroid is at most twice its chromatic number; this improves and extends a theorem of Bartnicki, Grytczuk and Kierstead, who proved that the game chromatic number of every graphic matroid is at most three times its chromatic number. Moreover, we show that our bound is in general almost tight. In the next section we analyse the on-line list coloring, introduced for graphs by Schauz. We prove that the corresponding game parameter in fact equals the chromatic number; this generalizes Seymour's theorem to the on-line setting, and in particular implies Seymour's theorem. In the last section we examine the indicated coloring, introduced for graphs by Grytczuk; we prove that the indicated chromatic number of a matroid equals its chromatic number. We hope that our results add a new aspect of structural type, showing that the pathological phenomena appearing in graph coloring games
strategy game matroidal analog well studied graph game coloring introduced brams planar graphs motivation give easier proof four color theorem game independently reinvented bodlaender since topic developed several directions leading interesting results sophisticated methods challenging open problems see recent survey step studying game chromatic number matroids made bartnicki grytczuk kierstead proved every graphic matroid inequality holds theorem improve extend result showing every loopless matroid gives nearly tight bound since theorem provide class matroids satisfying every bounds remain true also fractional parameters well list version game chromatic number results section come paper pass proof upper bound achieve goal shall need general version matroid coloring game let matroids ground set set colors restricted players alternately color elements set elements colored must independent matroid call game coloring game initial game colors coincides coloring game theorem let matroids ground set sets independent alice winning strategy coloring game particular every loopless matroid proof let sets independent corresponding matroids denote set colored elements moment play set elements colored alice coloring element belongs game coloring always color try keep following invariant move set independent observe condition holds uncolored element player make obvious move namely color obvious move condition remains true prove alice keep condition assume condition holded remember sets moment later bob colored element color independent still holds use observation dependent using augmentation axiom see extend independent set independent set extension equals since sets form know alice admissible move strategy color color easy observe move condition preserved get second assertion suppose matroid partition independent sets may take repeat twice set get part assertion infer alice winning strategy therefore usually kind games following question seems natural non trivial graph coloring 
For the graph coloring game it was asked by Zhu.

Question. Suppose Alice has a winning strategy in the coloring game on a matroid with a given number of colors. Does she also have a winning strategy with one more color?

There is also a fractional version of the game chromatic number. Let M be a matroid on a ground set E, and fix positive integers a and b. Consider a slight modification of the matroid coloring game: Alice and Bob alternately properly color elements using a set of a colors, and each element has to receive b colors (fractions). The infimum of the ratios a/b for which Alice has a winning strategy is called the fractional game chromatic number. Clearly, for every matroid it is at most the game chromatic number. Using our theorem applied to matroids equal to a blow-up of a loopless matroid M, in which each element is blown up into b copies, we get that the fractional game chromatic number is at most twice the fractional chromatic number. Motivated by the behaviour of the fractional chromatic number, we ask two natural questions: is the infimum in the definition of the fractional game chromatic number always attained (compare the proof of Edmonds' formula)? and is it also true in this case that the fractional game chromatic number is less than one bigger than the game parameter? The gap could a priori be arbitrarily large.

We generalize our results also to the list version of the game. The rules of the game between Alice and Bob do not change, except that each element has its own list of colors instead of a common set of colors, and an element may be colored only with a color from its list. The minimum number k such that Alice has a winning strategy for every assignment of lists of size k is called the game list chromatic number. Clearly it is at least the game chromatic number for every matroid; however, the upper bound of our theorem holds also for the list parameter.

Corollary. For every loopless matroid the game list chromatic number is at most twice the chromatic number.

Proof. Let k be the chromatic number and suppose L is a list assignment with lists of size 2k. For each color, consider the restriction of M to the elements whose lists contain it; by the generalization of Seymour's theorem the ground set splits into sets independent in these matroids, in pairs, and by our theorem Alice has a winning strategy in the corresponding game. In consequence the game list chromatic number is at most 2k.

In the same way as the fractional game chromatic number one can define the fractional game list chromatic number; clearly it lies between the fractional game chromatic number and the game list chromatic number for every matroid, and using our theorem it is easy to get that it is at most twice the fractional chromatic number. In view of Seymour's theorem we ask the following natural question.

Question. Is the game list chromatic number of every matroid equal to its game chromatic number?

We pass to the proof of the lower bound. We present a family of matroids which slightly improves the lower bound from the paper of Bartnicki, Grytczuk and Kierstead, who gave an example using graphical matroids. We believe that our bound holds for their family as well; however, the proof would be much longer and more technical in that case. Fix k, and let A1, A2, ... be disjoint sets of elements of equal size; let Mk be the transversal matroid on their union whose defining multiset consists of the sets Ai together with a number of copies of the whole ground set.

Theorem. Every matroid Mk satisfies: its chromatic number is as prescribed, while its game chromatic number is almost twice as large.
rank equals elements colored elements colored suggests bob try color one color suppose set colors game show winning strategy bob alice always loose game able color elements set assume alice colors elements set goal color main case understand denote game coloring number elements colored respectively bob wants keep following invariant move color easy see always fact equality every color bob mimics alice moves whenever colors responds coloring element alice move also elements colored independent uncolored element observe bob plays strategy color consequence means alice color elements thus looses remains justify coloring elements alice help coloring elements see modify invariant bob wants keep assume careful case analysis needed denote number elements colored color respectively invariant bob wants keep move following color analogously previous case one show bob keep invariant obey one rule whenever alice colors element bob colors another element set always trying keep number sets among least one element colored low possible completes description bob strategy observe condition gives consequence let union element colored matroid rank otherwise therefore elements colored suppose alice wins game colors strategy bob exactly elements set colored every color additionally equalities inequalities showing contradiction proof theorem viewed extremal examples colors alice win results lead naturally following question question true holds arbitrary loopless matroid game colorings matroid list coloring section consider coloring matroid lists section chapter situation part information colors lists known modeled two person game alice coloring properly elements ground set matroid lists revealed game bob usually alice wins end game colors lists revealed whole matroid colored let loopless matroid ground set set colors let positive integer round bob chooses arbitrary subset inserts color lists elements alice decides elements list color color chooses independent set colors elements round 
Bob again picks an arbitrary subset of E and inserts a new color into the lists of its elements, and Alice chooses an independent subset of it and colors its elements with that color. The game ends when all lists have exactly k elements. If at the end of the game the whole matroid is colored, Alice is the winner; Bob wins in the opposite case. The minimum number k for which Alice has a winning strategy in this game is called the on-line list chromatic number of M. Clearly, for every loopless matroid it is at least the list chromatic number.

The game is a matroidal analog of the on-line list coloring of graphs introduced by Schauz. For several classes of graphs the on-line list chromatic number equals the list chromatic number, and in almost all cases the proofs of the upper bounds remain valid on-line. For example, a simple argument of Thomassen shows that every planar graph is list colorable from lists of size five, and it applies also in the on-line setting; the result of Galvin works on-line as well. Schauz proved even an on-line version of the combinatorial Nullstellensatz of Alon, which also generalizes in this direction; it implies that bounds obtained via the combinatorial Nullstellensatz, in consequence for example for planar bipartite graphs, also hold on-line. An asymptotic version of the Dinitz conjecture was proved on-line by Janssen. Recently, Gutowski showed an on-line version of the theorem of Alon, Tuza and Voigt: namely, the fractional on-line list chromatic number equals the fractional chromatic number for graphs. On the other hand, the two list parameters are known to differ for a certain family of graphs. Surprisingly, the best known upper bound on the on-line parameter in terms of the list chromatic number is exponential; as observed by Zhu, it follows from a result of Alon. Therefore the problem of bringing the upper and lower bounds closer is one of the challenging problems in the area of list chromatic numbers of graphs.

The main result of this section comes from a joint paper with Wojciech Lubawski and asserts that for matroids there is equality between the two list coloring parameters. The proof relies on the multiple symmetric exchange property. Actually, we prove a more general theorem, which can be seen as an on-line version of the generalization of Seymour's theorem. Recall that our earlier theorem provides equivalent conditions for a matroid to be colorable from any lists of given sizes; we may generalize this notion to the on-line setting. The game is as described, except that the list of an element e has exactly f(e) colors when the game ends; if Alice wins, we say that the matroid is on-line colorable from lists of sizes f. The conditions of our earlier theorem are clearly necessary for this property to hold. We aim to prove:

Theorem (with Lubawski; on-line version of the generalization of Seymour's theorem). Let M be a loopless matroid and let f be a list size function on the ground set. The following conditions are equivalent:
(1) M is colorable from any lists of sizes f;
(2) M is colorable from the particular lists in which the list of an element e consists of the first f(e) colors;
(3) M is on-line colorable from lists of sizes f.
In particular, the on-line list chromatic number of every loopless matroid equals its chromatic number.

Proof.
The implications from (3) to (1) and from (1) to (2), as well as part of the assertion, are obvious. We prove the remaining implication by induction on the total size of the lists; for the zero vector the assertion holds trivially. Suppose the assertion of the theorem holds for all smaller sizes. Let X be the set of elements picked by Bob in the first round, whose elements get a color c in their lists; for a subset A denote by 1A its characteristic function. To prove the inductive step we show that there exists an independent set J contained in X such that M is colorable from the initial lists of sizes f diminished by the characteristic function of X, with J colored by c; then Alice simply takes J and colors its elements with the color c. By condition (2) there exists a proper coloring from the initial lists; let I denote the set of elements in its first color class. By the multiple symmetric exchange property for independent sets there exists an independent set J contained in X such that the relevant exchanged sets are independent, and it is not hard to see that M remains colorable from the diminished lists. The assertion of the theorem follows by induction. The second part of the assertion follows from the corollary above by taking the constant size function. Hence it turns out that matroids are very nice structures for coloring: whenever it is possible to color a matroid using k colors, it is possible to do so from arbitrary lists of size k, and even on-line.

Indicated coloring. In this section we consider the following asymmetric variant of the coloring game, originally proposed by Grytczuk for graphs. In each round of the game Alice indicates an uncolored element of the ground set of a loopless matroid, and Bob chooses a color for this element from a fixed set of colors. The only rule Bob has to obey is that the partial coloring must remain proper. The goal of Alice is to achieve a proper coloring of the whole matroid; the least number of colors guaranteeing a win for Alice is called the indicated chromatic number. Clearly it is at least the chromatic number. The indicated variant of the usual chromatic number of graphs, studied by Grzesik, exhibits rather strange behaviour already for bipartite graphs, while for matroids, as we show, there is always equality with the chromatic number. The results of this section come from a paper of the author. We need a more general approach.

Theorem. Let M1, ..., Mk be matroids on a ground set E which can be partitioned into sets I1, ..., Ik with Ii independent in Mi. Then Alice has a winning strategy in the generalized indicated coloring game, in which the elements of color i must be independent in Mi. In particular, the indicated chromatic number of every loopless matroid equals its chromatic number.

Proof. The proof goes by induction on the number of elements of the ground set. For a set consisting of one element the assertion clearly holds; thus assume the assertion is true for sets of smaller size. Denote by r1, ..., rk the rank functions of the matroids M1, ..., Mk respectively. By the matroid union theorem we know that the size of every subset is at most the sum of its ranks. We distinguish two cases.

Case 1: for some proper nonempty subset A there is equality. Subtracting this equality from the inequalities for larger subsets, we get that for every subset the required inequality holds, with the left-hand side
compared against the sum of the ranks in the restricted, respectively contracted, matroids. Thus, by the matroid union theorem, the collection of matroids restricted to A satisfies the assumptions of the theorem, and the collection of contractions by A clearly also does. By the inductive assumption Alice has a winning strategy in the game on the matroids restricted to the set A, and she plays this strategy first; whatever partial indicated coloring arises, she then plays her moves on the complement as in the original game on the contracted matroids. Since that collection also satisfies the assumptions of the theorem, Alice has a winning strategy there by the inductive assumption, and as a result she wins the whole game.

Case 2: all proper nonempty subsets satisfy the strict inequality. In the first round of the game Alice indicates an arbitrary element; obviously its singleton satisfies the inequality, thus Bob has an admissible move, namely he may color it. Suppose Bob colors it with the color i. The original game is now played on the remaining elements with the matroid Mi contracted by the colored element; the second condition of the matroid union theorem holds for this collection, since every subset was smaller than the sum of the ranks by at least one. Hence Alice has a winning strategy by the inductive assumption.

Part: Algebraic problems on matroids

Chapter: f-vectors of simplicial complexes

The results of this chapter come from a paper of the author and concern f-vectors of simplicial complexes. f-vectors encode the number of faces of each dimension in a complex; they are characterized by an inequality, and a similar characterization for matroids, or more generally for pure complexes, whose maximal faces (called facets) are all of one dimension, remains elusive. A pure simplicial complex is called extremal if it attains equality in the highest-dimensional inequality: among pure simplicial complexes of dimension d with a fixed number of facets, the extremal ones have the least possible number of faces of dimension d - 1. Our main result asserts that every extremal simplicial complex is vertex decomposable. This extends a theorem of Herzog and Hibi asserting that extremal complexes have the Cohen-Macaulay property, as it is a simple fact that this last property is implied by vertex decomposability. Moreover, our argument is purely combinatorial, with the main inspiration coming from a proof of the inequality; it answers a question of Herzog and Hibi, who asked for a combinatorial proof of their result.

Simplicial complexes. Simplicial complexes are one of the most basic mathematical structures, even more basic than matroids. They play an important role in a number of mathematical areas, ranging from combinatorics to algebra and topology; we present only the information important from our point of view. Simplicial complexes can be viewed as discrete objects via their geometric realizations, but our approach is combinatorial; thus we introduce the notion of an abstract simplicial complex.
Definition (simplicial complex). A simplicial complex consists of a finite set, called the set of vertices, and a set of its subsets, called faces or simplices, satisfying the property that a subset of a face is a face. The cardinality of a face minus one is called its dimension; the dimension of a simplicial complex is the biggest dimension of its faces. Maximal faces are called facets, and a simplicial complex is called pure if all its facets are of one dimension. Matroids can be regarded as simplicial complexes via their sets of independent sets; clearly a matroid is a pure simplicial complex, since its bases are of equal cardinality.

Let D be a simplicial complex of dimension d, and let fi denote the number of its faces of dimension i.

Definition (f-vector). The f-vector of a simplicial complex is the sequence (f(-1), f0, ..., fd).

Observe that if a simplicial complex has at least one face of dimension i, then it has at least i + 1 faces of dimension i - 1. One of the most natural questions concerning simplicial complexes is to determine the minimum number of faces of dimension i - 1 in a simplicial complex with a given number of faces of dimension i; a slightly more general problem is to characterize the sequences which are f-vectors of simplicial complexes. Both problems were resolved independently by Kruskal and Katona. For a nonnegative integer k, we enlist the k-element subsets of the positive integers in the following order, called the squashed order: A precedes B when the maximum element of their symmetric difference belongs to B. For a family of sets, its shadow is the set of all subsets obtained from its sets by deleting one element; note that the subsets of cardinality k - 1 of the first n sets of size k in the squashed order themselves form an initial segment of the squashed order. The theorem reads as follows.

Theorem (Kruskal-Katona inequality). Let F be a family of n sets of size k. Then the shadow of F is at least as large as the shadow of the family of the first n sets of size k in the squashed order.

The result was generalized by Clements and Daykin; two simple proofs were given, later Hilton gave another one, and there is even an algebraic proof; we refer the reader to the references. The cardinality of the shadow of the first n sets may be easily determined. A positive integer n can be uniquely expressed as a cascade

n = C(a(k), k) + C(a(k-1), k-1) + ... + C(a(j), j),

where C denotes the binomial coefficient and a(k) > a(k-1) > ... > a(j) >= j >= 1; one easily sees that then the shadow of the first n sets of size k has exactly

C(a(k), k-1) + C(a(k-1), k-2) + ... + C(a(j), j-1)

sets, and that the theorem gives a full characterization of f-vectors of simplicial complexes.

Corollary. A sequence is the f-vector of a simplicial complex if and only if the Kruskal-Katona bound holds between every two consecutive entries.

A lot of work has been done to characterize the f-vectors of pure simplicial complexes. Some partial results are known (see the references), but the problem is believed to be complicated, without an easy description of even the shape of such vectors. Matroids, a special class of pure simplicial complexes, are also a mystery, though there are guesses about what conditions are expected to be necessary. Welsh conjectured that the f-vector of a matroid is unimodal: it grows up to its maximum value and then decreases. In general, a pure simplicial complex need not have a unimodal f-vector. Mason strengthened this by conjecturing that the f-vector of a matroid is log-concave. Recently, a breakthrough was made in the seminal paper of Huh concerning the characteristic polynomial of a matroid.
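The cascade representation and the resulting shadow bound from the Kruskal-Katona theorem are directly computable. A small sketch of my own, greedily extracting the cascade and summing the shifted binomials:

```python
from math import comb

def cascade(n, k):
    """Unique representation n = C(a_k, k) + C(a_{k-1}, k-1) + ...
    with a_k > a_{k-1} > ... >= j >= 1, found greedily."""
    rep = []
    while n > 0 and k > 0:
        a = k
        while comb(a + 1, k) <= n:
            a += 1
        rep.append((a, k))
        n -= comb(a, k)
        k -= 1
    return rep

def shadow_lower_bound(n, k):
    """Kruskal-Katona: a family of n sets of size k has at least this
    many sets of size k-1 in its shadow."""
    return sum(comb(a, i - 1) for a, i in cascade(n, k))

print(cascade(5, 3))             # [(4, 3), (2, 2)]: 5 = C(4,3) + C(2,2)
print(shadow_lower_bound(5, 3))  # C(4,2) + C(2,1) = 8
```

For instance, the first five 3-sets in the squashed order are the four 3-subsets of {1, 2, 3, 4} together with {1, 2, 5}, and their shadow consists of the six 2-subsets of {1, 2, 3, 4} plus {1, 5} and {2, 5}, eight sets in total.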
Huh proved that the relevant coefficients of the characteristic polynomial of a matroid representable over a field of characteristic zero form a log-concave sequence; the proof uses advanced tools of algebraic geometry. It was soon generalized by Huh and Katz to all representable matroids, and using this result Lenz proved Mason's conjecture for representable matroids (see also the references). There are analogous conjectures and results concerning the h-vector, another fundamental invariant of a simplicial complex encoding the numbers of faces of each dimension. There are also classes of pure simplicial complexes for which a characterization of f-vectors exists, for example in the case of pure complexes whose faces are the cliques contained in a chordal graph (see the references). Research in the spirit of these theorems has been done also for other structures than simplicial complexes; let us mention a result of Stanley (see the references), proved using the toric geometry of convex polytopes.

Vertex decomposable simplicial complexes.

Definition (link of a face). Let F be a face of a simplicial complex D. The simplicial complex consisting of the faces G disjoint from F with G + F a face of D is called the link of F, denoted lk(F). For a vertex v, we denote by D - v the simplicial complex of faces of D not containing v.

Vertex decomposability is an inductive notion invented by Provan and Billera.

Definition (vertex decomposable pure simplicial complex). A pure simplicial complex D is vertex decomposable if one of the following conditions is satisfied:
(1) D consists only of the empty set,
(2) D consists of a single vertex,
(3) there is a vertex v such that both D - v and lk(v) are pure and vertex decomposable.

In particular, every zero-dimensional complex is vertex decomposable. This is a special case of a much more general fact.

Proposition. Every matroid is vertex decomposable.

Proof. The proof goes by induction on the size of the ground set. If the matroid consists of a vertex and the empty set, the assertion holds. Otherwise, let e be an element of the ground set; then lk(e) and the deletion of e are matroids on smaller ground sets, hence the assertion follows by induction.

We are now ready to define the ring of a simplicial complex.

Definition (Stanley-Reisner ring). Let D be a simplicial complex on a set of n vertices and let K be a field. The face ring of D is the polynomial ring over K in n variables modulo the ideal generated by the monomials whose supports are not faces of D; it is enough to take the generators corresponding to the circuits, the minimal non-faces.

When we say that a pure simplicial complex is Cohen-Macaulay, we always mean that its face ring has the Cohen-Macaulay property (see the references). Rings of simplicial complexes have also a combinatorial description: namely, Reisner proved that a pure simplicial complex is Cohen-Macaulay over a field if and only if, for the complex and for the link of each of its faces, all homologies over that field except the top ones are zero; the possible face numbers of Cohen-Macaulay complexes were described by Macaulay (see the references). It is a folklore result that vertex decomposability implies an even stronger property; we refer the reader to the references (see also the literature for an interesting method of counting homologies).

Proposition. Every vertex decomposable simplicial complex is Cohen-Macaulay over every field.
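The recursive definition of vertex decomposability can be tested directly on small complexes given by their facets. A brute-force sketch of my own (exponential, for illustration only; the input is assumed to be a pure complex):

```python
def maximal(sets):
    """Inclusion-maximal members of a family of sets."""
    sets = {frozenset(s) for s in sets}
    return {S for S in sets if not any(S < T for T in sets)}

def is_vertex_decomposable(facets):
    """Provan-Billera recursion: a simplex (including {emptyset} and a
    single point) is vertex decomposable; otherwise look for a vertex
    whose deletion and link are pure and vertex decomposable."""
    facets = maximal(facets)
    if len(facets) == 1:
        return True
    vertices = set().union(*facets)
    d = len(next(iter(facets)))
    for v in vertices:
        link = maximal(F - {v} for F in facets if v in F)
        deletion = maximal([F for F in facets if v not in F] +
                           [F - {v} for F in facets if v in F])
        if (all(len(F) == d for F in deletion)
                and all(len(F) == d - 1 for F in link)
                and is_vertex_decomposable(deletion)
                and is_vertex_decomposable(link)):
            return True
    return False

# The triangle boundary (bases of U_{2,3}) is vertex decomposable.
print(is_vertex_decomposable([{0, 1}, {0, 2}, {1, 2}]))  # True
```

Two disjoint edges fail the check at every vertex, since the deletion is never pure; this matches the fact that a disconnected one-dimensional complex is not Cohen-Macaulay.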
Due to the above propositions and the theorem of Reisner we get the following interesting property of matroids (see also the corollary below).

Corollary. All reduced homologies of a matroid (and of the links of its faces), except possibly the top ones, are zero.

Extremal simplicial complexes

Definition (extremal simplicial complex). A pure simplicial complex of dimension d − 1 with f facets is called extremal if it has the least possible number of faces of dimension d − 2 among all complexes with f faces of dimension d − 1.

We get the following consequence of the Kruskal–Katona theorem.

Corollary. A pure simplicial complex of dimension d − 1 with f facets is extremal if and only if it has exactly ∂_d(f) faces of dimension d − 2. In particular every zero-dimensional complex is extremal, since such complexes have exactly one face of smaller dimension, namely the empty set.

Herzog and Hibi proved, using algebraic methods (in particular results on componentwise linear ideals), the following theorem.

Theorem (Herzog–Hibi). Every extremal simplicial complex is Cohen–Macaulay, over every field.

They also asked for a combinatorial proof. We give one, by proving by combinatorial means that every extremal simplicial complex is vertex decomposable; due to the Proposition above, every vertex decomposable complex is Cohen–Macaulay.

Theorem. Every extremal simplicial complex is vertex decomposable.

Proof. For a better understanding of extremal simplicial complexes we use Hilton's idea from his proof of the Kruskal–Katona theorem. For a family F of sets, denote by V(F) its underlying set of vertices, and consider the subfamilies of sets containing, respectively avoiding, a fixed vertex, with their respective cardinalities; we always want to find a suitable vertex.

Lemma. For a family F of k-sets whose shadow has the minimum possible cardinality, either there exists a vertex contained in all the sets of F, or F consists of all possible k-subsets of its vertex set.

Proof. This is a double counting argument: we count, in two ways, the pairs consisting of a set of F and an element of its shadow obtained by deleting one element, the sum running over the elements of V(F). Each set of the shadow belongs to the boundary of a bounded number of sets of F, and the resulting bounds are tight only in the two cases of the statement — in the latter case completing a set by deleting an element and inserting another is always possible, which means that F consists of all possible subsets. The statement is then easy to verify by hand. □

Lemma. For every extremal simplicial complex of positive dimension there exists a vertex v such that the link of v is extremal.

Proof. Let Δ be an extremal simplicial complex of dimension d − 1 and let F be its set of facets. If F consists of all possible d-subsets of a given set, the assertion of the lemma clearly holds for any vertex v. Otherwise, due to the previous lemma, there exists a vertex v contained in all sets of a minimizing subfamily. The families of faces coming from the facets containing v and from the remaining facets are disjoint and comparable in the inclusion order; applying the Kruskal–Katona inequality to both, if lk_Δ(v) were not extremal we would obtain more faces of dimension d − 2 than Δ has, contradicting the extremality of Δ.
For the subcomplexes involved, equality in these inequalities holds precisely when the complex is extremal; with equality we also get that ⟨G⟩ is extremal, where ⟨G⟩ denotes the simplicial complex generated by a set of faces G. The first equality is obvious, and the second is clear, as every face is a subface of some facet and belongs to the corresponding subcomplex, for otherwise we would get a contradiction. As a consequence, taking the link of v we get the assertion of the lemma. □

The rest of the proof of the theorem goes by induction, first on the dimension and secondly on the number of facets. If the complex is zero-dimensional it consists of points and is vertex decomposable. Otherwise, by the lemma there exists a vertex v such that the complexes lk_Δ(v) and Δ ∖ v are both extremal, the first of lower dimension and the second either of lower dimension or with fewer facets and the same dimension; by the inductive hypothesis both are vertex decomposable, and as a consequence so is Δ. □

We also provide an example showing that the opposite implication to the one of the theorem does not hold.

Example. A path on at least three vertices is a vertex decomposable simplicial complex which is not extremal.

The theorem, treated as a characterization implying vertex decomposability, is best possible in the following sense. Let Δ be a pure simplicial complex of dimension d − 1; due to the corollary, the theorem asserts that if the number of faces of dimension d − 2 attains the Kruskal–Katona bound, then Δ is vertex decomposable. The following example shows that this is no longer true already when the bound is exceeded.

Example. Let Δ be a pure simplicial complex of dimension one whose set of facets consists of exactly two disjoint facets. The complex Δ is not connected; thus, due to the theorem of Reisner, it is not Cohen–Macaulay over any field, and as a consequence it is also not vertex decomposable.

Unfortunately, neither of the inclusions between the class of matroids and the class of extremal simplicial complexes holds: there is a simplicial complex which is extremal but is not a matroid, and there is a matroid which is not an extremal simplicial complex.

White's conjecture

Describing a minimal generating set of a toric ideal is a difficult problem of commutative algebra. White conjectured that for every matroid its toric ideal is generated by quadratic binomials corresponding to symmetric exchanges. Despite the big interest of the algebraic combinatorics community, the conjecture is still wide open; it has been confirmed only for special classes of matroids, like graphic matroids, sparse paving matroids, lattice path matroids (a subclass of transversal matroids), and matroids of rank at most three. In the main theorem of this chapter we prove White's conjecture for strongly base orderable matroids. This is already a large class of matroids; in particular it contains transversal matroids, for which it was known that the toric ideal is generated by quadratic binomials, and also laminar matroids. However, the most important of our results is the theorem on White's conjecture up to saturation: we prove that the saturation with respect to the irrelevant ideal of the ideal generated by the quadratic binomials corresponding to symmetric exchanges equals the toric ideal of the matroid. This result has an even more natural
formulation in the language of algebraic geometry: namely, it means that both ideals define the same projective scheme, while the conjecture asserts that the schemes together with their embeddings are equal. To our knowledge this is the first result concerning White's conjecture valid for all matroids. In the last section we discuss the original formulation of the conjecture due to White. Its combinatorial content asserts that if two multisets of bases of a matroid have equal unions (as multisets), then one can pass between them by a sequence of single-element symmetric exchanges. In fact White defined three properties of a matroid, of growing strength, and conjectured that all matroids satisfy each of them; we reveal relations between these properties and present a beautiful alternative combinatorial reformulation due to Blasiak. We also mention how to extend our results to discrete polymatroids. The results of this chapter come from a joint work with Mateusz Michałek.

Introduction

White's conjecture can be expressed in the languages of combinatorics, algebra, and geometry. We decided to present the conjecture and our results in the algebraic language, since it gives the most natural and compact formulation; the ideas of the proofs, however, are purely combinatorial.

Conjecture of White

Let M be a matroid on a ground set E with the set of bases B. Let S_M = K[y_B : B ∈ B] be a polynomial ring. The natural definition of the toric ideal of a matroid is the following: the toric ideal of M, denoted by I_M, is the kernel of the map S_M → K[x_e : e ∈ E] sending y_B to the product of the x_e over e ∈ B. Suppose a pair of bases D_1, D_2 is obtained from a pair of bases B_1, B_2 by a symmetric exchange. Then clearly the quadratic binomial y_{B_1} y_{B_2} − y_{D_1} y_{D_2} belongs to the ideal I_M; we say that such a binomial corresponds to a symmetric exchange.

Conjecture (White). For every matroid M its toric ideal I_M is generated by quadratic binomials corresponding to symmetric exchanges.

Since every toric ideal is generated by binomials, it is not hard to rephrase the conjecture in a combinatorial language: it asserts that if two multisets of bases of a matroid are equal as unions of multisets, then one can pass between them by a sequence of single-element symmetric exchanges. In fact this was the original formulation due to White; we immediately see that the conjecture does not depend on the field K. Actually White stated a bunch of conjectures of growing strength: the weakest asserts that the toric ideal is generated by quadratic binomials, the second one is the Conjecture above, and there is an analog of the Conjecture for the noncommutative polynomial ring. However, the Conjecture above has attracted the most attention because of its consequences in commutative algebra; we discuss the original conjectures of White and show some subtle relations between them in the last section.

The first result towards the conjecture was obtained by Blasiak, who confirmed it for graphical matroids. Kashiwabara checked the case of matroids of small rank, Schweig proved the case of lattice path matroids (a subclass of transversal matroids), and recently Bonin solved the case of sparse paving matroids.
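For a concrete feeling of symmetric exchanges, the following Python sketch (our own illustration) enumerates the bases of the graphic matroid of K4 as spanning trees and lists, for a pair of bases, the symmetric exchanges between them; the accompanying check verifies the symmetric exchange property on this example:

```python
from itertools import combinations

def spanning_trees(n, edges):
    """Bases of the graphic matroid: spanning trees of the graph
    on vertices 0..n-1 with the given edge list."""
    def connected(tree):
        adj = {v: set() for v in range(n)}
        for u, w in tree:
            adj[u].add(w); adj[w].add(u)
        seen, stack = {0}, [0]
        while stack:
            v = stack.pop()
            for w in adj[v] - seen:
                seen.add(w); stack.append(w)
        return len(seen) == n
    return [frozenset(t) for t in combinations(edges, n - 1) if connected(t)]

def symmetric_exchanges(B1, B2, bases):
    """All pairs (e, f), e in B1\\B2 and f in B2\\B1, such that both
    (B1 - e) + f and (B2 - f) + e are bases -- a symmetric exchange."""
    return [(e, f)
            for e in B1 - B2
            for f in B2 - B1
            if (B1 - {e}) | {f} in bases and (B2 - {f}) | {e} in bases]
```

Each pair (e, f) returned corresponds to a quadratic generator y_{B1} y_{B2} − y_{B1−e+f} y_{B2−f+e} of the ideal J_M.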
Sparse paving matroids are the matroids all of whose circuits have at least r elements, where r is the rank; for transversal matroids it is known, due to Conca, that the toric ideal is generated by quadratic binomials.

White's conjecture for strongly base orderable matroids

We prove White's conjecture for strongly base orderable matroids. This is already a large class of matroids; its advantage is that it is characterized by a certain property instead of by a presentation, and thus it is often easier to check that a matroid is strongly base orderable. In particular we get that White's conjecture is true for transversal matroids and also for laminar matroids. The argument uses an idea from the proof of the theorem of Davies and McDiarmid, which asserts that if the ground set of two strongly base orderable matroids can be partitioned into bases in each of them, then there exists also a common partition into sets which are bases in both.

Theorem. For a strongly base orderable matroid M, the toric ideal I_M is generated by quadratic binomials corresponding to symmetric exchanges.

Proof. Every toric ideal is generated by binomials; thus it is enough to prove that the quadratic binomials corresponding to symmetric exchanges generate all binomials of I_M. Fix n; we show, by decreasing induction on the overlap function

d(y_{B_1}⋯y_{B_n} − y_{D_1}⋯y_{D_n}) = max over permutations σ of Σ_i |B_i ∩ D_{σ(i)}|,

that every binomial y_{B_1}⋯y_{B_n} − y_{D_1}⋯y_{D_n} of I_M is generated by quadratic binomials corresponding to symmetric exchanges. Clearly, for the biggest possible value of the overlap function there exists a permutation σ with B_i = D_{σ(i)} for all i, and the binomial is zero.

Suppose the assertion holds for all binomials with overlap function greater than some value, and let b = y_{B_1}⋯y_{B_n} − y_{D_1}⋯y_{D_n} be a nonzero binomial of I_M whose overlap function equals this value. Without loss of generality we may assume that the identity permutation realizes the maximum in the overlap function. Since b ∈ I_M, the multisets B_1 ∪ ⋯ ∪ B_n and D_1 ∪ ⋯ ∪ D_n are equal; thus there exists an element belonging to D_1 ∖ B_1 and, without loss of generality, to B_2. Since M is a strongly base orderable matroid, there exist bijections π_B : B_1 → B_2 and π_D : D_1 → D_2 with the multiple symmetric exchange property; we may assume that π_B is the identity on B_1 ∩ B_2 and, similarly, that π_D is the identity on D_1 ∩ D_2. Let G be the graph on the vertex set B_1 ∪ B_2 ∪ D_1 ∪ D_2 whose edges join b with π_B(b) and d with π_D(d). The graph G is bipartite, since it is a sum of two matchings; split its vertex set into two independent sets. In the sense of the multiple symmetric exchange property, this produces pairs of bases (B_1', B_2') and (D_1', D_2') obtained from the pairs (B_1, B_2) and (D_1, D_2) by a sequence of symmetric exchanges. Therefore the binomial y_{B_1}y_{B_2} − y_{B_1'}y_{B_2'} is generated by quadratic binomials corresponding to symmetric exchanges, and analogously the binomial y_{D_1}y_{D_2} − y_{D_1'}y_{D_2'} is generated by quadratic binomials corresponding to symmetric exchanges. Moreover, since the two independent sets are disjoint, the overlap function of the binomial y_{B_1'}y_{B_2'}y_{B_3}⋯y_{B_n} − y_{D_1'}y_{D_2'}y_{D_3}⋯y_{D_n} is greater; thus by the inductive assumption
it is generated by quadratic binomials corresponding to symmetric exchanges. Composing the three facts we get the inductive assertion for b. □

White's conjecture up to saturation

Let m = (y_B : B ∈ B) be the ideal of S_M generated by all the variables — the irrelevant ideal. Recall that the ideal I : m^∞ = {a ∈ S_M : a m^n ⊆ I for some n} is called the saturation of the ideal I with respect to m. Let J_M ⊆ I_M be the ideal generated by the quadratic binomials corresponding to symmetric exchanges; clearly White's conjecture asserts that J_M and I_M are equal. We prove a slightly weaker property, namely that these ideals are equal up to saturation; in fact, since the ideal I_M is prime, it is saturated, so we prove J_M : m^∞ = I_M.

We can express this result in the language of algebraic geometry. From this point of view it is natural to compare the schemes — the main objects of study in algebraic geometry — defined by the ideals, instead of the ideals themselves. Both ideals are homogeneous (I_M is homogeneous because every basis has the same number of elements), so both kinds of associated schemes make sense; two projective schemes are equal if and only if the saturations of the defining ideals with respect to the irrelevant ideal are equal. Thus White's conjecture asserts an equality of ideals, whereas we prove that for every matroid the projective schemes Proj(S_M/J_M) and Proj(S_M/I_M) are equal. Additionally, since J_M is contained in I_M, this implies that the ideals have equal radicals; in other words, the set of zeros of J_M is the toric variety Proj(S_M/I_M). We refer the reader to the literature for background on toric geometry. This variety was already studied by White, who proved that it is projectively normal.

Theorem. White's conjecture is true up to saturation for every matroid M, that is, J_M : m^∞ = I_M.

Proof. Since J_M ⊆ I_M and I_M is saturated, we only have to prove the opposite inclusion. It is enough to consider binomials, since the toric ideal I_M is generated by binomials. To prove that a binomial b = y_{B_1}⋯y_{B_n} − y_{D_1}⋯y_{D_n} of I_M belongs to J_M : m^∞, it is enough to show that y_B^N b ∈ J_M for some basis B and some N.

Fix a basis B. The polynomial ring has a natural grading given by the degree functions deg_e for e ∈ E; we extend this notion also to bases. Notice that the ideal I_M is homogeneous with respect to all these gradings, and that b is as well; multiplying by a power of y_B does not change this. For a basis C put deg_B(C) = |C ∖ B|; if deg_B(C) ≤ 1, that is, C differs from B by at most a single element, we call the basis C and the corresponding variable y_C balanced. A monomial or binomial is called balanced if it contains only balanced variables. We prove by induction the following claim.

Claim 1. For every binomial b = y_{B_1}⋯y_{B_n} − y_{D_1}⋯y_{D_n} of I_M we have y_B^N b ∈ J_M, where N depends only on the degrees deg_B of the bases involved.

The statement of the claim means exactly that b belongs to J_M : m^∞, and the claim is obvious when the binomial equals zero. It is easier to work with balanced variables; we therefore first provide the following lemma.

Lemma. For every basis C there exist balanced bases C_1, …, C_k, where k = deg_B(C), satisfying y_B^{k−1} y_C − y_{C_1}⋯y_{C_k} ∈ J_M.
Proof. The proof goes by induction on k = deg_B(C). For k ≤ 1 the assertion is clear, as C itself is balanced. For k > 1, by the symmetric exchange property applied to C and B there exist bases C' and B' obtained by a symmetric exchange with deg_B(C') = k − 1 and deg_B(B') = 1. Applying the inductive assumption to C' we get balanced bases C_1, …, C_{k−1} with y_B^{k−2} y_{C'} − y_{C_1}⋯y_{C_{k−1}} ∈ J_M. Hence, since y_B y_C − y_{B'} y_{C'} corresponds to a symmetric exchange, we get y_B^{k−1} y_C − y_{B'} y_{C_1}⋯y_{C_{k−1}} ∈ J_M, and the basis B' is balanced. □

The lemma allows us to exchange each factor y_C of a monomial, at the cost of multiplying by y_B^{deg_B(C)−1}, for a product of balanced variables; notice that membership of the corresponding binomial in I_M is preserved. Hence Claim 1 reduces to the following claim.

Claim 2. Every balanced binomial y_{B_1}⋯y_{B_n} − y_{D_1}⋯y_{D_n} of I_M belongs to J_M : m^∞.

To a balanced monomial y_{B_1}⋯y_{B_n} we associate a bipartite multigraph with vertex classes E (the ground set of the matroid) and {1, …, n}: for each variable y_{B_i} of the monomial we put an edge {e, i} for the element e of B_i ∖ B, if there is one. Let y_{B_1}⋯y_{B_n} − y_{D_1}⋯y_{D_n} be a balanced binomial of I_M. Since the binomial belongs to I_M, every vertex has the same degree with respect to the multigraphs of y_{B_1}⋯y_{B_n} and of y_{D_1}⋯y_{D_n}; thus we can apply the following lemma, obvious from graph theory.

Lemma. Let G, H be bipartite multigraphs on the same vertex classes, and suppose that every vertex has the same degree with respect to G and with respect to H. Then the symmetric difference of the multisets of edges of G and H can be partitioned into alternating cycles — simple cycles of even length with every two consecutive edges belonging to different graphs.

From the partition we get a simple cycle with vertices numbered consecutively (cyclic numeration, modulo the length of the cycle) such that the product of the variables coming from the edges of one graph divides y_{B_1}⋯y_{B_n}, while the product coming from the other divides y_{D_1}⋯y_{D_n}. Suppose first that the balanced binomial splits along the cycle into balanced binomials of I_M of smaller degree; then by the inductive assumption (on the degree) each of them belongs to J_M : m^∞, and as a consequence so does the original binomial. We may therefore suppose that this is not the case, and assume moreover that the bases involved cover the whole ground set, since otherwise we may contract and restrict the matroid to a suitable set — obviously a binomial generated by quadratic binomials corresponding to symmetric exchanges in a minor is so generated also in the matroid itself.

We say that a monomial m is achievable (in our situation) if y_B^N (y_{B_1}⋯y_{B_n} − m) ∈ J_M for some N; we say also that the variables dividing an achievable monomial are achievable. Observe that if some variable is achievable both from the monomial y_{B_1}⋯y_{B_n} and from the monomial y_{D_1}⋯y_{D_n}, then the assertion follows by induction: indeed, cancelling a common achievable variable we pass to a binomial of smaller degree, and the inductive assumption together with the definition of achievability gives the claim; hence from now on suppose, to the contrary, that no variable is achievable from both monomials — we shall reach a contradiction. The forthcoming part of the proof illustrates the sentence of Sturmfels that combinatorics is the nanotechnology of mathematics.

Work in the cyclic group of indices modulo the length of the cycle, and for its elements k define the families of sets S_k^i and T_k^i obtained from the bases along the cycle by shifting their distinguished elements by k, together with auxiliary sets U_k^i; notice that S_k^n may be chosen arbitrarily, and hence so may the corresponding monomials.

Lemma. Suppose that for a fixed k every one of the following conditions is satisfied: the sets S_k^i and T_k^i are bases; the corresponding monomials are achievable, one from each side. Then for every i neither of the auxiliary sets U_k^i is a basis.

Proof. Suppose to the contrary that such a set is a basis. Then the corresponding quadratic binomial belongs to J_M, and thus, by the achievability assumption, the corresponding variable would be achievable from both monomials — a contradiction. □
An analogous argument applies after the shift by one.

Lemma. Suppose that for a fixed k the following conditions are satisfied: the set S_k^i is a basis and the corresponding monomial is achievable; no auxiliary set is a basis. Then for every i the shifted set S_{k+1}^i is a basis and the corresponding monomial is achievable.

Proof. From the symmetric exchange property it follows that there exist suitable exchanges producing new bases; notice that a degenerate exchange would contradict the condition on the auxiliary sets. In particular the corresponding quadratic binomial belongs to J_M, hence the shifted monomial is achievable; the condition guarantees that the required variable exists, and thus the assertion follows. □

Analogously we get the shifted version of the lemma for the sets T_k^i. We are now ready to reach a contradiction by an inductive argument. First we verify the assumptions of the first lemma. Then, given the assumptions of the lemmas for some k, their assertions provide the assumptions for k + 1; thus, by induction, the assertions of the lemmas hold for every k. For a suitable k we obtain an achievable monomial which is achievable from both sides, which gives a contradiction. □

Corollary. White's conjecture holds for a matroid M if and only if the ideal J_M is saturated. In particular, in order to prove White's conjecture it is enough to show that the ideal J_M is prime, or just radical.

Remarks

We begin with the original formulation of the conjectures stated by White, and at the end we present a beautiful alternative combinatorial reformulation due to Blasiak, which is very natural though not obviously equivalent. Call two sequences of bases (B_1, …, B_n) and (D_1, …, D_n) compatible if the multisets B_1 ∪ ⋯ ∪ B_n and D_1 ∪ ⋯ ∪ D_n are equal. White defined three equivalence relations on the set of sequences of bases of equal length: two compatible sequences are in the first relation if one may be obtained from the other by a composition of symmetric exchanges — the transitive closure of the relation which exchanges a pair of bases of the sequence into a pair obtained by a symmetric exchange; they are in the second relation if one may be obtained from the other by a composition of symmetric exchanges and permutations of the bases; and they are in the third relation if one may be obtained from the other by a composition of multiple symmetric exchanges. For i = 1, 2, 3 let K_i denote the class of matroids in which every two compatible sequences of bases of equal length satisfy the i-th relation. These notions have their original algebraic meaning: one property means that the toric ideal is generated by quadratic binomials; another means that the toric ideal is generated by quadratic binomials corresponding to symmetric exchanges; and the remaining property is the analog for the noncommutative polynomial ring. We are ready to formulate the original conjecture.

Conjecture (White). The following equalities hold: K_1 is the class of all matroids; K_2 is the class of all matroids; K_3 is the class of all matroids. Clearly the classes are nested.
One part of this conjecture coincides with the Conjecture stated in the introduction. It is straightforward that:

Proposition. The classes K_1, K_2, K_3 are closed under taking minors and the dual matroid; the class K_1 is closed under direct sums.

White claims that also the other classes are closed under direct sums; however, unfortunately there is a gap in his proof, and we believe this to be an open question. In the Corollary below we show the consequences that closedness under direct sums would have for the relations between the classes.

Lemma. The following conditions are equivalent for a matroid M: (i) M ∈ K_2; (ii) for every two bases B, D of M the sequences (B, D) and (D, B) are related by a composition of symmetric exchanges.

Proof. The implication (i) ⇒ (ii) is clear. To get the opposite implication, it is enough to recall that every permutation is a composition of transpositions. □

Proposition. Suppose the classes are closed under direct sums. Then the following conditions are equivalent for a matroid M: membership in K_1 and membership in K_2.

Proof. One implication has already been discussed. For the other, suppose M ∈ K_1; due to the lemma, in order to prove M ∈ K_2 it is enough to show that for any two bases B, D the relation holds for the sequences (B, D) and (D, B). First consider the direct sum of copies of M, and for compatible sequences of bases of M the basis of the direct sum consisting, in each coordinate, of a copy of the respective basis. Second, by the assumption, notice that a symmetric exchange between bases of the direct sum appearing in some coordinate is either trivial or a symmetric exchange of M; thus symmetric exchanges in the direct sum certify, after projection to a single coordinate and without loss of generality dropping the permutation if one is needed, a composition of symmetric exchanges in M. □

Corollary. For classes of matroids closed under direct sums the standard version of White's conjecture is equivalent to the strong one. In particular strongly base orderable matroids belong to all the classes, and for them the conjectures are equivalent.

We proceed to the alternative combinatorial description of White's conjectures due to Blasiak. For a matroid whose ground set is a union of two disjoint bases, one can associate the graph whose vertices are the pairs of bases whose union is the ground set, with edges joining pairs of bases related by a symmetric exchange. Already Farber, Richter and Shank conjectured that this graph is always connected, and proved it for graphic matroids (see also the game version). Blasiak introduced an analog of this graph for matroids whose ground set is a disjoint union of n bases: for such a matroid, associate the graph whose vertices are the sets of n bases whose union is the ground set, with an edge joining two vertices whose intersection contains all but two of the bases of the partitions. Blasiak proved the following reformulation.
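These graphs are easy to generate for small matroids. The following Python sketch (our own illustration, for the case of two bases) builds the graph on the pairs of bases partitioning the ground set of the uniform matroid U_{2,4} and checks its connectivity:

```python
from itertools import combinations

def exchange_graph(ground, bases):
    """Vertices: unordered pairs {B, ground \\ B} with both halves bases.
    Edges: pairs related by a single symmetric exchange."""
    bases = set(bases)
    ground = frozenset(ground)
    verts = {frozenset({B, ground - B})
             for B in bases if ground - B in bases}
    edges = set()
    for V in verts:
        B1, B2 = tuple(V)
        for e in B1:
            for f in B2:
                C1, C2 = (B1 - {e}) | {f}, (B2 - {f}) | {e}
                if C1 in bases and C2 in bases:
                    W = frozenset({C1, C2})
                    if W != V:
                        edges.add(frozenset({V, W}))
    return verts, edges

def is_connected(verts, edges):
    verts = list(verts)
    if not verts:
        return True
    adj = {v: set() for v in verts}
    for E in edges:
        a, b = tuple(E)
        adj[a].add(b); adj[b].add(a)
    seen, stack = {verts[0]}, [verts[0]]
    while stack:
        v = stack.pop()
        for w in adj[v] - seen:
            seen.add(w); stack.append(w)
    return len(seen) == len(verts)
```

For U_{2,4} the three partitions into two bases form a triangle, so the graph is connected, in accordance with the conjecture of Farber, Richter and Shank.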
Proposition (Blasiak). The weak version of White's conjecture is equivalent to the fact that for every n and every matroid whose ground set is a disjoint union of n bases the associated graph is connected; the strong version of White's conjecture is equivalent to the fact that for every n and every such matroid the graph whose edges come from single symmetric exchanges is connected.

In the same way as one associates the toric ideal with a matroid, one can associate a toric ideal with a discrete polymatroid. Herzog and Hibi extended White's conjecture to discrete polymatroids, and also asked whether the toric ideal of a discrete polymatroid possesses a quadratic Gröbner basis (see also the remark in the literature). Our theorems are also true for discrete polymatroids, and there are several ways to prove that the results hold in this setting. One possibility is to use the Lemma to reduce the question whether a binomial is generated by quadratic binomials corresponding to symmetric exchanges in a discrete polymatroid to the same question in a certain matroid. Another possibility is to associate with a discrete polymatroid a matroid whose ground set consists of copies of the original ground set and whose independent sets correspond to those of the polymatroid; it is straightforward that compatibility of sequences of bases and generation are preserved, and moreover one easily proves that a symmetric exchange in the polymatroid corresponds to at most two symmetric exchanges in the matroid.

Part: Applications of matroids

Obstacles for splitting necklaces

The results of this chapter come from a separate paper; they concern the famous necklace splitting problem. Suppose we are given a colored necklace, and we want to cut it so that the resulting pieces can be fairly split into two parts; what is the minimum number of cuts we have to make? In the continuous version of the problem (the discrete one is discussed below) the necklace is an interval, a coloring is a partition of the necklace into Lebesgue measurable sets, and a splitting is fair if each part captures the same measure of every color. A theorem of Goldberg and West asserts that when the number of parts is two, every necklace colored with k colors has a fair splitting using at most k cuts; in fact there is a simple proof using the celebrated Borsuk–Ulam theorem. Alon extended the result to an arbitrary number of parts q, showing that in this case k(q − 1) cuts suffice. In a joint paper with Alon and Grytczuk we studied a kind of opposite question, motivated by the problem of strongly nonrepetitive sequences: namely, for which values of the parameters is there a measurable coloring of the real line such that no interval has a fair splitting into two parts with a bounded number of cuts? The theorem of Goldberg and West implies a necessary condition on the number of colors, which we proved to be sufficient. In this chapter we generalize this theorem to Euclidean spaces of arbitrary dimension and to an arbitrary number of parts. In the higher dimensional problem the role of the interval is played by the cube, and cuts are made by hyperplanes. We prove that a condition on the number of colors which is of the order of the necessary condition — a consequence of the theorem of Alon and, in general, of the theorem of de Longueville and Živaljević — is sufficient; this substantially improves the exponentially worse bound of Grytczuk and Lubawski.
Moreover, in dimension one we get exactly the result of the previous paper. Additionally we prove a stronger, inequality-type theorem for colorings avoiding necklaces with a fair splitting using arbitrary hyperplane cuts. Another result of this chapter is of a slightly different flavor: for the real line it was proved that there is a measurable coloring distinguishing intervals — no two intervals contain the same measure of every color — which solved a problem posed earlier; we generalize this by proving that there is a measurable coloring of the d-dimensional space distinguishing cubes. The main framework of the proofs is topological — the Baire category theorem; however, the crucial part of the reasoning, which impacts the bounds we achieve, consists of several applications of algebraic matroids.

Obstacles for splitting necklaces: introduction

Consider the following problem, the necklace splitting problem. A necklace colored with k colors has been stolen by q thieves. They intend to share the necklace fairly, but they do not know the values of the different colors; therefore they want to cut the necklace so that the resulting pieces can be fairly split into q parts, which means that each part captures the same amount of every color. What is the minimum number of cuts they have to make?

To be precise, a discrete necklace is an interval of integers colored with k colors, and a fair q-splitting of size t is a division of the necklace using t cuts into intervals which can be split into q parts with an equal number of integers of every color. Goldberg and West proved that every discrete necklace in which the number of integers of every color is divisible by 2 has a fair 2-splitting of size at most k (see also a short proof using the Borsuk–Ulam theorem and its applications in combinatorics). It is easy to see that the bound is tight: consider a necklace consisting of k pairs of consecutive integers, with the i-th pair colored by the i-th color. The famous generalization of Alon to the case of q thieves asserts that every discrete necklace in which the number of integers of every color is divisible by q has a fair q-splitting of size at most k(q − 1); a similar argument shows that this number of cuts is optimal. The proofs of the discrete theorems go via the continuous version of the problem: a necklace is the unit interval, a coloring is a partition of the necklace into k Lebesgue measurable sets, and a splitting is fair if each part captures the same measure of every color. Alon obtained an even more general version, in which the coloring is replaced by a collection of arbitrary continuous probability measures; it generalizes the theorem of Hobby and Rice.

Theorem (Alon). Let μ_1, …, μ_k be continuous probability measures on the interval. Then there exists a division of the interval, using k(q − 1) cuts, into subintervals which can be split into q parts of equal measure with respect to every μ_i.

We are interested in the multidimensional necklace splitting problem. Notice that every axis-aligned hyperplane in the d-dimensional space is perpendicular to exactly one of the axes and parallel to all the others; if it is perpendicular to the i-th axis we call it aligned to that axis. Hence a collection of axis-aligned hyperplane cuts gives a natural partition of the cuts into groups consisting of exactly the hyperplanes aligned to each axis.
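The discrete theorem of Goldberg and West is easy to check by brute force on small necklaces. The following Python sketch (our own illustration) searches for a fair 2-splitting of a given discrete necklace with at most a prescribed number of cuts; on the tight example of k pairs of consecutive elements it confirms that k − 1 cuts do not suffice while k do:

```python
from itertools import combinations, product
from collections import Counter

def fair_split(necklace, cuts):
    """Search for a fair 2-thief splitting of a colored necklace
    (a sequence of color labels, each color appearing an even number
    of times) using at most `cuts` cuts.  Returns (cut positions,
    piece assignment) or None."""
    n = len(necklace)
    total = Counter(necklace)
    assert all(v % 2 == 0 for v in total.values())
    half = Counter({c: v // 2 for c, v in total.items()})
    for r in range(cuts + 1):
        for pos in combinations(range(1, n), r):
            # cut the necklace into r + 1 pieces
            bounds = [0] + list(pos) + [n]
            pieces = [necklace[a:b] for a, b in zip(bounds, bounds[1:])]
            for assign in product((0, 1), repeat=len(pieces)):
                got = Counter()
                for a, piece in zip(assign, pieces):
                    if a == 0:
                        got.update(piece)
                if got == half:       # thief 0 gets exactly half of each color
                    return pos, assign
    return None
```

For the necklace "aabb" (two colors, k = 2) one cut is never enough, while two cuts give a fair splitting such as a | ab | b.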
By projecting to a fixed axis, Alon's theorem implies that for continuous probability measures μ_1, …, μ_k on the d-dimensional cube there exists a fair q-splitting using k(q − 1) hyperplanes aligned to one axis. De Longueville and Živaljević generalized this version to an arbitrary partition of the cuts between the axes.

Theorem (de Longueville, Živaljević). Let μ_1, …, μ_k be continuous probability measures on the d-dimensional cube, and let a selection of nonnegative integers t_1, …, t_d satisfying t_1 + ⋯ + t_d = k(q − 1) be given. Then there exists a division of the cube, using t_i hyperplane cuts perpendicular to the i-th axis, into cuboids which can be split into q parts of equal measure with respect to every μ_i.

In the case of arbitrary continuous probability measures it is also easy to see that the total number of hyperplanes needed is best possible: indeed, divide the diagonal of the cube into intervals and consider one-dimensional measures on them. However, this argument is still of a one-dimensional nature. In this chapter we restrict our attention from continuous probability measures to measurable colorings, and we are interested in splittings of fully colored necklaces. A necklace is now a nontrivial cube in the d-dimensional space, partitioned into Lebesgue measurable sets, called colors. A fair q-splitting of size t of a necklace is a division using t hyperplane cuts into cuboids which can be split into q parts, each capturing exactly a q-th of the total measure of every color. An interesting question concerning the splitting of multidimensional necklaces is the minimum number of cuts such that every necklace has a fair q-splitting using that many axis-aligned hyperplane cuts. Clearly Alon's theorem, and more generally the theorem of de Longueville and Živaljević, implies an upper bound; we prove that this bound is tight, which together with their theorem fully characterizes the set of tuples (t_1, …, t_d) such that every necklace has a fair q-splitting using t_i hyperplanes aligned to the i-th axis.

However, the most interesting problem is a bit different: we want a measurable coloring of the whole space which avoids necklaces with a fair splitting of bounded size, and of course we want to minimize the number of colors used. The problem was already formulated, and partially solved, in the joint paper mentioned above. The theorem of Goldberg and West, and in general Alon's theorem, implies a lower bound on the number of colors needed; for the line we proved that this many colors suffice.

Theorem (previous paper). For all integers q and t there exists a measurable coloring of the real line such that no necklace has a fair q-splitting using at most t cuts, provided the number of colors is large enough.

Grytczuk and Lubawski generalized this result to higher dimensional spaces, proving that it holds provided the number of colors is far above the expected lower bound — even for the plane this gives a much worse result than the one-dimensional one. Grytczuk and Lubawski asked whether fewer colors suffice in the case of two parts. In the Theorem below we prove that the answer is yes, and even with a better bound, valid for an arbitrary number of parts; moreover, in this case the set of good colorings is dense. Note that the bound is of the order of the necessary condition which is a consequence of the theorem of Alon and, in general, of the theorem of de Longueville and Živaljević.
In dimension one our theorem coincides with the result of the previous paper. We conjecture that the bound is tight; it captures the idea of counting the degrees of freedom of the objects we want to split. In a later section we generalize these results to the situation where arbitrary hyperplane cuts are allowed.

This is closely related to the mass equipartitioning problem: for which parameters do there exist, for every collection of continuous mass distributions in the d-dimensional space, hyperplanes dividing the space into pieces that can be split into parts of equal measure? A necessary condition is obtained by considering a one-dimensional measure along the moment curve. The basic positive result in this area is the ham sandwich theorem for measures, asserting that the answer is yes for d measures and one hyperplane; it is a consequence of the Borsuk–Ulam theorem (see also the discrete versions concerning equipartitions of a set of points in the space). Several partial results are known; however, even for small values of the parameters the question remains open. We study the problem for continuous mass distributions restricted to measurable colorings of a given necklace: in a variant of our theorem we reprove the necessary inequality in this spirit for the case of hyperplane cuts, and we investigate the existence of measurable colorings avoiding necklaces with fair splittings of bounded size.

In the last section we generalize another result proved for the real line: a measurable coloring distinguishing intervals, that is, such that no two intervals contain the same measure of every color, which solved a problem posed earlier. We generalize it by proving that there is a measurable coloring of the d-dimensional space distinguishing cubes, and we suspect that the number of colors we use is optimal. Recently, inspired by this question, the case of arbitrary continuous probability measures instead of measurable colorings was solved: it was shown how many continuous probability measures distinguish cubes. We prove the complementary statement that for fewer continuous probability measures there always exist two nontrivial cubes of equal measures — in fact we compute the number of nontrivial cubes that can be distinguished by a given collection of measures.

The proofs of our results are based on the topological Baire category theorem, applied to the space of measurable colorings equipped with a suitable metric; the next section describes this background. The crucial part of the reasoning is, however, algebraic: it uses algebraic independence and the rank of suitably chosen algebraic matroids, which we also describe, together with the idea of the degrees of freedom of the geometric objects considered. We demonstrate in full detail the proof of the main result; the proofs of the other results follow the same pattern, so we omit the parts which would be rewritten with only minor changes and describe the main differences.

Topological setting

Recall that a subset of a metric space is nowhere dense if the interior of its closure is empty.
A set that can be represented as a countable union of nowhere dense sets is said to be of first category.

Theorem (Baire). In a complete metric space, no set of first category contains a ball; in particular its complement is dense.

Our results assert the existence of certain good colorings, and the main framework of each argument mimics one plan: we construct a suitable complete metric space of colorings, prove that the subset of bad colorings is of first category, and conclude that the good colorings exist and in fact form a dense set. This is the entire topological technique we use; in contrast with the previous paper, the rest of our approach is purely algebraic — we compare transcendence degrees of suitably chosen sets of numbers.

Towards the proof of the main result, we demonstrate the following construction, which is the part common to all the proofs. Let K be a finite set of colors and let F be the set of Lebesgue measurable maps from the d-dimensional space to K (the measurable colorings). For a positive integer n consider the restriction of a coloring to the cube [−n, n]^d; clearly it is a measurable map. We define a normalized distance dist(f, g) between two colorings as a weighted sum, over n, of the Lebesgue measures (denoted by λ) of the sets on which the restrictions of f and g to [−n, n]^d differ, scaled so that the sum is finite. To get a metric space we need to identify colorings whose distance is zero: colorings differing on a set of measure zero are indistinguishable from our point of view. Formally, we work with the metric space F consisting of the equivalence classes, with the distance function dist.

Lemma. The metric space F is complete.

This is a straightforward generalization of the fact that the sets of finite measure in a measure space form a complete metric space with the measure of the symmetric difference as the distance function (see the literature).

A coloring is called a cube coloring if there exists a constant c such that the coloring is constant on each half-open cube of the division of the space into equal-size cubes of side c.

Lemma. Let f ∈ F; for every ε > 0 there exists a cube coloring within distance ε of f.

Proof. Let n be large and let U be a union of cuboids approximating the coloring inside [−n, n]^d; color the remaining part of the space with one fixed color, obtaining a coloring g which agrees with f everywhere outside a set of small measure and is clearly constant on a union of cuboids. Divide the cube into equal small cubes, so small that the whole family of boundary cuboids is negligible, and define a new coloring h which on each small cube is constant: equal to the color of g whenever g is constant on the small cube, and arbitrary otherwise. Clearly h is a cube coloring; moreover the small cubes on which g is not constant have small total volume, thus we get dist(f, h) < ε. □

Axis-aligned hyperplane cuts

Theorem. For all integers q and t_1, …, t_d there exists a measurable coloring of the d-dimensional space such that no necklace has a fair q-splitting using t_i hyperplane cuts aligned to the i-th axis; moreover the set of such colorings is dense. This holds provided the number of colors is large enough with respect to the remaining parameters.

Proof. A splitting using axis-aligned hyperplane cuts is described by the partition of the cuts among the axes — exactly t_i hyperplanes aligned to the i-th axis — and by the coordinates of the hyperplanes.
The hyperplanes divide the necklace into pieces; by the granularity of a splitting (more generally, of a division) we mean the length of the shortest subinterval it determines on any axis. For a positive integer N denote by A_N the set of colorings for which there exists at least one necklace contained in [−N, N]^d with a fair q-splitting using exactly t_i hyperplanes aligned to the i-th axis and granularity at least 1/N. Finally, let A be the union of the sets A_N — the set of bad colorings. We aim to show that the sets A_N are nowhere dense, provided a suitable relation between the parameters holds; this implies that A is of first category, and the set of good colorings we are looking for is dense.

Lemma. The sets A_N are closed.

Proof. Since A is a countable union, it is enough to show that each A_N is closed. Let f_j be a sequence of colorings of A_N converging to f; for each j there exists a necklace with a fair splitting of granularity at least 1/N. Since [−N, N]^d is compact, we may assume that the cubes converge to a cube, and that the hyperplane cuts also converge. Notice that the granularity of the limit division is at least 1/N; in particular the limit necklace is nontrivial. The one thing left is the assignment of the pieces of the divisions: to get the q parts, the number of labelings is finite, so one of them appears infinitely many times in the sequence, and it is easy to see that it gives a fair splitting of the limit necklace. □

Lemma. Each A_N has empty interior, provided a suitable inequality between the number of colors and the numbers of cuts holds.

Proof. Suppose the assertion of the lemma is false. By the previous lemma there is a cube coloring f in the interior of A_N. The idea is to modify f slightly into a new coloring f', still so close to f that every coloring close to f' admits a necklace inside [−N, N]^d with a fair splitting of granularity at least 1/N, and to derive a contradiction. Without loss of generality f is constant on equally spaced cubes, each cube carrying a unique color, and we may assume the edges of the cubes are rational and of equal length. Let ε be a rational number satisfying a suitable minimality condition. Inside each cube of the grid choose a smaller cube of volume ε with corners at rational points. Let white be a color not in the palette of f, and choose a small enough real number δ. Inside each chosen cube place a cube of side length δ; the placed cubes are pairwise disjoint, one of the corners of each is rational, and another corner lies at a point whose coordinates belong to a fixed set of algebraically independent numbers. The coloring f' equals f outside the placed cubes and white inside them; f' is close to f, hence there is a necklace with a fair q-splitting of size (t_1, …, t_d) and granularity at least 1/N. Denote the coordinates of the hyperplane cuts by x_j^i — the j-th cut perpendicular to the i-th axis.
We obtain a system of polynomial equations in the coordinates x_j^i, as follows. Since the granularity of the splitting is at least 1/N and the edges of the white cubes are smaller, each piece of the division is large compared with the white cubes; in particular, each white cube divided by the hyperplane cuts contributes to each part of the splitting an amount of the color white which is a polynomial in the x_j^i. Comparing the amounts of the color white captured by the parts, we get for each white cube nontrivial polynomial equations.

Consider the algebraic matroid of the field extension generated by the coordinates x_j^i over a suitable subfield (big enough to contain the rational data), and suppose its rank — the transcendence degree — is as large as possible; we show that this leads to a contradiction. Assume without loss of generality that this is the case. Observe that the volume of a part of the splitting is a polynomial in the variables, and the volumes of the parts sum to the volume of the necklace, so the equations obtained above form a dependent set; hence, in the ideal situation the equations would be independent, leading to a contradiction already for a smaller number of colors, which would give a better lower bound — unfortunately this is not always the case (see the Example below). However, in dimension one the equations are linear, and it is easy to see that they are linearly independent. Denote by R the set of left-hand sides of the equations and by R' the set of right-hand sides. One sees that R' is contained in the closure cl(R) of R in the algebraic matroid; the condition on the choice of the corners of the white cubes implies that a suitable subset of R' is independent, and comparing ranks we obviously get a contradiction. From the two lemmas it follows that the sets A_N are nowhere dense, which proves the assertion of the theorem. □

Example. For axis-aligned cuts, the equality of the volumes of three of the four parts of a splitting of a square implies the equality for the fourth one as well. This phenomenon is caused by the geometry of axis-aligned hyperplanes, and it shows that in general it is easier to fairly split a whole necklace than an arbitrary Lebesgue measurable subset; in dimension one such situations do not happen.

By modifying slightly the proof of the theorem one easily obtains a version for cuboids, with a corresponding inequality. The idea standing behind the proof is to count the degrees of freedom of a moving necklace together with the cuts of a splitting, and to compare this number with the number of equations forced by the existence of a fair splitting: when the number of degrees of freedom is less than the number of equations, there exists a coloring with the property that no necklace can be fairly split. Even though the intuitive meaning of a degree of freedom is clear, it is hard to make rigorous use of it; it turns out that the rank of the algebraic matroid of a suitable set of numbers does the work (see the previous section). If we allow arbitrary hyperplane cuts, then each cut adds d degrees of freedom instead of one; therefore for arbitrary hyperplane cuts we get a roughly d times worse bound. The same reasoning explains the difference between the results for cubes and for cuboids — the latter have a larger number of degrees of freedom.

In dimension one the theorem gives exactly the result of the joint paper with Alon and Grytczuk; however, the technique of showing that the sets A_N have empty interior is quite different: there, to show a contradiction, it was used that one of the colors appears around the cut points and the end points of the interval, which is always possible; additionally, apart from the algebraic part, that argument contained also an analytic part which used inequalities between the measures of colors. We ask whether the bound of the theorem is tight.
Question. Fix q and t_1, …, t_d. Does every measurable coloring with fewer colors admit a necklace with a fair q-splitting using t_i hyperplane cuts aligned to the i-th axis?

We suspect that the answer is positive; however, we do not know it even in dimension one. If we replace measurable colorings by arbitrary continuous measures the answer is negative — the last section provides suitable examples, both in the case of arbitrary continuous measures and in the case of measurable colorings.

The last part of this section is devoted to colorings of a given necklace which avoid fair splittings of a certain size.

Theorem. For all integers q and t_1, …, t_d and every necklace there exists a measurable coloring of the necklace with no fair q-splitting using t_i hyperplane cuts aligned to the i-th axis; moreover the set of such colorings is dense. This holds provided the number of colors is large enough.

Proof. For a positive integer N let A_N be the set of colorings of the necklace admitting a fair q-splitting using exactly t_i hyperplanes aligned to the i-th axis and granularity at least 1/N, and let A be the union of the A_N — the bad colorings. The aim, as previously, is to show that the sets A_N are nowhere dense provided a suitable relation holds; this implies that their union A is of first category, and the set of colorings we are looking for is dense. Analogously to the first lemma the sets A_N are closed, and similarly to the second lemma they have empty interior: we again want the chosen set of corner coordinates to be algebraically independent, and we also add a base of the rank of the matroid, so that on the left side the number of equations can be balanced against the right side; this still proves the assertion of the theorem. □

The theorem is tight: for a smaller number of colors every measurable coloring of a given necklace has a fair q-splitting using the prescribed hyperplane cuts, and then the set of such colorings is not even dense. This is due to the result of Goldberg and West for the minimum number of colors, and together with the theorem of de Longueville and Živaljević it fully characterizes the set of tuples (t_1, …, t_d) for which every necklace has a fair q-splitting using t_i hyperplanes aligned to the i-th axis.

Arbitrary hyperplane cuts

Theorem. For all integers q and t there exists a measurable coloring of the d-dimensional space such that no necklace has a fair q-splitting using t arbitrary hyperplane cuts; moreover the set of such colorings is dense. This holds provided the number of colors is large enough.

Proof. An arbitrary hyperplane is (not necessarily uniquely) described by d numbers, which we call its parameters. There are several ways to choose the parameters; fortunately, the usual transition functions between them are rational. For example, a hyperplane not passing through the origin has a normalized equation with d parameters; in general we may pick rational points in general position such that no considered hyperplane contains them, and describe a hyperplane by the normalized equation after it is moved so that a chosen point does not belong to it. We make use of the following classical fact.

Lemma. The volume of a bounded set which is an intersection of a finite number of half-spaces is a rational function of the parameters of the hyperplanes supporting the half-spaces.

Proof. Such a set is a convex polytope. By Cramer's formula it follows that the coordinates of its vertices are rational functions of the parameters. Make the barycentric subdivision into simplices, whose vertices are barycenters of faces and hence also rational functions of the parameters.
The volume of a simplex equals the absolute value of the determinant of the matrix of coordinates of its vertices, divided by a constant, and is thus a rational function of the parameters; summing over the simplices, the volume of the original polytope is a rational function of the parameters. □

The granularity of a splitting is now the largest c such that every piece of the splitting contains a cube of side length c. Denote by A_N the set of colorings for which there exists at least one d-dimensional necklace contained in [−N, N]^d with a fair q-splitting using t hyperplanes and granularity at least 1/N, and let A be the union of the A_N — the bad colorings. As in the previous section, we aim to show that the sets A_N are nowhere dense provided a suitable relation between the parameters holds; this implies that the set of colorings we are looking for is dense. The proof that the sets A_N are closed goes exactly as in the corresponding lemma. To show that each A_N has empty interior we repeat the proof of the other lemma, making the white cubes small; instead of a single parameter per hyperplane there are now d parameters, and due to the lemma above the amount of the color white contained in a part of the splitting is a rational function of the parameters instead of a polynomial. The rank of the corresponding algebraic matroid is compared for the set of left-hand sides of the equations and the analogous set of right-hand sides; the hypothesis that the interior is nonempty implies an inequality between the ranks which fails for a suitable number of colors. It follows that the sets A_N are nowhere dense, which proves the assertion of the theorem. □

By modifying slightly the argument one may obtain a version for cuboids, with a similar inequality, and a proof similar to that of the theorem for a fixed necklace leads to the following theorem.

Theorem. For all integers q and t and every d-dimensional necklace there exists a measurable coloring of the necklace with no fair q-splitting using t arbitrary hyperplane cuts; moreover the set of such colorings is dense. This holds provided the number of colors is large enough.

In particular we get a necessary condition in the mass equipartitioning problem. It is an interesting question whether this condition is also sufficient for the existence of a fair splitting of bounded size of every d-dimensional necklace; for small parameters the question coincides with a conjecture of Ramos, and support for a positive answer follows directly from applications of the ham sandwich theorem for measures.

Colorings distinguishing cubes

We are interested in the following problem, of a slightly different flavor. We say that a measurable coloring of the d-dimensional space distinguishes cubes if no two nontrivial cubes contain the same measure of every color; what is the minimum number of colors needed for such a coloring? We get results in a similar way as before. In the proof, A_N is the set of colorings for which there exist two d-dimensional cubes contained in [−N, N]^d, each containing a translated copy of the cube of side 1/N, with the same measure of every color; A, the union of the A_N, is the set of bad colorings. The sets A_N are closed, and we easily get an analog of the empty-interior lemma, which shows the following theorem.

Theorem. For every d there exists a measurable coloring of the d-dimensional space distinguishing cubes; moreover the set of such colorings is dense.

For d = 1 this gives the second result of the previous paper, namely the existence of a measurable coloring of the line without two equally colored intervals. We suspect that the number of colors is optimal.

Question. Fix d. Does every measurable coloring with fewer colors admit two nontrivial cubes with the same measure of every color?
slight argument leads version theorem cuboids colors solved case arbitrary continuous probability measures instead measurable colorings prove measures distinguish cubes tight theorem every continuous probability measures two nontrivial cubes holds every moreover exist continuous probability measures distinguishing cubes fact show even one number cubes distinguished given measures also get results analogous theorem cuboids colors sure minimum number colors two cases arbitrary continuous measures measurable colorings one show following example two measures distinguishing intervals measurable distinguishes intervals bibliography aharoni berger intersection matroid simplicial complex trans amer math soc alon west theorem bisection necklaces proc amer math soc alon splitting necklaces advances math alon tuza voigt choosability fractional chromatic numbers discrete math alon combinatorial nullstellensatz combin probab comput alon degrees choice numbers random struct algor alon grytczuk splitting necklaces measurable colorings real line proc amer math soc andres merkel base exchange game bispanning graphs technical report hagen mathematik und informatik aramova herzog hibi gotzmann theorems exterior algebras combinatorics algebra atiyah macdonald introduction commutative algebra publishing mills bartnicki grytczuk kierstead zhu map coloring game amer math monthly bartnicki grytczuk kierstead game arboricity discrete math korte homotopy properties greedoids adv appl math topological methods graham handbook combinatorics elsevier amsterdam ziegler index dihedral group action mass partition two hyperplanes topology appl blasiak toric ideal graphic matroid generated quadrics combinatorica bodlaender complexity coloring games internat found comput sci boij migliore nagel zanello shape pure mem amer math soc bonin properties sparse paving matroids adv appl math brualdi comments bases dependence structures bull austral math soc clements generalization combinatorial theorem 
macaulay combin theory conca linear spaces transversal polymatroids asl domains algebraic combin constantinescu kahle varbaro generic special constructions pure cox little schenck toric varieties graduate studies mathematics american mathematical society providence bibliography currie unsolved problems open problems pattern avoidance amer math monthly davies mcdiarmid disjoint common transversals exchange structures london math soc dawson collection sets related tutte polynomial matroid graph theory singapore lecture notes mathematics daykin simple proof theorem combin theory ser daykin algorithm cascades giving inequalities nanta math dekking strongly nonrepetitive sequences sets combin theory ser eagon reiner resolutions rings alexander duality pure appl algebra edelsbrunner algorithms combinatorial geometry volume eatcs monographs theoretical computer science berlin edmonds minimum partition matroid independent subsets res nat bur standards sect edmonds lehman switching game theorem tutte research national bureau standards unsolved problems magyar tud akad mat int rubin taylor choosability graphs congr numer farber richter shank spanning trees connectedness theorem graph theory fulton introduction toric varieties annals mathematics studies princeton university press princeton galvin list chromatic index bipartite multigraph combin theory ser gardner mathematical games american goemans lecture notes matroid optimization mit goldberg west bisection circle colorings siam algebraic discrete methods greene new short proof kneser conjecture amer math monthly grytczuk colorings sets discrete math grytczuk thue type problems graphs points numbers discrete math grytczuk lubawski splitting multidimensional necklaces measurable colorings euclidean spaces grzesik indicated coloring graphs discrete math gutowski paint mrs correct fractional electron comb janssen new bounds index complete graph simple graphs combin probab comput herzog hibi componentwise linear ideals nagoya 
math herzog hibi discrete polymatroids algebr combin herzog hibi murai trung zheng type theorems clique complexes arising chordal strongly chordal graphs combinatorica hilton simple proof theorem associated binomial inequalities period math hungar hobby rice moment problem approximation proc amer math soc bibliography huh milnor numbers projective hypersurfaces chromatic polynomial graphs amer math soc huh matroids logarithmic concavity huh katz characteristic polynomials bergman fan matroids math ann kahn asymptotics list chromatic index multigraphs random struct algor kashiwabara toric ideal matroid rank generated quadrics electron combin katona theorem sets theory graphs proc colloq tihany academic press new york abelian squares avoidable letters lecture notes comput sci kruskal number simplices complex mathematical optimalization techniques university california press berkeley kung properties white theory matroids encyclopedia math appl cambridge university press cambridge list coloring matroids base exchange properties european combin coloring game matroids discrete math lubawski list coloring matroids discrete appl math indicated coloring matroids discrete appl math implying vertex decomposability discrete comput geom toric ideal matroid advances math obstacles splitting multidimensional necklaces proc amer math soc generalization combinatorial nullstellensatz electron combin full strongly exceptional collections toric varieties picard number three coll math lehman solution shannon switching game siam lenz realizable matroid complex strictly longueville splitting multidimensional necklaces advances math macaulay properties enumeration theory modular systems proc london math soc topology combinatorics partitions masses hyperplanes advances math mason matroids unimodal conjectures motzkin theorem welsh woodall combinatorics proc conf combinatorial math oxford using theorem lectures topological methods combinatorics geometry berlin matschke note mass partitions 
hyperplanes arxiv constructive degree bounds models combin theory ser miller sturmfels combinatorial commutative algebra graduate texts mathematics new york minsky steps toward intelligence proc ire decomposition graphs forests london math soc oxley matroid theory oxford science publications oxford university press oxford bibliography oxley matroid cubo mat educ oxtoby measure category graduate texts mathematics new pastine zanello two unfortunate properties pure provan billera decompositions simplicial complexes related diameters convex polyhedra math oper res ramos equipartition mass distributions hyperplanes discrete comput geom reisner quotients polynomial rings advances math schauz paint mrs correct electron combin schauz paintability version combinatorial nullstellensatz list colorings hypergraphs electron combin schauz flexible color lists alon tarsi theorem time scheduling unreliable participants electron combin schrijver combinatorial optimization polyhedra new york schweig toric ideals lattice path matroids polymatroids pure appl algebra seymour decomposition regular matroids combin theory ser seymour note list arboricity combin theory ser stanley complexes aiger higher combinatorics nato adv study inst ser math phys sci stanley number faces simplicial convex polytope advances math stanley combinatorics commutative algebra second progress mathematics boston boston sturmfels bases convex polytopes univ lecture series american mathematical society providence sturmfels equations toric varieties proc sympos pure math thomassen every planar graph combin theory ser tutte homotopy theorem matroids trans amer math soc vizing estimate chromatic class diskret analiz russian vizing coloring vertices graph prescribed colors diskret analiz russian measurable patterns necklaces sets indiscernible measure welsh combinatorial problems matroid theory combinatorial mathematics applications proc oxford academic press london west introduction graph theory prentice hall new 
york white basis monomial ring matroid advances math white unique exchange property bases linear algebra app whitney abstract properties linear dependence amer math woodall exchange theorem bases matroids combin theory ser zhu game coloring number planar graphs combin theory ser zhu list colouring graphs electron comb
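The Goldberg–West result cited above (the minimum number of cuts needed for a fair bisection among two thieves) has a simple discrete analogue that can be explored by brute force. A minimal sketch, with our own naming conventions, for k = 2 thieves and beads encoded as characters; for a necklace with t colors, each occurring an even number of times, the discrete theorem guarantees a fair split with at most t cuts:

```python
from itertools import combinations, product

def fair_split(necklace, max_cuts):
    """Brute-force search for a fair 2-thief split of a discrete necklace
    using at most max_cuts cuts; returns the cut positions, or None."""
    n = len(necklace)
    colors = set(necklace)
    # Fair split: each thief receives exactly half of every color.
    target = {c: necklace.count(c) // 2 for c in colors}
    for k in range(max_cuts + 1):
        for cuts in combinations(range(1, n), k):
            bounds = (0, *cuts, n)
            segs = [necklace[bounds[i]:bounds[i + 1]]
                    for i in range(len(bounds) - 1)]
            # Try every assignment of segments to the two thieves.
            for who in product((0, 1), repeat=len(segs)):
                share = {c: 0 for c in colors}
                for seg, w in zip(segs, who):
                    if w == 0:
                        for bead in seg:
                            share[bead] += 1
                if share == target:
                    return cuts
    return None

print(fair_split("abab", 2))  # -> (2,)
```

On small instances the search confirms the bound: two colors never require more than two cuts, exactly as in Goldberg and West's circle-coloring result.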
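The lemma quoted above reduces polytope volumes to simplex volumes, each "equal to a determinant of the matrix of vertex coordinates divided by d!". A minimal numerical sketch of that determinant formula (the function name and test values are ours):

```python
import numpy as np
from math import factorial

def simplex_volume(vertices):
    """Volume of a d-simplex with d+1 vertices: |det(v_i - v_0)| / d!."""
    V = np.asarray(vertices, dtype=float)
    d = V.shape[1]
    assert V.shape[0] == d + 1, "a d-simplex needs d+1 vertices"
    return abs(np.linalg.det(V[1:] - V[0])) / factorial(d)

# The unit triangle in the plane has area 1/2.
print(simplex_volume([[0, 0], [1, 0], [0, 1]]))  # -> 0.5
```

Since the determinant is a polynomial in the vertex coordinates, volumes of the barycentric-subdivision simplices are indeed rational functions of the hyperplane parameters, as the lemma asserts.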
jan radical kernel certain differential operator applications locally algebraic derivations wenhua zhao abstract let commutative ring necessary commutative radical mean set elements derive show necessary conditions satisfied elements radicals kernels partial differential operators differential operators commutative algebras differential operators noncommutative certain conditions polynomial commutative free variables either commutating locally finite commutating decomposed direct sum generalized etc apply results mentioned study rderivations locally algebraic locally integrable particular show integral domain characteristic zero reduced nonzero locally algebraic also show formula determinant differential vandemonde matrix commutative algebras formula provides information radicals kernels ordinary differential operators commutative algebras also interesting right background motivation let commutative ring necessary commutative derivation map also call date february mathematics subject classification key words phrases radical kernel differential operator locally algebraic integral derivations differential vandemonde determinant mathieu subspaces spaces author partially supported simons foundation grant wenhua zhao denote map maps call associative algebra generated derivations weyl algebra denote subalgebra generated denoted elements called differential operators also easy check exist derivations polynomial noncommutative free variables throughout paper defined first writing coefficients left replacing furthermore true call differential operator ordinary differential operator univariate partial differential operator multivariate next recall following two notions associative algebras first introduced definition said mathieu subspace bam note also called space literature see dez etc suggested van den essen introduction notion mainly motivated study jacobian conjecture see bcw see also dez interesting aspect notion provides natural highly generalization notion ideals 
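For readability, the two notions recalled here, Zhao's Mathieu subspace and the radical defined next, can be written in display form as follows (this restates the definitions from the cited references):

```latex
\textbf{Definition (Mathieu subspace, Zhao).} An $R$-subspace $M$ of an
associative algebra $A$ is a \emph{Mathieu subspace} if, whenever $a \in A$
satisfies $a^m \in M$ for all $m \ge 1$, then for all $b, c \in A$ one has
$b\,a^m c \in M$ for all sufficiently large $m$.

\textbf{Definition (radical).} The \emph{radical} of a subset
$M \subseteq A$ is
\[
  \mathfrak{r}(M) = \{\, a \in A : a^m \in M \text{ for all } m \gg 0 \,\}.
\]
For an ideal $M$ of a commutative algebra, $\mathfrak{r}(M)$ coincides with
the usual radical of $M$.
```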
definition let subset define radical commutative ideal coincides radical new notion also interesting right also crucial study mss example easy see every nil nil denotes set nilpotent elements frequently use fact implicitly throughout paper recent studies show many mss arise images differential operators especially images locally finite locally nilpotent derivations certain associative algebras see radical kernel differential operator ewz mss arisen kernels ordinary differential operators univariate polynomial algebras field see paper study radicals kernels ordinary partial differential operators show certain differential operators kernel ker also also apply results proved paper study locally algebraic locally integrable see definition particular show integral domain characteristic zero reduced nonzero locally algebraic see theorem furthermore also show formula determinant differential vandemonde matrix commutative algebras see proposition formula provides information radicals kernels ordinary differential operators commutative algebras also interesting right arrangement content section assume commutative derive necessarily conditions elements radical kernel arbitrary differential operators see theorem corollary particular every differential operator zero kernel ker forms section drop commutativity assumption assume reduced first derive theorem necessary conditions satisfied elements radical kernel differential operator polynomial commutative free variables commutating decomposed direct sum generalized show proposition also integral domain characteristic zero conclusions theorem also hold differential operators multivariate polynomials commuting locally finite finally show proposition similar conclusions proposition assumptions also hold ordinary differential operators particular differential operators theorem propositions ker forms section apply results proved sections study properties locally algebraic wenhua zhao locally integral see definition first show theorem 
commutative every locally integral image nil also show theorem also integral domain characteristic zero reduced nonzero locally algebraic section assume commutative first show proposition formula determinant differential vandemonde matrix apply formula proposition derive necessary conditions satisfied elements radicals kernels ordinary differential operators point remark formula derived proposition also used derive formulas determinants several families matrices commutative algebra case section unless stated otherwise denotes unital commutative ring commutative unital noncommutative free variables denote polynomial algebra fix section nonzero write homogeneous polynomials degree set call gradient respect clear context simply write define first writing polynomial coefficients left monomials replacing respectively main result section following theorem setting let ker furthermore also lies ker show theorem first need following two lemmas first lemma also easily verified using radical kernel differential operator mathematical induction similar proof usual binomial formula lemma let maps denote adu maps adu lemma let following statements hold exists either deg proof first deg statement holds trivially commutative hence assume deg linearity also commutativity may assume use induction hence statement holds choosing assume statement holds consider case since derivation dij dim dim dim dim means term omitted dim dij dim applying induction assumption terms sum see exists wenhua zhao deg dim dij hence induction statement follows first statement easy see linearity also commutativity may assume applying statement times dkd equation commutativity easy see equation statement follows prove main result section proof theorem lemma applying sides equation using condition ker get also easy check every derivation commutative ring annihilates identity element ring hence follows similarly applying using condition ker get get one immediate consequence theorem following corollary let 
theorem nil set nilpotent elements following statements hold ker ann ann set elements zero ker nil ker radical kernel differential operator ker particular single derivation leading coefficient ker nil example let smooth complex nonzero valued functions let univariate polynomial ker set solutions ordinary differential equation let set distinct roots multiplicity theory ode see standard text book ode ker spanned fact easy verify directly ker ker consequently theorem corollary also proposition section hold case end section following two remarks first show propositions ordinary differential operators certain necessarily commutative radical ker also satisfies necessarily conditions theorem second theorem corollary always hold differential operators noncommutative algebra seen following example let two noncommutative free variables polynomial algebra let ideal generated let denotes identity map multiplication map left let easy check therefore ker ker cases algebras section unless stated otherwise denotes commutative ring abelian group ralgebra necessarily commutative rmodule denote simply identity map nil set nilpotent elements say reduced nil wenhua zhao furthermore denote ann set elements let say decomposable respect written direct sum generalized precisely let thei set generalized eigenvalues ker easy verify inductively identity words decomposition ker actually additive grading examples respect decomposable coincides corresponding eigenvalue also locally finite derivations base ring algebraically closed field let commuting decomposable exists abelian group particular ker ker dij note also invariant let commutative free variables set write homogeneous polynomials degree let differential operator obtained replacing since commute one anther radical kernel differential operator first main result section following theorem sense extends theorem differential operator necessarily commutative theorem setting assume reduced following statements hold nonzero common zeros ker ker 
ker order show theorem first need show lemmas lemma let arbitrary commutative ring ralgebra let fixed following statements hold ker homogeneous grading ker ker let set ker proof since preserved hence also preserved follows let write distinct ker ker may assume write define nonnegative integer follows first let greatest integer inductively let great integer ker set hence since desired wenhua zhao definition let subset say extremal element written linear combination elements positive integer coefficients whose sum less equal following lemma sake completeness include direct proof lemma let commutative ring abelian group torsion free every nonempty finite subset least one extremal element proof write use induction nothing show assume consider first case lemma fails hence fact contradiction assume lemma holds consider case extremal point nothing show assume otherwise exist induction assumption set extremal element say claim also extremal point set otherwise exist radical kernel differential operator eqs sum coefficients linear combination right hand side equation eqs eqs extremal element contradicts choice therefore extremal point lemma follows lemma let ker write distinct extremal element set either nilpotent proof assume nilpotent since extremal element set easy see homogeneous component equal since ker lemma ker explicitly since vandemonde determinant proof theorem let ker write distinct let set nonzero lemma least one extremal element say definition wenhua zhao also extremal element set since reduced nilpotent lemma nonzero common zero contradiction therefore case whence statement follows also furthermore since lemma whence contradiction therefore statement follows next show theorem extra conditions also holds commuting locally finite recall rderivation locally finite spanned elements finitely generated proposition assume integral domain characteristic zero reduced denote field fractions algebraic closure let commuting locally finite write homogeneous degree 
following statements hold nonzero common zeros ker ker ker ker proof set since standard map injective prop isomorphic localization since every field absolutely flat standard map also injective therefore may view rsubalgebra standard way extend denote note commuting also locally finite proposition decomposable applying theorem using fact see proposition follows radical kernel differential operator next use proposition show corollary extra conditions extended ordinary differential operators noncommutative algebras proposition let proposition arbitrary single every univariate polynomial following statements hold ker ker ker ker proof case deg trivial assume deg let field fractions algebraic closure set pointed proof proposition may view standard way extend denote let ker preserved set map algebraic hence see proposition decomposed direct sum generalized let generated elements furthermore easy see decomposable let ker exists ker hence also consequently also ker note nonzero commons zero univariate polynomial degree greater equal therefore applying proposition differential operator ker since ker ker hence statement follows applying proposition differential operator since reduced hence statement also follows end section following open problem worthy investigations open problem let arbitrary commutative ring arbitrary unital noncommutative let polynomial noncommutative free variables set denote wenhua zhao ann set elements decide whether always true ker ann applications locally algebraic derivations section use results proved last two sections derive properties locally algebraic locally integral derivations definition let unital commutative ring say algebraic exists nonzero polynomial say locally algebraic exists containing nonzero polynomial statement statement definition chosen monic polynomial say integral locally integral example derivation locally algebraic algebraic follows example let sequence free commutative variables polynomial algebra let ideal generated 
readily verified locally algebraic globally algebraic theorem let commutative ring commutative abelian group every locally integral image nil nil denotes set nilpotent elements proof let monic polynomial ker particular ker replacing tpa assume deg theorem since abelian group whence nil theorem follows since every nilpotent locally integrable theorem immediately following radical kernel differential operator corollary let theorem nilpotent nil furthermore proof theorem also easy see following corollary let theorem assume every locally algebraic nil next consider reduced necessarily commutative locally algebraic theorem let unital integral domain characteristic zero unital reduced necessarily commutative nonzero locally algebraic particular nonzero nilpotent proof let locally algebraic let ker whence ker replacing tpa assume applying proposition differential operator ker consequently lemma locally nilpotent let field fractions pointed proof proposition may view standard way extend denote let fixed write since gcd min hence therefore nilpotent lemma whence theorem follows one remark theorem without characteristic zero condition theorem may false seen following example integral derivations algebras field characteristic see example let field characteristic hence nonzero algebraic wenhua zhao one immediate consequence theorem corollary theorem following corollary sense gives affirm answer lned conjecture proposed nilpotent locally integral locally algebraic derivations certain algebras corollary let theorem rderivation locally integral maps every let corollary theorem locally algebraic maps every end section following proposition let commutative ring reduced ralgebra necessarily commutative let ker consequently ker ker ker note commutative lemma follows easily theorem give proof independent commutativity proof proposition case obvious assume leibniz rule since one term sum equal namely term therefore since reduced repeating procedure see ker let ker exists ker applying 
result shown ker hence ker ker ker conversely since ker ker ker also ker ker ker ker hence proposition follows radical kernel differential operator differential vandemonde determinant throughout section stands commutative ring derivation proposition let fixed det idea proof show matrix transformed elementary column operations upper triangular matrix whose diagonal entry equal example case subtracting second column multiple first column get see achieved suffices show following lemma proposition immediately follows lemma let proposition exist kronecker delta function proof use induction solves equations already pointed assume lemma holds consider case writing applying leibniz rule wenhua zhao applying induction assumption noticing since set solve equations case hence induction lemma follows remark one application formula follows first apply formula special function derivation evaluate fixed point may get formulas determinants several families matrices letting particular choose little argument get following formula det another consequence proposition following proposition let derivation free variable let ker radical kernel differential operator proof let transpose matrix since denotes column vector proposition follows corollary let proposition assume zero nilpotent proof proposition hence zero max whence corollary follows references atiyah macdonald introduction commutative algebra publishing bcw bass connell wright jacobian conjecture reduction degree formal expansion inverse bull amer math soc dez derksen van den essen zhao gaussian moments conjecture jacobian conjecture appear israel math see also van den essen polynomial automorphisms jacobian conjecture prog verlag basel van den essen introduction mathieu subspaces international affine algebraic geometry jacobian conjecture chern institute mathematics nankai university tianjin china july van den essen van hove spaces appear van den essen nieman spaces univariate polynomial rings strong radical pure appl algebra 
van den essen zhao mathieu subspaces univariate polynomial algebras pure appl algebra see also ewz van den essen wright zhao images locally finite derivations polynomial algebras two variables pure appl algebra see also humphreys introduction lie algebras representation theory graduate texts mathematics springer keller ganze monats math physik wenhua zhao lefschetz differential equations geometric theory reprinting second edition dover publications new york mathieu conjectures invariant theory applications non commutative groupes quantiques invariants reims soc math france paris nowicki integral derivations alg zhao images commuting differential operators order one constant leading coefficients alg see also zhao generalizations image conjecture mathieu conjecture pure appl alg see also zhao mathieu subspaces associative algebras alg see also zhao open problems locally finite locally nilpotent derivations preprint zhao idempotents intersection kernel image locally finite derivations preprint zhao lfed lned conjectures algebraic algebras preprint zhao images ideals derivations univariate polynomial algebras field characteristic zero preprint department mathematics illinois state university normal email wzhao
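The remark closing the differential Vandermonde section notes that a suitable specialization of the determinant formula recovers the classical Vandermonde identity det V = prod_{i<j}(x_j - x_i). A quick numerical sanity check of that classical identity (function and variable names are ours):

```python
import numpy as np
from math import prod

def vandermonde_det(xs):
    """Determinant of the Vandermonde matrix V[i][j] = xs[j]**i."""
    n = len(xs)
    V = np.array([[x**i for x in xs] for i in range(n)], dtype=float)
    return np.linalg.det(V)

xs = [1.0, 2.0, 4.0, 7.0]
closed_form = prod(xs[j] - xs[i]
                   for i in range(len(xs))
                   for j in range(i + 1, len(xs)))
print(vandermonde_det(xs), closed_form)  # both approximately 540.0
```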
Brownian forgery of statistical dependences

Vincent Wens

Laboratoire de Cartographie fonctionnelle du Cerveau, UNI, ULB Neurosciences Institute, Université libre de Bruxelles (ULB), and Magnetoencephalography Unit, Department of Functional Neuroimaging, Service of Nuclear Medicine, CUB Erasme, Brussels, Belgium

(Dated: January)

The balance held by Brownian motion between temporal regularity and randomness is embodied in a remarkable way by Lévy's forgery of continuous functions. Here we describe how this property can be extended to forge arbitrary dependences between two statistical systems, and we establish a new Brownian independence test based on fluctuating random paths. We also argue that this result allows revisiting the theory of Brownian covariance from a physical perspective and opens the possibility of engineering nonlinear correlation measures from more general functional integrals.

Introduction and overview. The modern theory of Brownian motion provides an exceptionally successful example of a physical model with consequences beyond its initial field of development. Since its introduction as a model of particle diffusion, Brownian motion has indeed enabled the description of a variety of phenomena in cell biology, neuroscience, engineering, and finance. Its mathematical formulation based on the Wiener measure also represents a fundamental prototype of a stochastic process and serves as a powerful tool in probability and statistics. Following a similar vein, we develop in this note a new way of applying Brownian motion to the characterization of statistical independence.

The connection between Brownian motion and independence is motivated by recent developments in statistics, specifically the unexpected coincidence of two dependence measures: the distance covariance, which characterizes independence fully thanks to its sensitivity to all possible relationships between two random variables, and the Brownian covariance, a version of covariance that involves nonlinearities randomly generated by a Brownian process. This equivalence provides a realization of the aforementioned connection, albeit in a somewhat indirect way that conceals its naturalness. Our goal is to make explicit how Brownian motion can reveal statistical independence by relying directly on the geometry of its sample paths.

A brute-force method to establish the dependence or independence of two random variables consists in examining all their potential relations. Formally, it is sufficient to measure
the covariances cov[f(X), g(Y)] associated with all transformations f and g that are bounded and continuous (see the corresponding theorem in the references). The question pursued here is whether using sample paths of Brownian motion in place of bounded continuous functions also allows one to characterize independence. We shall demonstrate that the answer is yes. In a nutshell, the statistical fluctuations of Brownian paths W and W′ enable the stochastic covariance index cov[W(X), W′(Y)] to probe arbitrary dependences between the random variables X and Y.

Our strategy to realize this idea consists in establishing that, given a pair of bounded continuous functions f, g and a level of accuracy, the covariance cov[f(X), g(Y)] can be approximated generically by cov[W(X), W′(Y)]. Crucially, the notion of genericity used here refers to the fact that the probability of picking paths fulfilling this approximation is nonzero. This ensures that an appropriate selection of the stochastic covariance can be achieved by finite sampling of Brownian motion. This core result of the paper will be referred to as the forgery of statistical dependences, in analogy with Lévy's classical forgery theorem.

Lévy's remarkable theorem, which states that any continuous function can be approximated over a finite interval by generic Brownian paths, actually provides an obvious starting point for our analysis. Indeed, it stands to reason that as the paths approach the functions f and g, respectively, the stochastic covariance should approach cov[f(X), g(Y)] as well. A technical difficulty, however, lies in the restriction to finite intervals, since the random variables X and Y may be unbounded. It turns out, though, that these intervals can be prolonged without ruining genericity, as we shall describe first with a suitable extension of Lévy's forgery that holds on infinite domains. The forgery of statistical dependences will then follow.

From a practical standpoint, using Brownian motion to establish independence turns out to be advantageous. Indeed, exploring all bounded continuous transformations exhaustively is realistically impossible, a practical difficulty that motivates the use of reproducing kernel Hilbert spaces (see the references for a review). Generating all possible realizations of Brownian motion obviously poses the same problem, but this unwieldy task can be bypassed by averaging directly over the sample paths. In this way, quite amazingly, the measurement of an uncountable infinity of covariance indices is replaced by a single functional integral. We shall discuss how this idea leads back to the concept of Brownian covariance and clarifies the way it characterizes independence without reference to any particular test trajectory or sample path.
neighborhood sample path neighborhood bottlenecks space equivalence distance covariance brownian covariance represents promising tool modern data analysis appears still scarcely used applications seminal exceptions nonlinear time series brain connectomics approach based random paths physically grounded mathematically rigorous believe may help disseminate method establish standard tool statistics main results motivate describe main results sufficient precision provide presentation ideas introduced avoiding technical details mathematical proofs developed dedicated section iii also use assumptions slightly stronger necessary generalizations relegated appendix extension levy forgery imagine recording movement free brownian particle large number trials essence levy forgery ensures one traces follow closely predefined test trajectory least time formulate precisely let focus definiteness standard brownian motion whose initial value set variance time normalized fix continuous function test trajectory consider uniform approximation event brownian path fits tightly constant distance time interval levy forgery theorem states event generic occurs nonzero probability see chapter theorem ref result requires randomness continuity brownian motion neither deterministic processes white noises satisfy property trials though particle eventually drift away infinity thus deviate bounded test trajectory indeed let assume function bounded examine happens limit event occurs path must bounded since particle forever trapped neighborhood test trajectory however brownian motion almost surely unbounded long times hence levy forgery theorem work infinite time domains accommodate asymptotic behavior thus allow particle diverge test trajectory time fig extended forgery continuous functions example depicts test trajectory smooth curve allowed neighborhood shaded area two sample paths one solid random walk illustrating generic event dotted random walk fact arbitrary paths low chances enter expanding 
neighborhoods bottlenecks least controlled way let recall escape infinity slower brownian motion movement constant velocity one way state law large numbers suggests adjoining event loose approximation event whereby particle confined neighborhood test trajectory expands finite speed asymptotic forgery theorem let bounded continuous elegant albeit slightly abstract proof rests time duality classes events maps levy forgery asymptotic version onto see section iii concrete approach let focus large limit used study statistical dependences since path neighborhood size diverge bounded term neglected event thus merely requires outrun deterministic particles moving speed asymptotic forgery thus reduces law large numbers ensures close one probability decreases continuously lowered since defining condition becomes stricter drop zero reached line reasoning completed also generalized allow slower expansions see strong asymptotic forgery theorem appendix combine levy forgery asymptotic version obtain extension valid timescales specifically let examine joint approximation event words particle constrained follow closely test trajectory time allowed afterwards deviate slowly fig reason lies temporal continuity brownian motion particle staying uniform neighborhood necessarily passes bottlenecks thus likely remain within expanding neighborhood arbitrary particles low chances even meet bottlenecks fig words proportion sample paths among sample paths larger unconstrained probability hence bound forgery statistical dependences turn analysis statistical relations using brownian motion let fix two random variables pair bounded test trajectories consider covariance approximation event cov whereby stochastic covariance cov built picking independently two sample paths coincides test covariance cov small error fig argue event generic first step ensure set measurable probability meaningful physically technical issue rooted escape brownian particles infinity stochastic covariance expressed 
difference two averages computed fixed sample paths involving coordinates random moments long times thus large coordinates sampled often two terms may diverge lead covariance avoid situation therefore assume asymptotic values unlikely enough actually shall adopt hereafter sufficient condition finite mean variance see section iii proof measurability space result relies suitable integration local version theorem see section iii also understood rather intuitively follows imagine moment events independent joint probability would merely equal product marginal probabilities positive levy forgery asymptotic forgery genericity would follow actually interact associated neighborhoods connected narrow bottlenecks fig increase joint probability extended forgery theorem let bounded continuous probability density stochastic covariance time cov cov covariance value fig forgery statistical dependences distribution stochastic covariance black histogram standard deviation shown simulated dependent random variables black dots insert well test covariance plain arrow sample dotted arrow falling within allowed error shaded area insert shows associated functions case independent variables superimposed comparative purposes forgery theorem statistical dependences let random variables bounded continuous idea one way realizing event pick sample paths fit test trajectories tightly long time period see fig indeed shown cov cov rough estimate explains event must occur whenever small enough see section iii particular eqs precise error bound nested forgery lemma full proof turn extended forgery ensures genericity selection sample paths thus event well necessary condition indeed assumed without loss generality see section iii understand imagine first random times bounded differ less times covariance error must unbounded random times distance sample paths test trajectories may exceed must actually diverge long times could led infinitely large covariance error fact occurence unlikely fit divergence 
see counterbalanced within averages fast decay long times probability contribution covariance error thus finite scales leads forgery theorem allows probe enough possible relationships establish statistical dependence independence explain consider two probability densities stochastic covariance shown fig generated using simulations differing presence absence coupling distribution appears significantly wider dependent variables suggests width key indicator relation actually independent variables narrow peak observed reflects underlying dirac delta function nonzero width fig due finite sampling errors covariance estimates indeed vanishing stochastic covariances necessary condition independence impossibility sampling nonzero values also turns sufficient brownian independence test two random variables independent iff cov prove sufficiency show hypothesis cov implies test covariances vanish equivalent independence theorem ref understood concretely using following thought experiment imagine cov pair test trajectories let fix say fig generate sequentially samples stochastic covariance approximation event shaded area fig occurs forgery statistical dependences ensures sequence stops eventually choice last covariance sample nonzero however contradicts hypothesis imposes trials result vanishing covariance see also section iii settheoretic argument bbn ovn fixed forgery statistical dependences provides alternative approach theory brownian covariance hereafter denoted dependence index emerges naturally context root mean square stochastic covariance equivalently standard deviation since mean hcov vanishes identically symmetry see also fig thus slight reformulation definition ref random variables quadratic gaussian functional integrals sample paths computed analytically result reduces distance covariance inherits properties alternatively argue central results theory follow natural manner first key property independent iff mirrors directly brownian independence test since measures 
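The width comparison described around Fig. 2 can be mimicked with a small simulation. This is a sketch, not the paper's computation: Brownian paths are approximated on a grid and evaluated by linear interpolation, the sample sizes and the fully dependent case Y = X are my own choices, and the "width" is simply the standard deviation of the simulated stochastic covariances.

```python
import numpy as np

rng = np.random.default_rng(1)

def brownian_on_grid(rng, grid):
    """Sample a standard Brownian path on an increasing grid of times > 0."""
    dt = np.diff(grid, prepend=0.0)
    return np.cumsum(rng.normal(0.0, np.sqrt(dt)))

def stochastic_cov_samples(x, y, rng, n_rep=400):
    """Draw n_rep values of cov_n(W(x_i), W'(y_i)) over independent path pairs."""
    grid = np.linspace(0.0, 1.0, 257)[1:]
    out = np.empty(n_rep)
    for r in range(n_rep):
        W = brownian_on_grid(rng, grid)
        Wp = brownian_on_grid(rng, grid)
        wx = np.interp(x, grid, W)   # path evaluated at the sample points
        wy = np.interp(y, grid, Wp)
        out[r] = np.cov(wx, wy)[0, 1]
    return out

n = 200
x = rng.uniform(0.05, 1.0, n)
y_dep = x                              # fully dependent pair
y_ind = rng.uniform(0.05, 1.0, n)      # independent pair

sd_dep = stochastic_cov_samples(x, y_dep, rng).std()
sd_ind = stochastic_cov_samples(x, y_ind, rng).std()
print(sd_dep, sd_ind)  # the dependent histogram is wider
```

For independent variables the stochastic covariances cluster near zero (their spread here is finite-sampling error), while dependence visibly widens the distribution, in line with the role of the width as a dependence indicator.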
precisely deviations stochastic covariance zero argument thus replaces formal manipulations determines estimator bbn brownian covariance random variables procedure allows build estimation theory brownian thus distance covariance elementary covariance instance rather intricate algebraic formula unbiased sample distance covariance recovered using quite naturally eqs unbiased estimator ovn covariance squared see section iii explicit expressions estimators derivation statement unbiasedness property covn cov automatically transferred corresponding estimator bbn brownian covariance revisited cov regularized singular integrals underlie theory distance covariance furthermore forgery statistical dependences clarifies works physically brownian motion fluctuates enough make functional integral probe possible test covariances second key aspect straightforward sample estimation using algebraic formula important practical advantage dependency measures mutual information instead relying sample formula distance covariance prompts estimate stochastic covariance rather square averaging brownian paths given joint samples expression sample covariance ovn functional integral convergence hypothesis sufficient ensure averaging samples commutes functional integration brownian paths noteworthy construction brownian covariance estimator generalized formally replacing brownian paths stochastic processes fields case may consider multivariate variables determines simple rule engineer wide array dependence measures via functional integrals opens question processes allow characterize independence approach relied ability probe generically possible test covariances critically class processes satisfying forgery statistical dependences might relatively restricted hand original theory brownian covariance extend multidimensional brownian fields fractional brownian motion continuous markovian two properties central forgery theorems forgery statistical dependences provides new elegant tool establish 
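The unbiased sample formula alluded to here is, up to the normalization convention discussed in the text, the standard U-centered estimator of squared distance covariance of Székely and Rizzo. The following sketch implements that published formula for scalar samples; the variable names and test data are mine.

```python
import numpy as np

def u_centered(D):
    """U-centering of a distance matrix; the diagonal is set to zero."""
    n = D.shape[0]
    row = D.sum(axis=1, keepdims=True) / (n - 2)
    col = D.sum(axis=0, keepdims=True) / (n - 2)
    tot = D.sum() / ((n - 1) * (n - 2))
    U = D - row - col + tot
    np.fill_diagonal(U, 0.0)
    return U

def dcov2_unbiased(x, y):
    """Unbiased estimator of squared distance covariance (requires n > 3)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    A = u_centered(np.abs(x[:, None] - x[None, :]))
    B = u_centered(np.abs(y[:, None] - y[None, :]))
    n = len(x)
    return (A * B).sum() / (n * (n - 3))

rng = np.random.default_rng(2)
x = rng.uniform(0.0, 1.0, 300)
y_ind = rng.uniform(0.0, 1.0, 300)
d_dep = dcov2_unbiased(x, x)      # positive: a variable depends on itself
d_ind = dcov2_unbiased(x, y_ind)  # near zero for independent samples
print(d_dep, d_ind)
```

Unlike the plug-in V-statistic, this estimator has zero expectation under independence, which is the property the functional-integral construction in the text is designed to reproduce.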
independence may represent particular case general theory functional correlation measures iii mathematical analysis proceed detailed examination results presented section technical parts proofs relegated appendix except defining conditions enforced respectively events generic constrained less contains asymptotic forgery duality sketched section derivation asymptotic forgery theorem using law large numbers generalization actually developed fully see appendix however particular case enjoys concise proof based symmetry argument time duality start establishing duality relation ufd dual function given proof symmetry brownian motion replacing side determines new event probability explicitly event monotonicity probability measure levy forgery asymptotic forgery going focus derivation sufficient prove extended forgery part brownian motion merely independent copy part hence similar result necessarily holds events turn integral formula first step demonstration relies following explicit expression joint probability functional integral let introduce family auxiliary events denote probability used condition holds identically since coincides ufd yields proof asymptotic forgery theorem theorem naturally follows duality levy forgery explicit formula establishes continuous corresponds continuity boundedness since levy forgery theorem thus applies shows side nonzero first factor expectation value denotes dicator function event enforces constraint considered sample paths must lie within uniform neighborhood second factor function evaluated position random path explain function borel measurable represents proper random variable expectation value well defined proof formula direct consequence two following statements involving conditional probability local extended forgery describe analytical derivation extended forgery theorem formalizes intuitive argument given section ensuing bound probability joint event quite reach sufficient ensure genericity restriction events preliminary useful 
consider events correspond follow fairly standard arguments brownian motion detail appendix appendix first implements markov erty depends past via boundary time second provides explicit representation random variable see also chapter theorem ref next step show integrand side vanish identically provides local version extended forgery full theorem follow integration extended forgery theorem local version exists subinterval bottleneck continuous function satisfying distance parameter eqs lower bounds positive two parts theorem closely related levy forgery asymptotic forgery treat separately proof property actually holds arbitrary subintervals shall parameterize ensure inclusion bottleneck also define continuous function setup ensures condition implies bound follows two inclusions monotonicity two properties function second part useful interject following simple statements function vanishes identically outside bottleneck continuous within bottleneck therefore borel measurable well first claim rests observation set empty whenever since defining condition satisfied find second claim may appear quite clear well since set vary continuously intuition quite right accurate statement let fix point bottleneck consider arbitrary sequence converging limit event assuming exists coincides modulo set continuity lim full proof appears rather technical sketch key ideas relegate details appendix limit event imposes sample paths lie within expanding neighborhood associated allowed reach boundary essentially large limit strict inequalities yields nonstrict inequality however hitting event never happens typical roughness brownian motion forbids meeting another trajectory without crossing see appendix law generalizes lemma page ref proof key observation exp merely integral formula straint removed equivalently expectation made plicit everywhere almost inequality must saturate equality thus contradicts property must hold least one point bottleneck thus also subinterval continuity property 
proof extended forgery theorem finally combine local version theorem integral formula obtain indeed constraining paths allows bound side passing first inequality also used identity event theorem follows inequality together eqs noteworthy stated proved extended forgery assumption convenience arguments restriction actually artificial theorem holds arbitrary parameters measurability stochastic covariance index premise forgery theorems associated approximation events measurable sets enforcing algebraic conditions time domain general property separable stochastic processes brownian motion indeed principles probability theory allow constraints countable infinity time points notion separability enables considering continuous time domains well nutshell brownian motion separable conditions imposed dense grid rational time points extend automatically nonrational times sample path continuity story different set involves functional cov ihb sample paths depends particular distribution random times provide convergence conditions ensure measurability functional thus covariance approximation event sufficient conditions measurability expectation values measurable functionals sample paths whenever finite respectively three conditions hold stochastic covariance thus measurable noteworthy conditions replaced stronger constraint finite since inequality implies section used even stronger assumption simplify shorten formulation results proof direct consequence theorem let focus functional argument completely similar setup theorem ensures measurability iterated expectation value computed first averaging sample paths random times absolutely convergent since gaussian random variable zero mean variance find explicitly nested forgery statistical dependences showed forgery statistical dependences induced extended forgery using somewhat rough estimate covariance error provide exact upper bound complete proof theorem error bound arbitrary sample paths cov cov error bound given derivation 
estimate relies relatively straightforward application series inequalities relegated appendix significance rests fact polynomial coefficients finite covariance error made arbitrarily small taking hence obtain following key intermediate result nested forgery lemma assume finite let exists prime event indicates applies process proof forgery theorem statistical dependences direct corollary hypothesis boundedness assumption ensure lemma applies using monotonicity independence yields thus finiteness condition indeed fulfills requirement theorem technically application also needs extra prerequisite random variable evaluation map jointly measurable fixed path fixed time property continuous stochastic processes brownian motion tempting invoke extended forgery theorem conditions imposed however issue thanks translation invariance covariance cov equal cov thus used extended forgery theorem second line note assumption used section slightly stronger needed made clear nested forgery lemma brownian independence test dichotomy prove assertion cov implies cov used section discrete sampling method straightforward covariance approximation event must occur simultaneously zero covariance event cov compatibility indeed central derivation actually general property probability theory use provide alternative settheoretic argument dichotomy lemma arbitrary functions parameter otherwise proof direct consequence eqs intersection characterized equivalently two conditions cov second proof sufficiency brownian independence test forgery statistical dependences event generic bounded continuous hypothesis event occurs since intersection generic event almost sure event never empty empty generic event would subset zero probability event complement almost sure event forbidden monotonicity second possibility ruled must hold true since arbitrary conclude cov unbiased estimation brownian covariance exemplify explicit estimator brownian covariance derived functional integral construction enforces 
unbiasedness finite sampling allows recover unbiased sample formula distance covariance review first unbiased estimator let introduce distance matrix aij samples well version akl aik akj aij aij analogous matrices corresponding samples denoted bij bij respectively notations assuming bbn aij bij distinct expression differs formula given refs trivial factor due use standard normalization brownian motion fact provides unbiased estimation brownian covariance see follow directly derivation based functional integral starts naturally unbiased estimation covariance squared estimation covariance squared convenient introduce new random variables samples defined fixed sample paths unbiased estimator cov elementary statistics checked developing square estimation cov hampered systematic errors order given shall rather define ovn primed sum taken distinct indices expression differs aforementioned development indeed free finite sampling biases proven averaging joint samples indeed using identities hxi hxi hxyihxihyi hxi fact primed sum contains terms find ovn cov derivation average independent random paths samples kept constant computation factors functional integration bbn obtained side replacing autocorrelation functions aij bij respectively substitution rule actually simplified terms involving cancel thanks algebraic identity distinct similar cancellations also allow use thus obtain unbiased estimator bbn aij bij bik bkj bkl equivalence obvious first sight make contact let fix consider sum second factor indices distinct contribution sum following four terms bij bij bik bij bik bkj bij bkj bkl bkl bik bkj sums sides running unconstrained indices total thus reduces bij rewritten bbn aij bij growth function positive continuous satisfies divergence condition log log lim particular example leads back asymptotic behavior together law iterated logarithm indeed ensures expansion described faster brownian motion lim finally recover aij replaced aij indeed extra terms cancel sum thanks 
centering property bij fixed checked definition acknowledgments research carried context methodological developments meg project cub erasme libre bruxelles brussels belgium financially supported fonds erasme brussels belgium lim sup lim sup log log log log lim sup strong asymptotic forgery theorem let bounded continuous growth function specified proof revisit strategy proof sketched main text let start demonstrating lim stronger version forgery theorems various forgery theorems obtained extending levy original forgery relied approximations involving neighborhoods expanding long times constant speed however turns assumption weakened describe stronger version asymptotic forgery theorem briefly discuss implications slowly expanding neighborhoods key observation discussion asymptotic forgery law large numbers replaced almost verbatim precise law iterated logarithm establishes brownian motion diverges long times faster log log thus naturally generalize definition using neighborhoods expand strictly faster almost brownian paths explicitly let law iterated logarithm states precisely first factor side equals one second factor tends zero definition therefore obtain lim equivalent appendix fact shall use momentarily prove assertion let start inequality distinct event increases monotonously raised since defining condition becomes less strict limt equal continuity measure therefore need show limit event occurs version statement almost sample paths find ensuring inequality turn particular case slightly stronger assertion lim holds thanks property boundedness assumption equation follows limit implies existence value thus remains prove theorem aim let split identify joint event strategy show first two events generic apply techniques section iii derive genericity intersection case taken care observe whenever chosen minimum value continuous function compact time domain since pick way possible since positive obtain monotonicity levy forgery theorem last step ensure intersection 
generic analogous extended forgery theorem proven along exact lines consequences forgery theorems strong versions subsequent forgery theorems obtained replacing event generalization end result weaken convergence assumptions required random variables therefore widen applicability forgery statistical dependences brownian independence test gather results without repeating derivations strong extended forgery theorem states bounded continuous growth function covariance error inequality similar holds arbitrary sample paths bound instead consequence forgery theorem statistical dependences brownian independence test hold weaker convergence conditions exists growth function finite appendix completion proofs markovian decomposition formula equation relies weak markov property actually valid general markovian processes ing generated event write merely definition conditional probability furthermore contained generated process since involves conditions time interval observation denotes expectation value conditional first line applied tower property conditioning follows second used weak markov property events involving conditions depend past via generated random variable combining eqs therefore find side equal conditional pectation dropped expression mere definition conditioning recover side representation defining property conditional probability borel sets characterizes modulo set prove statement thus suffices check satisfies property consequence fact brownian motion independent increments distributed brownian motions indeed side computed integral sample paths satisfying defining increment time step latter constraint becomes equivalently view definition recalled brownian motion independent position application theorem thus enables inb tegrate first fixed therefore obtain remains derive two inclusions used identity definition passing second equality ends demonstration continuity property shall prove limit exists equals whenever inside bottleneck interval start difference 
formula argue two terms side converge zero since sequences probabilities nonnegative actually sufficient show lim sup first inclusion let consider arbitrary samxn ple path satisfying explicitly means build diverging sequence xnk indices xnk vtk point two cases emerge lead conclusion corresponds sought inclusion set times bounded situation theorem provides subsequence tkm converging finite time assuming continuous let evaluate along take using xnkm continuity find contradicts fact therefore continuous henceforth set times unbounded divide sides let since bounded follows lim shows sample path satisfy law large numbers hence second inclusion similarly let consider meaning exists diverging sequence lim sup xnk vanish stands infinitely often means infinite number events sequence occur indeed allows evaluating superior limits proceed proof let introduce zeroprobability set paths continuous satisfy law large numbers event equality set times closely related imposes sample paths actually reach boundary associated neighborhood event turns also probability zero brownian path hitting boundary must also cross thus leave neighborhood completeness provide derivation law appendix definitions hand claim imply directly two limits vanish monotonicity subadditivity bex cause therefore side indeed tend zero vtk taking yields nonstrict inequality repeating analysis also shows equality must hold time else obtain sought inclusion law convenient introduce upper sign lower sign boundary curves inherit continuity since lies within bottleneck interval also satisfy condition law shall derive stated formally follows max min provides sought estimate indeed observing event implies either equality time else equality time find max min applying monotonicity subadditivity law yield proof law shall focus derivation case completely similar simplify notations let introduce running maximum max bound covariance error let introduce shorthand notations cov cov monotonicity subadditivity obtain evaluate two 
terms righthand side show vanish first term sufficient show limit event means explicitly hits arbitrarily small times implies thus continuous probability occurs zero second term vanishes noting increment process thus independent applying theorem write explicitly exp general property probability theory may nonzero countable set values therefore almost thus integral vanishes using bilinearity covariance triangle inequality cov cov cov start examining first term developing covariance applying triangle inequalities using boundedness find proviso say maximum attained thus simply reads follows fix eventually let hitting without crossing event implies either reaches corresponds event else increment reaches corresponds hitting event maxt thus follows lim sup cov last line obtained splitting expectation value contributions two regions see see second term completely analogous cov last term use four regions obtain similar way cov covariance error inequality bound recovered combining eqs sethna statistical mechanics entropy order parameters complexity oxford university press freeman brownian motion diffusion springerverlag new york gikhman skorokhod introduction theory random processes dover rizzo bakirov annals statistics rizzo annals applied statistics jacod protter probability essentials universitext berlin gretton mach learn newton annals applied statistics rizzo annals applied statistics zhou journal time series analysis geerligs henson neuroimage kraskov grassberger phys rev arxiv rizzo journal multivariate analysis rizzo annals statistics
arXiv preprint (Apr.)

COHOMOLOGY AND EXTENSIONS OF BRACES

VICTORIA LEBED AND LEANDRO VENDRAMIN

Abstract. Braces and linear cycle sets are algebraic structures playing a major role in the classification of involutive solutions of the Yang-Baxter equation. This paper introduces two versions of their (co)homology theories. These theories mix the Harrison (co)homology of the abelian group structure and the (co)homology theory for general cycle sets developed earlier by the authors. Different classes of brace extensions are completely classified in terms of second cohomology groups.

1. Introduction

A left brace is an abelian group (A, +) with an additional group operation ∘ for which the following compatibility condition holds:

a ∘ (b + c) + a = (a ∘ b) + (a ∘ c).

The two group structures necessarily share the same neutral element, denoted by 0. Braces, in a slightly different but equivalent form, were introduced by Rump; the definition above goes back to Jespers and collaborators. To get a feeling of what braces look like, and to convince oneself that they are rarer in practice than one might think, the reader is referred to Bachiller's classification of braces of a given order. The growing interest in these structures is due to a number of reasons. First, braces generalize radical rings. Second, a version of this notion unveiled its role in the classification problem of regular subgroups of affine groups of a field. Third, braces can be enriched to cycle sets, and are therefore important for the study of solutions of the Yang-Baxter equation (YBE). Recall that a cycle set, as defined by Rump, is a set X with a binary operation · having bijective left translations and satisfying the relation

(x · y) · (x · z) = (y · x) · (y · z).

Rump showed that cycle sets with invertible squaring map x ↦ x · x are in bijection with involutive solutions of the YBE. Such solutions form a combinatorially rich class of structures, connected to many domains of algebra: semigroups, Bieberbach groups, Hopf algebras, Garside groups, etc. The cycle set approach has turned out to be extremely fruitful for elucidating the structure of solutions and obtaining classification results; see for instance the
conjectures open questions area many formulated cameron jespers del etingof schedler soloviev initiated study structure group solution particular cycle set ideas explored solutions concretely structure group cycle set free group set modulo relations structure group cycle set shown isomorphic set free abelian group see also explicit graphical form isomorphism group thus carries second abelian group one pushed back becomes brace moreover inherits cycle set structure yields key example following notion linear cycle set cycle set abelian group operation satisfying compatibility conditions structure also goes back rump showed equivalent brace structure via relation understanding structure groups certain classes quotients often regarded reasonable first step towards understanding cycle sets even better bachiller jespers recently reduced classification problem cycle sets braces explains growing interest towards braces linear cycle sets pointed bachiller jespers extension theory braces would crucial classification purposes well elaborating new examples served motivation paper cohomology theory general cycle sets developed authors second cohomology groups given particular attention shown encode central cycle set extensions propose homology cohomology theories linear cycle sets thus braces usual central linear cycle set extensions turn classified second cohomology groups pedagogical reasons first study extensions trivial level abelian groups together corresponding homology theory sections extensions still interest since often cycle set operation significant part linear cycle set structure example structure groups hand technically much easier handle general extensions sections therefore found instructive present reduced case general one finishing paper learned analogous extension theory independently developed bachiller using language braces fragments setting also appeared work authors prefer alternative relation set defines isomorphic group cohomology braces alternative 
approach extensions suggested earlier ben david ginosar concretely studied lifting problem bijective yet another avatar braces work translated language braces bachiller choice linear cycle set language leads transparent constructions moreover made possible development full cohomology theory extending degree constructions motivated extension analysis theory missing previous approaches reduced linear cycle set cohomology work linear cycle sets explained introduction constructions results directly translated language braces perform translation major results take lcs abelian group let rcn denote abelian group modulo linearity relation last copy denote rcnd abelian subgroup rcn generated degenerate consider also quotient rcnn rcn define maps linearizations complete family maps dually let denote set maps linear last coordinate let rcn comprise maps vanishing degenerate define maps fun fun formulas resemble group homology construction show indeed define homology theory proposition let linear cycle set abelian group maps square zero induce maps rcn restrict maps rcnd maps square zero restrict maps rcn restrict maps rcn victoria lebed leandro vendramin induced restricted maps proposition abusively denoted symbols proof shall need special properties zero element lcs lemma lcs relations hold proof lcs axioms one hence similarly relation follows cancelling recall left translation bijective proof proposition treat homological statements imply cohomological ones duality maps presented signed sums relation classically reduces almost commutativity case latter relation either tautological follows associativity follows left distributivity consequence second lcs relation using linearity left translations one sees applied expressions type maps yield expressions type hence signed sums induce differential possibility restrict guaranteed lemma proposition legitimizes following definition definition reduced normalized cycles boundaries homology groups linear cycle set coefficients abelian 
group cycles boundaries homology groups chain complex respectively dually reduced normalized cocycles coboundaries cohomology groups complex spectively rcn use usual notations rqn rqn rqn one letters remark proof proposition actually showed homology constructions refined simplicial ones example recall introduction cycle set free abelian group seen linear cycle set cycle set operation induced case simply abelian group one calculates standard arguments lcs theory yield orb orb set orbits classes equivalence relation generated similarly one calculates first reduced cohomology group fun orb cohomology braces finish comparison homology lcs homology underlying cycle set defined recall homology hncs computed complex dually cohomology hcs computed complex fun define map rcn position map sign permutation obvious projection rcn proposition let linear cycle set abelian group map defined yields map chain complexes rcn proof one compare evaluations maps convenient use decomposition map zero evaluation terms sum ith position cancel careful sign inspection yields hence maps coincide consequence one obtains dual map fun cochain complexes induced maps homology extensions reduced turn study reduced linear cycle set maps abelian group satisfying last relation together commutativity yields necessarily cocycle cycle set implying among reduced distinguish reduced map linear victoria lebed leandro vendramin example let abelian groups consider trivial linear cycle set structure map reduced lcs bicharacter sense bilinearity relations reduced trivial case thus abelian group bicharacters values observe cycle set differentials vanish second cohomology group hcs cycle set thus comprises maps strictly larger example let cyclic group elements written additively brace corresponding linear cycle set structure given operation one even otherwise take map relation means form product taken reduced modulo relation translates substitution yields analyzing values one sees chosen arbitrarily equal reduced 
trivial linear map necessarily form constant yielding since summarizing one gets let turn underlying cycle set playing condition one verifies maps verifying linear relations mod implies zcs hcs construct extensions lcs show central extension isomorphic one type reduced modulo reduced classify extensions lemma let linear cycle set abelian group map abelian group operation linear cycle set reduced notation lcs lemma denoted cohomology braces proof left translation invertibility follows property properties equivalent respectively properties definition cycle set property follows commutativity lemma correspondence linear cycle sets braces yield following result lemma let brace abelian group map abelian group product brace corresponding linear cycle set map reduced introducing notion lcs extensions need preliminary definitions definition morphism linear cycle sets map preserving structure one kernel defined ker notions image short exact sequence linear cycle sets linear cycle subsets defined obvious way linear cycle subset called central one lsc morphism ker clearly linear cycle subsets respectively lemma rephrased stating central linear cycle subset definition central extension linear cycle set abelian group datum short exact sequence linear cycle sets endowed trivial cycle set structure image central sense definition short exact sequence abelian groups underlying splits adjective refers fact extensions interesting level cycle set operation trivial level additive operation since require short exact sequences linearly split general taking account additive operation postponed next section extensions important example comparing lcs structures structure group cycle set cycle set extension introduction detail structure groups cycle set extension theory lcs lemma extension obvious way show example essentially exhaustive definition two central lcs extensions called equivalent exists lcs isomorphism making victoria lebed leandro vendramin following diagram commute set 
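The cyclic-group example above can be checked mechanically. The sketch below is not taken from the paper: it verifies, by brute force on Z/4, the brace compatibility condition, and the linear cycle set axioms for the operation a · b = λ_a^{-1}(b) obtained through the standard correspondence λ_a(b) = a ∘ b - a. The particular brace operation a ∘ b = a + b + 2ab is my own choice of a small nontrivial example.

```python
from itertools import product

N = 4  # work in Z/4

def circ(a, b):
    """A nontrivial second group operation on Z/4 (from the radical ring 2ab)."""
    return (a + b + 2 * a * b) % N

def lam(a, b):
    """lambda_a(b) = (a ∘ b) - a, an additive bijection for each a."""
    return (circ(a, b) - a) % N

# Cycle set operation a·b = lambda_a^{-1}(b), tabulated by inverting lam.
inv_lam = {(a, lam(a, b)): b for a in range(N) for b in range(N)}
def dot(a, b):
    return inv_lam[(a, b)]

for a, b, c in product(range(N), repeat=3):
    # brace compatibility: a∘(b+c) + a = (a∘b) + (a∘c)
    assert (circ(a, (b + c) % N) + a) % N == (circ(a, b) + circ(a, c)) % N
    # linearity of left translations: a·(b+c) = a·b + a·c
    assert dot(a, (b + c) % N) == (dot(a, b) + dot(a, c)) % N
    # linear cycle set axiom: (a+b)·c = (a·b)·(a·c)
    assert dot((a + b) % N, c) == dot(dot(a, b), dot(a, c))
    # cycle set axiom: (a·b)·(a·c) = (b·a)·(b·c)
    assert dot(dot(a, b), dot(a, c)) == dot(dot(b, a), dot(b, c))
print("all axioms verified")
```

The same brute-force pattern extends to any small abelian group, which makes it a convenient sanity check when experimenting with candidate cocycles and extensions.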
equivalence classes central extensions denoted ctext lemma let central lcs extension linear section map takes values defines reduced cocycle extensions equivalent furthermore cocycle obtained another section cohomologous proof computation yields ker definition short exact sequence hence map defined formula remains check relations map linearity left translations gives hence linearity similarly one got rid since centrality yields centrality also used relation obtained corresponding relation applying next show linear map yields equivalence extensions bijective inverse given map map well defined since ker cohomology braces let check entwines cycle set operations one used centrality commutativity diagram obvious completes proof suppose reduced cocycles obtained sections respectively put linear map image contained ker hence defines linear map show cohomologous establish property one computes desired relation obtained applying used centrality compare extensions constructed different lemma let linear cycle set abelian group two reduced linear cycle set extensions equivalent cohomologous proof suppose linear map provides equivalence extensions commutativity diagram forces form linear map one computes thus map entwines cycle set operations coboundary opposite direction take cohomologous cocycles means relation holds linear map repeating arguments one verifies map equivalence extensions put together preceding lemmas yield theorem let linear cycle set abelian group construction lemma yields bijective correspondence ctext victoria lebed leandro vendramin finish section observing degree normalization brings nothing new reduced lcs cohomology theory proposition reduced linear cycle set normalized moreover one isomorphism cohomology rhn proof putting defining relation reduced using properties element lemma one gets moreover linearity normalized hence identification rzn degree normalized usual complexes coincide yielding desired cohomology group isomorphism degree full linear cycle 
set cohomology previous section treated linear cycle set extensions form thought direct product lcs cycle set operation deformed handle general situation additive operation deformed well proofs general case analogous technical previous sections take linear cycle set abelian group let shci abelian subgroup generated partial shuffles taken shr subset permutations elements satisfying term shuffle used put recall notations proof proposition consider coordinate omitting maps combine linearizations maps show horizontal vertical differentials bicomplex empty sums zero convention denotes abelian subgroup generated degenerate quotient shci dually fun put extended linearity let abelian group maps whose linearization vanishes partial shuffles omitted let comprise maps moreover zero degenerate assemble data chain cochain bicomplex structures normalization cohomology braces theorem let linear cycle set abelian group abelian groups together linear maps form chain bicomplex words following relations satisfied moreover maps restrict subgroups shci thus induce chain bicomplex structures linear maps yield cochain bicomplex structure abelian groups fun structure restricts abusively denote induced restricted maps theorem symbols etc proof theorem relies following interpretation bicomplex jth row almost complex proposition slight modification last entry nothing acted left translation replaced last elements behaving way ith column first entries never affected remaining entries vertical differentials act differentials proposition computed trivial cycle set operation alternatively ith column seen hochschild complex coefficients acts trivially sides modding shci means passing hochschild harrison complex column proof usual suffices treat homological statements due observation preceding proof horizontal relation vertical relation follow proposition mixed relation note horizontal vertical differentials involved affect respectively first last entries exception component however component also 
commutes linearity respect left translation involved applying left translation entry partial shuffle one still gets partial shuffle consequently horizontal differentials restrict shci order show restrict shci well suffices check expression victoria lebed leandro vendramin linear combination shuffles denote three sums expression also use classical notation shuffles convention recall also notations sums rewrite empty sums declared zero decomposition follows analysis two possibilities shr namely decomposition corresponds dichotomy summands appear opposite signs therefore discarded remaining ones divided two classes giving decomposition thus signed sums shuffles exception cases terms appear respectively annihilate total sum case treated similarly possibility restrict taken care usual lemma consequence one obtains chain bicomplex structure position define full homology linear cycle set definition normalized cycles boundaries homology groups linear cycle set coefficients abelian group cycles boundaries homology groups total chain complex respectively cnn bicomplex dually normalized cocycles coboundaries cohomology groups complex respectively use usual notations one letters remark fact chain bicomplex constructions refined bisimplicial ones remark instead considering total complex bicomplex one could start say computing homology column horizontal differv entials induce chain complex structure row observe first row precisely complex proposition homology reduced homology linear cycle set general linear cycle set extensions next step describe looks like full version linear cycle set cohomology theory consists two components cohomology braces seen elements fun sym respectively sym denotes abelian group symmetric maps sense maps satisfy three identities one component explicitly identities read particular cycle set symmetric group reduced cocycles precisely couples maps next give elementary properties lemma let linear cycle set coefficients abelian group one normalized one 
proof let prove first claim relation follows choosing similarly relation specialized substitutions either yield last relation second claim directly follows previous point lemma let linear cycle set abelian group two maps set operations linear cycle set notation lcs lemma denoted proof left translation invertibility follows property properties equivalent respectively properties associativity commutativity encoded property symmetry respectively finally lemma implies zero element opposite victoria lebed leandro vendramin lemma translate lemma language braces lemma let brace abelian group two maps set operations brace corresponding linear cycle set maps form proof recall correspondence corresponding brace lcs operations also rewritten map inverse left translation given formulas lemma describe lcs structure operation reads operations yield brace structure formulas desired form substitution equivalent conversely starting brace structure desired form one sees associated lcs structure described lemma repeating argument one obtains relation connecting definition central extension linear cycle set abelian group datum short exact sequence linear cycle sets endowed trivial cycle set structure image central sense definition notion equivalence central lcs extensions definition transports verbatim general extensions set equivalence classes central extensions denoted ext lcs lemma extension obvious way show example essentially exhaustive lemma let central lcs extension section maps defined cohomology braces take values determine cocycle cocycle normalized sense extensions equivalent cocycle obtained another section cohomologous cocycles normalized cohomologous normalized sense lemma let linear cycle set abelian group linear cycle set extensions equivalent normalized recall normalized couple maps form map normalized sense proof lemmas technical conceptually analogous proofs lemmas therefore omitted put together preceding lemmas prove theorem let linear cycle set abelian group 
Then the construction of the preceding lemma yields a bijective correspondence Ext(X, Γ) ↔ H²_N(X, Γ). In other words, central extensions of linear cycle sets, and thus of braces, are completely determined by the second normalized cohomology group.

References

D. Bachiller. Classification of braces of order p³. J. Pure Appl. Algebra.
D. Bachiller. Examples of simple left braces.
D. Bachiller, F. Cedó, E. Jespers. Solutions of the Yang–Baxter equation associated with a left brace.
D. Bachiller, F. Cedó, E. Jespers, J. Okniński. A family of irretractable square-free solutions of the Yang–Baxter equation.
Y. Ben David, Y. Ginosar. On groups of I-type and involutive Yang–Baxter groups.
F. Catino, I. Colazzo, P. Stefanelli. Regular subgroups of the affine group. Bull. Aust. Math. Soc.
F. Catino, I. Colazzo, P. Stefanelli. Regular subgroups of the affine group and asymmetric product of radical braces. J. Algebra.
F. Catino, R. Rizzo. Regular subgroups of the affine group and radical circle algebras. Bull. Aust. Math. Soc.
F. Cedó, E. Jespers, Á. del Río. Involutive Yang–Baxter groups. Trans. Amer. Math. Soc.
F. Cedó, E. Jespers, J. Okniński. Retractability of set theoretic solutions of the Yang–Baxter equation. Adv. Math.
F. Cedó, E. Jespers, J. Okniński. Braces and the Yang–Baxter equation. Comm. Math. Phys.
F. Chouraqui. Garside groups and Yang–Baxter equation. Comm. Algebra.
P. Dehornoy. Set-theoretic solutions of the Yang–Baxter equation, RC-calculus, and Garside germs. Adv. Math.
P. Etingof, T. Schedler, A. Soloviev. Set-theoretical solutions to the quantum Yang–Baxter equation. Duke Math. J.
T. Gateva-Ivanova. A combinatorial approach to the set-theoretic solutions of the Yang–Baxter equation. J. Math. Phys.
T. Gateva-Ivanova. Set-theoretic solutions of the Yang–Baxter equation, braces and symmetric groups.
T. Gateva-Ivanova, P. Cameron. Multipermutation solutions of the Yang–Baxter equation. Comm. Math. Phys.
T. Gateva-Ivanova, S. Majid. Matched pairs approach to set theoretic solutions of the Yang–Baxter equation. J. Algebra.
T. Gateva-Ivanova, M. Van den Bergh. Semigroups of I-type. J. Algebra.
E. Jespers, J. Okniński. Monoids and groups of I-type. Algebr. Represent. Theory.
V. Lebed, L. Vendramin. Homology of left non-degenerate set-theoretic solutions to the Yang–Baxter equation. Adv. Math.
J.-H. Lu, M. Yan, Y.-C. Zhu. On the set-theoretical Yang–Baxter equation. Duke Math. J.
W. Rump. A decomposition theorem for square-free unitary solutions of the quantum Yang–Baxter equation. Adv. Math.
W. Rump. Braces, radical rings, and the quantum Yang–Baxter equation. J. Algebra.
W. Rump. Semidirect products in algebraic logic and solutions of the quantum Yang–Baxter equation. J. Algebra.
W. Rump. The brace of a classical group. Note Mat.
A. Smoktunowicz. A note on set-theoretic solutions of the Yang–Baxter equation.
A. Smoktunowicz. On Engel groups, nilpotent groups, rings, braces and the Yang–Baxter equation.
A. Soloviev. Non-unitary set-theoretical solutions to the quantum Yang–Baxter equation. Math. Res. Lett.
L. Vendramin. Extensions of set-theoretic solutions of the Yang–Baxter equation and a conjecture of Gateva-Ivanova. J. Pure Appl. Algebra.

Laboratoire de Mathématiques Jean Leray, Université de Nantes, Nantes Cedex, France.

Depto. de Matemática, FCEN, Universidad de Buenos Aires, Ciudad Universitaria, Buenos Aires, Argentina.
E-mail address: lvendramin
Adaptive Submodularity: Theory and Applications in Active Learning and Stochastic Optimization

Daniel Golovin (golovin@gmail.com), California Institute of Technology, Pasadena, CA, USA
Andreas Krause (krausea@ethz.ch), ETH Zurich, Zurich, Switzerland

Abstract

Many problems in artificial intelligence require adaptively making a sequence of decisions with uncertain outcomes under partial observability. Solving such stochastic optimization problems is a fundamental but notoriously difficult challenge. In this paper, we introduce the concept of adaptive submodularity, generalizing submodular set functions to adaptive policies. We prove that if a problem satisfies this property, a simple adaptive greedy algorithm is guaranteed to be competitive with the optimal policy. In addition to providing performance guarantees for both stochastic maximization and coverage, adaptive submodularity can be exploited to drastically speed up the greedy algorithm by using lazy evaluations. We illustrate the usefulness of the concept by giving several examples of adaptive submodular objectives arising in diverse applications, including management of sensing resources, viral marketing, and active learning. Proving adaptive submodularity for these problems allows us to recover existing results in these applications as special cases, improve approximation guarantees, and handle natural generalizations.

Keywords: adaptive optimization, stochastic optimization, submodularity, partial observability, active learning, optimal decision trees

1. Introduction

In many problems arising in artificial intelligence, one needs to adaptively make a sequence of decisions, taking into account observations about the outcomes of past decisions. Often these outcomes are uncertain, and one may only know a probability distribution over them. Finding optimal policies for decision making in such partially observable stochastic optimization problems is notoriously intractable (see, e.g., Littman et al.). A fundamental challenge is to identify classes of planning problems for which simple solutions obtain (near-)optimal performance.

In this paper, we introduce the concept of adaptive submodularity, and prove that if a partially observable stochastic optimization problem satisfies this property, a simple adaptive greedy algorithm is guaranteed to obtain near-optimal solutions. In fact, under reasonable complexity-theoretic assumptions, no polynomial-time
algorithm can obtain better solutions in general.

Adaptive submodularity generalizes the classical notion of submodularity, which has been successfully used to develop approximation algorithms for a variety of optimization problems. Submodularity, informally, is an intuitive notion of diminishing returns, stating that adding an element to a small set helps more than adding that same element to a larger set. A celebrated result of Nemhauser et al. guarantees that for such submodular functions, a simple greedy algorithm, which adds the element that maximally increases the objective value, selects a near-optimal set of k elements. Similarly, greedy algorithms are guaranteed to find a set of near-minimal cost achieving a desired quota of utility (Wolsey), using near-minimal average time to do so (Streeter and Golovin). [This work appeared in the Journal of Artificial Intelligence Research (Golovin and Krause); an earlier extended abstract appeared at the International Conference on Learning Theory (Golovin and Krause). For an extensive treatment of submodularity, see the books of Fujishige and Schrijver.]

Besides guaranteeing theoretical performance bounds, submodularity allows algorithms to be sped up without loss of solution quality by using lazy evaluations (Minoux), often leading to performance improvements of several orders of magnitude (Leskovec et al.). Submodularity has also been shown to be useful in a variety of other problems in artificial intelligence (Krause and Guestrin).

The challenge in generalizing submodularity to adaptive planning, where the action taken in each step depends on the information obtained in previous steps, is that feasible solutions are now policies (decision trees or conditional plans) instead of subsets. We propose a natural generalization of the diminishing returns property for adaptive problems, which reduces to the classical characterization of submodular set functions for deterministic distributions. We show how the results of Nemhauser et al., Wolsey, Streeter and Golovin, and Minoux generalize to the adaptive setting. Hence, we demonstrate how adaptive submodular optimization problems enjoy theoretical and practical benefits similar to those of classical, nonadaptive submodular problems. We further demonstrate the usefulness and generality of the concept by showing how it captures known results in stochastic optimization and active learning as special cases, admits tighter performance bounds, leads to natural generalizations, and allows us to solve new problems for which no performance guarantees were known.

As a first example, consider the problem of deploying (or controlling) a collection of sensors to monitor some spatial phenomenon. Each sensor can cover a region depending on its sensing range. Suppose we would like to find the best subset of locations to place the sensors. In this application, intuitively, adding a sensor helps more if we have placed few sensors so far, and helps less if we have already placed many sensors. We can formalize this diminishing returns property using the notion of submodularity: the total area covered by the sensors is a submodular function defined over sets of locations. Krause and Guestrin show that many realistic utility functions in sensor placement (such as the improvement in prediction accuracy with respect to some probabilistic model) are submodular as well. Now consider the following stochastic variant: instead of deploying a fixed set of sensors, we deploy one sensor at a time; with a certain probability, deployed sensors can fail, and our goal is to maximize the area covered by the functioning sensors. Thus, when deploying the next sensor, we need to take into account which of the sensors deployed in the past have failed. This problem has been studied by Asadpour et al. for the case where each sensor fails independently at random. In this paper, we show that the coverage objective is adaptive submodular, and use this concept to handle much more general settings, where, e.g., rather than all-or-nothing failures, there are different types of sensor failures of varying severity. We also consider a related problem where the goal is to place a minimum number of sensors to achieve the maximum possible sensor coverage (i.e., the coverage obtained by deploying sensors everywhere), or, more generally, to achieve any fixed percentage of the maximum possible sensor coverage. Under the first goal, the problem is equivalent to one studied by Goemans and Vondrák, and generalizes a problem studied by Liu et al.

As another example, consider a viral marketing problem, where we are given a social network, and we want to influence as many people as possible in the network to buy some product. We do that by giving the product for free to a subset of the people, and hope that they convince their friends to buy the product as well. Formally, we have a graph, where each edge is labeled by a number between 0 and 1. We influence a subset of nodes in the graph, and for each influenced node, its neighbors get randomly influenced according to the probability annotated on the edge connecting the nodes. This process repeats until no new node gets influenced. Kempe et al. show that the set function quantifying the expected number of nodes influenced is submodular. A natural stochastic variant of the problem is to pick a node, get to see which nodes it influenced, then adaptively pick the next node based on these observations, and so on. We show that a large class of adaptive influence maximization
problems satisfies adaptive submodularity.

As a third application, consider an active learning problem: given an unlabeled data set, we would like to adaptively pick a small set of examples whose labels imply all other labels. The same problem arises in automated diagnosis, where the hypotheses are the states of a system (e.g., the illness of a patient), and we would like to perform tests that identify the correct hypothesis. In both domains, we want to pick those examples (tests) that shrink the remaining version space (the set of consistent hypotheses) as quickly as possible. We show that the reduction in version space probability mass is adaptive submodular, and use this observation to prove that the adaptive greedy algorithm is a near-optimal querying policy for active learning.

Name — New results:
Maximization — tight (1 − 1/e)-approximation for adaptive monotone submodular objectives
Min cost coverage — squared logarithmic approximation for adaptive monotone submodular objectives
Min-sum cover — tight approximation for adaptive monotone submodular objectives
Data dependent bounds — generalization to adaptive monotone submodular functions
Accelerated greedy — generalization of lazy evaluations to the adaptive setting
Stochastic submodular maximization — generalization of previous results; arbitrary per-item set distributions; item costs
Stochastic set cover — arbitrary per-item set distributions; item costs; adaptive analog of previous results
Adaptive viral marketing — general reward functions; squared logarithmic approximation for the adaptive min cost cover version
Active learning — new analysis of generalized binary search and its approximate versions, with and without item costs
Hardness in the absence of adaptive submodularity — maximization, min cost coverage, and min-sum cover are hard to approximate for objectives that are not adaptive submodular
Table 1: Summary of our theoretical results.

Our work is also related to recent results of Guillory and Bilmes, who study generalizations of submodular set cover to an interactive setting. In contrast to our approach, however, Guillory and Bilmes analyze worst-case costs, and use rather different technical definitions and proof techniques.

To summarize, our main contributions (see Table 1 for a technical summary) are:
- We consider a particular class of partially observable adaptive stochastic optimization problems, which we prove to be hard to approximate in general.
- We introduce the concept of adaptive submodularity, and prove that if a problem instance satisfies this
property, a simple adaptive greedy policy performs near-optimally, for both adaptive stochastic maximization and coverage, and also for a natural min-sum objective.
- We show how adaptive submodularity can be exploited by allowing the use of an accelerated adaptive greedy algorithm using lazy evaluations, and how it allows us to obtain data-dependent bounds on the optimum.
- We illustrate adaptive submodularity on several realistic problems, including stochastic maximum coverage, stochastic submodular coverage, adaptive viral marketing, and active learning. For these applications, adaptive submodularity allows us to recover known results and prove natural generalizations.

Organization. This article is organized as follows. In §2 we set up notation and formally define the relevant adaptive optimization problems for general objective functions (for the reader's convenience, we have also provided a reference table of important symbols). In §3 we review the classical notion of submodularity and introduce the novel adaptive submodularity property. In §4 we introduce the adaptive greedy policy, as well as an accelerated variant. In §5 we discuss the theoretical guarantees that the adaptive greedy policy enjoys when applied to problems with adaptive submodular objectives. The following sections provide examples of how to apply the adaptive submodular framework to various applications, namely stochastic submodular maximization (§6), stochastic submodular coverage (§7), adaptive viral marketing (§8), and active learning (§9). In §10 we report empirical results on two sensor selection problems. In §11 we discuss the adaptivity gap of the problems we consider, and prove hardness results indicating that problems which are not adaptive submodular can be extremely inapproximable under reasonable complexity assumptions. We review related work in §12 and provide concluding remarks in §13. The appendix gives details on how to incorporate item costs, and includes all proofs omitted from the main text.

2. Adaptive stochastic optimization

We start by introducing notation and defining the general class of adaptive optimization problems that we address in this paper. For the sake of clarity, we will illustrate our notation using the sensor placement application mentioned in §1; we give examples of other applications in §6–§9.

Items and realizations. Let E be a finite set of items (e.g., sensor locations). Each item e ∈ E is in a particular, initially unknown state from a set O of possible states (describing, e.g., whether a sensor placed at location e would malfunction or not).
We represent the item states using a function φ : E → O, called a realization (of the states of all items in the ground set). Thus, we say that o ∈ O is the state of e under realization φ if φ(e) = o. We use Φ to denote a random realization. We take a Bayesian approach and assume that there is a known prior probability distribution p(φ) := P[Φ = φ] over realizations (e.g., modeling that sensors fail independently with some failure probability), so that we can compute posterior distributions. We consider problems where we sequentially pick an item e ∈ E, get to see its state Φ(e), pick the next item, get to see its state, and so on (e.g., place a sensor and see whether or not it fails). After each selection, our observations so far are represented by a partial realization ψ, a function from some subset of E (namely, the set of items already picked) to their observed states (e.g., ψ encodes where we have placed sensors so far, and which of them have failed). For notational convenience, we sometimes represent ψ as a relation, so that ψ ⊆ E × O equals {(e, o) : ψ(e) = o}. We use the notation dom(ψ) to refer to the domain of ψ, i.e., the set of items observed in ψ. A partial realization ψ is consistent with a realization φ if they are equal everywhere in the domain of ψ; in this case we write φ ∼ ψ. If ψ and ψ′ are both consistent with some φ, and dom(ψ) ⊆ dom(ψ′), we say ψ is a subrealization of ψ′. Equivalently, viewed as relations, ψ is a subrealization of ψ′ if and only if ψ ⊆ ψ′. Partial realizations are similar to the notion of belief states in partially observable Markov decision problems (POMDPs): they encode the effect of all actions taken (items selected) and observations made, and determine our posterior belief about the state of the world (i.e., the states of all items not yet selected).

Policies. A policy π encodes an adaptive strategy for picking items: it is a function from the set of partial realizations to E, specifying which item to pick next given a particular set of observations (e.g., which sensor location to choose given where we have placed sensors so far, and whether or not they failed). We also allow randomized policies, which are functions from partial realizations to distributions over E, though our emphasis is primarily on deterministic policies. If ψ is not in the domain of π, the policy terminates (stops picking items) upon observing ψ. We use dom(π) to denote the domain of π; technically, we require that dom(π) be closed under subrealizations: if ψ′ ∈ dom(π) and ψ is a subrealization of ψ′, then ψ ∈ dom(π). We use the notation E(π, φ) to refer to the set of items selected by π under realization φ. A deterministic policy can be associated with a decision tree in a natural way (see Fig. 1 for an illustration). [In some situations we may not have exact knowledge of the prior p(φ); obtaining algorithms that are robust to incorrect priors remains an interesting source of open problems. We briefly discuss robustness guarantees of our algorithm in §4.]
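These relations are easy to operationalize. As a minimal sketch (the dictionary encodings and all names are illustrative, not from the paper), a partial realization is a map from observed items to states, consistency φ ∼ ψ is agreement on dom(ψ), and the subrealization test is containment of the relations:

```python
# Sketch: realizations and partial realizations as dicts {item: state}.

def is_consistent(phi, psi):
    """phi ~ psi: the full realization phi agrees with psi on dom(psi)."""
    return all(phi[e] == o for e, o in psi.items())

def is_subrealization(psi, psi2):
    """psi is a subrealization of psi2 iff, viewed as relations, psi ⊆ psi2."""
    return all(e in psi2 and psi2[e] == o for e, o in psi.items())

phi = {"s1": "works", "s2": "fails", "s3": "works"}   # a full realization
psi = {"s1": "works"}                                  # observations after one pick
psi2 = {"s1": "works", "s2": "fails"}                  # observations after two picks
```

Note that both φ ∼ ψ and φ ∼ ψ2 hold here, and ψ ⊆ ψ2 but not conversely, mirroring the definitions above.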
though find decision tree view valuable conceptually since partial realizations similar pomdp belief states definition policies similar notion policies pomdps usually defined functions belief states actions discuss relationship stochastic optimization problems considered paper pomdps section adaptive stochastic maximization coverage coverage wish maximize subject constraints utility function depends items pick state item modeling total area covered working sensors based notation expected utility policy favg expectation taken respect goal adaptive stochastic maximization problem find policy arg max favg subject budget many items picked would like adaptively choose sensor locations working sensors provide much information possible expectation alternatively specify quota utility would like obtain try find cheapest policy achieving quota would like achieve certain amount information cheaply possible expectation formally define average cost cavg policy expected number items picks cavg goal find arg min cavg policy minimizes expected number items picked possible realizations least utility achieved call problem adaptive stochastic minimum cost cover problem also consider problem want minimize cost cwc cost cwc cost incurred adversarially chosen realizations equivalently depth deepest leaf decision tree associated yet another important variant minimize average time required policy obtain utility formally let expected utility obtained let maximum possible expected utility define cost policy define adaptive stochastic cover problem search arg min unfortunately show even linear functions simply sum weights depending realization problems hard approximate formal definition see page reasonable complexity theoretic assumptions despite hardness general problems following sections identify conditions sufficient allow approximately solve incorporating item costs instead quantifying cost set number elements pcan also consider case item cost cost set consider variants problems 
replaced by c(E(π, φ)). For clarity of presentation, we focus on the unit cost case (i.e., c ≡ 1), and explain how our results generalize to the non-uniform case in the appendix.

3. Adaptive submodularity

We first review the classical notion of submodular set functions, and then introduce the novel notion of adaptive submodularity.

3.1 Background on submodularity. Let us first consider the special case of deterministic distributions, i.e., those where p(φ) = 1 for some realization (equivalently, in our sensor placement applications, sensors never fail). In this case, the realization is known to the decision maker in advance, and thus there is no benefit in adaptive selection. Given a realization φ, the maximization problem is then equivalent to finding a set A* such that

    A* ∈ argmax_{A ⊆ E, |A| ≤ k} f(A, φ).

For most interesting classes of utility functions, this optimization problem is intractable. However, for many practical problems, such as those mentioned above, f(·, φ) satisfies submodularity. A set function g : 2^E → R is called submodular if, whenever A ⊆ B ⊆ E and e ∈ E \ B, it holds that

    g(A ∪ {e}) − g(A) ≥ g(B ∪ {e}) − g(B),

i.e., adding e to the smaller set A increases g by at least as much as adding e to the superset B. Furthermore, g is called monotone if g(A) ≤ g(B) whenever A ⊆ B (e.g., adding a sensor never reduces the amount of information obtained). A celebrated result of Nemhauser et al. states that for monotone submodular functions with g(∅) = 0, a simple greedy algorithm, which starts with the empty set A₀ = ∅ and chooses A_{i+1} = A_i ∪ {argmax_{e} g(A_i ∪ {e})}, guarantees g(A_k) ≥ (1 − 1/e) max_{|A|≤k} g(A). Thus the greedy set A_k obtains at least a (1 − 1/e) fraction of the optimal value achievable using k elements. Furthermore, Feige shows that this result is tight: no polynomial-time algorithm can do strictly better than the greedy algorithm, i.e., achieve a (1 − 1/e + ε)-approximation for any constant ε > 0, even for the special case of Maximum k-Cover, where g(A) is the cardinality of the union of sets indexed by A.

Similarly, Wolsey shows that the same greedy algorithm also near-optimally solves the deterministic case of the coverage problem, called the Minimum Submodular Cover problem: A* ∈ argmin_A |A| such that g(A) ≥ Q. Pick the first set A_ℓ constructed by the greedy algorithm such that g(A_ℓ) ≥ Q; then, for integer-valued submodular functions, the greedy set is at most a factor of 1 + log(max_e g({e})) larger than the smallest set achieving quota Q. For the special case of Set Cover, this result matches a lower bound of Feige: unless NP ⊆ DTIME(n^{O(log log n)}), Set Cover is hard to approximate to a factor better than (1 − ε) ln Q, where Q is the number of elements to be covered.

Now let us relax the assumption that p(φ) is deterministic. In this case, we may still want to find a non-adaptive solution, i.e., a constant policy π_A that always picks the set A independently of Φ, maximizing favg(π_A). If f is pointwise submodular, i.e., f(·, φ) is submodular for every fixed φ, then the function f̄(A) := favg(π_A) is submodular, since nonnegative linear combinations of submodular functions remain submodular. Thus, the greedy algorithm allows us to find a near-optimal non-adaptive policy for our sensor placement example.
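For intuition, the classical greedy algorithm of Nemhauser et al. can be sketched in a few lines for a coverage function g(A) = |∪_{e∈A} S_e|; the instance below is hypothetical and serves only to exercise the selection rule:

```python
# Sketch: greedy maximization of the monotone submodular coverage function
# g(A) = |union of the sets indexed by A|.

def greedy_max_coverage(sets, k):
    """sets: dict mapping item -> set of covered elements; returns k greedily chosen items."""
    chosen, covered = [], set()
    for _ in range(k):
        # Pick the not-yet-chosen item with the largest marginal coverage gain.
        best = max((e for e in sets if e not in chosen),
                   key=lambda e: len(sets[e] - covered))
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6, 7}, "d": {1, 7}}
```

On this instance, `greedy_max_coverage(sets, 2)` first takes "c" (gain 4), then "a" (gain 3), illustrating the diminishing-returns selection that underlies the (1 − 1/e) guarantee.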
That is, if we are willing to commit to all locations before finding out whether the sensors fail, the greedy algorithm provides a good solution to that problem. However, in practice we may be interested in obtaining a policy that adaptively chooses items based on previous observations, e.g., one that takes into account which sensors are working before placing the next sensor. In many settings, selecting items adaptively offers huge advantages, analogous to the advantage of binary search over sequential (linear) search. Thus, the question is whether there is a natural extension of submodularity to policies. In the following, we develop such a notion: adaptive submodularity.

3.2 Adaptive monotonicity and submodularity. The key challenge is to find appropriate generalizations of monotonicity and of the diminishing returns condition. We begin by considering the special case of deterministic distributions, in which case a policy simply specifies a sequence of items to select, in order. Monotonicity in this context can be characterized as the property that the marginal benefit of selecting an item is always nonnegative. Similarly, submodularity can be viewed as the property that selecting an item later never increases its marginal benefit. We take these views of monotonicity and submodularity when defining their adaptive analogues, by using an appropriate generalization of the marginal benefit. When moving to the general adaptive setting, the challenge is that the states of the items are random, and revealed only upon selection. A natural approach is thus to condition on the observations (i.e., on the partial realization of the selected items), and to take the expectation with respect to the items that we consider selecting. Hence, we define our adaptive monotonicity and submodularity properties in terms of the conditional expected marginal benefit of an item.

Definition (Conditional Expected Marginal Benefit). Given a partial realization ψ and an item e, the conditional expected marginal benefit of e conditioned on having observed ψ, denoted Δ(e | ψ), is

    Δ(e | ψ) := E[ f(dom(ψ) ∪ {e}, Φ) − f(dom(ψ), Φ) | Φ ∼ ψ ],

where the expectation is computed with respect to p(φ | ψ) := P[Φ = φ | Φ ∼ ψ]. Similarly, the conditional expected marginal benefit of a policy π is

    Δ(π | ψ) := E[ f(dom(ψ) ∪ E(π, Φ), Φ) − f(dom(ψ), Φ) | Φ ∼ ψ ].

In our sensor placement example, Δ(e | ψ) quantifies the expected amount of additional area covered by placing a sensor at location e, in expectation over the posterior distribution of whether the sensor will fail or not, and taking into account the area covered by the placed working sensors as encoded by ψ. Note that the benefit we have accrued upon observing ψ (and hence after having selected the items in dom(ψ)) is E[f(dom(ψ), Φ) | Φ ∼ ψ], which is the benefit term subtracted out in the definition of Δ(e | ψ). Similarly, the expected total benefit obtained after observing ψ and then selecting e is E[f(dom(ψ) ∪ {e}, Φ) | Φ ∼ ψ]. The corresponding benefit for running π after observing ψ is slightly more complex: under realization φ ∼ ψ, the final cumulative benefit will be f(dom(ψ) ∪ E(π, φ), φ); taking the expectation with respect to p(φ | ψ) and subtracting out the benefit already obtained by dom(ψ) then yields the conditional expected marginal benefit of π.

We are now ready to introduce our generalizations of monotonicity and submodularity to the adaptive setting.

Definition (Adaptive Monotonicity). A function f : 2^E × O^E → R≥0 is adaptive monotone with respect to distribution p(φ) if the conditional expected marginal benefit of any item is nonnegative, i.e., Δ(e | ψ) ≥ 0 for all ψ and e.

Definition (Adaptive Submodularity). A function f : 2^E × O^E → R≥0 is adaptive submodular with respect to distribution p(φ) if the conditional expected marginal benefit of any fixed item does not increase as more items are selected and their states are observed. Formally, f is adaptive submodular w.r.t. p(φ) if for all ψ and ψ′ such that ψ is a subrealization of ψ′, and for all e ∈ E \ dom(ψ′),

    Δ(e | ψ) ≥ Δ(e | ψ′).

From the decision tree perspective, this condition amounts to the following: whenever a node of a decision tree selects an item e, and we compare the expected marginal benefit of e selected at that node with the expected marginal benefit e would have obtained if selected at an ancestor of that node, then the latter must not be smaller than the former. Note that when comparing the two expected marginal benefits, there is a difference both in the set of items previously selected (dom(ψ) vs. dom(ψ′)) and in the distribution over realizations (p(φ | ψ) vs. p(φ | ψ′)). It is also worth emphasizing that adaptive submodularity is defined relative to the distribution p(φ) over realizations; it is possible that f is adaptive submodular with respect to one distribution, but not with respect to another.

We will give concrete examples of adaptive monotone and adaptive submodular functions arising in the applications introduced in §1 in §6–§9. In the appendix we explain how the notion of adaptive submodularity can be extended to handle non-uniform item costs (since, e.g., the cost of placing a sensor in an easily accessible location may be smaller than at a location that is hard to get to).

3.3 Properties of adaptive submodular functions. It can be seen that adaptive monotonicity and adaptive submodularity enjoy closure properties similar to those of monotone submodular functions. In particular, if f₁, ..., f_m are adaptive monotone submodular w.r.t. distribution p(φ), and α₁, ..., α_m ≥ 0, then f(A, φ) := Σ_i α_i f_i(A, φ) is adaptive monotone submodular w.r.t. p(φ). Similarly, for a fixed constant c ≥ 0 and an adaptive monotone submodular function f, the function g(A, φ) := min(f(A, φ), c) is adaptive monotone submodular. Thus, adaptive monotone submodularity is preserved by nonnegative linear combinations and by truncation. Adaptive monotone submodularity is also preserved by restriction: if f is adaptive monotone submodular, so is its restriction to subsets of any E′ ⊆ E. Finally, if f is adaptive monotone submodular w.r.t. p(φ), then for each partial realization ψ, the conditional function g(A, φ) := f(A ∪ dom(ψ), φ) is adaptive monotone submodular w.r.t. p(φ | ψ).

3.4 What problem characteristics suggest adaptive submodularity? Adaptive submodularity is a diminishing returns property for policies. Speaking informally, it can be applied in situations where there is an objective function to be optimized, and the objective does not feature synergies in the benefits of items conditioned on observations. In some cases, the primary objective might not have this property, but a suitably chosen proxy of it does, as is the case with active learning under persistent noise (Golovin et al.; Bellala and Scott), and we give examples of such applications in §6–§9. It is also worth mentioning problems where adaptive submodularity is not directly applicable. An extreme example of synergistic effects between items conditioned on observations is the class of treasure hunting instances used to prove our hardness results in §11, where the states of certain groups of items encode the treasure's location in a complex manner. Another problem feature that adaptive submodularity does not directly address is the possibility that the items selected can alter the underlying realization φ, as is the case for the problem of optimizing policies in general POMDPs.

4. The adaptive greedy policy

The classical non-adaptive greedy algorithm has a natural generalization to the adaptive setting. The greedy policy π_greedy tries, at each iteration, to myopically increase the expected objective value, given its current observations. That is, suppose f : 2^E × O^E → R≥0 is the objective, and ψ is the partial realization indicating the states of the items selected so far. Then the greedy policy will select the item e maximizing the expected increase in value, conditioned on the observed states of the items already selected, i.e., conditioned on Φ ∼ ψ. Thus it will select e to maximize the conditional expected marginal benefit Δ(e | ψ) as defined above. Pseudocode of the adaptive greedy algorithm is given as Algorithm 1; the only difference from the classic greedy algorithm studied by Nemhauser et al. is the line where the observation on the selected item is obtained. Note that the algorithms in this section are presented for the adaptive stochastic maximization objective; for the coverage objectives, we simply keep selecting items as prescribed by π_greedy until achieving the quota on objective value (for the min-cost objective), or until every item has been selected (for the min-sum objective).

Incorporating item costs. The adaptive greedy algorithm can be naturally modified to handle non-uniform item costs by replacing its selection rule with argmax_e Δ(e | ψ) / c(e).
We focus on the unit cost case, and defer the analysis with costs to the appendix.

Approximate greedy selection. In some applications, finding an item maximizing Δ(e | ψ) may be computationally intractable, and the best we can do is find an α-approximate best item, i.e., an e′ such that Δ(e′ | ψ) ≥ (1/α) max_e Δ(e | ψ). We call a policy which always selects such an item an α-approximate greedy policy.

Robustness of approximate greedy selection. We will show that α-approximate greedy policies have performance guarantees on several problems. The fact that the performance guarantees of greedy policies are robust to approximate greedy selection suggests, in particular, a robustness guarantee for incorrect priors. Specifically, if under an incorrect prior our evaluations of Δ err by at most a multiplicative factor, then computing the greedy policy with respect to the incorrect prior amounts to actually implementing an approximate greedy policy with respect to the true prior, and hence we obtain the corresponding guarantees; a sufficient condition for erring by at most such a multiplicative factor is that there exists a true prior within a corresponding multiplicative factor of the one used.

Algorithm 1: The adaptive greedy algorithm, which implements the greedy policy.
  Input: budget k; ground set E; distribution p(φ); function f
  Output: set A ⊆ E of size k
  begin
    A := ∅; ψ := ∅;
    for i = 1 to k do
      foreach e ∈ E \ A do compute Δ(e | ψ);
      select e* ∈ argmax_e Δ(e | ψ);
      set A := A ∪ {e*};
      observe Φ(e*); set ψ := ψ ∪ {(e*, Φ(e*))};
  end

Lazy evaluations and the accelerated adaptive greedy algorithm. The definition of adaptive submodularity allows us to implement an accelerated version of the adaptive greedy algorithm using lazy evaluations of marginal benefits, as originally suggested for the non-adaptive case by Minoux. The idea is as follows. Suppose we run π_greedy under some fixed realization φ, and select items e₁, e₂, ..., e_k. Let ψ₁ ⊆ ψ₂ ⊆ ... ⊆ ψ_k be the corresponding sequence of partial realizations observed while running π_greedy. The adaptive greedy algorithm computes Δ(e | ψ_i) for all items e and steps i, unless e ∈ dom(ψ_i). Naively, the algorithm thus needs to compute a large number of marginal benefits, which can be expensive to compute. The key insight is that i ↦ Δ(e | ψ_i) is nonincreasing for all e, because of the adaptive submodularity of the objective. Hence, when deciding which item to select as e_i, if we know that Δ(e′ | ψ_j) ≤ Δ(e | ψ_i) for some items e′ and e and some j < i, we may conclude Δ(e′ | ψ_i) ≤ Δ(e | ψ_i), and hence eliminate the need to compute Δ(e′ | ψ_i). The accelerated version of the adaptive greedy algorithm exploits this observation in a principled manner: it computes Δ(e | ψ) for items in decreasing order of the upper bounds known on them, until it finds an item whose value is at least as great as the upper bounds of all other items. Pseudocode of this version of the adaptive greedy algorithm is given as Algorithm 2. In the non-adaptive setting, the use of lazy evaluations has been shown to significantly reduce running times in practice (Leskovec et al.). We evaluated naive and accelerated implementations of the adaptive greedy algorithm on two sensor selection
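The greedy selection loop of Algorithm 1 is short. A minimal sketch of the greedy policy for a hypothetical stochastic coverage instance with independent sensor failures (here Δ(e | ψ) has a simple closed form — the failure probability times the uncovered region size — so no sampling is needed; all names are illustrative):

```python
# Sketch of the adaptive greedy policy for stochastic coverage with
# independent sensor failures.  Instance and names are hypothetical.

regions = {"s1": {1, 2}, "s2": {2, 3}, "s3": {3, 4, 5}, "s4": {5}}
P_WORK = 0.5  # each sensor works independently with this probability

def covered(psi):
    c = set()
    for e, state in psi.items():
        if state == "works":
            c |= regions[e]
    return c

def marginal_benefit(e, psi):
    # Exact Delta(e | psi): only the "works" branch adds coverage.
    return P_WORK * len(regions[e] - covered(psi))

def adaptive_greedy(k, phi):
    """Run the greedy policy for k steps against the (unknown) realization phi."""
    psi = {}
    for _ in range(k):
        candidates = [e for e in regions if e not in psi]
        e_star = max(candidates, key=lambda e: marginal_benefit(e, psi))
        psi[e_star] = phi[e_star]  # select e_star and observe its state
    return psi
```

Running it against a realization where s1 fails shows the adaptivity: after observing the failure, the policy re-weights the remaining candidates by what is actually covered so far, rather than by what was planned.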
For these problems, see the application sections below for details.

Guarantees for the Greedy Policy

In this section we show that if the objective function is adaptive submodular with respect to the probabilistic model of the environment in which we operate, then the greedy policy inherits precisely the performance guarantees of the greedy algorithm for classic (non-adaptive) submodular maximization and submodular coverage problems, such as Maximum k-Cover and Minimum Set Cover, as well as min-sum submodular coverage problems such as Min-Sum Set Cover. In fact, we show that this holds true more generally: α-approximate greedy policies inherit precisely the performance guarantees of α-approximate greedy algorithms for these classic problems. These guarantees suggest that adaptive submodularity is an appropriate generalization of submodularity to policies. In this section we focus on the unit-cost case, in which every item has cost one; in the appendix we provide the proofs omitted in this section and show how our results extend to non-uniform item costs if we greedily maximize the expected benefit/cost ratio.

Maximum Coverage Objective

Here we consider the maximum coverage objective, where the goal is to select k items adaptively to maximize their expected value. The task of maximizing expected value subject to more complex constraints, such as matroid constraints and intersections of matroid constraints, is considered in other work (Golovin & Krause).

Algorithm 2 (accelerated version of the adaptive greedy algorithm):
  Input: budget k; ground set E; distribution P; function f.
  Output: a set S of at most k items.
  begin
    S ← ∅; Q ← empty priority queue;
    foreach e ∈ E, insert e into Q with priority +∞;
    repeat k times:
      e_max ← null;
      while the priority of e_max is below maxpriority(Q):
        pop the top item of Q, recompute its marginal benefit Δ(e | ψ), reinsert it with the new priority, and update e_max if its value is the largest seen;
      remove e_max from Q; set S ← S ∪ {e_max}; observe Φ(e_max) and update ψ;
  end
Here insert(e, δ, Q) inserts e into Q with priority δ, pop(Q) removes and returns the item of Q with greatest priority, maxpriority(Q) returns the maximum priority of the elements in Q, and remove(e, Q) deletes e from Q.

Before stating our result, we require the following definition.

Definition (Policy Truncation). For a policy π, define its level-k truncation π[k] to be the policy obtained by running π until it terminates or until it selects k items, and terminating in the latter case. Formally, the partial realizations of π[k] are those ψ of π with |dom(ψ)| ≤ k.

The following result generalizes the classic result of Nemhauser et al. (1978) that the greedy algorithm achieves a (1 − 1/e)-approximation for the problem of maximizing monotone submodular functions under a cardinality constraint. In the adaptive setting, the greedy policy which selects k items adaptively obtains at least (1 − 1/e) of the value of the optimal policy that selects k items adaptively, measured with respect to f_avg (for the proof, see the appendix, where a further theorem generalizes the result to non-uniform item costs).

Theorem. Fix any α ≥ 1. If f is adaptive monotone and adaptive submodular with respect to the distribution P, and π is an α-approximate greedy policy, then for all policies π* and all
positive integers l and k,
  f_avg(π[l]) ≥ (1 − e^{−l/(αk)}) f_avg(π*[k]).
In particular, setting l = k implies that any α-approximate greedy policy achieves a (1 − e^{−1/α})-approximation to the expected reward of the best policy, if both are terminated after running for an equal number of steps.

If the greedy rule can be implemented only with small absolute error, rather than small relative error, i.e., Δ(e' | ψ) ≥ max_e Δ(e | ψ) − ε, an argument similar to the one used to prove the theorem above shows that f_avg(π[l]) ≥ (1 − e^{−l/k}) f_avg(π*[k]) − lε. This is important, since small absolute error can always be achieved with high probability whenever Δ(e | ψ) can be evaluated efficiently by sampling, which is the case whenever f can be evaluated efficiently and realizations can be sampled efficiently from the posterior P(φ | Φ ~ ψ).

Data-Dependent Bounds

For the maximum coverage objective, adaptive submodular functions have another attractive feature: they allow us to obtain data-dependent bounds on the optimum, in a manner similar to the bounds for the non-adaptive case (Minoux, 1978). Consider the non-adaptive problem of maximizing a monotone submodular function f subject to the constraint |A| ≤ k, and let A* be an optimal solution. Then for any A,
  f(A*) ≤ f(A) + max_{B : |B| ≤ k} Σ_{e ∈ B} ( f(A ∪ {e}) − f(A) ).
Note that, unlike the original objective, the bound on the right-hand side is easily computed: we compute the marginal gain f(A ∪ {e}) − f(A) for each item e and sum the k largest values. Hence we can quickly compute an upper bound on our distance from the optimal value. In practice, such data-dependent bounds can be much tighter than the performance guarantees of Nemhauser et al. (1978) for the greedy algorithm (Leskovec et al., 2007). Note also that the bounds hold for any set A, not only the sets selected by the greedy algorithm. These data-dependent bounds have the following analogue for adaptive monotone submodular functions (see the appendix for a proof).

Lemma (Adaptive Data-Dependent Bound). Suppose we have made observations ψ after selecting dom(ψ). Let π be any policy selecting at most k items. Then for adaptive monotone submodular f,
  E[ f(E(π, Φ), Φ) | Φ ~ ψ ] ≤ E[ f(dom(ψ), Φ) | Φ ~ ψ ] + max_{B : |B| ≤ k} Σ_{e ∈ B} Δ(e | ψ).

Thus, while running any policy, we can efficiently compute a bound on the additional benefit the optimal solution could obtain beyond our current reward, by computing the conditional expected marginal benefits of all items and summing the k largest of them. These bounds can be computed on the fly while running the greedy algorithm, in a manner similar to the non-adaptive bounds discussed by Leskovec et al. (2007).

The Minimum Cost Cover Objective

Another natural objective is to minimize the number of items selected while ensuring that a sufficient level of value is obtained. This leads to the Adaptive Stochastic Minimum Cost Coverage problem described earlier, namely finding a policy minimizing c_avg(π) subject to f(E(π, φ), φ) ≥ Q for all φ. Recall that c_avg(π) is the expected cost of π, which in the unit-cost case equals the expected number of items selected. If the objective is adaptive monotone submodular, this is an adaptive version of the Minimum Submodular Cover problem. Recall that the greedy algorithm is known to give a (ln(Q) + 1)-approximation for Minimum Submodular Cover, assuming the coverage function is integer-valued, monotone, and submodular (Wolsey, 1982).
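The non-adaptive data-dependent bound discussed above is cheap to evaluate: given the current value f(A) and the marginal gains of the remaining items, it is the current value plus the sum of the k largest gains. A minimal sketch (the function name is ours):

```python
import heapq

def data_dependent_bound(current_value, marginal_gains, k):
    """Upper bound on the optimum f(A*) for the cardinality-k problem:
    the current value f(A) plus the sum of the k largest marginal gains
    f(A + e) - f(A) over the remaining items e."""
    return current_value + sum(heapq.nlargest(k, marginal_gains))
```

For instance, with current value 5.0, remaining gains [3, 1, 2, 0.5], and k = 2, the bound is 5.0 + 3 + 2 = 10.0.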
Adaptive Stochastic Minimum Cost Coverage is also related to the (Noisy) Interactive Submodular Set Cover problem studied by Guillory and Bilmes, which considers the worst-case setting: no distribution over states is given, and the states are instead realized in an adversarial manner. Similar results for active learning were proved by Kosaraju et al. and Dasgupta, as we discuss in more detail below.

We assume throughout this section that there exists a quality threshold Q such that f(E, φ) = Q for all φ. Note that, as discussed earlier, if we replace f by a new function g(A, φ) := min(f(A, φ), Q) for a constant Q, then g is adaptive submodular whenever f is. Thus, if the maximum achievable value varies across realizations, we can instead use the greedy algorithm on the function truncated at a threshold Q achievable in all realizations.

In contrast to Adaptive Stochastic Maximization, additional subtleties arise in the coverage problem. In particular, it is not enough for a policy to achieve value Q for the true realization in order to terminate; it must also have proof of this fact. Formally, we require that the policy cover f:

Definition (Coverage). Let ψ be the partial realization encoding all states observed during the execution of a policy π under realization φ. Given f, we say that π covers φ with respect to f if f(dom(ψ), φ') = Q for all φ' consistent with ψ. We say that π covers f if it covers every realization with respect to f.

Coverage is defined in such a way that, upon terminating, we might not know which realization is the true one, yet we are guaranteed to have achieved the maximum reward in every possible case, i.e., under every realization consistent with our observations. We obtain results for the average-case and worst-case cost objectives.

Minimizing the Average Cost

Before presenting our approximation guarantee for Adaptive Stochastic Minimum Cost Coverage, we introduce a special class of instances, called self-certifying instances. We make this distinction because the greedy policy has stronger performance guarantees for self-certifying instances, and such instances arise naturally in applications. For example, the Stochastic Submodular Cover and Stochastic Set Cover instances, the Adaptive Viral Marketing instances, and the active learning instances considered in this article are all self-certifying.

Definition (Self-Certifying Instances). An instance of Adaptive Stochastic Minimum Cost Coverage is self-certifying if, whenever a policy achieves the maximum possible value for the true realization, it immediately has proof of this fact. Formally, an instance is self-certifying if for all ψ and all φ, φ' consistent with ψ, we have f(dom(ψ), φ) = Q if and only if f(dom(ψ), φ') = Q.

One class of self-certifying instances that commonly arises consists of those in which f(A, φ) depends only on the states of the items in A, and in which there is a uniform maximum amount of reward that can be obtained across realizations. Formally, we have the following observation.

Proposition. Fix an instance of Adaptive Stochastic Minimum Cost Coverage. If there exist Q and g such that f(A, φ) = g({(e, φ(e)) : e ∈ A}) and f(E, φ) = Q for all φ, then the instance is self-certifying.

Proof. Fix ψ and any φ, φ' consistent with ψ. Assuming the existence of such a g, and treating partial realizations as relations, i.e., as sets of (item, state) pairs, we have f(dom(ψ), φ) = g(ψ) = f(dom(ψ), φ'), since φ and φ' both agree with ψ on dom(ψ). Hence f(dom(ψ), φ) = Q iff f(dom(ψ), φ') = Q. □

For our results on minimum cost coverage, we also need a
stronger monotonicity condition and a stronger submodularity condition.

Definition (Strong Adaptive Monotonicity). A function f is strongly adaptive monotone with respect to P if, informally, selecting more items never hurts with respect to the expected reward. Formally, for all ψ, all e ∉ dom(ψ), and all possible outcomes o, we require
  E[ f(dom(ψ), Φ) | Φ ~ ψ ] ≤ E[ f(dom(ψ) ∪ {e}, Φ) | Φ ~ ψ, Φ(e) = o ].

Strong adaptive monotonicity implies adaptive monotonicity; the latter only requires that selecting more items never hurts in expectation over the observed outcome, i.e., E[f(dom(ψ), Φ) | Φ ~ ψ] ≤ E[f(dom(ψ) ∪ {e}, Φ) | Φ ~ ψ].

To define strong adaptive submodularity, we first need the following extension of the conditional expected marginal benefit. Given partial realizations ψ ⊆ ψ', let
  Δ(ψ' | ψ) := E[ f(dom(ψ'), Φ) − f(dom(ψ), Φ) | Φ ~ ψ' ].

Definition (Strong Adaptive Submodularity). A function f is strongly adaptive submodular with respect to a distribution P if it is adaptive submodular and, moreover, the expected marginal benefit of a fixed item does not increase as more items are selected and their states observed, even when we additionally condition on item-observation pairs. Formally, f is adaptive submodular and, for every subrealization ψ ⊆ ψ', every item e ∉ dom(ψ'), and every outcome o, Δ(ψ' ∪ {(e, o)} | ψ') ≤ Δ(ψ ∪ {(e, o)} | ψ). In words, conditioning on additional observations and adding the items in dom(ψ') \ dom(ψ) does not increase the expected marginal benefit of e.

A sufficient condition for f to be strongly adaptive submodular with respect to P is that f be adaptive submodular with respect to P and pointwise submodular, i.e., f(·, φ) is submodular for each fixed φ; we prove this in the appendix. It is worth noting that pointwise submodularity is not sufficient to establish adaptive submodularity, as a simple counterexample shows even in the two-item case. We can now state our main result for the average-case cost c_avg.

Theorem. Suppose f is strongly adaptive submodular and strongly adaptive monotone with respect to P, and suppose there exists Q such that f(E, φ) = Q for all φ. Let η be any value such that f(S, φ) > Q − η implies f(S, φ) = Q for every S and φ. Let δ := min_φ P(φ) be the minimum probability of any realization, let π* be an optimal policy minimizing the expected number of items selected to guarantee every realization is covered, and let π be an α-approximate greedy policy with respect to the item costs. Then in general
  c_avg(π) ≤ α c_avg(π*) ( ln(Q/(δη)) + 1 )²,
and for self-certifying instances
  c_avg(π) ≤ α c_avg(π*) ( ln(Q/η) + 1 )².
Note that any η in the valid range is an allowed choice; the bounds for general and self-certifying instances are tightest for the largest valid η.

Historical note: an earlier version of this theorem claimed logarithmic approximation factors rather than the squared-logarithmic factors present here. Unfortunately, its proof was flawed, as pointed out to us by Nan and Saligrama. Determining whether the logarithmic bounds hold remains an interesting open problem. In particular, it remains open whether c_avg(π) = O(α c_avg(π*) ln(Q/(δη))) for general instances and c_avg(π) = O(α c_avg(π*) ln(Q/η)) for self-certifying instances under the conditions specified in the theorem. It also remains open whether the strong adaptive submodularity
condition is required.

Minimizing the Worst-Case Cost

For the worst-case cost c_wc, strong adaptive monotonicity and strong adaptive submodularity are not required; adaptive monotonicity and adaptive submodularity suffice to obtain the following result.

Theorem. Suppose f is adaptive monotone and adaptive submodular with respect to P. Let η be any value such that f(S, φ) > Q − η implies f(S, φ) = Q for every S and φ. Let δ := min_φ P(φ) be the minimum probability of any realization, let π* be an optimal policy minimizing the worst-case number of queries needed to guarantee every realization is covered, and let π be an α-approximate greedy policy. Finally, let Q be the maximum possible expected reward. Then
  c_wc(π) ≤ α c_wc(π*) ( ln(Q/(δη)) + 1 ).

Proofs of the two theorems above are given in the appendix. Thus, even though adaptive submodularity is defined with respect to a particular distribution, perhaps surprisingly the adaptive greedy algorithm is competitive even against adversarially chosen realizations, i.e., against a policy optimized to minimize the worst-case cost. This therefore suggests that, if we do not have a strong prior, we can obtain the strongest guarantees by choosing the distribution that is as uniform as possible (maximizing δ) while still guaranteeing adaptive submodularity.

Discussion

Note that the approximation factor for self-certifying instances reduces to the approximation guarantee of the greedy algorithm for set cover instances with n elements in the case of a deterministic distribution P. Moreover, for a deterministic distribution there is no distinction between the average-case and worst-case cost. Hence an immediate corollary of the hardness result of Feige mentioned earlier is that, for every constant c < 1, there is no polynomial time algorithm achieving a c·ln(n)-approximation for self-certifying instances of Adaptive Stochastic Min Cost Cover, with either the c_avg or c_wc objective, unless NP ⊆ DTIME(n^{O(log log n)}). It remains open to determine whether Adaptive Stochastic Min Cost Cover with the c_avg objective admits a logarithmic approximation for self-certifying instances via a polynomial time algorithm, and in particular whether the greedy policy has such an approximation guarantee. However, via a lemma we can show that Feige's result also implies a hardness of approximation for general (non-self-certifying) instances of Adaptive Stochastic Min Cost Cover with either objective, unless NP ⊆ DTIME(n^{O(log log n)}); in this sense, the two theorems above cannot be improved by more than a logarithmic factor under reasonable complexity-theoretic assumptions.

The Min-Sum Cover Objective

Yet another natural objective is the min-sum objective, in which unrealized reward incurs a cost in each time step, and the goal is to minimize the total cost incurred.

Background on the Non-Adaptive Problem. Perhaps the simplest form of coverage problem with a min-sum objective is the Min-Sum Set Cover problem (Feige, Lovász, & Tetali, 2004). In this problem, the input is a set system, the output is a permutation of the sets, and the goal is to
minimize the sum of the element coverage times, where the coverage time of an element u is the index of the first set in the permutation that contains u. This problem and its generalizations with the min-sum objective are useful for modeling processing costs in certain applications, for example ordering diagnostic tests to identify a disease as cheaply as possible (Kaplan et al.), ordering multiple filters to be applied to database records while processing a query (Munagala et al.), or ordering multiple heuristics to run on boolean satisfiability instances as a means to solve them faster in practice (Streeter & Golovin).

A particularly expressive generalization of Min-Sum Set Cover is studied under the names Min-Sum Submodular Cover (Streeter & Golovin) and L1-Set Cover (Golovin et al.); the former paper extends the greedy algorithm to a natural online variant of the problem, while the latter studies a parameterized family of set cover problems with objectives analogous to minimizing the Lp norms of the coverage times of Min-Sum Set Cover instances. In the Min-Sum Submodular Cover problem, there is a monotone submodular function f defining the reward obtained from a collection of elements, and an integral cost c(e) for each element. The output is a sequence of all the elements, σ. To encode a Min-Sum Set Cover instance, we let f(A) be the number of elements covered by the sets in A. For a budget t, let σ⟨t⟩ be the set of elements appearing in the sequence within a prefix of total cost at most t. We wish to minimize
  Σ_{t ≥ 0} ( f(E) − f(σ⟨t⟩) ).
Feige et al. proved that for Min-Sum Set Cover the greedy algorithm achieves a 4-approximation to the minimum cost, and that this is optimal in the sense that no polynomial time algorithm can achieve a (4 − ε)-approximation for any constant ε > 0, unless P = NP. Interestingly, the greedy algorithm also achieves a 4-approximation for the general Min-Sum Submodular Cover problem (Streeter & Golovin; Golovin et al.).

The Adaptive Stochastic Min-Sum Cover Problem. In this article, we extend this result to an adaptive version of Min-Sum Submodular Cover. For clarity's sake we consider the unit-cost case here, and show how to extend adaptive submodularity to handle general costs in the appendix. In the adaptive version of the problem, the min-sum cost cΣ(π) plays the role that f_avg played for the maximum coverage objective, and the goal is to find a policy minimizing cΣ; we call this problem the Adaptive Stochastic Min-Sum Cover problem. A key difference between this objective and the minimum cost cover objective is that here the cost incurred in each step is the fractional extent to which f is left uncovered in the true realization, whereas with the minimum cost cover objective we are charged in full in each step in which f is not completely covered in the true realization. According to this definition, we prove the following result for the Adaptive Stochastic Min-Sum Cover problem with arbitrary item costs in the appendix.

Theorem. Fix any α ≥ 1. If f is adaptive monotone and adaptive submodular with respect to the distribution P, and π is an α-approximate greedy policy with respect to the item costs, then for all policies π*, cΣ(π) ≤ 4α cΣ(π*).
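For intuition on the min-sum objective, the cost of a fixed (non-adaptive) sequence can be computed directly from its prefix values. The sketch below assumes unit costs and a user-supplied reward function `f` on prefixes, and assumes the sequence eventually reaches full coverage; all names are illustrative.

```python
def min_sum_cover_cost(order, f, total):
    """Min-sum cost of a fixed sequence: at each step t, pay the reward
    still uncovered, total - f(prefix of length t), stopping once the
    sequence reaches full coverage. Assumes unit item costs."""
    cost = 0
    for t in range(len(order) + 1):
        remaining = total - f(order[:t])
        if remaining <= 0:
            return cost
        cost += remaining
    return cost  # sequence never reached full coverage
```

For a set cover instance with sets {1, 2} and {3}, the order ({1, 2}, {3}) costs 3 + 1 = 4, matching the sum of element coverage times 1 + 1 + 2.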
Application: Stochastic Submodular Maximization

As our first application, consider the sensor placement problem introduced earlier. Suppose we would like to monitor a spatial phenomenon, such as temperature in a building. We discretize the environment into a set E of locations, and would like to pick a subset A ⊆ E of k locations that is most informative, where we use a set function f̂ to quantify informativeness. Krause and Guestrin show that many natural objective functions, such as the reduction in predictive uncertainty measured in terms of Shannon entropy with conditionally independent observations, are monotone submodular.

Now consider the problem where the informativeness of a sensor is unknown before deployment. For example, when deploying cameras for surveillance, the location of objects and the associated occlusions may not be known in advance, or varying amounts of noise may reduce the sensing range. We can model such extensions by assigning a state Φ(e) to each possible location e, indicating the extent to which a sensor placed at e is working. To quantify the value of a set of sensor deployments under a realization φ indicating the extent to which the various sensors are working, we first let the pair (e, o) represent the placement of a sensor in location e that ends up in state o, and suppose we have a function f̂ that quantifies the informativeness of a set of sensor deployments in arbitrary states; note that f̂ is a set function taking a set of (sensor deployment, state) pairs as input. The utility of placing sensors at the locations in A under realization φ is then f(A, φ) := f̂({(e, φ(e)) : e ∈ A}). We aim to adaptively place k sensors to maximize our expected utility. We assume that sensor failures at each location are independent, i.e., P(φ) = Π_e P(Φ(e) = φ(e)), where P(Φ(e) = o) is the probability that a sensor placed at location e will be in state o.

Asadpour et al. studied a special case of this problem, in which sensors either fail completely (in which case they contribute no value) or work perfectly, under the name Stochastic Submodular Maximization. They proved that the adaptive greedy algorithm obtains a (1 − 1/e)-approximation to the optimal adaptive policy, provided f̂ is monotone submodular. We extend their result to multiple types of failures by showing that f is adaptive submodular with respect to this distribution and invoking our maximization guarantee. Fig. 1 illustrates an instance of Stochastic Submodular Maximization in which f̂ is the cardinality of a union of sets indexed by (item, state) pairs.

Theorem. Fix a prior with independent sensor states and an integer k, and let the objective function f̂ be monotone submodular. Let π be an α-approximate greedy policy attempting to maximize f, and let π* be any policy. Then for all positive integers l and k,
  f_avg(π[l]) ≥ (1 − e^{−l/(αk)}) f_avg(π*[k]).
In particular, the greedy policy satisfies f_avg(π_greedy[k]) ≥ (1 − 1/e) f_avg(π*[k]).

Proof. To prove the theorem, we first prove that f is adaptive monotone and adaptive submodular in this model, and then apply our general maximization guarantee. Adaptive monotonicity is readily proved by observing that f̂ is monotone.
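To make the model concrete before the submodularity proof, here is a toy numeric sketch of the conditional expected marginal benefit Δ(e | ψ), which in this model is an expectation over the independent state of location e. The regions, states, and probabilities below are invented for illustration.

```python
# Toy model: selecting location e reveals a state o with probability
# p[e][o]; the reward f_hat of a set of (location, state) pairs is the
# size of the union of the regions they cover. All data here is invented.
regions = {
    ('a', 'ok'): {1, 2, 3}, ('a', 'fail'): set(),
    ('b', 'ok'): {3, 4},    ('b', 'fail'): set(),
    ('c', 'ok'): {5},       ('c', 'fail'): set(),
}
p = {'a': {'ok': 0.9, 'fail': 0.1},
     'b': {'ok': 0.5, 'fail': 0.5},
     'c': {'ok': 1.0, 'fail': 0.0}}

def f_hat(pairs):
    covered = set()
    for key in pairs:
        covered |= regions[key]
    return len(covered)

def expected_marginal(e, psi):
    """Delta(e | psi): expected gain of e over its independent state."""
    base = f_hat(psi.items())
    return sum(prob * (f_hat(list(psi.items()) + [(e, o)]) - base)
               for o, prob in p[e].items())

def greedy_step(psi, available):
    """One step of the adaptive greedy policy."""
    return max(available, key=lambda e: expected_marginal(e, psi))
```

Starting from no observations, location 'a' wins (expected gain 0.9 × 3 = 2.7); after observing 'a' working, the reliable but small 'c' beats the redundant 'b'.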
Moving on to adaptive submodularity, fix any ψ ⊆ ψ' and any e ∉ dom(ψ'). We aim to show Δ(e | ψ') ≤ Δ(e | ψ). Intuitively this is clear: the expected marginal benefit of e can only be smaller when added to the larger base set, namely dom(ψ') compared to dom(ψ), since the realizations of distinct items are independent. To prove this rigorously, we define a coupled distribution μ over pairs of realizations (φ, φ') with φ ~ ψ and φ' ~ ψ': formally, μ(φ, φ') is nonzero only if φ and φ' agree on all items outside dom(ψ'), in which case it is proportional to the product of the independent state probabilities of the free coordinates. Note that agreement outside dom(ψ') implies φ(e) = φ'(e) as well, since e ∉ dom(ψ'); also note that, by independence of the item states, the marginals of μ are the required posteriors. Calculating Δ(e | ψ) and Δ(e | ψ') using μ, we see that for every (φ, φ') in the support of μ, treating partial realizations as sets of (item, state) pairs, we have ψ ⊆ ψ' and the same added pair (e, φ(e)) = (e, φ'(e)); hence by the submodularity of f̂,
  f̂(ψ' ∪ {(e, φ'(e))}) − f̂(ψ') ≤ f̂(ψ ∪ {(e, φ(e))}) − f̂(ψ).
Taking expectations with respect to μ yields Δ(e | ψ') ≤ Δ(e | ψ), which completes the proof. □

Figure: Illustration of part of a Stochastic Set Cover instance. Shown are the supports of two distributions over sets, indexed by items marked blue and yellow.

Application: Stochastic Submodular Coverage

Suppose that instead of adaptively placing k unreliable sensors to maximize the utility of the information obtained, as discussed above, we have a quota on utility and wish to adaptively place the minimum number of unreliable sensors required to achieve this quota. This amounts to a coverage version of the Stochastic Submodular Maximization problem, which we call Stochastic Submodular Coverage. In the Stochastic Submodular Coverage problem, we suppose there is a function f̂ that quantifies the utility of a set of sensors in arbitrary states, as before, and the states of the sensors are independent. Our goal is to obtain a quota Q of utility at minimum cost; thus we define the objective f(A, φ) := min(Q, f̂({(e, φ(e)) : e ∈ A})), and we want to find a policy π covering every realization and minimizing c_avg(π). We additionally assume that the quota can always be obtained using sufficiently many sensor placements; formally, this amounts to f(E, φ) = Q for all φ. We obtain the following result, whose proof we defer to the end of this section.

Theorem. Fix a prior with independent sensor states, let f̂ be a monotone submodular function, and fix Q > 0 such that f := min(Q, f̂) satisfies f(E, φ) = Q for all φ. Let η be any value such that f(S, φ) > Q − η implies f(S, φ) = Q for all S and φ. Finally, let π be an α-approximate greedy policy for maximizing f, and let π* be any policy covering f. Then
  c_avg(π) ≤ α c_avg(π*) ( ln(Q/η) + 1 )².

A Special Case: the Stochastic Set Coverage Problem. The Stochastic Submodular Coverage problem is a generalization of the Stochastic Set Coverage problem (Goemans & Vondrák). In Stochastic Set Coverage, the underlying submodular objective f̂ is the number of elements of some ground set U that are covered. The items are sets: each item e is associated with a distribution over subsets of U, and when e is selected, a set is sampled from its distribution, as illustrated in the figure above. The problem is to adaptively select items until all elements of U are covered by sampled sets, while minimizing the expected number of items selected. Like us, Goemans and Vondrák assume that the subsets are sampled
independently for each item, and that every element of U can be covered in every realization. Goemans and Vondrák primarily investigated the adaptivity gap of Stochastic Set Coverage, quantifying how much adaptive policies can outperform non-adaptive policies, for variants in which items can and cannot be repeatedly selected; they prove adaptivity gaps that are logarithmic in the former case, and they also provide an approximation algorithm. More recently, Liu et al. considered a special case of Stochastic Set Coverage in which each item may be in one of two states. They were motivated by a streaming database problem, in which a collection of queries sharing common filters must all be evaluated on a stream of elements. They transform the problem to a Stochastic Set Coverage instance in which (filter, query) pairs may be covered by filter evaluations; which pairs are covered by a given filter evaluation depends on the binary outcome of evaluating the filter on the stream element. The resulting instances satisfy the assumption that every element can be covered in every realization. They study, among other algorithms, the adaptive greedy algorithm specialized to their setting, and show that, provided the subsets are sampled independently for each item, it achieves an approximation guarantee; recall n := |U|. Moreover, Liu et al. report that it empirically outperforms a number of other algorithms in their experiments.

The adaptive submodularity framework allows us to prove an approximation guarantee for the richer item distributions over subsets than those considered by Liu et al., as a corollary of the theorem above. Specifically, we obtain a result for the Stochastic Set Coverage problem with arbitrarily many outcomes per item. We model the problem by letting Φ(e) ⊆ U indicate the random set sampled when e is selected; since the sampled sets are independent across items, the prior factorizes. We let f(A, φ) := |∪_{e ∈ A} φ(e)| be the number of elements covered by the sets sampled from the items in A. The previous work mentioned above assumes f(E, φ) = n for all φ; therefore we may set Q = n. Since the range of f includes only integers, we may set η = 1. Applying the theorem above then yields the following result.

Corollary. The adaptive greedy algorithm achieves a (ln(n) + 1)²-approximation for Stochastic Set Coverage, where n = |U| is the size of the ground set.

We now provide the proof of the Stochastic Submodular Coverage theorem.

Proof. We ultimately prove the theorem by applying the bound for self-certifying instances of Adaptive Stochastic Minimum Cost Coverage; the proof mostly consists of justifying this application. Without loss of generality we may assume f̂ is truncated at Q; otherwise we may use min(f̂, Q) in its place, which removes the need to truncate f. Since we have already established the adaptive submodularity of f in the proof of the previous theorem, and given this assumption, to apply the bound we need only show that f is strongly adaptive monotone and strongly adaptive submodular, and that the instances under consideration are self-certifying. We begin by showing strong
adaptive monotonicity. Fix a partial realization ψ, an item e ∉ dom(ψ), and a state o. Treating partial realizations as sets of (item, state) pairs and using the monotonicity of f̂, we obtain
  E[ f(dom(ψ), Φ) | Φ ~ ψ ] ≤ E[ f(dom(ψ) ∪ {e}, Φ) | Φ ~ ψ, Φ(e) = o ],
which is equivalent to the strong adaptive monotonicity condition.

Next we show strong adaptive submodularity by showing that f is pointwise submodular; we have already proven its adaptive submodularity. Pointwise submodularity clearly holds, since for each fixed φ the function f(·, φ) is monotone submodular because f̂ is, by assumption.

Finally, we prove the instances are self-certifying. Consider any ψ and any φ, φ' consistent with ψ. Since f(A, φ) depends only on the states of the items in A, and since f(E, φ'') = Q for all φ'' by assumption, it follows that f(dom(ψ), φ) = Q iff f(dom(ψ), φ') = Q. The instance has thus been shown to satisfy the assumptions of the bound for self-certifying instances, and hence we may apply it to obtain the claimed approximation guarantee. □

Figure: Illustration of the Adaptive Viral Marketing problem. Left: the underlying social network. Middle: the people influenced, and the observations obtained, after one person is selected.

Application: Adaptive Viral Marketing

For our next application, consider the following scenario. Suppose we would like to generate demand for a genuinely novel product. Potential customers do not realize how valuable the new product is, and conventional advertisements fail to convince them to try it. In this case, we may try to spur demand by offering a special promotional deal to a select few people, in the hope that demand builds virally, propagating through the social network as people recommend the product to their friends and associates. Supposing we know something about the structure of the social network people inhabit, and about how ideas, innovations, and new product adoptions diffuse through it, this begs the question: to which initial set of people should we offer the promotional deal, in order to spur maximum demand for our product? This is broadly known as the viral marketing problem; the same problem arises in the context of spreading technological, cultural, and intellectual innovations, broadly construed. In the interest of a unified terminology, we follow Kempe et al. and talk of spreading influence through the social network: we say people are active if they have adopted the idea or innovation in question and inactive otherwise, and we say a influences b if a convinces b to adopt the idea or innovation.

There are many ways to model the diffusion dynamics governing the spread of influence in a social network. We consider a basic model, the Independent Cascade model, described in detail below. For this model, Kempe et al. obtained a very interesting result: they showed that the eventual spread of influence (the ultimate number of customers that demand the product) is a monotone submodular function of the seed set of people initially selected. In conjunction with the result of Nemhauser et al., this implies that the greedy algorithm
obtains at least (1 − 1/e) of the value of the best feasible seed set of size at most k, i.e., arg max_S {σ(S) : |S| ≤ k}, where we may interpret k as the budget of the promotional campaign. Though Kempe et al. consider only the maximum coverage version of the viral marketing problem, their result in conjunction with that of Wolsey also implies that the greedy algorithm can obtain a quota Q of value at a cost of at most (ln(Q) + 1) times the cost of the optimal set, i.e., arg min_S {c(S) : σ(S) ≥ Q}, provided σ takes only integral values.

Adaptive Viral Marketing. The viral marketing problem has a natural adaptive analog. Instead of selecting a fixed set of people in advance, we may select a person to offer the promotion to, make some observations about the resulting spread of demand for our product, and repeat (see the figure above for an illustration). We use the idea of adaptive submodularity to obtain results analogous to those of Kempe et al. in the adaptive setting. Specifically, we show that the greedy policy obtains at least (1 − 1/e) of the value of the best policy. Moreover, we extend this result beyond the case in which the reward is simply the number of influenced people, to any nonnegative monotone submodular function of the set of people influenced. We also consider the minimum cost cover objective, for which we show the greedy policy obtains a squared-logarithmic approximation. To our knowledge, no approximation results for the adaptive variant of the viral marketing problem were previously known.

The Independent Cascade Model. In this model, the social network is a directed graph in which each vertex is a person, and each edge (u, v) is associated with a binary random variable x_uv indicating whether u influences v: x_uv = 1 if u will influence v once u is activated, and x_uv = 0 otherwise. The random variables x_uv are independent, with known means p_uv := P(x_uv = 1). We call an edge with x_uv = 1 a live edge and an edge with x_uv = 0 a dead edge. When a node u is activated, each edge (u, v) to a neighbor v is sampled, and v is activated if the edge is live; influence can then spread from u's neighbors to their neighbors, and so on, according to the same process. Once active, nodes remain active throughout the process; however, Kempe et al. show that this assumption is without loss of generality and can be removed.

The Feedback Model. In the adaptive viral marketing problem under the independent cascade model, the items correspond to people whom we can activate by offering them the promotional deal. How we define the states Φ(e) depends on what information we obtain as a result of activating a given person. Since the spread of influence depends on the state of the social network as a whole, and not of the activated person alone, we model the states as functions from the edge set to {0, 1, ?}: Φ(e)(u, v) = 0 means that activating e revealed (u, v) to be dead, Φ(e)(u, v) = 1 means that activating e revealed (u, v) to be live, and Φ(e)(u, v) = ? means that activating e did not reveal the status of x_uv. We require each realization to be consistent and complete.
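The Independent Cascade dynamics described above are straightforward to simulate; the sketch below samples each edge's live/dead status lazily, and at most once. The graph encoding and all names are our own.

```python
import random

def independent_cascade(graph, p, seeds, rng=None):
    """Simulate the Independent Cascade model.
    graph: dict node -> list of out-neighbors; p[(u, v)]: edge probability.
    Returns the set of nodes activated starting from `seeds`."""
    rng = rng or random.Random(0)
    active, frontier = set(seeds), list(seeds)
    sampled = {}                      # edge -> live? (sampled lazily, once)
    while frontier:
        u = frontier.pop()
        for v in graph.get(u, []):
            if v in active:
                continue
            if (u, v) not in sampled:
                sampled[(u, v)] = rng.random() < p[(u, v)]
            if sampled[(u, v)]:       # live edge: v becomes active
                active.add(v)
                frontier.append(v)
    return active
```

With edge probabilities of 0 or 1 the process is deterministic, which makes the live-edge/dead-edge semantics easy to check by hand.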
Consistency means that no edge is declared both live and dead by two states; completeness means that the status of each edge is revealed by some activation, i.e., for the set of observed states there exists a consistent and complete realization. Thus each realization φ encodes x_uv for every edge (u, v), and we let A(φ) denote the set of live edges encoded by φ. There are several candidates for the sets of edges we might be allowed to observe when activating a node. Here we consider what we call the Full-Adoption Feedback model: after activating e, we get to see the status (live or dead) of all edges exiting nodes reachable from e via live edges, i.e., reachable under the true realization. We illustrate this feedback model in the figure above.

The Objective Function. In the simplest case, the reward for influencing a set U of nodes is f̂(U) := |U|. Kempe et al. obtain a slightly more general result in which each node u has a weight indicating its importance and the reward is the total weight of the influenced nodes. We generalize their result further to arbitrary nonnegative monotone submodular reward functions f̂. This allows us, for example, to encode a value associated with the diversity of the set of nodes influenced, such as the notion that it is better to achieve moderate market penetration in five different, equally important demographic segments of the market than total penetration in one and none in the others.

Guarantees for the Maximum Coverage Objective. We are now ready to formally state our result for the maximum coverage objective.

Theorem. The greedy policy π_greedy obtains at least (1 − 1/e) of the value of the best policy for the Adaptive Viral Marketing problem with arbitrary monotone submodular reward functions, in the independent cascade and full-adoption feedback models discussed above. That is, if σ(S, φ) is the set of nodes activated when the seed set of activated nodes is S under realization φ, f̂ is an arbitrary nonnegative monotone submodular function indicating the reward for influencing a set of nodes, and the objective function is f(S, φ) := f̂(σ(S, φ)), then for all policies π and all positive integers l and k,
  f_avg(π_greedy[l]) ≥ (1 − e^{−l/k}) f_avg(π[k]).
More generally, any α-approximate greedy policy π' satisfies f_avg(π'[l]) ≥ (1 − e^{−l/(αk)}) f_avg(π[k]).

Proof. Adaptive monotonicity follows immediately from the fact that f̂ is monotone. Thus it suffices to prove that f is adaptive submodular with respect to the probability distribution on realizations, so that we can invoke the general maximization guarantee to complete the proof. Fix ψ ⊆ ψ' and e ∉ dom(ψ'); we must show Δ(e | ψ') ≤ Δ(e | ψ). To prove this rigorously, we define a coupled distribution μ over pairs of realizations (φ, φ') with φ ~ ψ and φ' ~ ψ'. Note that, given the feedback model, a realization is a function of the random variables x_uw indicating the status of each edge; for conciseness we define μ implicitly in terms of a joint distribution on two distinct sets of random edge statuses, {x_uw} and {x'_uw}, which induce the realizations φ and φ' respectively. Next, let us say a partial realization ψ observes an edge (u, w) if it reveals its status as live or dead. For each edge observed by ψ, the random variable x_uw is deterministically set to the status observed; similarly, for each edge
observed by ψ', the random variable x'_uw is deterministically set to the status observed. Note that since ψ ⊆ ψ', the statuses of edges observed by both agree, so these constraints are compatible with the intended support. Additionally, we constrain the statuses of edges unobserved by either partial realization to agree, meaning x_uw = x'_uw for such edges. Otherwise, these constraints leave the following degrees of freedom: we may select x_uw for each unobserved edge, and we select them independently with P(x_uw = 1) = p_uw as in the prior. Hence, for pairs of edge-status vectors satisfying the above constraints, the coupled probability is the product of p_uw or (1 − p_uw) over the free edges, and it is zero otherwise. Note this is a valid distribution with the claimed support.

Next, fix (φ, φ') in the support of μ. Recall σ(S, φ) is the set of nodes activated when the seed set is S under realization φ. Let C := σ(dom(ψ), φ) and C' := σ(dom(ψ'), φ') denote the sets of active nodes after selecting dom(ψ) and dom(ψ') under the respective realizations, and let D := σ(dom(ψ) ∪ {e}, φ) and D' := σ(dom(ψ') ∪ {e}, φ') be the corresponding sets when e is also selected. By the submodularity of f̂, to establish Δ(e | ψ') ≤ Δ(e | ψ) it suffices to show C ⊆ C' and D' \ C' ⊆ D \ C.

We start by proving C ⊆ C'. Fix v ∈ C; then there exists a path from some node in dom(ψ) to v, and moreover every edge on the path is live and also observed live in ψ, by the definition of the feedback model. Since (φ, φ') is in the support of μ, every edge on the path is also live under φ', as edges observed by ψ must have the same status under both realizations. The path thus activates v when dom(ψ') ⊇ dom(ψ) is selected, so clearly v ∈ C'. We conclude C ⊆ C'.

Next we show D' \ C' ⊆ D \ C. Fix v ∈ D' \ C', and suppose by way of contradiction that v ∉ D \ C. Since v ∈ D', there exists a path P from e to v whose edges are all live under φ'. If v ∉ D, then under φ at least one edge of P must be dead; let (u, w) be such an edge. Since its status differs under φ and φ', and (φ, φ') is in the support of μ, the edge (u, w) must be observed by ψ' but not by ψ. Under the feedback model, u must then be active once dom(ψ') is selected, i.e., u ∈ C'. But this implies that all nodes reachable from u via edges live under φ' are also in C', including v, since the subpath of P from u to v is live under φ'. This contradicts v ∈ D' \ C'. Hence v ∈ D; and if v ∈ C, then v ∈ C' since C ⊆ C', contradicting v ∉ C'. Therefore v ∈ D \ C, contradicting our assumption, and we conclude D' \ C' ⊆ D \ C.

We then use these facts, together with the submodularity of f̂, to conclude that the marginal gain of e is no larger given ψ' than given ψ for every coupled pair; taking expectations over μ gives Δ(e | ψ') ≤ Δ(e | ψ), which completes the proof. □

Comparison with Stochastic Submodular Maximization. It is worth contrasting the Adaptive Viral Marketing problem with the Stochastic Submodular Maximization problem above. In the latter, we can think of the items as random, independently distributed sets. In Adaptive Viral Marketing, by contrast, the random sets of nodes influenced by selecting a fixed node depend on the random statuses of the edges, and hence may be correlated across items. Nevertheless, we obtain the same (1 − 1/e) approximation factor for both problems.

A Comment on the Myopic Feedback Model. In the conference version of this article (Golovin & Krause), we considered an alternate feedback model, called the myopic feedback model, in which after activating e we see the status of only the edges exiting e in the social network. We claimed that the objective defined previously is adaptive submodular in the independent cascade model with myopic feedback, and hence that the greedy policy obtains a (1 − 1/e)-approximation for it. We hereby retract this claim. Furthermore, we give a
counterexample demonstrating that f is not adaptive submodular under myopic feedback. Consider a graph with vertices u, v, w and edges (u, v) and (v, w), with edge parameters p_uv and p_vw, and let f̂(U) := |U|. Construct ψ ⊆ ψ' accordingly, where ψ' contains the additional observation that the edge entering the item in question is dead. Then the marginal benefit of the item given ψ is one if the edge is dead and zero if it is live, and the former occurs with probability less than one; in contrast, since ψ' contains the observation that the edge is dead, the marginal benefit given ψ' equals one. Hence Δ(e | ψ') > Δ(e | ψ), which violates adaptive submodularity. However, we conjecture that the greedy policy still obtains a constant factor approximation even in the myopic feedback model.

The Minimum Cost Cover Objective. We may also wish to adaptively run the campaign until a certain level of market penetration has been achieved, e.g., until a certain number of people have adopted the product. We can formalize this goal using the minimum cost cover objective: we are given a quota Q quantifying the desired level of market penetration, and must adaptively select nodes to activate until the set of active nodes satisfies the quota. We obtain the following result.

Theorem. Fix a nonnegative monotone submodular function f̂ giving the reward for influencing a set of nodes, and a quota Q. Suppose the objective is f(S, φ) := min(Q, f̂(σ(S, φ))), where σ(S, φ) is the set of nodes activated when the seed set of activated nodes is S under realization φ. Let η be any value such that f̂(U) > Q − η implies f̂(U) ≥ Q for all U. Then any α-approximate greedy policy π has average cost at most α(ln(Q/η) + 1)² times the average cost of the best policy obtaining reward Q, for the Adaptive Viral Marketing problem in the independent cascade model with the feedback described above:
  c_avg(π) ≤ α ( ln(Q/η) + 1 )² c_avg(π*)
for any π* that covers every realization.

Proof. We prove the theorem by recourse to our guarantee for self-certifying instances of Adaptive Stochastic Minimum Cost Coverage. We have already established that f is adaptive submodular in the proof of the maximum coverage theorem; it remains to show that f is strongly adaptive monotone and strongly adaptive submodular, that the instances are self-certifying, and that the parameters equal the corresponding terms in the statement of that guarantee.

We start with strong adaptive monotonicity. Fix ψ, e ∉ dom(ψ), and an outcome o. In the full-adoption feedback model, the active nodes after selecting dom(ψ) and observing ψ consist precisely of the nodes v such that there exists a path from some node of dom(ψ) to v via exclusively live, observed edges, and the edges whose statuses we observe consist of all edges exiting these nodes. Hence, for every φ ~ ψ, the set of nodes activated by selecting dom(ψ) is determined by ψ. Similarly, it is determined after additionally selecting e and observing Φ(e) = o. Since activated nodes never become inactive, adding e can only grow the set of active nodes; since f̂ is monotone by assumption, this implies strong adaptive monotonicity. Next we establish strong adaptive submodularity. Given that we have already
established the adaptive submodularity of f, it suffices to also prove pointwise submodularity. For a fixed realization φ, the set of live edges A(φ) induces a set system in which each node u covers the set R_φ(u) of nodes reachable from u via live edges. It is straightforward to verify that a monotone submodular function f̂ on sets of nodes induces a monotone submodular function on sets of nodes-covering-sets, i.e., that S ↦ f̂(∪_{u ∈ S} R_φ(u)) is submodular whenever f̂ is; in particular, the set of submodularity constraints defining the induced function is a subset of the corresponding submodularity constraints on f̂. One can then easily verify that f(·, φ) is submodular for every φ.

Next, for the self-certifying property, note that f(V, φ) = Q for every φ, and from our earlier remarks we know that f(dom(ψ), φ) is determined by ψ for every φ ~ ψ. Hence, for all φ, φ' consistent with ψ, f(dom(ψ), φ) = Q iff f(dom(ψ), φ') = Q, which proves the instance is self-certifying.

Finally, we show that Q and η equal the corresponding terms in the statement of the coverage guarantee. As noted earlier, η is defined so that f̂(U) > Q − η implies f̂(U) ≥ Q; since the range of f is contained in [0, Q], it follows that η satisfies the requirement of the corresponding term. Hence we may apply the guarantee for self-certifying instances to obtain the claimed result. □

Application: Automated Diagnosis and Active Learning

An important problem is automated diagnosis. For example, suppose we have different hypotheses about the state of a patient, and can run medical tests to rule out hypotheses that are inconsistent with the test outcomes. The goal is to adaptively choose tests to infer the state of the patient as quickly as possible. A similar problem arises in active learning, where obtaining labeled data to train a classifier is typically expensive, as it often involves asking an expert (Cohn et al.; McCallum & Nigam).

Figure: Illustration of the active learning problem, in the simple special case of one-dimensional data and binary threshold hypotheses h_τ, where h_τ(x) = 1 if x ≥ τ and 0 otherwise.

The key idea in active learning is that some labels are more informative than others: labeling a few carefully chosen unlabeled examples may imply the labels of many other unlabeled examples, so that the cost of obtaining those labels from the expert can be avoided. It is standard to assume we are given a set H of hypotheses and a set of unlabeled data points X = {x_1, ..., x_n}, each independently drawn from some distribution D; let L be the set of possible labels. Classical learning theory yields probably approximately correct (PAC) bounds, bounding the number of examples drawn from D needed to output a hypothesis h that, with probability at least 1 − δ, has expected error at most ε with respect to a fixed target hypothesis h* with zero error: writing err(h) := P_{x ~ D}(h(x) ≠ h*(x)), we require P(err(h) ≤ ε) ≥ 1 − δ, where the latter probability is taken with respect to the drawn examples and the learned hypothesis h; thus the error depends on the sample. A key challenge in active learning is to avoid the bias introduced by actively selected examples: the selected examples are no longer independent and identically distributed, so the sample complexity bounds of passive learning no longer apply, and if one is not careful,
active learning may require more samples than passive learning to achieve the same generalization error. One natural approach to active learning that is guaranteed to perform at least as well as passive learning is pool-based active learning (McCallum & Nigam). The idea is to draw n unlabeled examples i.i.d. from D; however, instead of obtaining all n labels, labels are adaptively requested until the labels of all the unlabeled examples are implied by the obtained labels. At this point we have effectively obtained n labeled examples drawn i.i.d. from D, so the classical PAC bounds still apply. The key question is how to request labels so that the labels of the entire pool are inferred as quickly as possible.

For the case of binary labels and test outcomes, various authors have considered greedy policies which generalize binary search (Garey & Graham; Loveland; Arkin et al.; Kosaraju et al.; Dasgupta; Guillory & Bilmes; Nowak). The simplest of these, called generalized binary search (GBS) or the splitting algorithm, works as follows. Define the version space V(ψ) to be the set of hypotheses consistent with the observed labels ψ (here we assume there is no label noise). In the worst-case setting, GBS selects a query x minimizing | |{h ∈ V(ψ) : h(x) = 1}| − |{h ∈ V(ψ) : h(x) = 0}| |. In the Bayesian setting, we assume we are given a prior P over hypotheses; in this case GBS selects a query x minimizing | P({h ∈ V(ψ) : h(x) = 1}) − P({h ∈ V(ψ) : h(x) = 0}) |. Intuitively, these policies myopically attempt to shrink a measure of the version space (cardinality or probability mass, respectively) as quickly as possible. The former provides an O(log |H|)-approximation to the optimal worst-case number of queries (Arkin et al.), and the latter provides an O(log(1/min_h P(h)))-approximation to the optimal expected number of queries (Kosaraju et al.; Dasgupta); a natural generalization of GBS obtains analogous guarantees with a larger set of labels (Guillory & Bilmes). Kosaraju et al. also prove that running GBS with a modified prior proportional to max(P(h), 1/(|H|² log |H|)) is sufficient to obtain an O(log |H|)-approximation.

Viewed from the perspective of the previous sections, shrinking the version space amounts to covering the false hypotheses with stochastic sets: a query x covers the hypotheses that disagree with the target hypothesis on x, i.e., it covers {h : h(x) ≠ h*(x)}. These sets may be correlated in complex ways determined by the set of possible hypotheses. We show that the reduction of version space mass is adaptive submodular, which allows us to obtain a new analysis of GBS using adaptive submodularity. Our approximation guarantee is weaker than the best known, but the analysis is arguably more amenable to extensions and generalizations than previous analyses.

Theorem. In the Bayesian setting, with a prior P over a finite set of hypotheses H, the generalized binary search algorithm makes OPT · (ln(1/min_h P(h)) + 1)² queries in expectation to identify a hypothesis drawn from P, where OPT is the minimum expected number of queries made by any policy. When min_h P(h) is very small, the algorithm can be modified to run with an adjusted prior.
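The Bayesian GBS selection rule, namely picking the query that most evenly splits the version space's probability mass, can be sketched as follows; hypotheses are modeled as callables, and all names are illustrative.

```python
def gbs_query(queries, version_space, prior):
    """Bayesian generalized binary search: select the query x minimizing
    |P({h in V : h(x) = 1}) - P({h in V : h(x) = 0})|."""
    def imbalance(x):
        mass_pos = sum(prior[h] for h in version_space if h(x) == 1)
        mass_neg = sum(prior[h] for h in version_space if h(x) == 0)
        return abs(mass_pos - mass_neg)
    return min(queries, key=imbalance)
```

With four one-dimensional threshold hypotheses under a uniform prior, the rule picks the query that splits the thresholds in half, mirroring classical binary search.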
Specifically, running GBS with the modified prior proportional to max(P(h), 1/(|H|² log |H|)) improves the approximation factor, as we discuss at the end of this section.

We devote the better part of the remainder of this section to the proof of the theorem, which has several components. We first address the important special case of a uniform prior over hypotheses, and then reduce the case of general priors to it.

The Uniform Prior. We wish to appeal to our minimum cost coverage guarantee, and so we convert the problem of identifying the target hypothesis into an Adaptive Stochastic Minimum Cost Cover problem. The reduction is as follows. Define a realization φ_h for each hypothesis h ∈ H; the ground set consists of the queries, the outcomes are the binary labels, and we set φ_h(x) := h(x), consistent with our earlier exposition. To define the objective function we first need some notation. Given observed labels ψ, let V(ψ) := {h : h(x) = ψ(x) for all x ∈ dom(ψ)} denote the version space (see the figure above for an illustration of the active learning problem). For a set V of hypotheses, let P(V) := Σ_{h ∈ V} P(h) denote its total prior probability. Finally, let ψ(A, h) := {(x, h(x)) : x ∈ A} be the partial realization with domain A that agrees with h. We define the objective function by
  f(A, φ_h) := 1 − P( V(ψ(A, h)) ) + P(h).
Let π* be an optimal policy for this Adaptive Stochastic Min Cost Cover instance. Note there is an exact correspondence between policies for the original problem of finding the target hypothesis and policies for covering the true realization: the target hypothesis is identified exactly when the version space is reduced to {h}, which occurs exactly when f is covered. Hence c_avg(π*) = OPT. Note that so far we have not assumed a uniform prior over hypotheses; furthermore, the instances are self-certifying for arbitrary priors.

Lemma. The instances described above are self-certifying, for arbitrary priors.
Proof. Intuitively, these instances are self-certifying because, to cover a realization, a policy must identify it exactly. Formally, in these instances f(dom(ψ), φ_h) attains its maximum value only if V(ψ) = {h}, which in turn means that φ_h is the only realization consistent with ψ; this trivially implies that every realization consistent with ψ is also covered. Hence the instance is self-certifying. □

Next we prove that the instances generated are strongly adaptive monotone and strongly adaptive submodular under a uniform prior.

Lemma. The instances described above are strongly adaptive monotone and strongly adaptive submodular with respect to the uniform prior.
Proof. Demonstrating strong adaptive monotonicity under a uniform prior amounts to proving that adding labels cannot grow the version space. This is clear in this model: each query can only eliminate a subset of hypotheses, and once queries are performed, the subset of eliminated hypotheses cannot shrink. Moving on to adaptive submodularity, consider the expected marginal contribution of a query x under two partial realizations ψ, ψ', where ψ is a subrealization of ψ', i.e., ψ ⊆ ψ'. Let ψ_o be the partial realization with domain dom(ψ) ∪ {x} that agrees with ψ and maps x to o, and define ψ'_o analogously with respect to ψ'. Since a hypothesis eliminated from the version space can never later reappear in it, V(ψ'_o) ⊆ V(ψ_o). Next, note that the expected reduction in version space
mass hence expected marginal contribution due selecting given partial realization corresponding quantity substituted prove adaptive submodularity must show suffices show functional form expression implies growing version space manner decrease expected marginal benefit query hence shrinking manner increase expected marginal benefit indeed case specifically holds derived elementary calculus show strong adaptive submodularity prove pointwise submodularity objective fixed realization objective amounts weighted set cover problem incorrect hypotheses plus constant thus submodular hence apply theorem instance maximum reward threshold minimum gap obtain upper bound opt number queries made generalized binary search algorithm corresponds exactly greedy policy adaptive stochastic min cost cover assumption uniform prior arbitrary priors consider general priors construct adaptive stochastic min cost cover instance change objective function first note instances remain proof lemma goes completely unchanged modification proceed show adaptive submodularity strong adaptive monotonicity lemma objective function described strongly adaptive monotone strongly adaptive submodular respect arbitrary priors proof modified objective still adaptive submodular clearly adaptive submodularity defined via linear inequalities preserved taking nonnegative linear combinations note still strongly adaptive submodular still pointwise submodular showing strongly adaptive monotone requires slightly work fix dom must show dom dom plugging definition inequality wish prove may simplified random realization hypothesis let velim set hypotheses eliminated version space observation rewriting get velim let denote left hand side prove follows since since velim conclude adaptive submodular strongly adaptive monotone hence apply theorem instance maximum reward threshold minimum gap minh result obtain upper bound opt minh number queries made generalized binary search arbitrary priors completing proof theorem 
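As an illustration, the uniform-prior reduction can be sketched in code: the greedy policy for the cover instance is exactly generalized binary search, selecting the query that maximizes the expected version-space reduction 2ab/(a+b). This is a hypothetical minimal sketch, not the authors' implementation; the hypothesis and query representations (dicts mapping queries to binary labels) are illustrative.

```python
# Minimal sketch of generalized binary search under a uniform prior.
# Each hypothesis maps every query to a binary label (illustrative encoding).

def gbs(hypotheses, queries, target):
    """Identify `target` by greedily maximizing expected version-space reduction."""
    version_space = list(hypotheses)
    cost = 0
    while len(version_space) > 1:
        def expected_reduction(q):
            a = sum(1 for h in version_space if h[q] == 1)  # hypotheses voting 1
            b = len(version_space) - a                      # hypotheses voting 0
            # Expected mass eliminated: (a/(a+b))*b + (b/(a+b))*a = 2ab/(a+b)
            return 2 * a * b / (a + b)
        q = max(queries, key=expected_reduction)
        label = target[q]                      # observe the target's label on q
        version_space = [h for h in version_space if h[q] == label]
        cost += 1
    return version_space[0], cost

# Threshold hypotheses on points 0..7: h_t(x) = 1 iff x >= t, for t = 1..7.
points = list(range(8))
hyps = [{x: int(x >= t) for x in points} for t in range(1, 8)]
h, cost = gbs(hyps, points, hyps[3])
```

On this small threshold instance GBS behaves exactly like binary search, identifying the target within ⌈log₂ |H|⌉ queries.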
Improving the approximation factor for highly nonuniform priors. We can improve on the ln(1/min_h θ(h)) factor in the event that min_h θ(h) is extremely small, using an observation of Kosaraju et al. Call a policy progressive if it eliminates at least one hypothesis from the version space with each query. Let n = |H| and consider the modified prior θ'(h) ∝ max(θ(h), 1/n²), with normalizing constant Z. For a policy π, let c(π, h) denote the cost of the queries π makes to identify target h, and let c_avg(π) and c'_avg(π) denote its expected cost under the priors θ and θ' respectively. We show that for progressive policies, c'_avg is a good approximation to c_avg. Call h rare if θ(h) < 1/n² and common otherwise. First note that Z ≤ 1 + 1/n, and hence c'_avg(π) ≥ c_avg(π)/(1 + 1/n) up to the contribution of the rare hypotheses. Next we show c'_avg(π) ≤ c_avg(π) + 1: consider the quantity c'_avg(π) − c_avg(π); any positive contributions to it must come from rare hypotheses; however, their total modified probability mass is at most 1/n, and since π is progressive its cost on any hypothesis is at most n; hence the difference in costs is at most one. Now let η' := min_h θ'(h) ≥ 1/(Z n²), so the approximation factor of generalized binary search run with the modified prior is O(log n). Let π be the policy of generalized binary search with the modified prior, and π* an optimal policy for the original prior; chaining c_avg(π) against c'_avg(π), c'_avg(π*), and c_avg(π*) via the inequalities above shows that for a general prior, this simple modification of GBS yields an O(log n) approximation to c_avg(π*).

Extensions: arbitrary costs, multiple classes, and approximate greedy policies. This result easily generalizes to handle the setting of multiple classes (nonbinary test outcomes) and α-approximate greedy policies, which lose at most a factor of α in the approximation factor; we describe how in the appendix. We can also generalize adaptive submodularity to incorporate costs on items, which allows us to extend this result to handle query costs as well. Recently, Gupta et al. showed how to simultaneously remove the dependence on both the costs and the probabilities from the approximation ratio. Specifically, within the context of studying an adaptive traveling salesman problem, they investigated the Optimal Decision Tree problem, which is equivalent to the active learning problem we consider here, and, using a clever and more complex algorithm than adaptive greedy, achieved an O(log n) approximation even in the case of nonuniform costs and general priors.

Extensions: active learning with noisy observations. The theorem and the extensions mentioned so far apply only to the noise-free case, in which each query result observed is consistent with the target hypothesis. Many practical problems, however, involve noisy observations. Nowak considered the case where the outcomes are binary and each query may be asked multiple times, for instance when the query noise is independent across repetitions; for this case he gives performance guarantees for a variant of generalized binary search. That setting may be appropriate when the noise is due to measurement error. In other applications the noise is persistent: even if a query is asked several times, the observation is always the same. Recently, Golovin, Krause, and Ray and Bellala and Scott used the adaptive submodularity framework to obtain the first algorithms with provable approximation guarantees for active learning with persistent noise.
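The prior modification used in this argument is simple enough to state in code. The sketch below is illustrative (it assumes a list-of-masses representation of θ, which is not specified in the text): it boosts every hypothesis to mass at least 1/n² and renormalizes, so that the ln(1/min_h θ'(h)) factor in the GBS guarantee becomes O(log n).

```python
import math

def modify_prior(theta):
    """Return theta'(h) proportional to max(theta(h), 1/n^2)."""
    n = len(theta)
    boosted = [max(p, 1.0 / n ** 2) for p in theta]
    z = sum(boosted)               # normalizing constant; note z <= 1 + 1/n
    return [p / z for p in boosted]

theta = [0.7, 0.3 - 2e-9, 1e-9, 1e-9]   # a highly nonuniform prior
theta_mod = modify_prior(theta)
# ln(1/min_h theta(h)) was about ln(1e9) ~ 20.7; after modification it is O(log n):
log_factor = math.log(1.0 / min(theta_mod))
```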
Experiments. Greedy algorithms are often straightforward to develop and implement, which explains their popular use in practical applications of Bayesian experimental design and active learning, as discussed earlier (see also the excellent introduction of Nowak); Adaptive Stochastic Set Cover applications to filter design in streaming databases are discussed by Liu et al. and Munagala et al. Besides allowing us to prove approximation guarantees for such algorithms, adaptive submodularity provides the following immediate practical benefits: the ability to use lazy evaluations to speed up execution, and the ability to generate data-dependent bounds on the optimal value. In this section we empirically evaluate these benefits within a sensor selection application.

Our setting is similar to the one described by Deshpande et al. In this application, we have deployed a network of wireless sensors to monitor, for example, the temperature in a building or the traffic in a road network. Since the sensors are battery constrained, we must adaptively select a small number of sensors, and then, given the selected sensor readings, predict the temperature at the remaining locations. Prediction is possible since temperature measurements are typically correlated across space. We also consider the case where sensors can fail to report their measurements, due to hardware failures, environmental conditions, or interference.

Sensor selection with unreliable sensors. Formally, imagine that every location v is associated with a random variable X_v describing the temperature at that location, and that a joint probability distribution over the random vector X_V of all temperature values models the correlations between locations. Following Deshpande et al., we assume the joint distribution of the sensors is multivariate Gaussian. Each sensor can make a noisy observation of its temperature, corrupted by zero-mean Gaussian noise with known variance. If measurements are obtained at a subset A of locations, the conditional distribution of X_V given those measurements allows predictions at the unobserved locations, e.g., by predicting the conditional mean, which minimizes the mean squared error. Furthermore, this conditional distribution quantifies the uncertainty in the prediction. Intuitively, we would like to select sensors that minimize the predictive uncertainty. One way to quantify the predictive uncertainty is the remaining Shannon entropy, and we would therefore like to adaptively select sensors to maximize the expected reduction in Shannon entropy (Sebastiani and Wynn; Krause and Guestrin). However, in practice sensors are often unreliable and might fail to report their measurements. We assume that, after selecting a sensor, we find out whether it failed before deciding which sensor to select next.
Suppose each sensor is associated with a probability p_fail of failure, in which case no reading is reported, and that sensor failures are independent of each other and of the ambient temperature. This is thus an instance of our Adaptive Stochastic Maximization problem, in which we work with the set of working (non-failed) sensors. For multivariate normal distributions, the entropy of a set A of variables is given in terms of the determinant of Σ_AA, where Σ_AB denotes the covariance matrix between the random vectors X_A and X_B. Note that the predictive covariance does not depend on the actual observations, but only on the set of chosen locations; thus, as usual, the expected entropy reduction depends only on the selected set. Krause and Guestrin show that this set function is monotone submodular whenever the observations are conditionally independent given X_V. This insight allows us to apply our results and show that the objective, defined using the working sensors, is adaptive monotone submodular.

Experimental setup. Our first data set consists of temperature measurements from a network of sensors deployed at Intel Research Berkeley, sampled at regular intervals over several consecutive days; we define the objective function with respect to the empirical covariance estimated from this data. We also use data from traffic sensors deployed along a highway in southern California: here we use traffic speed data from working days over one month, the goal being to predict the speed on all road segments from a subset of sensors, and again estimate the empirical covariance matrix.

Benefits of lazy evaluation. On both data sets we run the adaptive greedy algorithm, using both the naive implementation and the accelerated version based on lazy evaluations. We vary the probability of sensor failure and evaluate both the execution time and the number of evaluations of the objective function that each algorithm makes. The first pair of figures plots execution time against the budget for a given sensor failure rate, timed on a dual-core workstation. In applications where function evaluations are the bottleneck of the computation, their number serves as a proxy for running time, and the second pair of figures shows the performance ratio in terms of this proxy. On the temperature data set, lazy evaluations speed up the computation by a substantial factor that grows with the failure probability; on the larger traffic data set we obtain even larger speedup factors. We find that the benefit of lazy evaluations increases with both problem size and failure probability. The dependence on problem size must ultimately be explained in terms of structural properties of the instances, and is also a benefit of the nonadaptive accelerated greedy algorithm. The dependence on failure probability has a simpler explanation: in our applications, once the accelerated greedy algorithm selects a sensor that fails, it does not need to make any additional function evaluations to select the next sensor, whereas the naive greedy algorithm makes a function evaluation for every sensor not selected so far.
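The entropy-reduction objective for Gaussians can be made concrete in a few lines. The sketch below is a hypothetical implementation for a bivariate prior, not the authors' code; note that the function takes no observed readings as input, reflecting the fact that Gaussian predictive covariance depends only on which sensors are observed, never on their values.

```python
import math

def entropy_reduction(cov, i, noise_var):
    """f({i}) = H(X_V) - H(X_V | Y_i) for a bivariate Gaussian prior.

    Y_i = X_i + noise is the noisy sensor reading; conditioning subtracts
    the rank-one update col * col^T / (cov[i][i] + noise_var).
    """
    n = len(cov)
    col = [cov[j][i] for j in range(n)]
    scale = cov[i][i] + noise_var
    cond = [[cov[j][k] - col[j] * col[k] / scale for k in range(n)]
            for j in range(n)]
    det = lambda m: m[0][0] * m[1][1] - m[0][1] * m[1][0]  # 2x2 determinant
    # Difference of Gaussian entropies: 0.5 * ln(det(prior) / det(posterior)).
    return 0.5 * math.log(det(cov) / det(cond))

cov = [[1.0, 0.8], [0.8, 1.0]]   # correlated temperatures at two locations
gain = entropy_reduction(cov, 0, noise_var=0.1)
```

As expected, the gain is positive, symmetric between the two equally informative sensors, and shrinks as the sensor noise grows.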
Benefits of the data-dependent bound. Adaptive submodularity allows us to prove performance guarantees for the adaptive greedy algorithm, but in many practical applications the expected worst-case bounds are quite loose. For the sensor selection application we use the data-dependent bounds of the corresponding lemma to compute an upper bound on f_avg(π*), and compare it with the a priori performance guarantee. Since the accelerated greedy algorithm already stores upper bounds on the marginal benefits in its priority queue, we can reuse these instead of recomputing the marginal benefits, at the price of somewhat looser bounds. We find that on this application the data-dependent bounds are much tighter than the worst-case bounds. We also find that the lazy data-dependent bounds, computed from the stale queue entries, are almost as tight as the eager bounds computed using eagerly recomputed ("latest and greatest") marginal benefits, though the former have slightly higher variance. The final pair of figures shows the performance of the greedy algorithm as well as all three bounds on the optimal value.

Two subtleties arise in using the bound on f_avg(π*). First, the lemma bounds Δ(π*|ψ), whereas we would like to bound the difference between the optimal reward and the algorithm's current expected reward conditioned on seeing ψ. However, in our applications f is strongly adaptive monotone, and strong adaptive monotonicity implies that the expected reward of the optimal policy run after our observations is at least its unconditioned expected reward; hence, letting OPT := f_avg(π*), the lemma implies OPT is at most the current conditional expected reward plus the computed bound. The second subtlety concerns how to obtain a sequence of bounds. Consider the random sequence of partial realizations ψ_1 ⊂ ψ_2 ⊂ ⋯ observed by the adaptive greedy algorithm; we obtain bounds B_i computed from dom(ψ_i), and, taking expectations, note that E[B_i] ≥ f_avg(π*) = OPT. Therefore each B_i is a random variable whose expectation upper bounds the optimal expected reward of any policy. At this point one may be tempted to use the minimum, min_i B_i, as the ultimate bound; however, this collection of random variables does not in general satisfy E[min_i B_i] ≥ OPT. It is possible, in the case of independent sensor failures, to use concentration inequalities to show that min_i B_i falls below OPT only with small probability, and thus to add an appropriate error term to obtain a true upper bound. We instead take a different approach and simply use the average of the bounds. Of course, depending on the application, a particular bound B_i, chosen independently of the observed sequence, may be superior: for example, if f is modular then B_1 is best, whereas if f exhibits strong diminishing returns, the bounds for larger values of i may be significantly tighter.
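The lazy-evaluation idea underlying the accelerated algorithm can be sketched as follows. Adaptive submodularity guarantees that conditional expected marginal benefits only shrink as observations accumulate, so stale values stored in a max-heap remain valid upper bounds, and only the top candidate ever needs recomputation. The sketch below is illustrative, not the authors' implementation; the objective shown is a toy coverage function rather than the Gaussian entropy objective.

```python
import heapq

def lazy_adaptive_greedy(items, marginal, observe, budget):
    """Greedy selection using lazy (possibly stale) upper bounds on marginals."""
    evals = 0
    heap = []
    for e in items:                       # one initial pass to seed the bounds
        heap.append((-marginal(e), e))
        evals += 1
    heapq.heapify(heap)
    chosen = []
    while heap and len(chosen) < budget:
        _, e = heapq.heappop(heap)
        fresh = marginal(e)               # recompute only the top candidate
        evals += 1
        if heap and fresh < -heap[0][0]:  # bound was stale: push back, retry
            heapq.heappush(heap, (-fresh, e))
            continue
        chosen.append(e)
        observe(e)                        # reveal e's state (e.g., failure)
    return chosen, evals

# Toy coverage objective: marginal benefit = number of newly covered elements.
sets = {'a': {1, 2, 3}, 'b': {3, 4}, 'c': {5}}
covered = set()
chosen, evals = lazy_adaptive_greedy(
    sets, lambda e: len(sets[e] - covered),
    lambda e: covered.update(sets[e]), budget=2)
```

Note that a naive implementation would re-evaluate every remaining item in each round, whereas here stale heap entries are recomputed only on demand.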
Adaptivity gap. An important question in adaptive optimization is: how much better can adaptive policies perform compared to nonadaptive policies? This is quantified by the adaptivity gap.

[Figure: experimental results on both problem instances. (a, b) Execution time in seconds of the naive and accelerated implementations of adaptive greedy versus budget (number of sensors selected), for the temperature and traffic data at a fixed p_fail, plotted with standard errors. (c, d) Ratio of function evaluations made by the naive and accelerated implementations versus budget, for various failure rates, averaged over repeated runs. (e, f) Reward achieved by adaptive greedy, together with three bounds on the optimal value (the standard a priori bound, the lazy adaptive bound, and the eager adaptive bound), plotted with standard errors.]

For the maximization problem, comparing the performance of the optimal nonadaptive solution with the optimal adaptive policy, Asadpour et al. show that for the Stochastic Submodular Maximization problem with independent failures that we considered earlier, the expected value of the optimal nonadaptive policy is at most a constant factor worse than the expected value of the optimal adaptive policy. There are currently no lower bounds known on the adaptivity gap of the general Adaptive Stochastic Maximization problem. We can show, however, that even in the case of adaptive submodular functions, the min cost cover and min sum cover versions exhibit large adaptivity gaps; thus there can be a large benefit to using adaptive algorithms in these cases. Here the adaptivity gap is defined as the ratio of the expected cost of the optimal nonadaptive policy divided by the expected cost of the optimal adaptive policy.
For the Adaptive Stochastic Minimum Cost Coverage problem, Goemans and Vondrák show that in the special case of Stochastic Set Coverage without multiplicities the adaptivity gap is constant. Here we exhibit an adaptive stochastic optimization instance with an adaptivity gap of Ω(n / log n) for the Adaptive Stochastic Min Sum Cover problem, which also happens to be the adaptivity gap of Adaptive Stochastic Minimum Cost Coverage.

Theorem. Even for adaptive submodular functions, the adaptivity gap of Adaptive Stochastic Min Cost Cover and Min Sum Cover is Ω(n / log n).

Proof. Consider the active learning problem in which the hypotheses are threshold functions, h_t(x) = 1 if and only if x ≥ t, over points {1, …, n}, with a uniform distribution over the thresholds t. In order to identify the correct hypothesis (threshold), a policy must observe at least one point with each label adjacent to the threshold. Let π be an optimal nonadaptive policy for this problem. Note that π can be represented as a permutation of the points, since observing an element multiple times increases the cost without providing any benefit, and every element must eventually be selected to guarantee coverage under the cover objective. Consider playing π for k time steps: unless the relevant points on both sides of the threshold have been observed within those k steps, the hypothesis is not identified within those steps. Since at most a bounded number of thresholds can be pinned down by any k observations, a union bound shows that π identifies the correct hypothesis within k steps with probability O(k/n). Thus a lower bound of Ω(n) holds on the expected cost of π, since with constant probability a cost of Ω(n) is incurred; this holds for both cover objectives, so the cost of the optimal nonadaptive policy is Ω(n) in this example. An adaptive policy, by contrast, can implement the natural binary search strategy, which is guaranteed to identify the correct hypothesis within ⌈log₂ n⌉ steps, thus incurring cost O(log n) and proving the Ω(n / log n) adaptivity gap. ∎

Hardness of approximation. In this paper we have developed the notion of adaptive submodularity, which characterizes a class of adaptive stochastic optimization problems in the sense that a simple greedy policy obtains a constant-factor or polylogarithmic-factor approximation to the best policy. By contrast, we now show that without adaptive submodularity, adaptive stochastic optimization problems can be extremely inapproximable, even with pointwise modular objective functions. Our first argument shows that we cannot hope to achieve a reasonable approximation ratio for such problems unless the polynomial hierarchy collapses.

Theorem. No (possibly randomized) polynomial time algorithm for Adaptive Stochastic Maximization with a budget of items can approximate the reward of an optimal policy with a comparable budget to within a multiplicative factor of O(|E|^{1−ε}) for constant ε > 0, unless the polynomial hierarchy collapses. This holds even for pointwise modular f. We provide the proof of this theorem in the appendix. Note that in this setting we also obtain hardness for the coverage versions, as follows.
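The instance in this proof is easy to simulate. The sketch below (illustrative) contrasts the adaptive binary-search policy with a fixed ascending scan on n = 64 points: the former always uses log₂ n = 6 queries, while the scan identifies threshold t only at its t-th query, for an expected cost of (n + 1)/2 under the uniform prior.

```python
# Adaptivity-gap instance: threshold hypotheses h_t(x) = 1[x >= t] over
# points 1..n, with t drawn uniformly from {1, ..., n}.

def adaptive_cost(n, t):
    """Binary search: adaptively identify t in ceil(log2 n) queries."""
    lo, hi, queries = 1, n, 0
    while lo < hi:
        mid = (lo + hi) // 2
        queries += 1
        if mid >= t:       # observe h_t(mid) = 1, so t <= mid
            hi = mid
        else:              # observe h_t(mid) = 0, so t > mid
            lo = mid + 1
    return queries

def nonadaptive_scan_cost(n, t):
    """Fixed ascending order: t is identified at the first point x >= t."""
    return t

n = 64
worst_adaptive = max(adaptive_cost(n, t) for t in range(1, n + 1))
avg_scan = sum(nonadaptive_scan_cost(n, t) for t in range(1, n + 1)) / n
```

Here worst_adaptive is 6 while avg_scan is 32.5, an Ω(n / log n) separation in miniature.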
As it turns out, in the instance distribution we construct in the proof of the theorem, the optimal policy covers every realization — it always finds the "treasure" — using its budget of items. Hence a randomized polynomial time algorithm wishing to cover the instance must use a budget polynomially larger than that of the optimal policy in order to ensure that the ratio of rewards approaches one. This yields the following corollary.

Corollary. No polynomial time algorithm for Adaptive Stochastic Min Cost Coverage can approximate the cost of the optimal policy to within a multiplicative factor of O(|E|^{1−ε}), under the same complexity assumption, even for pointwise modular f.

Furthermore, since in the instance distribution we construct the optimal policy covers every realization using its budget, and since we have shown that, under complexity-theoretic assumptions, no polynomial time randomized policy with a comparable budget achieves the unit value obtained by the optimal policy, and since the min sum objective requires covering the set of realizations constituting at least half the probability mass, we obtain the following corollary as well.

Corollary. No polynomial time algorithm for Adaptive Stochastic Min Sum Cover can approximate the cost of the optimal policy to within a multiplicative factor of O(|E|^{1−ε}), under the same complexity assumption, even for pointwise modular f.

Related work. There is a large literature on adaptive optimization under partial observability that relates to adaptive submodularity; it can be broadly organized into several categories. We review here the relevant related work not already discussed elsewhere in this manuscript.

Adaptive versions of classic optimization problems. Many approaches consider stochastic generalizations of specific classic optimization problems, such as Set Cover (Goemans and Vondrák; Liu et al.), Knapsack (Dean et al.), or the Traveling Salesman problem (Gupta et al.). In contrast, the goal of this paper is to introduce a general problem structure — adaptive submodularity — that unifies a number of adaptive optimization problems, in a manner similar to how the classic notion of submodularity unifies various optimization problems such as Set Cover, Facility Location, nonadaptive Bayesian experimental design, and so on.

Competitive online optimization. Another active area of research on sequential optimization is the study of competitive online algorithms. A particularly relevant example is online Set Cover (Alon et al.): the set system is known in advance, but an arbitrary sequence of elements is presented to the algorithm, which must irrevocably select sets to purchase such that, at all times, the purchased sets cover all elements that have appeared so far.
Alon et al. obtain a polylogarithmic approximation to this problem via an online primal–dual framework that has been profitably applied to many other problems; Buchbinder and Naor provide a detailed treatment of this framework. Note that competitive analysis focuses on worst-case scenarios; by contrast, we assume probabilistic information about the world and optimize for the average case.

(Noisy) interactive submodular set cover. Recent work of Guillory and Bilmes considers a class of adaptive optimization problems involving a family of monotone submodular objectives. In their problem, one must cover a monotone submodular objective that depends on an initially unknown target hypothesis h* by adaptively issuing queries and getting responses. Unlike in traditional active learning, each query may generate a response from a set of valid responses, depending on the target hypothesis. The reward is calculated by evaluating the objective on the set of (query, response) pairs observed. The goal is to obtain a threshold objective value Q with minimum total query cost, where queries may have nonuniform costs. In the noisy variant of the problem (Guillory and Bilmes), the set of (query, response) pairs observed need not be consistent with any hypothesis, and the goal is to obtain value Q for the objectives of all hypotheses that are close to consistent with the observations. In these variants, Guillory and Bilmes consider worst-case policy cost, provide greedy algorithms optimizing clever hybrid objective functions, and prove approximation guarantees for integer-valued objectives — a logarithmic approximation guarantee, with a similar guarantee in the noisy case. While similar in spirit to our work, there are several significant differences between the two approaches. Guillory and Bilmes focus on worst-case policy cost, whereas we focus mainly on expected policy cost. The structure of adaptive submodularity depends on the prior, whereas the structure in interactive submodular set cover does not; this dependence in turn allows us to obtain results such as our data-dependent bounds, certifying instances whose approximation guarantees do not depend on the number of realizations, in a way that the guarantees for interactive submodular set cover cannot, since the latter depend on the size of the hypothesis class — a dependence Guillory and Bilmes prove is fundamental in the worst case. It is a reasonable and interesting open problem, highlighted by the work on interactive submodular set cover, to identify useful properties within the adaptive submodularity framework that suffice to improve upon our min cost cover approximation guarantee.

Greedy frameworks for adaptive optimization.
The paper that is perhaps closest in spirit to our work is the one on stochastic depletion problems (Chan and Farias), which also identifies a general class of adaptive optimization problems that can be solved using greedy algorithms; in their setting, myopic policies give a factor-2 approximation. However, the similarity is mainly at a conceptual level: the classes of problems and the approaches are quite different, as are, for example, the applications considered.

Stochastic optimization with recourse. Another class of adaptive optimization problems has been studied extensively in operations research since Dantzig. In the area of stochastic optimization with recourse, an optimization problem — such as Set Cover, Steiner Tree, or Facility Location — is presented in multiple stages; in each stage, more information is revealed and the costs of actions increase. A key difference from the problems studied in this paper is that in these problems, the information gets revealed independently of the actions taken by the algorithm. In general, there are efficient sampling-based approximate reductions of such stochastic optimization problems to their deterministic counterparts (see Gupta et al.).

Bayesian global optimization. Adaptive stochastic optimization is also related to the problem of Bayesian global optimization; for a recent survey of the area, see Brochu et al. In Bayesian global optimization, the goal is to adaptively select inputs in order to maximize an unknown function that is expensive to evaluate and that can possibly only be evaluated through noisy observations. The common approach, successful in many applications — for a recent application in machine learning see Lizotte et al. — is to assume a prior distribution, e.g., a Gaussian process, over the unknown objective function. Several criteria for selecting inputs have been developed, such as the Expected Improvement criterion of Jones et al. Only recently have performance guarantees been obtained for this setting (Srinivas et al.; Grünewälder et al.); we are not aware of approximation guarantees for Bayesian global optimization relative to the optimal policy. [Footnote: our reduction uses the set cover hardness result of Feige, which requires an assumption slightly stronger than P ≠ NP; it suffices to assume P ≠ NP by using the set cover approximation hardness result of Raz and Safra instead.]

Probabilistic planning. The problem of decision making under partial observability has also been extensively studied in stochastic optimal control. In particular, Partially Observable Markov Decision Processes (Smallwood and Sondik), abbreviated POMDPs, are a general framework that captures many adaptive optimization problems under partial observability. Unfortunately, solving POMDPs is PSPACE-hard (Papadimitriou and Tsitsiklis).
Thus, typically, heuristic algorithms without approximation guarantees are applied (Pineau et al.; Ross et al.). Special instances of POMDPs, related to bandit problems, for which optimal policies can be found include the optimal policy for the classic multiarmed bandit problem (Gittins and Jones), approximate policies for the bandit problem with metric switching costs (Guha and Munagala), and special cases of the restless bandit problem (Guha et al.). The problems considered in this paper can be formalized as POMDPs, albeit with exponentially large state spaces: the world state represents which items have been selected so far, together with the realization of each item. Thus our results can be interpreted as widening the class of partially observable planning problems that can be efficiently, approximately solved.

Previous work of the authors and subsequent developments. An extended version of this paper appeared in the Annual Conference on Learning Theory (Golovin and Krause). More recently, Golovin and Krause proved performance guarantees for the greedy policy for the problem of maximizing the expected value of a policy under constraints more complex than simply selecting at most k items, including matroid constraints, in which the policy must select independent sets of items: the greedy policy obtains a constant-factor approximation for adaptive monotone submodular objectives under matroid constraints, and guarantees extend more generally to broader systems of constraints (Golovin and Krause). Shortly thereafter, Bellala and Scott, and Golovin, Krause, and Ray, used the adaptive submodularity framework to obtain the first algorithms with provable (squared-logarithmic) approximation guarantees for the difficult and fundamental problem of active learning with persistent noise. Finally, Golovin et al. used adaptive submodularity in the context of dynamic conservation planning to obtain competitiveness guarantees for an ecological reserve design problem.

Conclusions. Planning under partial observability is a central but notoriously difficult problem in artificial intelligence. In this paper we identified a novel, general class of adaptive optimization problems under uncertainty that are amenable to efficient, greedy approximate solution. In particular, we introduced the concept of adaptive submodularity, generalizing submodular set functions to adaptive policies. Our generalization is based on a natural adaptive analog of the diminishing returns property that is well understood for set functions. In the special case of deterministic distributions, adaptive submodularity reduces to the classical notion of submodular set functions. We proved that several guarantees carried by the greedy algorithm for submodular set functions generalize to the natural adaptive greedy algorithm in the case of adaptive submodular functions.
These guarantees cover constrained maximization as well as certain natural coverage problems with both minimum cost and minimum sum objectives. We also showed how the adaptive greedy algorithm can be accelerated using lazy evaluations, and how one can compute data-dependent bounds on the optimal solution. We illustrated the usefulness of the concept by giving several examples of adaptive submodular objectives arising in diverse applications, including sensor placement, viral marketing, automated diagnosis, and active learning. Proving adaptive submodularity for these problems allowed us to recover existing results in these applications as special cases, and led to natural generalizations. Our experiments on real data indicate that adaptive submodularity can provide practical benefits, such as significant speed-ups and tighter data-dependent bounds. We believe that our results provide an interesting step in the direction of exploiting structure to solve complex stochastic optimization and planning problems under partial observability.

Acknowledgments. This research was partially supported by ONR, NSF, and DARPA MSEE grants, a gift from Microsoft Corporation, an Okawa Foundation Research Grant, and the Caltech Center for the Mathematics of Information. We wish to thank Vitaly Feldman and Jan Vondrák for providing an elegant proof of a lemma in the appendix.

References

Noga Alon, Baruch Awerbuch, Yossi Azar, Niv Buchbinder, and Joseph (Seffi) Naor. The online set cover problem. SIAM Journal on Computing.

Esther Arkin, Henk Meijer, Joseph Mitchell, David Rappaport, and Steven Skiena. Decision trees for geometric models. In Proc. Symposium on Computational Geometry. ACM.

Arash Asadpour, Hamid Nazerzadeh, and Amin Saberi. Stochastic submodular maximization. In WINE: Proc. International Workshop on Internet and Network Economics.

G. Bellala and C. Scott. Modified group generalized binary search with performance guarantees. Technical report, University of Michigan.

Eric Brochu, Mike Cora, and Nando de Freitas. A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. Technical report, Department of Computer Science, University of British Columbia.

Niv Buchbinder and Joseph (Seffi) Naor. The design of competitive online algorithms via a primal–dual approach. Foundations and Trends in Theoretical Computer Science.

Carri Chan and Vivek Farias. Stochastic depletion problems: Effective myopic policies for a class of dynamic optimization problems. Mathematics of Operations Research.

David Cohn, Zoubin Ghahramani, and Michael Jordan. Active learning with statistical models. Journal of Artificial Intelligence Research (JAIR).

George Dantzig. Linear programming under uncertainty. Management Science.

Sanjoy Dasgupta. Analysis of a greedy active learning strategy. In NIPS: Advances in Neural Information Processing Systems. MIT Press.

B. Dean, M. Goemans, and J. Vondrák. Adaptivity and approximation for stochastic packing problems. In Proc. Symposium on Discrete Algorithms (SODA).

B. Dean, M. Goemans, and J. Vondrák. Approximating the stochastic knapsack problem: The benefit of adaptivity. Mathematics of Operations Research.

A. Deshpande, C. Guestrin, S. Madden, J. Hellerstein, and W. Hong. Model-driven data acquisition in sensor networks. In Proc. International Conference on Very Large Data Bases (VLDB).

Uriel Feige. A threshold of ln n for approximating set cover. Journal of the ACM.

Uriel Feige and Carsten Lund. On the hardness of computing the permanent of random matrices. Computational Complexity.

Uriel Feige, László Lovász, and Prasad Tetali. Approximating min sum set cover. Algorithmica.

Vitaly Feldman and Jan Vondrák. Private communication.

Satoru Fujishige. Submodular Functions and Optimization. Annals of Discrete Mathematics. North-Holland, Amsterdam.

M. Garey and Ronald Graham. Performance bounds on the splitting algorithm for binary testing. Acta Informatica.

J. Gittins and D. Jones. A dynamic allocation index for the discounted multiarmed bandit problem. Biometrika.

Michel Goemans and Jan Vondrák. Stochastic covering and adaptivity. In Proc. Latin American Symposium on Theoretical Informatics (LATIN).

Daniel Golovin and Andreas Krause. Adaptive submodularity: A new approach to active learning and stochastic optimization. In Proc. Annual Conference on Learning Theory (COLT).

Daniel Golovin and Andreas Krause. Adaptive submodularity: Theory and applications in active learning and stochastic optimization. Journal of Artificial Intelligence Research (JAIR).

Daniel Golovin and Andreas Krause. Adaptive submodular optimization under matroid constraints. CoRR.

Daniel Golovin, Anupam Gupta, Amit Kumar, and Kanat Tangwongsan. Approximation algorithms. In Hariharan, Mukund, and Vinay, editors, IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS). Schloss Dagstuhl.

Daniel Golovin, Andreas Krause, and Debajyoti Ray. Near-optimal Bayesian active learning with noisy observations. In NIPS: Advances in Neural Information Processing Systems.

Daniel Golovin, Andreas Krause, Beth Gardner, Sarah Converse, and Steve Morey. Dynamic resource allocation in conservation planning. In Proc. AAAI Conference on Artificial Intelligence. AAAI Press.

Pranava Goundan and Andreas Schulz. Revisiting the greedy approach to submodular set function maximization. Technical report, Massachusetts Institute of Technology.

Steffen Grünewälder, Jean-Yves Audibert, Manfred Opper, and John Shawe-Taylor. Regret bounds for Gaussian process bandit problems. In Proc. International Conference on Artificial Intelligence and Statistics (AISTATS).

Sudipto Guha and Kamesh Munagala. Multi-armed bandits with metric switching costs. In Proc. International Colloquium on Automata, Languages and Programming (ICALP).

Sudipto Guha, Kamesh Munagala, and Peng Shi. Approximation algorithms for restless bandit problems. Technical report, arXiv.

Andrew Guillory and Jeff Bilmes. Average-case active learning with costs. In Proc. International Conference on Algorithmic Learning Theory (ALT), University of Porto, Portugal.

Andrew Guillory and Jeff Bilmes. Interactive submodular set cover. In Proc. International Conference on Machine Learning (ICML), Haifa, Israel.

Andrew Guillory and Jeff Bilmes. Simultaneous learning and covering with adversarial noise. In Proc. International Conference on Machine Learning (ICML), Bellevue, Washington.

Anupam Gupta, Martin Pál, R. Ravi, and Amitabh Sinha. What about Wednesday? Approximation algorithms for multistage stochastic optimization. In Proc. International Workshop on Approximation Algorithms for Combinatorial Optimization Problems (APPROX).

Anupam Gupta, Ravishankar Krishnaswamy, Viswanath Nagarajan, and R. Ravi. Approximation algorithms for optimal decision trees and adaptive TSP problems. In Proc. International Colloquium on Automata, Languages and Programming (ICALP), Lecture Notes in Computer Science. Springer.

Donald Jones, Matthias Schonlau, and William Welch. Efficient global optimization of expensive black-box functions. Journal of Global Optimization.

Haim Kaplan, Eyal Kushilevitz, and Yishay Mansour. Learning with attribute costs. In Proc. ACM Symposium on Theory of Computing (STOC).

David Kempe, Jon Kleinberg, and Éva Tardos. Maximizing the spread of influence through a social network. In KDD: Proc. ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM.

S. Rao Kosaraju, Teresa Przytycka, and Ryan Borgstrom. On an optimal split tree problem. In Proc. International Workshop on Algorithms and Data Structures (WADS).

Andreas Krause and Carlos Guestrin. Near-optimal nonmyopic value of information in graphical models. In Proc. Uncertainty in Artificial Intelligence (UAI).

Andreas Krause and Carlos Guestrin. Near-optimal observation selection using submodular functions. In Proc. AAAI Conference on Artificial Intelligence, Nectar track.

Andreas Krause and Carlos Guestrin. Optimal value of information in graphical models. Journal of Artificial Intelligence Research (JAIR).

Andreas Krause and Carlos Guestrin. Intelligent information gathering and submodular function optimization. Tutorial, International Joint Conference on Artificial Intelligence (IJCAI).

Jure Leskovec, Andreas Krause, Carlos Guestrin, Christos Faloutsos, Jeanne VanBriesen, and Natalie Glance. Cost-effective outbreak detection in networks. In KDD: Proc. ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM.

M. Littman, J. Goldsmith, and M. Mundhenk. The computational complexity of probabilistic planning. Journal of Artificial Intelligence Research (JAIR).

Zhen Liu, Srinivasan Parthasarathy, Anand Ranganathan, and Hao Yang. Near-optimal algorithms for shared filter evaluation in data stream systems. In SIGMOD: Proc. ACM SIGMOD International Conference on Management of Data. ACM.

Daniel Lizotte, Tao Wang, Michael Bowling, and Dale Schuurmans. Automatic gait optimization with Gaussian process regression. In Proc. International Joint Conference on Artificial Intelligence (IJCAI).

Donald Loveland. Performance bounds for binary testing with arbitrary weights. Acta Informatica.

Andrew McCallum and Kamal Nigam. Employing EM and pool-based active learning for text classification. In Proc. International Conference on Machine Learning (ICML).

Michel Minoux. Accelerated greedy algorithms for maximizing submodular set functions. In Proc. IFIP Conference on Optimization Techniques. Springer.

Kamesh Munagala, Shivnath Babu, Rajeev Motwani, and Jennifer Widom. The pipelined set cover problem. In Proc. International Conference on Database Theory (ICDT).

F. Nan and V. Saligrama. Comments on the proof of adaptive stochastic set cover based on adaptive submodularity and its implications for the group identification problem in "Group-based active query selection for rapid diagnosis in time-critical situations". IEEE Transactions on Information Theory.

George Nemhauser, Laurence Wolsey, and Marshall Fisher. An analysis of approximations for maximizing submodular set functions. Mathematical Programming.

Rob Nowak. Noisy generalized binary search. In NIPS: Advances in Neural Information Processing Systems.

C. Papadimitriou and J. Tsitsiklis. The complexity of Markov decision processes. Mathematics of Operations Research.

Joelle Pineau, Geoff Gordon, and Sebastian Thrun. Anytime point-based approximations for large POMDPs. Journal of Artificial Intelligence Research (JAIR).

Ran Raz and Shmuel Safra. A sub-constant error-probability low-degree test, and a sub-constant error-probability PCP characterization of NP. In STOC: Proc. Annual ACM Symposium on Theory of Computing. ACM.

Stéphane Ross, Joelle Pineau, Sébastien Paquet, and Brahim Chaib-draa. Online planning algorithms for POMDPs. Journal of Artificial Intelligence Research (JAIR).

Alexander Schrijver. Combinatorial Optimization: Polyhedra and Efficiency. Springer.

P. Sebastiani and H. Wynn. Maximum entropy sampling and optimal Bayesian experimental design. Journal of the Royal Statistical Society, Series B.

R. Smallwood and E. Sondik. The optimal control of partially observable Markov decision processes over a finite horizon. Operations Research.

Niranjan Srinivas, Andreas Krause, Sham Kakade, and Matthias Seeger. Gaussian process optimization in the bandit setting: No regret and experimental design. In Proc. International Conference on Machine Learning (ICML).

Matthew Streeter and Daniel Golovin. An online algorithm for maximizing submodular functions. Technical report, Carnegie Mellon University.

Matthew Streeter and Daniel Golovin. An online algorithm for maximizing submodular functions. In NIPS: Advances in Neural Information Processing Systems.

Laurence Wolsey. An analysis of the greedy algorithm for the submodular set covering problem. Combinatorica.

Appendix: Additional Proofs, and Incorporating Item Costs

In this appendix we provide the proofs omitted from the main text, as well as the results for items with costs, first explaining how our results generalize to the case in which items have costs, and then proving these generalizations.

Incorporating item costs: preliminaries. In this section we provide the preliminaries required to define and analyze versions of our problems with item costs. Suppose each item e ∈ E has a cost c(e) > 0, and the cost of a set S ⊆ E is given by the modular function c(S) = Σ_{e ∈ S} c(e). We define generalizations of our problems, and prove our results with respect to greedy and α-approximate greedy policies with costs. A greedy policy with costs selects an item maximizing the ratio Δ(e | ψ)/c(e) under the current partial realization ψ.

Definition (approximate greedy policy with costs). A policy π is an α-approximate greedy policy with costs if there exists α ≥ 1 such that π always selects an item whose ratio of conditional expected marginal benefit to cost is within a factor 1/α of max_e Δ(e | ψ)/c(e), and terminates upon observing a partial realization under which no item offers positive expected marginal benefit. A greedy policy with costs (α = 1) always obtains at least the maximum possible ratio of conditional expected marginal benefit to cost, and terminates when no benefit can be obtained in expectation.

For a greedy policy, it is convenient to imagine the policy executing in time: when the policy selects an item e, the item starts a run that finishes c(e) time units later. We next generalize the definition of policy truncation. We actually require three generalizations, which are all equivalent in the unit cost case.

Definition (strict policy truncation). The strict level-t truncation of a policy π is obtained by running π for t time units and unselecting all items whose runs have not finished by time t. Formally, its domain consists of those partial realizations in dom(π) for which the selected item's run finishes by time t, and it agrees with π everywhere on its domain.

Definition (lax policy truncation). The lax level-t truncation of π is obtained by running π for t time units and selecting all items still running at time t. Formally, its domain consists of those partial realizations in dom(π) whose selected item starts running before time t, and it agrees with π everywhere on its domain.

Definition (policy truncation with costs). For a policy π, the level-t truncation denotes the randomized policy obtained by running π for t time units, where an item
running time time selecting independently probability formally randomized policy agrees everywhere domain dom dom dom certainty includes dom dom domain independently probability proofs follow need notion conditional expected cost policy well alternate characterization adaptive monotonicity based notion policy concatenation prove equivalence two adaptive monotonicity conditions lemma definition conditional policy cost conditional policy cost conditioned denoted expected cost items selects definition policy concatenation given two policies define policy obtained running completion running policy fresh start ignoring information running definition adaptive monotonicity alternate version function adaptive monotone respect distribution policies holds favg favg favg defined lemma adaptive monotonicity equivalence fix function policies favg favg proof fix policies begin proving favg favg fix note hence favg favg therefore favg favg holds favg favg first prove forward direction suppose note expression favg favg written conical combination nonnegative terms favg favg hence favg favg favg favg favg next prove backward direction contrapositive form suppose let items dom define policies follows select observe either policy observes immediately terminates otherwise continues succeeds selecting dom terminates succeeds selecting dom selects terminates claim favg favg note unless also dom hence favg favg dom dom last term negative assumption therefore favg favg favg completes proof technically realization policy selects item previously selected written function set partial realizations policy amended allowing partial realizations multisets elements played twice appears twice however interest readability avoid cumbersome multiset formalism abuse notation slightly calling policy issue arises whenever run policy run another fresh start adaptive data dependent bounds costs adaptive data dependent bound following generalization costs lemma adaptive data dependent bound costs suppose 
made observations selecting dom let policy adaptive monotone submodular max maxw proof order items dom arbitrarily consider policy dom order selects terminating proceeding otherwise succeed selecting dom without terminating occurs iff proceeds run fresh start forgetting observations construction expected marginal benefit running portion conditioned equals let probability selected running conditioned whenever dom selected current partial realization contains subrealization hence adaptive submodularity implies follows total contribution upper bounded summing dom get bound next pnote dom contributes cost hence must case obviously since probability hence setting feasible linear program optimal value show maxe consider feasible solution linear program defining attains objective value max max since feasibility simple greedy algorithm used compute provide pseudocode algorithm correctness algorithm readily discerned upon rewriting linear program using variables obtain max intuitively clear optimize shift mass towards variables highest ratio clearly optimal solution moreover optimal solution implies since otherwise would possible shift mass obtain increase objective value values distinct distinct items unique solution satisfying constraints algorithm compute otherwise imagine perturbing independent random quantities drawn uniformly make distinct changes thepoptimum value vanishes let tend towards zero hence solution satisfying implies optimal since algorithm outputs value solution correct input groundset partial realization costs budget conditional expected marginal benefits pall output maxw begin sort set null min output end algorithm algorithm compute data dependent bound lemma objective item costs adaptive stochastic maximization problem becomes one finding arg max favg budget cost selected items define favg randomized policy favg expectation internal randomness determines prove following generalization theorem theorem fix item costs adaptive monotone adaptive submodular 
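The linear program and the greedy routine described above amount to a fractional knapsack: sort items by benefit-to-cost ratio, fill the budget, and take a fractional amount of the last item. A minimal sketch (an illustration, not the paper's pseudocode verbatim), with `benefits[e]` standing in for the conditional expected marginal benefit Delta(e | psi) and `k` for the budget:

```python
def data_dependent_bound(benefits, costs, k):
    """Optimal value of: max sum_e w_e * benefits[e]
    subject to sum_e w_e * costs[e] <= k and 0 <= w_e <= 1,
    computed greedily in decreasing order of benefit/cost ratio."""
    order = sorted(benefits, key=lambda e: benefits[e] / costs[e], reverse=True)
    total, budget = 0.0, float(k)
    for e in order:
        if budget <= 0:
            break
        w = min(1.0, budget / costs[e])  # fractional amount of this item
        total += w * benefits[e]
        budget -= w * costs[e]
    return total
```

Optimality follows from the exchange argument in the text: any mass on a lower-ratio item could be shifted to a higher-ratio one without decreasing the objective.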
respect distribution greedy policy policies positive integers favg favg proof proof goes along lines performance analysis greedy algorithm maximizing submodular function subject cardinality constraint nemhauser extension analysis greedy algorithms analogous nonadaptive case shown goundan schulz brevity assume without loss generality favg favg favg favg favg first inequality due adaptive monotonicity lemma may infer favg favg second inequality may obtained corollary lemma follows fix partial realization form consider equals expected marginal benefit portion conditioned lemma allows bound max expectations taken internal randomness note since form know expectation taken internal randomness hence follows maxe definition greedy policy obtains least maxe expected marginal benefit per unit cost step immediately following observation next take appropriate convex combination different values let random partial realization distributed favg favg max favg favg simple rearrangement terms yields second inequality define favg favg implies infer last inequality hence used fact thus favg favg favg favg favg favg favg objective section provide arbitrary item cost generalizations theorem theorem item costs adaptive stochastic minimum cost cover problem becomes one finding quota utility arg min cavg cavg without loss generality may take truncated version namely min rephrase problem finding arg min cavg covers hereby recall covers expectation internal randomness consider problem remainder also consider worstcase variant problem replace expected cost cavg objective cost cwc definition coverage definition page requires modification handle item costs note however coverage sense covering realization probability less one count covering corollary items whose runs finished help coverage whereas currently running items simple example consider case policy selects terminates randomized policy probability empty policy probability hence even though half time covers realizations counted covering 
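To get a feel for the guarantee just proved, the following helper evaluates a bound of the form 1 - e^(-l/(alpha*k)) — the shape the theorem establishes for an alpha-approximate greedy policy run for cost l against an optimal policy of cost k; treating the constants exactly this way is an assumption made here for illustration.

```python
import math

def greedy_fraction(l, k, alpha=1.0):
    """Guaranteed fraction of the optimal expected reward (illustrative)."""
    return 1.0 - math.exp(-l / (alpha * k))
```

With l = k and alpha = 1 this recovers the familiar 1 - 1/e (about 0.632) guarantee of the unit-cost greedy analysis.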
begin claim relating pointwise submodularity strong adaptive submodularity lemma adaptive submodular respect pointwise submodular meaning submodular strongly adaptive submodular respect proof assumption adaptive submodular respect sufficient prove holds fix let definition dom dom used dom dom pointwise submodularity provide approximation guarantee policy cost arbitrary item costs theorem suppose strongly adaptive submodular strongly adaptive monotone respect exists let value implies let minimum probability realization let optimal policy minimizing expected cost items selected guarantee every realization covered let greedy policy respect item costs general cavg cavg instances cavg cavg note range valid choice general instances cavg cavg cavg cavg proof theorem first require additional definitions extend defining via dom dom let definition execution trace given policy realization let trace sequence partial realizations specifying set observations time dom dom consists precisely ith item selected definition execution arborescence policy define execution arborescence digraph dom vertices dom edges words vertices partial realizations may encountered directed edge may immediately encounter execution trace definition spanning edges sources targets successors given policy execution arborescence arcs positive let edges span expected benefit let sources edges targets finally successors sources minimal supersets interpreting partial realizations set item observation pairs formally definition concatenative pseudopolicies given two policies nonnegative define concatenative pseudopolicy stochastic process realization runs policy achieve expected reward formally reaching runs scratch hence realization play dom defined relative note proper policy decides switch executing based information infer current partial realization specifically upon reaching select exists select otherwise trategy overall strategy bound expected cost cavg bounding price pays per unit expected reward gained 
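The pointwise condition in the lemma above — that the objective is submodular as a plain set function for each fixed realization — can be verified by brute force on small examples. This checker is exponential in the ground set and is meant only for sanity-checking toy instances; it is not part of any algorithm in the text.

```python
from itertools import chain, combinations

def is_submodular(f, ground):
    """Brute-force check of diminishing returns:
    f(A + e) - f(A) >= f(B + e) - f(B) for all A subset of B and e not in B."""
    def subsets(s):
        s = list(s)
        return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))
    for B in subsets(ground):
        B = frozenset(B)
        for A in subsets(B):
            A = frozenset(A)
            for e in ground - B:
                if f(A | {e}) - f(A) < f(B | {e}) - f(B) - 1e-12:
                    return False
    return True
```

Coverage-style functions pass this check, while a convex function of the set size such as |S|^2 fails it.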
runs measured dom current partial realization integrating run bound price pays define alternative cost scheme policies bound price pays terms alternative cost optimal policy finally bound alternative cost terms actual cost rogress bounds first observe definition maximum possible reward holds since covers every realization since generate violation strong adaptive monotonicity fixing selecting selecting dom reduce expected reward bound useful far however require stronger bound close use slightly different analyses general instances instances begin general instances prove stronger bound obtaining final reward fix dom say covers covers time observes definition covered dom hence last item selects say upon observing must increase conditional expected value dom instances show using similar argument first argue last item selects must increase conditional expected value suppose currently observes achieved conditional value uncovered since instance every uncovered dom definition dom implies dom pper bounding price pays lternate osts next price pays terms alternative pricing scheme note cost may higher cavg eventually bound alternative cost terms true cost fix may played realization consider fix dom charge price reward case hence realization alternative cost dom dom dom dom taking expectation equivalent yields expected alternative cost brevity define arc note could multiple however notational convenience imagine creating separate copies partial realization assume without loss generality item unique partial realization plays one next arc define probability plays conditioned next consider cost expected alternative cost paid starting dom conditioned fixed note since alternative cost depends alternative cost well formally expected benefit running starting conditioning hence min min min hence min next note partitions set realizations define alternative cost note implicitly dependent ounding lternate osts next bound terms cavg problem parameters ultimately show general instances 
cavg instances cavg sufficient prove bound element element define prove general instances certifying instances simplify exposition define additional notation fix let let let let conditional expected benefit selecting concatenative pseudopolicy immediately upon seeing see definition construction partitions set realizations since unique edge execution arborescence policy passes expected reward trace policy runs furthermore conditioned pay taking expectation obtain next need subclaim relates recall definition definition lemma fix let proof fix claim strong adaptive submodularity see note partitions set realizations thus given assumption hence dividing yields completes proof recall wish using let set max lemma general instances instances proof lemma adaptive submodularity also hence hence note bound latter sum constructing unbiased estimate taking expectation estimate unbiased estimator based random trace sampled conditioned sampled probability probability seeing conditioned selected exactly hence next consider random sequence partial realizations denote assume without loss generality policy selects item give zero reward expectation let random variable adaptive submodularity also next note maximum possible reward minimum increment minimum probability realization hence implies using conclude exp hence hence general instances argument similar except minimum increment reward determined realization hence case utting together show obtain result general instances result instances strictly analogous max fact greedy policy assumption combining yields max expected price pays per unit expected reward instant reaches expected reward max max since partitions set realizations next apply lemma bound cavg follows cavg max max max completes proof general instances mentioned result instances strictly analogous merely different bounds used relate cavg aside proving lemma elegant proof present due feldman lemma suppose adaptive submodular strongly adaptive monotone respect exists fix 
policy covers every realization let defined proof theorem cavg proof recall use alternative charging scheme compute expected cost policy observes selects charge based outcome specifically charge proportionally gain expected reward measured dom formally upon observing selecting algorithm charged dom dom straightforward show using basic fact pevent union finitely many disjoint events random variable hence expectation charging scheme charges policy exactly cavg moreover rate charges upon observing selecting exactly per unit expected gain let charging scheme policy pays obtain additional expected reward given already true realization let partial realization execution trace immediately obtains reward expectation formally maximal element dom defined total alternative cost compute expected cost cavg expectation taken respect obtain claimed equality exchanging integral expectation operators cavg justify operator exchange several ways example apply tonelli theorem since prices always next consider cost generalize theorem incorporating arbitrary item costs theorem suppose adaptive monotone adaptive submodular respect let value implies let minimum probability realization let optimal policy minimizing cost cwc guaranteeing every realization covered let greedy policy respect item costs finally let maximum possible expected reward cwc cwc let apply proof let greedy policy let cwc theorem parameters yield favg favg favg since covers every realization assumption favg rearranging terms yields favg since favg favg adaptive monotonicity follows favg definition covered favg thus favg implies favg meaning covers every realization next claim cost sufficient show final item executed cost realization prove follows facts greedy policy covers every realization cost data dependent bound lemma page guarantees max suppose dom would like say maxe supposing true item cost must hence selected greedy policy upon observing thus final item executed cost realization next show maxe towards end note 
lemma implies max dom prove maxe suffices show dom proving quite straightforward strongly adaptive monotone given adaptive monotone requires additional effort fix let policy selects items arbitrary order let dom apply lemma obtain favg favg note since know averaging argument together implies since arbitrary partial realization dom arbitrary fix dom let dom settings implies thus maxe thus greedy policy never select item cost exceeding cwc hence cwc cwc cwc completes proof lemma fix adaptive monotone submodular objective policy dom proof augment new policy follows run completion let partial realization consisting states observed proceed select remaining items order otherwise terminate inequality repeated application adaptive monotonicity equality construction described result feige implies polynomial time approximation algorithm instances adaptive stochastic min cost cover unless dtime log log show related result general instances lemma every constant polynomial time approximation algorithm general instances adaptive stochastic min cost cover either average case objective cavg objective cwc unless dtime log log proof offer reduction set cover problem fix set cover instance ground set sets sets fix positive integers let set cost item one partition disjoint equally sized subsets construct realization let set states null hence null knowledge true realization revealed selecting items use uniform distribution realizations finally objective number elements cover sets since every realization consistent every possible partial realization hence dom dom objective function original set cover instance since submodular adaptive submodular likewise since monotone strongly adaptive monotone cover realization must obtain maximum possible value realizations means selecting collection sets conversely clearly covers hence instance adaptive stochastic min cost cover either average case objective cavg objective cwc equivalent original set cover instance therefore result feige implies 
polynomial time algorithm obtaining approximation adaptive stochastic min cost cover unless dtime log log objective section prove theorem appears page case items arbitrary costs proof resembles analogous proof streeter golovin submodular cover problem like proof ultimately derives extremely elegant performance analysis greedy algorithm set cover due feige objective function generalized arbitrary cost items uses strict place definition favg prove greedy policy achieves objective policies require following lemma lemma fix greedy policy adaptive monotone submodular function let favg favg policy nonnegative integers favg favg proof fix adaptive monotonicity favg favg next aim prove favg favg sufficient complete towards end fix partial realization form consider equals expected marginal benefit portion conditioned lemma allows bound max expectations taken internal randomness note expectation taken internal randomness hence follows maxe definition greedy policy obtains least max expected marginal benefit per unit cost step immediately following observation next take appropriate convex combination previous inequality different values let random partial realization distributed taking expectation yields favg favg favg favg multiplying substituting favg favg conclude ksi favg favg immediately yields concludes proof figure illustration inequality using lemma together geometric argument developed feige prove theorem proof theorem let maximum possible expected reward tation taken let greedy policy define favg define favg let let let favg claim favg favg clearly holds empty policy otherwise always select item contributes zero marginal benefit namely item already played previously hence greedy policy never expected marginal benefit select items negative favg favg lemma favg avg therefore similar reasons favg favg favg favg sequence adaptive monotonicity adaptive submodularity imply informally otherwise favg favg optimal policy must sacrificing immediate rewards time exchange 
greater returns later shown strategy optimal adaptive submodularity hold monotonicity imply see figure left hand bound right hand side simplifies pside lower proving favg proof approximation hardness absence adaptive submodularity provide proof theorem whose statement appears page proof theorem construct hard instance based following intuition make algorithm treasure hunting set locations treasure one locations algorithm gets unit reward finds zero reward otherwise maps consisting cluster bits purporting indicate treasure map stored weak way querying bits map reveals nothing says treasure moreover one maps fake puzzle indicating map correct one indicating treasure location formally fake map one probabilistically independent location treasure conditioned puzzle see definition page symbol table favg cavg cwc ground set items individual item states item may outcomes selecting item individual realization function items states partial realization typically encoding current set observations partial mapping items states random realization random partial realization respectively consistency relation means dom probability distribution realizations conditional distribution realizations policy maps partial realizations items set items selected run realization conditional expected marginal benefit conditioned dom dom conditional expected marginal benefit policy conditioned dom dom shorthand budget cost selected item sets truncated policy see definition page unit costs definition page strictly truncated policy see definition page laxly truncated policy see definition page policies concatenated together see definition page objective function type unless stated otherwise average benefit favg item costs extended sets via average cost policy cavg cost policy cwc cost policy favg conditional average policy cost approximation factor greedy optimization greedy policy benefit quota often coverage gap sup implies indicator proposition equals one true zero false table important symbols 
notations used article instance three types items encodes treasure encodes maps encodes puzzle specified outcomes binary identify items bit indices accordingly say value bit independently conditional distribution given deterministic specified objective function linear defined follows describe puzzle compute perm mod suitable random modqp matrix suitable prime integer perm permanent exploit theorem feige lund show exist constants randomized polynomial time algorithm compute perm mod mod correctly probability uniformly random prime superpolynomial encode puzzle fix prime use bits sample nearly uniformly random follows matrix let rep pij define base representation note rep matrices entries define inverse encoding interprets bits integer computes mod otherwise zero matrix latter event occurs probability case simply suppose algorithm consideration finds treasure gets unit reward adds expected reward let assume drawn uniformly aturandom next consider maps partition maps consisting items map partition items groups bits bits group encode bit binary string let xor binary strings interpreted integer using fixed encoding say points location treasure priori uniformly distributed particular realization define set location treasure realization label ensure note random variable distributed uniformly random note still holds condition realizations set items map case still least one group whose bits remain completely unobserved consider optimal policy budget items pick clearly reward however given budget computationally unconstrained policy exhaustively sample solve puzzle compute read correct map exhaustively sample decode map compute get treasure pick thereby obtaining reward one give upper bound expected reward randomized polynomial time algorithm budget items assuming fix small constant set suppose give realizations free also replace budget items budget specifically map items additional budget specifically treasure locations obviously help noted selects less bits map indicated 
distribution conditioned realizations still uniform course knowledge useless getting reward hence try maps attempt find note randomized algorithm given random drawn always outputs set integers size use construct randomized algorithm given outputs integer simply running first algorithm selecting random item find distribution treasure location uniform given knowledge hence budget treasure locations earn expected reward armed observations theorem work feige lund complexity theoretic assumptions infer since next note straightforward algebra shows order ensure suffices choose thus complexity theoretic assumptions polynomial time randomized algorithm budget achieves value obtained optimal policy budget approximation ratio
Simultaneous Orthogonal Planarity

Patrizio Angelini, Steven Chaplick, Sabine Cornelsen, Giordano Da Lozzo, Giuseppe Di Battista, Peter Eades, Philipp Kindermann, Jan Kratochvíl, Fabian Lipp, and Ignaz Rutter

Abstract. We introduce and study the OrthoSEFE-k problem: given k planar graphs with maximum degree 4 on the same vertex set, do they admit an OrthoSEFE, that is, an assignment of the vertices to grid points and of the edges to paths on the grid, such that edges of distinct graphs are assigned distinct paths, except that each edge shared by several graphs is assigned the same path in all of them, and such that the assignment induces a planar orthogonal drawing of each of the k graphs? We show that the problem is NP-complete for k >= 3 even if the shared graph is a Hamiltonian cycle and the instance has sunflower intersection, and for k >= 2 even if the shared graph consists of a cycle and of isolated vertices; whereas the problem is polynomial-time solvable for k = 2 when the union graph has maximum degree five and the shared graph is biconnected. Further, when the shared graph is biconnected and the intersection is sunflower, we show that every positive instance has an OrthoSEFE with at most three bends per edge.

(This research was initiated at the Bertinoro Workshop on Graph Drawing. Research partially supported by DFG grants, by MIUR project AMANDA, and by a Czech Science Foundation (GACR) grant.)

1 Introduction

The input of a simultaneous embedding problem consists of several graphs on the same vertex set together with a fixed drawing style; the problem asks whether there exist drawings of the graphs in this drawing style whose restrictions to the shared parts coincide. The problem has been widely studied in the setting of topological planar drawings, where vertices are represented by points and edges by pairwise internally disjoint Jordan arcs connecting their endpoints; in this setting the problem is called Simultaneous Embedding with Fixed Edges (SEFE-k for short, where k is the number of input graphs). SEFE-k is known to be NP-complete for k >= 3, even in the restricted case of sunflower instances, in which every pair of graphs shares the same set of edges, and even if this set induces a star. On the other hand, the complexity of SEFE-2 is still open. Recently, efficient algorithms for restricted instances of SEFE-2 have been presented, namely when the shared graph is (i) biconnected, (ii) a collection of disjoint cycles, or (iii) such that every connected component is either subcubic or biconnected, and when the shared graph is connected and the input graphs have bounded maximum degree; see the survey for an overview. For planar straight-line drawings, the simultaneous embedding problem is called Simultaneous Geometric Embedding and is known to be NP-hard even for two graphs. Besides these, the simultaneous intersection
representation interval graphs permutation chordal graphs recently simultaneous embedding paradigm applied fundamental drawing styles namely simultaneous level planar drawings rac drawings continue line research studying simultaneous embeddings planar orthogonal drawing style vertices assigned grid points edges paths grid connecting endpoints accordance existing naming scheme define problem testing whether input graphs admit simultaneous planar orthogonal drawing drawing exists call orthosefe note necessary condition maximum degree order obtain planar orthogonal drawings hence remainder paper assume instances property instances property least shared graph connected problem solved efficiently however instances admit sefe orthosefe see fig unless mentioned otherwise instances consider sunflower notice instances always sunflower let instance define shared graph resp union graph graph resp vertex set whose edge set intersection resp union ones also call edges shared edges call edges exclusive edges definitions shared graph shared edges exclusive edges naturally extend sunflower instances value one main issue decide vertices shared graph represented note planar topological drawings vertices require decisions exists single cyclic order incident edges case orthogonal drawings however two choices vertex either drawn straight incident two angles fig negative instance shared edges black exclusive edges red blue red edges require angles different sides thus blue edge drawn note given drawing examples side assignments exclusive edges incident vertices orthogonality constraints satisfied violated bent incident one angle one angle vertex shared graph neighbors two exclusive edges say incident embedded side path uvw must bent turn implies also every exclusive edge incident embedded side uvw way two input graphs interact via vertices difficulty controlling interaction marks main difference study interaction isolation focus instances shared graph cycle paper note instances trivial 
provided input graphs planar contributions outline section provide notation show existence orthosefe instance described combinatorial embedding problem section show even shared graph cycle even shared graph consists cycle plus isolated vertices contrasts situation cases polynomially solvable section show efficiently solvable shared graph cycle union graph maximum degree finally section extend result case shared graph biconnected union graph still maximum degree moreover show positive instance whose shared graph biconnected admits orthosefe three bends per edge close concluding remarks open questions section full proofs found appendix preliminaries extensively make use problem instance consists formula variables clauses task find nae truth signment truth assignment clause contains true false literal known graph bipartite graph whose vertices variables clauses whose edges represent membership variable clause problem planar restriction instances whose clause graph planar planar solved efficiently embedding constraints let instance sefe collection embeddings restrictions note literature sefe often defined collection drawings rather collection embeddings however two definitions equivalent sefe realizable orthosefe needs satisfy two additional conditions first let vertex degree neighbors embedding exist two exclusive edges incident embedded side path uvw exclusive edge incident must embedded side path uvw second let vertex degree exclusive edges incident must appear two edges around call orthogonality constraints see fig theorem instance orthosefe admits sefe satisfying orthogonality constraints case shared graph cycle give simpler version constraints theorem prove useful remainder paper jordan curve theorem planar drawing cycle divides plane bounded unbounded region inside outside call sides problem assign exclusive edges either two sides following two conditions fulfilled planarity constraints two exclusive edges graph must drawn different sides endvertices alternate 
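The hardness reductions in the next section are from the positive Not-All-Equal variant of satisfiability described in the preliminaries. For cross-checking small gadget constructions, a brute-force NAE-satisfiability test (clauses as tuples of variable indices, all literals positive, as in the Positive-NAE3SAT variant) might look like:

```python
from itertools import product

def nae_satisfiable(clauses, n_vars):
    """Brute-force check: an assignment is NAE-satisfying iff every
    clause contains both a true and a false literal."""
    for assign in product([False, True], repeat=n_vars):
        if all(any(assign[v] for v in cl) and not all(assign[v] for v in cl)
               for cl in clauses):
            return True
    return False
```

This is exponential in the number of variables and intended only for verifying toy instances of the reduction, not as an algorithm.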
along orthogonality constraints let vertex adjacent two exclusive edges graph side exclusive edges incident graphs must side note reformulation general orthogonality constraints orthogonality constraints also imply different sides graph contains two exclusive edges incident must different sides next theorem follows theorem following two observations first sunflower instance whose shared graph cycle collection embeddings sefe second planarity constraints necessary sufficient existence embedding theorem instance whose shared graph cycle orthosefe exists assignment exclusive edges two sides satisfying planarity orthogonality constraints yaj ybj ycj sji uji vij wij zij rij tji fig clause gadget top gadget vij bottom solid edges belong gadgets dotted edges optional dashed edges transmission edges illustration instance focused clause black edges belong shared graph red blue green edges exclusive edges respectively hardness results show instances sunflower intersection even shared graph cycle even shared graph consists cycle isolated vertices theorem even instances sunflower intersection shared graph cycle input graphs outerplanar maximum degree proof sketch membership directly follows theorem prove show reduction problem positive variant clause consists exactly three unnegated literals let variables let clauses formula positive show construct equivalent instance outerplanar graphs maximum degree refer exclusive edges red blue green respectively refer fig clause create clause gadget fig top variable clause create gadget vij fig bottom observe dotted green edge wij rij gadget part vij occur otherwise green edge wij yxj connecting wij one three vertices yaj ybj ycj dashed stubs clause gadget observe three edges per clause realized way exist planarity constraints pairs fig gadgets incident edges contain edges respectively gadgets ordered indicated fig gadgets vij always precede clause gadget odd gadgets vnj appear order otherwise appear reversed order vnj finally vij 
connected edge wij blue odd red even call edges transmission edges assume admits orthosefe planarity constraints orthogonality constraints guarantee three properties edge uji vij due inside fact planarity constraints two green edges incident wij lie side hence orthogonality constraints two transmission edges incident wij also lie side call truth edge variable three green edges lie side namely two red edges clause gadget must lie opposite sides interplay planarity orthogonality constraints subgraph induced vertices hence edges lie side orthogonality constraints either satisfied iii clause edge lies side truth edge due planarity constraints two edges edge waj yaj analogously edge edge lies side truth edge hence setting true false truth edge inside outside yields truth assignment satisfies proof direction based fact assigning truth edges either two sides according assignment also implies unique side assignment remaining exclusive edges satisfies orthogonality planarity constraints easy see outerplanar graphs maximum degree reduction extended following describe modify construction theorem show hardness keep edges gadgets clause gadgets remain composed edges belonging two graphs replace transmission edge transmission path composed alternating green red edges starting ending red edge transformation allows paths traverse transmission edges edges without introducing crossings edges color easy see properties described proof theorem assignments exclusive edges two sides also hold constructed instance transmission paths take role transmission edges theorem even instances shared graph consists cycle set isolated vertices fig instance satisfying properties lemma edges belonging components different line styles graph polygons components graph shared graph cycle section give algorithm instances whose shared graph cycle whose union graph maximum degree theorem order obtain result present efficient algorithm restricted instances lemma give series transformations lemma reduce 
instance properties one solved algorithm lemma lemma instances shared graph cycle outerplanar graph maximum degree proof algorithm based reduction planar first note since outerplanar exist two edges alternating along hence planarity constraints define auxiliary graph vertex set edges corresponding pairs edges alternating along see fig may assume bipartite since would meet planarity constraints otherwise let set connected components component fix partition independent sets possibly case singleton note assignment exclusive edges meets planarity constraints every edges lie one side edges lie side draw cycle circle plane component let polygon inscribed whose corners endvertices edges corresponding vertices refer fig contains one vertex one edge consider digon segment connecting vertices edge least two vertices let open along sides contain corners inner points fig depict making sides slightly concave one easily show two components polygons may share corners inner points hence graph obtained placing vertex inside polygon making adjacent corner adding edges planar see fig construct formula variables naesatisfiable admits assignment meeting planarity orthogonality constraints encoding truth assignment true edges inside edges outside false reverse holds every assignment satisfying planarity constraints defines sense let exclusive edge let exclusive edges incident respectively assume four edges exist cases simpler let component containing edge eiu define literal eiu eiu interpretation truth assignment edge eiu inside true assignment meet orthogonality constraints say true must assigned inside well would cause problem false hence orthogonality constraints described clauses reduce introduce new variable edge replace clause two clauses planar drawing graph see figs resulting formula obtained planar drawing placing variable point vertex lies placing variable point edge iii placing clauses edge points vertices lie respectively drawing edges corresponding edges implies planar 
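The first step of the reduction in the lemma above can be sketched in code: build the auxiliary graph whose vertices are the exclusive edges of one input graph, with an auxiliary edge between every pair of exclusive edges that alternate along the shared cycle, and 2-color each connected component; if some component is not bipartite, no side assignment can meet the planarity constraints. This is a minimal sketch (not the paper's implementation) under assumed representations: the shared cycle is the position sequence 0..n-1, and an exclusive edge is a pair of positions on it.

```python
from collections import deque
from itertools import combinations

def alternates(e, f):
    # Chords (a, b) and (c, d) of the cycle alternate iff exactly one of
    # c, d lies strictly between a and b along the cycle.
    a, b = sorted(e)
    c, d = sorted(f)
    return (a < c < b) != (a < d < b)

def two_color_components(exclusive_edges):
    """Build the auxiliary graph on the exclusive edges, with an auxiliary
    edge between each alternating pair, and 2-color its components via BFS.
    Returns (components, color), or None if some component is not bipartite,
    in which case no side assignment meets the planarity constraints."""
    adj = {e: [] for e in exclusive_edges}
    for e, f in combinations(exclusive_edges, 2):
        if alternates(e, f):
            adj[e].append(f)
            adj[f].append(e)
    color, components = {}, []
    for start in exclusive_edges:
        if start in color:
            continue
        color[start] = 0
        component, queue = [start], deque([start])
        while queue:
            e = queue.popleft()
            for f in adj[e]:
                if f not in color:
                    color[f] = 1 - color[e]
                    component.append(f)
                    queue.append(f)
                elif color[f] == color[e]:
                    return None  # odd cycle of alternations: not bipartite
        components.append(component)
    return components, color
```

Each bipartition class of a component corresponds to one of the two sides, and flipping a whole component swaps the sides; this is what allows each component to be treated as a single variable of the NAE formula constructed in the proof.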
hence test polynomial time next two lemmas show use lemma test polynomial time instance cycle vertex degree either lemma let instance whose shared graph cycle maximum degree possible construct polynomial time equivalent instance whose shared graph cycle outerplanar maximum degree proof sketch construct equivalent instance cycle maximum degree number pairs edges alternate along smaller number pairs edges alternate along repeatedly applying transformation yields equivalent instance satisfying requirements lemma consider two edges appear order along cycle path contains minimal length outerplanar edges always exist fig illustrates construction eav eaw ebw eaw ebv eav ebw ebv fig instances left right proof lemma edges black exclusive edges red blue fig illustration transformation proof lemma reduce number vertices incident two exclusive edges edges right take role edges left respectively thus orthogonality constraints equivalent choice fact maximum degree exclusive edge one endpoint set vertices one observe orthosefe edges edges must side must different sides concluded orthosefe orthosefe proof next lemma based replacement illustrated fig afterwards combine results present main result section lemma let instance whose shared graph cycle whose union graph maximum degree possible construct polynomial time equivalent instance whose shared graph cycle graph maximum degree theorem solved polynomial time instances whose shared graph cycle whose union graph maximum degree shared graph biconnected study instances whose shared graph biconnected theorem give turing reduction instances whose shared graph biconnected instances whose shared graph cycle theorem give algorithm given positive instance shared graph biconnected together sefe satisfying orthogonality constraints constructs orthosefe three bends per edge start turing reduction develop algorithm takes input instance whose shared graph biconnected produces set instances whose shared graphs cycles output positive instance 
instances positive reduction based sefe testing algorithm instances whose shared graph biconnected seen generalized unrooted version one angelini first describe preprocessing step afterwards give outline approach present turing reduction two steps assume familiarity formal definitions see appendix lemma let instance whose shared graph biconnected possible construct polynomial time equivalent instance whose shared graph biconnected endpoint exclusive edge degree shared graph continue brief outline algorithm first algorithm computes shared graph avoid special cases augmented adding two virtual edges adjacent necessary conditions embeddings fixed flip following necessary conditions afterwards traversing global formula produced whose satisfying assignments correspond choices flips result sefe refine approach show choose flips independently allows reduce separate instance whose shared graph cycle describe algorithm detail consider node part skel either vertex skel virtual edge skel represents subgraph exclusive edge attachment part skel vertex endpoint virtual edge whose corresponding subgraph contains endpoint exclusive edge important endpoints different parts skel hard see obtain sefe embedding skeleton skel node chosen exclusive edge parts containing attachments share face shown embedding choice satisfies conditions possibly flipping used obtain sefe theorem proof modify order exclusive edges around vertices therefore applies well let let virtual edge skel subgraph represented corresponding neighbor attachment respect interior vertex incident important edge attachment skel fig skeleton corresponding virtual edge expanded show skeleton replacing cycle replacing path vertices green boxes necessary condition embedding attachment respect must incident face incident virtual edge twin skel representing clockwise circular order together poles fixed reversal lemma purpose avoiding crossings skel thus replace virtual edge represent cycle containing attachments respect poles 
order keep important edges altogether results instance sefe modeling requirements skel see figs lemma let instance whose shared graph biconnected admits orthosefe instances admit orthosefe next transform given instance equivalent instance whose shared graph cycle let cycles corresponding neighbor obtain instance replace cycle poles path first contains two special vertices followed clockwise path excluding endpoints four special vertices counterclockwise path excluding endpoints finally two special vertices followed addition existing exclusive edges note remove vertices add exclusive edges exclusive edges see fig reduction together next lemma implies main result lemma admits orthosefe theorem shared graph biconnected polynomialtime turing reducible shared graph cycle also reduction increase maximum degree union graph corollary solved polynomial time instances whose shared graph biconnected whose union graph maximum degree around around around fig constructing drawing three bends per edge observe previous results hard also obtain sefe satisfying orthogonality constraints exists show construct orthogonal geometric realizations sefe theorem let positive instance whose shared graph biconnected exists orthosefe every edge three bends proof sketch assume sefe satisfying orthogonality constraints given adopt method biedl kant draw vertices increasing respect shared graph choose face left outer face union graph edges bend near incident vertices drawn vertically otherwise fig indicates ports assigned make sure edge may leave vertex bottom incident neighbor lower index thus exactly three bends edge one bend around two bends around conclusions future work work introduced studied problem realizing sefe orthogonal drawing style problem already even instances efficiently tested sefe presented testing algorithm instances consisting two graphs whose shared graph biconnected whose union graph maximum degree also shown positive instance whose shared graph biconnected realized three 
bends per edge. We conclude the paper by presenting a lemma that, together with the theorem above, shows that it suffices to focus on a restricted family of instances in order to solve the problem on instances whose shared graph is biconnected.

Lemma. Let I be an instance whose shared graph is a cycle. An equivalent instance in which (i) the shared graph is a cycle, (ii) each input graph is outerplanar, and (iii) no two vertices each incident to two exclusive edges are adjacent can be constructed in polynomial time.

References

Angelini, P., Di Battista, G., Frati, F., Patrignani, M., Rutter, I.: Testing the simultaneous embeddability of two graphs whose intersection is a biconnected or a connected graph. J. Discrete Algorithms.
Angelini, P., Da Lozzo, G., Di Battista, G., Frati, F., Patrignani, M., Rutter, I.: Beyond level planarity. In: Graph Drawing (GD). LNCS, Springer, to appear.
Angelini, P., Da Lozzo, G., Neuwirth, D.: Advancements on SEFE and partitioned book embedding problems. Theor. Comput. Sci.
Argyriou, E.N., Bekos, M.A., Kaufmann, M., Symvonis, A.: Geometric RAC simultaneous drawings of graphs. J. Graph Algorithms Appl.
Auslander, L., Parter, S.V.: On imbedding graphs in the sphere. J. Math. Mech.
Di Battista, G., Tamassia, R.: On-line maintenance of triconnected components with SPQR-trees. Algorithmica.
Di Battista, G., Tamassia, R.: On-line planarity testing. SIAM J. Comput.
Bekos, M.A., van Dijk, T.C., Kindermann, P., Wolff, A.: Simultaneous drawing of planar graphs with right-angle crossings and few bends. J. Graph Algorithms Appl.
Biedl, T., Kant, G.: A better heuristic for orthogonal graph drawings. Comput. Geom.
Bläsius, T., Karrer, A., Rutter, I.: Simultaneous embedding: edge orderings, relative positions, cutvertices. In: Wismath, S., Wolff, A. (eds.) GD. LNCS, Springer.
Bläsius, T., Karrer, A., Rutter, I.: Simultaneous embedding: edge orderings, relative positions, cutvertices. arXiv.
Bläsius, T., Kobourov, S.G., Rutter, I.: Simultaneous embedding of planar graphs. In: Tamassia, R. (ed.) Handbook of Graph Drawing and Visualization. CRC Press.
Bläsius, T., Rutter, I.: Disconnectivity and relative positions in simultaneous embeddings. Comput. Geom.
Bläsius, T., Rutter, I.: Simultaneous PQ-ordering with applications to constrained embedding problems. ACM Trans. Algorithms.
In: Brandes, Eager, Raman (eds.) ESA. LNCS, Springer.
Di Battista, G., Tamassia, R.: On-line graph algorithms with SPQR-trees. In: Paterson, M. (ed.) ICALP. LNCS, Springer.
Estrella-Balderrama, A., Gassner, E., Jünger, M., Percan, M., Schaefer, M., Schulz, M.: Simultaneous geometric graph embeddings. In: Hong, S.-H., Nishizeki, T., Quan, W. (eds.) GD. LNCS, Springer.
Gutwenger, C., Mutzel, P.: A linear time implementation of SPQR-trees. In: Marks, J. (ed.) GD. LNCS, Springer.
Haeupler, B., Jampani, K.R., Lubiw, A.: Testing simultaneous planarity when the common graph is 2-connected. J. Graph Algorithms Appl.
Jampani, K.R., Lubiw, A.: Simultaneous interval graphs. In: Cheong, O., Chwa, K.-Y., Park, K. (eds.) ISAAC. LNCS.
Jampani, K.R., Lubiw, A.: The simultaneous representation problem for chordal, comparability and permutation graphs. J. Graph Algorithms Appl.
Jünger, M., Schulz, M.: Intersection graphs in simultaneous embedding with fixed edges. J. Graph Algorithms Appl.
Moret, B.M.E.: Planar NAE3SAT is in P. ACM SIGACT News.
Moret, B.M.E.: The Theory of Computation.
Papadimitriou, C.H.: Computational Complexity.
Schaefer, M.: Toward a theory of planarity: Hanani-Tutte and planarity variants. J. Graph Algorithms Appl.
Schaefer, T.J.: The complexity of satisfiability problems. In: Lipton, R.J., Burkhard, W.A., Savitch, W.J., Friedman, E.P., Aho, A.V. (eds.) Proceedings of the Annual ACM Symposium on Theory of Computing (STOC). ACM.
Shih, W.-K., Wu, S., Kuo, Y.-S.: Unifying maximum cut and minimum cut of a planar graph. IEEE Trans. Computers.
Tamassia, R.: On embedding a graph in the grid with the minimum number of bends. SIAM J. Comput.

Appendix: Definitions

As already discussed, we assign the exclusive edges to either of the two sides of the cycle. We formalise such an assignment by means of a function that maps each exclusive edge to L (resp. R) if the edge lies to the left (resp. right) of the cycle according to an arbitrary but fixed orientation of it.

Connectivity. A graph is connected if there is a path between every two of its vertices. A cutvertex is a vertex whose removal disconnects the graph; a separating pair is a pair of vertices whose removal disconnects the graph. A connected graph is biconnected if it has no cutvertex, and a biconnected graph is triconnected if it has no separating pair.

SPQR-trees. Consider graphs with two special pole vertices. The family of such st-graphs is constructed in a fashion similar to series-parallel graphs. Namely, an edge st is an st-graph with poles s and t. Given st-graphs with their poles, and a planar graph H with two designated adjacent vertices s and t whose other edges, called virtual edges, form the skeleton of the composition (with st as the parent edge), the composition is obtained by removing the parent edge and replacing each virtual edge by the corresponding st-graph, removing the virtual edge and identifying the poles of the st-graph with its endpoints. In fact, we allow three types of compositions: in a series composition the skeleton is a cycle, in a parallel composition it consists of two vertices connected by parallel edges, and in a rigid composition it is a triconnected planar graph. Every biconnected planar graph with an edge st is an st-graph with poles s and t, much in the way of series-parallel graphs, and it gives rise to a composition tree describing how it is obtained from single edges; its nodes correspond to single edges and to series, parallel, and rigid compositions, respectively. To obtain the composition tree, we add an additional root Q-node representing the edge st. We associate with each node the skeleton of its composition, denoted skel. The skeleton of a Q-node
consists two endpoints edge represented one real one virtual edge representing rest graph node pertinent graph pert subgraph represented subtree root virtual edge skeleton skel expansion graph pertinent graph pert neighbor corresponding considering rooted respect edge originally introduced battista tamassia unique smallest decomposition tree using different edge composition corresponds rerooting node representing thus makes sense say size linear computed linear time planar embeddings correspond bijectively planar embeddings skeletons choices orderings parallel edges embeddings skeletons unique flip considering rooted assume embedding root edge incident outer face equivalent parent edge incident outer face skeleton remark planar embedding poles node incident outer face pert hence following consider embeddings pertinent graphs poles lying face omitted sketched proofs section theorem instance orthosefe admits sefe satisfying orthogonality constraints proof part let embedding determined sefe observe orthogonality constraints vertex define whether degree vertex drawn straight bent face incident degree vertex assigned angle hard see planar orthogonal drawing embedding satisfying requirements constructed draw exclusive edges orthogonal polylines inside face determined sefe fact exclusive edges drawn without introducing crossings descends fact planar embedding part let vertex orthogonality constraints satisfied exactly two neighbors need assign port two exclusive edges graph one edges one side path uvw port least one exclusive edge side path uvw degree need assign port exclusive edge pair edges port exclusive edge different pair edges hence cases need least five ports possible grid omitted sketched proofs section theorem even instances sunflower intersection shared graph cycle outerplanar graphs maximum degree proof membership directly follows theorem since assignment certificate problem easily verified polynomial time satisfy planarity orthogonality constraints prove 
problem show reduction npcomplete problem positive variant clause consists exactly three unnegated literals see fig let variables let clauses formula positive show construct equivalent instance refer fig assume without loss generality literals clause xja xjb xjc odd otherwise gadget vij variable belonging clause subgraph defined follows gadget vji contains path sji uji wij vij zij rij tji belonging edges uji vij wij zij belonging see fig clause gadget clause subgraph defined follows gadget contains path yaj ybj ycj belonging edges belonging edges belonging see fig initialize union vij identify vertex tji vertex odd identify vertex vertex sji otherwise identify vertex tjn vertex vertex vertex odd identify vertex vertex vertex vertex otherwise complete construction add exclusive edges follows add edge wij odd otherwise call edges transmission edges add edge wij yij edge wij rij otherwise clearly construction instance completed polynomial time graph cycle already observed also transmission edges alternate along since gadgets appear along order vnj odd order vnj otherwise also transmission edge alternates edges edges alternate construction hence outerplanar fact maximum degree also directly follows construction given positive instance positive show positive instance given satisfying truth assignment true false denotes set variables construct assignment exclusive edges two sides satisfying planarity orthogonality constraints set exclusive edge incident wij true otherwise set uji vij true uji vij otherwise clause xja xjb xjc set false otherwise set false otherwise set false otherwise finally clause xja xjb xjc consider literal otherwise let suppose set set false set set otherwise suppose set set false set set otherwise show satisfies planarity constraints first observe planarity constraints edges trivially satisfied since outerplanar edges pairs edges alternate along uji vij wij zij pairs waj yaj wbj ybj wcj ycj edges incident dummy vertices however easy verify assigns 
alternating edges different sides show satisfies orthogonality constraints every vertex vertices except wij true since one incident exclusive edge vertices wij true since edges incident wij assigned side construction vertex distinguish two cases based whether exists case let without loss generality case shown analogously construction hence orthogonality constraints satisfied prove also satisfied suffices show two edges incident assigned different sides given degree degree namely due fact truth assignment hence second case hence since vertices degree degree degree implies orthogonality constraints satisfied suppose positive instance let corresponding assignment exclusive edges sides show construct truth assignment satisfies set true start proving edges incident wij assigned side observe two edges incident wij alternate edge uji vij along hence assigned side planarity constraints hence orthogonality constraints wij exclusive edges incident wij lie side wij zij since two vertices wij connected transmission edge either statement follows property allows focus clause separately let xja xjb xjc clause show xja xjb xjc hold first show namely planarity constraints orthogonality constraints statement follows second hold since orthogonality constraints implies waj yaj wbj ybj wcj ycj hold hence waj zaj wbj zbj wcj zcj hold since edges incident wij assigned side concludes proof xja xjb xjc hold easy see reduction performed polynomial time extended subdividing two edges additional graph introducing exclusive edge vertices belonging omitted sketched proofs section lemma let instance cycle maximum degree possible construct polynomial time equivalent instance cycle outerplanar graph maximum degree proof describe construct equivalent instance cycle maximum degree number pairs edges alternate along smaller number pairs edges alternate along note repeatedly performing transformation eventually yields equivalent instance satisfying requirements lemma consider two edges appear order 
along cycle path contains minimal length outerplanar edges always exist initialize replace path path follows refer fig let sets vertices path contains vertices dummy vertex vertices dummy vertex eav eaw ebw eaw ebv eav ebw ebv fig instances proof lemma edges shared graph black exclusive edges red blue three dummy vertices four dummy vertices three dummy vertices three dummy vertices vertices finally note contains vertices plus set dummy vertices describe exclusive edges initialize add edges also add edges finally replace edge incident edge edge incident edge proving statement observe important property used following namely exists exclusive edge hence endpoint one fact exists edge connecting vertex since vertices already incident edges respectively since maximum degree also exists exclusive edge connecting vertex vertex since case would alternate hence contradicting minimality path finally existence exclusive edge connecting vertex vertex would immediately make instance negative since would planar prove satisfies required properties first graph cycle construction second maximum degree since every vertex incident edges dummy vertices degree iii dummy vertices degree third number pairs alternating edges smaller fact edge alternate edge since incident exclusive edge edge alternate edge since exists exclusive edge endpoint one iii pairs edges alternate along also alternate along except edges alternate along along prove equivalent suppose admits orthosefe theorem determines assignment exclusive edges two sides satisfying planarity orthogonality constraints show construct assignment exclusive edges two sides satisfying constraints exclusive edge set also set exclusive edge set also edge resp incident resp set resp set set planarity constraints edges satisfied since pair edges alternate along also alternate since assignment construction prove planarity constraints edges satisfied edges incident dummy vertex true reason edges edge incident true since since alternates edge 
along edge alternates edge along incident analogous arguments hold edge incident finally fact planarity constraints edge incident two dummy vertices satisfied easily verified recall prove orthogonality constraints satisfied every vertex vertices true since satisfied since every exclusive edge incident vertices construction vertex true since vertex true since vertex true since vertex true since vertex assume exist two exclusive edges eaw ebw incident case exists one none trivial since eaw eaw ebw ebw since orthogonality constraints satisfied orthogonality constraints satisfied analogously orthogonality constraints edges eav ebv edge satisfied since constraints edges eav ebv satisfied since vertices degree concludes proof satisfies orthogonality constraints suppose admits orthosefe let corresponding assignment exclusive edges two sides show construct assignment exclusive edges two sides satisfying planarity orthogonality constraints exclusive edge set exclusive edge set also edge resp incident resp set resp prove planarity constraints edges satisfied pair exclusive edges true since alternate along alternate along construction pair true following reason planarity constraints hence orthogonality constraints vertex analogously since planarity constraints hence prove planarity constraints edges satisfied pair exclusive edges neither incident either true since alternate along alternate along construction edge incident true since since alternates edge along edge alternates edge along incident analogous arguments hold edge incident prove orthogonality constraints satisfied every vertex vertices true since satisfied since every exclusive edge incident vertices construction order prove constraints satisfied also first argue namely planarity constraints hence similarly hence using longer chain alternating edges get thus finally orthogonality constraints get since conclude equality follows symmetrically prove orthogonality constraints satisfied vertex assume exist two exclusive 
edges eaw ebw incident case exists one none trivial since eaw eaw ebw ebw since orthogonality constraints eaw ebw satisfied orthogonality constraints eaw ebw satisfied analogously orthogonality constraints edges eav ebv edge satisfied since constraints eav ebv satisfied concludes proof lemma lemma let instance cycle vertex degree either possible construct polynomial time equivalent instance cycle graph maximum degree proof describe construct equivalent instance cycle vertex degree either fig instances proof lemma edges shared graph black exclusive edges red blue number vertices smaller number vertices note repeatedly performing transformation eventually yields equivalent instance satisfying requirements lemma consider vertex exists two edges incident assume without loss generality appear order along suppose exists edge incident case simpler describe construction case vertices appear order along cases analogous initialize refer fig replace path composed dummy vertices describe exclusive edges set contains exclusive edges incident also contains edges finally contains edges edges prove satisfies required properties first graph cycle construction second degree vertices resp resp dummy vertices degree hence every vertex degree either also number vertices smaller number vertices since prove equivalent suppose admits orthosefe let corresponding assignment exclusive edges two sides exists theorem show construct assignment exclusive edges two sides satisfying constraints exclusive edge incident set also set finally set set set prove planarity constraints edges satisfied note construction edge alternate edge along also edges alternate along edge edge alternates edge along edge edge alternates along finally two edges incident dummy vertex alternate along also alternate along described cases planarity constraints satisfied since satisfied prove planarity constraints edges satisfied note construction edges alternate edge incident dummy vertex along easy verify satisfies 
planarity constraints among edges also edge alternates construction edge alternates edge along edge alternates along finally two edges incident dummy vertex alternate along also alternate along cases planarity constraints satisfied since satisfied prove orthogonality constraints satisfied every vertex vertices true since satisfied since edges incident assignment vertices true since degree vertex true since similar arguments apply vertices finally vertex true since incident iii satisfies orthogonality constraints concludes proof satisfies orthogonality constraints suppose admits orthosefe let corresponding assignment exclusive edges two sides show construct assignment exclusive edges two sides satisfying planarity orthogonality constraints exclusive edge incident set also set prove planarity constraints edges satisfied consider pair edges graph alternate along none incident also alternate along hence planarity constraints satisfied since satisfied otherwise assume incident note incident since alternate along edge edge edge alternates along hence planarity constraints edges satisfied since satisfied finally prove orthogonality constraints satisfied vertices true since satisfied since every exclusive edge incident vertices construction vertex true since edge incident different holds since since orthogonality constraints satisfied analogous arguments hold vertices prove true first argue namely planarity constraints get since alternate hence orthogonality constraints get since construction conclude equalities follow symmetrically hence orthogonality constraints satisfied since satisfied concludes proof theorem solved polynomial time instances whose shared graph cycle whose union graph maximum degree proof first apply lemma obtain equivalent instance cycle graph maximum degree apply lemma obtain equivalent instance cycle outerplanar graph maximum degree finally apply lemma test polynomial time whether hence positive instance omitted sketched proofs section lemma let 
instance whose shared graph biconnected possible construct polynomial time equivalent instance whose shared graph biconnected endpoint exclusive edge degree shared graph proof start simplification step removes certain edges exclusive edge edge endpoints adjacent skeleton shared graph neither incident exclusive edges isolated let isolated edges respectively claim instance admits orthosefe part clear since simply remove isolated edges orthosefe obtain orthosefe conversely angelini show edges reinserted sefe thus also orthosefe without crossings planarity constraints satisfied since edges isolated also orthogonal constraints trivially satisfied obtain orthosefe finishes proof claim fig moving exclusive edges vertex degree shared graph new vertex degree shared graph following assume preprocessed way hence contain isolated edges consider exclusive edge say degree shared graph assume edge incident every orthosefe edge embedded face incident describe determine edge later perform following transformation subdivide three vertices add edge replace also exists unique exclusive edge see fig call resulting stance difficult see admits orthosefe admits orthosefe contract vertices onto obtain orthosefe note orthogonal constraint satisfied since triangle ensures exclusive edges incident embedded face hence conversely given orthosefe due orthogonality constraints exclusive edges incident embedded face hence replacement carried locally without creating crossings note transformation fewer endpoints exclusive edges degree shared graph iteratively apply transformation obtain instance remains show always exists suitable edge let denote shared graph since degree exactly one node whose skeleton contains degree skel note either first assume consider position inside skel either vertex skel contained virtual edge skel since share two faces incident virtual edge incident choose unique edge incident contained subgraph represented second assume endpoint pole contained virtual edge skel proceed 
previous case see fig assume vertex skel edge since isolated due simplification step beginning exists exclusive edge incident since edge endpoint contained subgraph represented virtual edge skel follows every planar embedding edge embedded face incident orthogonality constraints vertex shared also embedded face incident orthosefe thus choose unique edge incident contained subgraph represented lemma let instance shared graph biconnected admits orthosefe instances admit orthosefe proof hard see obtained removing vertices edges suppressing subdivision vertices thus admits orthosefe conversely assume admits orthosefe recall fixed reference embedding skeleton shared graph flip fix flips reference embeddings follows neighbor represented virtual edge skel consider flips cycle orthosefe respect ordering attachments subgraph represented reference embedding used label edge label otherwise label finally choose arbitrary root augmented fix reference embedding skeleton skel choose reference embedding product labels unique path flip otherwise denote planar embedding obtained way remains determine embeddings suitably flipping given orthosefes assume embeddings obtained removing vertices edges contracting edges determine embeddings follows recall every vertex incident exclusive edges degree shared graph vertex incident exclusive edges consider unique whose skeleton contains choose edge ordering given orthosefe claim results orthosefe refer figs first observe orthogonality constraints satisfied since edge ordering vertex chosen according one given orthosefes remains show embeddings also satisfy planarity constraints due construction embeddings exclusive edges embedded faces otherwise would observe crossings skeletons augmented consider two exclusive edges graph cross since cross exists node augmented two edges endpoints different parts skel four parts containing endpoints distinct parts containing endpoints edges alternate around face skel contradicts planarity corresponding input 
graph thus case least two attachments contained virtual edge skel let augmented corresponding clearly skel endpoints two edges distinct parts skel follows endpoints two edges alternate around two faces corresponding two faces skel construction contradicts assumption given drawing orthosefe lemma admits orthosefe proof simply show terms embeddings path replacing behaves first observe edge ensures exclusive edges incident clockwise embedded side path similarly ensures exclusive edges incident counterclockwise embedded side path moreover since endpoints edges alternate along embedded different sides thus exclusive edges incident clockwise counterclockwise uvpath embedded side similarly exclusive edges ensure exclusive edges incident clockwise one side exclusive edges incident counterclockwise side finally due alternation edges must embedded side orthogonality constraint edge must also embedded side thus side likewise ensures exclusive edges incident clockwise embedded side likewise incident counterclockwise theorem let positive instance whose shared graph biconnected exists orthosefe every edge three bends proof assume cyclic order edges union graph around vertex given induces planar embedding assign incident edges around vertex four ports one edge assigned port adopt method biedl kant first compute linear time shared graph label vertices edge shared graph edges shared graph choose face left outer face union graph draw union graph adding vertices order appear respecting given order edges around vertex edges bend near incident vertices drawn vertically otherwise draw edges around indicated fig incident edges might actually indicate several exclusive edges one graph around around around fig constructing drawing three bends per edge edge may leave bottom incident neighbor lower index ports might host several exclusive edges even one vertex lower index one vertex higher index special cases occur ordering around four exclusive edges two distinct graphs must assigned two 
consecutive ports particular edge leaving vertex lower index might bend twice around see two small circles fig finally edges around placed edge enters left thus exactly three bends see fig edge one bend around endvertex lower index two bends around endvertex higher index omitted sketched proofs section lemma let instance whose shared graph cycle possible construct polynomial time equivalent instance shared graph cycle graph outerplanar iii two vertices adjacent proof reduction works two steps first step construct stance satisfying properties iii equivalent second step construct final instance equivalent also satisfies property first step show construct instance equivalent cycle number vertices degree satisfying condition property iii smaller number vertices degree satisfying condition repeatedly performing transformation eventually yields required instance consider vertex degree satisfying condition property iii let two exclusive edges incident assume appear order along cases analogous initialize refer fig replace path composed dummy vertices vertex dummy vertices note contains vertices fig illustrations proof lemma plus set dummy vertices describe exclusive edges set contains exclusive edges except also contains edges finally contains edges prove satisfies required properties first graph cycle construction second number vertices degree satisfying condition property iii smaller number vertices fact vertex degree satisfies required condition satisfies condition hand vertex satisfy condition hypothesis satisfies condition since degree path along containing contains dummy vertices incident exclusive edge construction prove equivalent suppose admits orthosefe let corresponding assignment exclusive edges two sides exists theorem show construct assignment exclusive edges two sides satisfying constraints exclusive edge set also set exclusive edge set also set set prove planarity constraints edges satisfied note construction edges alternate edge along also edges alternate 
edge edge alternates edge along edge edge alternates along finally two edges different alternate along also alternate along described cases planarity constraints satisfied since satisfied prove planarity constraints edges satisfied note construction edges alternate edge incident dummy vertex along alternate along alternate along hence planarity constraints edges satisfied since satisfied hand easily verified planarity constraints satisfied also edges incident dummy vertices prove orthogonality constraints satisfied every vertex vertices true since satisfied since edges incident vertices assignment vertex true since satisfied since since edges incident assignment analogously true since satisfied since since edges assignment true since satisfied since since since edges assignment true since true since true since true since since dummy vertices degree concludes proof satisfies orthogonality constraints suppose admits orthosefe let corresponding assignment exclusive edges two sides show construct assignment exclusive edges two sides satisfying planarity orthogonality constraints exclusive edge set also set finally exclusive edge set prove planarity constraints edges satisfied note alternate since incident vertex also edge edge alternates edge along edge edge alternates along finally two edges different alternate along also alternate along cases planarity constraints satisfied since satisfied planarity constraints edges satisfied since two edges alternate along alternate along since planarity constraints satisfied finally prove orthogonality constraints satisfied every vertex vertices true since satisfied since every exclusive edge incident vertices construction prove true also first argue planarity constraints get since belong sequence alternating edges hence orthogonality constraints get since construction conclude equality follows symmetrically hence orthogonality constraints satisfied since satisfied concludes proof satisfies properties iii equivalent order 
construct instance equivalent also satisfies property observe proof lemma easily tended applied lemma fact holds instances satisfying property property stronger iii namely degree stronger condition however used ensure exists exclusive edge endpoint one refer fig particular used ensure exists edge connecting vertex however possible prove property iii already sufficient ensure absence edges namely suppose exists edge connecting vertex vertex cases analogous implies degree since also adjacent however path cycle containing also contains either since alternate along contradiction property iii since incident exclusive edge namely concludes proof lemma
Throughput Maximization for UAV-Enabled Wireless Powered Communication Networks
Lifeng Xie, Jie Xu, Rui Zhang

This paper studies an unmanned aerial vehicle (UAV)-enabled wireless powered communication network (WPCN), in which a UAV is dispatched as a mobile access point to serve a set of ground users periodically. The UAV employs radio-frequency wireless power transfer (WPT) to charge the users in the downlink, and the users use their harvested energy to send independent information to the UAV in the uplink. Unlike conventional WPCNs with fixed access points, the UAV-enabled WPCN can exploit the mobility of the UAV via trajectory design, jointly with wireless resource allocation optimization, to maximize the system throughput. In particular, we aim to maximize the uplink common (minimum) throughput among the ground users over a finite UAV flight period, subject to the UAV's maximum speed constraint and the users' energy neutrality constraints. The resulting problem is non-convex and thus difficult to solve optimally. To tackle this challenge, we first consider an ideal case without the UAV maximum speed constraint and obtain the optimal solution to the relaxed problem. The optimal solution shows that the UAV should successively hover above a finite number of ground locations for downlink WPT, as well as above each of the ground users for uplink communication. Next, we consider the general problem with the UAV maximum speed constraint. Based on the above solution, we first propose an efficient successive trajectory design, jointly with the downlink and uplink wireless resource allocation, and further propose a locally optimal solution by applying the techniques of alternating optimization and successive convex programming (SCP). Numerical results show that the proposed UAV-enabled WPCN achieves significant throughput gains over the conventional WPCN with fixed access points.

Index terms: unmanned aerial vehicle (UAV), wireless powered communication network (WPCN), wireless power transfer (WPT), trajectory optimization, resource allocation.

I. Introduction

Radio-frequency (RF) wireless power transfer (WPT) has emerged as a promising solution to provide convenient and reliable energy supply to IoT devices such as sensors and radio-frequency identification (RFID) tags. Compared with WPT based on inductive coupling or magnetic resonant coupling, RF-based WPT is able to operate over a much longer range and to charge multiple wireless devices (WDs) simultaneously, even when they are moving or densely deployed, with transceivers of significantly reduced form factor. Part of this paper was presented at the IEEE Vehicular
technology conference porto portugal june xie school information engineering guangdong university technology guangzhou china jiexu corresponding author zhang department electrical computer engineering national university singapore singapore elezhang fig illustration wpcn general two major applications wpt wireless communications namely simultaneous wireless information power transfer swipt wireless powered communication network wpcn unify wpt wireless information transfer wit joint design framework downlink wpt wit opposite downlink wpt uplink wit transmission directions respectively particular wpcn enables dedicated wireless charging information collection massive iot devices thus significantly enhances operation range throughput traditional wireless communications however conventional wpcns access points aps usually deployed fixed locations changed deployed conventional wpcn fixed aps faces several challenges first due severe propagation loss signals distance wpt efficiency generally low distance becomes large next conventional wpcn suffers doubly problem wds receive lower energy downlink wpt need use higher transmit power uplink wit achieve rate nearby wds doubly problem result severe user fairness issue among wds geographically distributed large area overcome issues various approaches proposed literature adaptive time power allocation beamforming user cooperation however prior works focused wireless resource allocation designs enhance performances wpcns fixed aps contrast paper propose alternative solution based new unmanned aerial vehicle uav wpcn architecture uavs employed mobile aps uavs found abundant applications cargo delivery aerial surveillance filming industrial iot recently wireless communications attracted substantial research interests due advantages flexible deployment strong los channels ground users controllable mobility example uavs utilized mobile relays help information exchange ground users mobile base stations bss help enhance wireless 
coverage network capacity ground mobile users furthermore wpt proposed uavs used mobile energy transmitters charge wds ground exploiting fully controllable mobility uav properly adjust location time trajectory reduce distances target ground users thus improving efficiency wit wpt motivated wireless communications well wpt paper pursues unified study wpcn shown fig specifically uav following periodic trajectory dispatched mobile charge set ground users downlink via wpt users use harvested energy send independent information uav uplink investigate optimally exploit uav mobility via trajectory design jointly wireless resource allocation maximize uplink data throughput multiuser wpcn fair manner end maximize uplink common minimum throughput among ground users given uav flight period optimizing trajectory jointly downlink uplink transmission resource allocations wpt wit respectively subject uav maximum speed users energy neutrality constraints however due complex data throughput harvested energy functions terms coupled uav trajectory resource allocation design variables formulated problem nonconvex thus difficult solved optimally tackle difficulty first consider ideal case without considering uav maximum speed constraint show strong duality holds problem lagrange dual problem thus solved optimally via lagrange dual method optimal solution shows uav successively hover finite number ground locations downlink wpt well ground users uplink wit optimal hovering duration wireless resource allocation location next address general problem uav maximum speed constraint considered based solution relaxed problem first propose heuristic successive trajectory design jointly downlink uplink resource allocations find efficient suboptimal solution proposed solution also shown asymptotically optimal uav flight period becomes infinitely large addition propose alternating optimization based algorithm obtain locally optimal solution optimizes wireless resource allocations uav trajectory 
alternating manner via convex optimization successive convex programming scp techniques respectively employing successive trajectory uav initial trajectory algorithm iteratively refines wireless resource allocations uav trajectory improve uplink common throughput ground users convergence finally present numerical results validate performance proposed wpcn shown joint trajectory wireless resource allocation design significantly improves uplink common throughput compared conventional wpcn fixed location worth noting another line related research employs ground moving vehicles wirelessly charge collect information ground sensors however different uav freely fly airspace ground vehicle move following constrained path twodimensional plane furthermore unlike uavs strong los links ground users wireless channels ground vehicles users usually suffer severe fading thus limiting performance wit wpt joint trajectory design wireless resource allocation wpcn new study different ground moving vehicles investigated literature best knowledge remainder paper organized follows section presents system model wpcn formulates uplink common throughput maximization problem interest section iii considers ideal case without uav maximum speed constraint presents optimal solution relaxed problem section section present two efficient solutions general problem uav maximum speed constraint considered section provides numerical results validate effectiveness proposed designs finally section vii concludes paper ystem odel roblem ormulation shown fig consider wpcn uav dispatched periodically charge set ground users via wpt downlink user uses harvested energy send independent information uav uplink suppose user fixed location ground cartesian coordinate system defined horizontal coordinate user users locations assumed known uav trajectory design transmission resource allocation focus one particular flight period uav denoted finite duration second uav flies horizontally fixed altitude meter given time 
instant let denote location uav projected horizontal plane accordingly distance uav user given denotes euclidean norm vector denoting uav maximum speed vmax vmax denote respect respectively note assume uav freely choose initial location final location performance optimization consider wireless channels uav ground users dominated los links case path loss model practically assumed similarly accordingly channel power gain uav user time instant given denotes channel power gain reference distance consider multiple access tdma transmission protocol downlink wpt users uplink wit different users implemented frequency band orthogonal time instants time instant use indicators denote transmission mode use indicate downlink wpt mode uav transmits energy charge users simultaneously use represent uplink wit mode user user sends information uav using harvested energy tdma protocol employed follows first consider downlink wpt mode time instant suppose uav adopts constant transmit power downlink wpt mode accordingly harvested power user given denotes current energy conversion efficiency energy harvester therefore total harvested energy user period duration given next consider wit mode user time instant let denote transmit power user uplink wit uav accordingly achievable data rate user uav time instant given note practice energy conversion efficiency generally depends received power level signal waveform purpose exposition consider simplified constant energy conversion efficiency assuming receiver operates linear regime conversion nevertheless design principles paper also extendable scenario energy conversion efficiency however left future work denotes noise power information receiver uav reference ratio snr therefore average achievable rate throughput user period given note purpose exposition consider energy consumption ground user mainly due transmit power uplink wit case total energy consumption user order achieve operation wpcn consider energy neutrality constraint user user 
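The LoS channel, harvested-power, and uplink-rate model just described can be sketched numerically. A minimal sketch in Python; the default values for beta0 (reference channel gain), eta (energy-conversion efficiency), P (UAV transmit power), sigma2 (noise power), and the altitude H are illustrative assumptions, not values taken from the paper.

```python
import math

def channel_gain(q, w, H, beta0=1e-3):
    """LoS channel power gain h = beta0 / d^2, where d is the UAV-user
    distance; q is the UAV horizontal location, w the user location."""
    d2 = (q[0] - w[0]) ** 2 + (q[1] - w[1]) ** 2 + H ** 2
    return beta0 / d2

def harvested_power(q, w, H, P=40.0, eta=0.5):
    """Harvested power eta * P * h of a user during downlink WPT."""
    return eta * P * channel_gain(q, w, H)

def uplink_rate(q, w, H, p, sigma2=1e-12):
    """Achievable uplink rate log2(1 + p * h / sigma^2) in bps/Hz
    when the user transmits with power p during uplink WIT."""
    return math.log2(1.0 + p * channel_gain(q, w, H) / sigma2)
```

Both the harvested power and the rate decrease monotonically with the UAV-user distance, which is what makes the hovering locations matter.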
energy consumption uplink wit exceed energy harvested downlink wpt result following energy neutrality constraints users work objective maximize uplink common throughput among users subject uav maximum speed constraint users energy neutrality constraints decision variables include uav trajectory transmission mode transmit power uplink wit result problem formulated max min vmax denotes uav maximum speed constraint observed problem objective function constraints due complicated rate energy functions respect coupled variables well binary constraints therefore optimization problem furthermore contains infinite number optimization variables continuous time reasons problem difficult solved optimally tackle problem section iii first consider ideal note assume beginning period user sufficient energy storage storage sufficiently large capacity case long energy neutrality constraints satisfied period stringent energy causality constraints energy harvesting wireless communications automatically ensured thus users sustain operation without energy outage case ignoring uav maximum speed constraint solve relaxed problem follows max min note problem also corresponds practical scenario uav flight duration sufficiently large given finite vmax flying time uav becomes negligible compared hovering time see section iii details section section propose efficient algorithms solve general problem uav maximum speed constraint based optimal solution obtained relaxed problem iii ptimal olution roblem section consider problem introducing auxiliary variable problem equivalently expressed max although problem still one easily show satisfies condition therefore strong duality holds lagrange dual problem result optimally solve using lagrange dual method let denote dual variables associated constraints respectively notational convenience define partial lagrangian lagrange dual function max lemma order dual function upperpbounded must hold proof suppose settingpr therefore must hold order bounded 
lemma proved based lemma dual problem problem given min notational convenience let denote set specified constraints strong duality holds solve equivalently solving following first obtain solving problem given solve finding optimal minimize obtaining solving problem given first consider problem given evident problem decomposed following subproblems max max problem consists infinite number subproblems corresponding one time instant notepthat optimal value problem always zero see lemma case optimal solution problem chosen arbitrary real number therefore need focus problem subproblems identical different time instants drop index notational convenience denote optimal solution problem total feasible choices due constraints following solve problem first obtaining maximum objective value corresponding optimal feasible comparing obtain optimal first consider case problem max optimal solution given arg max corresponds set optimal hovering locations downlink wpt denoting number optimal solutions problem problem solve using exhaustive search region note optimal solution problem arbitrarily choose one obtaining dual function accordingly optimal value problem given next consider one case problem max note objective function problem concave respect therefore problem convex checking kkt conditions optimal solution max therefore corresponding optimal value problem comparing optimal values following proposition proof straightforward thus omitted proposition optimal solution problem obtained considering following two cases uav operates downlink wpt mode generally otherwise denote arg uav operates uplink wit mode user note two optimal values equal corresponding solutions optimal problem based proposition problem solved thus function obtained finding optimal solve next search minimize solving since dual problem always convex general use subgradient based methods ellipsoid method obtain optimal denoted note objective function subgradient respect chosen simplicity constructing optimal 
primal solution hand remains construct optimal opt opt primal solution denoted opt proceeding following proposition proposition must hold opt opt opt optimal proof see appendix combining propositions follows optimal dual solution problem opt total number optimal solutions among optimal solutions given downlink wpt solutions given uplink wit one user case need among optimal solutions construct optimal primal solution opt specifically notice solutions opt correspond hovering locations downlink wpt uav transmits constant power hand kth solution corresponds uav hovers user location uplink wit opt opt user transmits let denote hovering durations opt location respectively case solve following uplink common throughput maximization problem obtain optimal hovering durations time sharing max opt opt opt opt opt note problem linear program solved standard convex optimization techniques optimal solution denoted accordingly divide whole period opt first opt denoted downlink wpt next denoted uplink wit users result following proposition proof omitted brevity proposition optimal solution thus given follows opt uav hovers location downlink wpt opt opt opt opt table lgorithm olving roblem obtain maximize problem via exhaustive search region obtain given using proposition compute subgradients update using ellipsoid method converge prescribed accuracy set obtain optimal solution problem based proposition initialization given ellipsoid containing center point positive definite matrix characterizes size repeat ground users locations optimal hovering locaitons wit optimal hovering locations wpt fig optimal hovering locations wpt wit wpcn opt opt opt optimal uplink common throughput given ropt denoting optimal solution obtained summary present overall algorithm solving algorithm table refer solution solution remark worth noting similar solutions proposed multiuser wpt system multiuser communication system tdma transmission uav flight period becomes sufficiently long uav successively 
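With the optimal hovering locations fixed, the hovering-duration allocation described above is a linear program: maximize the common throughput r over the WPT durations t_m and WIT durations tau_k, subject to each user's rate constraint, the energy-neutrality constraints, and the total-duration budget. A sketch under assumed inputs (per-user rates R_k, transmit powers p_k, and harvested-power matrix E) using SciPy's linprog; the variable layout is an implementation choice, not the paper's notation.

```python
import numpy as np
from scipy.optimize import linprog

def hover_durations(R, p, E, T):
    """Time-sharing LP for the hovering durations.
    R[k]: uplink rate of user k while the UAV hovers above it (bps/Hz)
    p[k]: uplink transmit power of user k (W)
    E[m, k]: harvested power of user k while the UAV hovers at
             WPT location m (W)
    T: total period (s). Returns (t, tau, r)."""
    M, K = E.shape
    n = M + K + 1                       # variables: [t_1..t_M, tau_1..tau_K, r]
    c = np.zeros(n); c[-1] = -1.0       # linprog minimizes, so minimize -r
    A_ub, b_ub = [], []
    for k in range(K):
        row = np.zeros(n)               # common throughput: r <= tau_k * R_k
        row[M + k] = -R[k]; row[-1] = 1.0
        A_ub.append(row); b_ub.append(0.0)
        row = np.zeros(n)               # energy neutrality:
        row[:M] = -E[:, k]              # p_k * tau_k <= sum_m E[m,k] * t_m
        row[M + k] = p[k]
        A_ub.append(row); b_ub.append(0.0)
    A_eq = np.ones((1, n)); A_eq[0, -1] = 0.0   # durations sum to T
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
                  A_eq=A_eq, b_eq=[T], bounds=[(0, None)] * n)
    return res.x[:M], res.x[M:M + K], res.x[-1]
```

For two symmetric users and one WPT hovering spot, the LP splits the period so that each user's energy budget is exactly exhausted, as the time-sharing argument in the text predicts.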
hovers given set locations maximize users minimum received energy uav successively hovers user maximize minimum throughput users matter fact derived solution problem proposition unifies results opt consists two sets hovering locations ones wpt ones wit nevertheless note opt hovering locations problem wpt generally different designed based different objective functions communication throughput versus harvested energy remark gain insights interesting consider problem special case users without loss generality assume two users located denotes distance two users due symmetric setup shown opt optimal hovering locations wpt problem actually identical optimal hovering locations maximize two users minimum harvested energy wpt system follows optimal hovering locations downlink wpt critically dependent uav flying altitude opt uav hovers user user sends information uav uplink opt opt opt opt qopt ground users locations optimal hovering locations wit optimal hovering location wpt fig optimal hovering locations wpt wit wpcn users distance particular opt hoveringqlocations wpt opt hovering location right middle point two users combining optimal hovering locations wpt wit evident uav needs hover four locations efficient wpcn shown example fig uav needs hover three locations shown example fig roposed olution roblem uccessive rajectory section considers problem uav maximum speed constraint considered first present successive trajectory motivated solution relaxed problem opt uav sequentially visits hovering locations efficient wpt wit respectively next flying trajectory design duration hovering location transmission resource allocation discretizing time period finally discuss case uav flight duration small visit hovering locations trajectory transmission resource allocations redesigned successive uav trajectory proposed successive trajectory design opt uav sequentially visits optimal hovering locations obtained downlink wpt uplink wit opt notational convenience denote hovering opt 
opt opt locations qopt opt order maximize time efficient wpt wit uav flies among hovering locations using maximum speed vmax uav aims minimize flying time equivalently minimizing opt traveling path among locations towards end define set binary variables opt indicates uav flies fly opt opt hovering location hovering location hence traveling path minimization problem becomes termining minimize provided locations visited opt kqopt denotes distance qopt qopt note shown flying distance minimization similar traveling salesman problem tsp following differences standard tsp requires salesman uav paper return origin city initial hovering location visiting cities hovering locations flying distance minimization problem interest requirement since initial final hovering locations optimized shown transform traveling distance minimization problem standard tsp follows first add dummy hovering opt location namely hovering location opt whose distances existing hovering locations opt note dummy hovering location virtual node exist physically obtain desirable traveling path solving opt standard tsp problem hovering locations removing two edges associated dummy location result use permutation opt set denote obtained traveling path hovering location first visited followed opt hovering location last denote traveling distance traveling duration hovering opt location qopt hovering location tfly respectively opt hence total traveling distance duration given dfly tfly dfly respectively denote obtained tfly flying trajectory note practice uav flight duration may exactly equal traveling distance tfly tfly need determine hovering durations opt locations tfly uav opt sufficient time visit hovering locations thus need redesign uav trajectory order satisfy duration requirements present complete successive trajectory two cases sections respectively together transmission resource allocations hovering durations transmission resource allocation optimization tfly let denote theopttime duration uav 
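The flying-distance minimization above is converted to a standard TSP by adding a dummy node at zero distance to every hovering location and then removing the two edges incident to the dummy node from the optimal tour; that construction is equivalent to directly searching over open visiting paths, which the brute-force sketch below does. It is exact only for a handful of hovering locations; the paper does not prescribe a particular TSP solver.

```python
import itertools, math

def open_tsp_path(points):
    """Shortest open path visiting all points exactly once.
    Equivalent to the dummy-node TSP construction: tour through
    points plus a zero-distance dummy node, with the two dummy
    edges removed. Brute force over permutations."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    best_len, best = float("inf"), None
    for perm in itertools.permutations(range(len(points))):
        length = sum(dist(points[perm[i]], points[perm[i + 1]])
                     for i in range(len(points) - 1))
        if length < best_len:
            best_len, best = length, list(perm)
    return best, best_len
```

Because the initial and final hovering locations are free, the open path can be strictly shorter than the closed TSP tour on the same points.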
hover opt opt location wpt denote time duration uav hover user qopt wit furthermore opt define opt opt opt accordingly divide time duration opt denoted defined tfly tfly opt tfly tfly opt even therefore within odd uav hover hovering location qopt even uav fly hovering location hovering location uav maximum speed vmax whose location opt therefore successive trajectory finally obtained hovering durations equivalently optimization variables determined next obtained successive trajectory maximize uplink common throughput users optimizing transmission mode power allocation jointly hovering durations towards end first discuss transmission policy wpcn first consider opt uav hovers qopt transmission policy based optimal solution section iii particular opt denote convenience case uav hovers duration uav works downlink wpt mode employing transmit power opt opt hand denote opt case uav hovers user duration user works uplink wit mode transmit information uav whole employing certain transmit power denoted opt qhover combining odd uplink throughput harvested energy user respectively given hover qhover opt opt opt next consider even total duration tfly discretize slots duration tfly slot location uav assumed constant denoted qfly furthermore order handle binary transmission mode indicator consider transmission modes time shared within one slot dividing slot without loss generality first duration uav works downlink wpt mode transmit power duration user works uplink wit mode byp using transmit fly power qfly follows combining slots even subperiods uplink throughput harvested energy user respectively given fly fly fly fly ekfly qfly qfly qfly based uplink common throughput maximization problem reformulated follows optimization variables qhover qfly max qhover rfly fly qfly qhover fly fly fly qhover qfly fly fly note although problem due coupling variables qhover fortunately via change variables introducing ekhover qhover transform problem convex optimization problem thus solved via 
standard convex optimization techniques combining optimal solution together successive trajectory solution finally found remark worth noting proposed successive trajectory design asymptotically optimal total flying time tfly becomes negligible compared total hovering time tfly case obtained uplink common throughput approaches optimal value serves upper bound trajectory redesign transmission resource allocation tfly next consider case tfly tfly uav flying trajectory based tsp solution longer feasible since duration sufficient opt uav visit hovering locations overcome problem first find solution sufficiently small uav hover one single location reconstruct modified uav trajectory case tfly first uav hover one single fixed location let denote hovering location uav problem max min note given uav hovering location easy show problem equivalent common throughput maximization problem conventional wpcn case transmission resource allocation obtained optimally performing tdma protocol together joint time power allocation details found therefore optimal problem obtained via exhaustive search together joint time power allocation given denote obtained optimal problem qfix qfix hand reconstruct trajectory problem follows previously obtfly tained traveling path case tfly linearly towards center point xfix yfix resulting total flying distance equals vmax accordingly zoom trajectory follows xfix yfix denotes linear scaling factor note redesigned trajectory reduces hovering one single fixed location qfix tfly redesigned trajectory becomes identical tsptfly based trajectory table lgorithm olving roblem solve problem algorithm find opt opt optimal hovering locations opt add dummy hovering location namely opt hovering location set distances existing hovering locations tfly obtain desirable traveling path solving standard opt tsp problem hovering locations remove two edges associated dummy location tfly denotes total flying time tfly find optimal hovering time allocation transmission 
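The trajectory redesign for the short-period case linearly shrinks the TSP-based path toward the single fixed hovering point so that the total flying distance equals vmax * T. A sketch of that linear scaling; the waypoint-array representation is an assumption about how the trajectory is stored.

```python
import numpy as np

def shrink_trajectory(path, center, vmax, T):
    """Scale the waypoints toward a fixed hovering point `center` by a
    factor alpha in [0, 1], chosen so the scaled path length equals
    vmax * T (or alpha = 1 if the original path is already traversable)."""
    path = np.asarray(path, float)
    center = np.asarray(center, float)
    d_fly = np.linalg.norm(np.diff(path, axis=0), axis=1).sum()
    alpha = min(1.0, vmax * T / d_fly) if d_fly > 0 else 0.0
    return center + alpha * (path - center), alpha
```

With alpha = 0 the trajectory collapses to pure hovering at the fixed point, and with alpha = 1 it recovers the TSP-based trajectory, matching the two limiting cases in the text.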
resource allocation solving problem accordingly obtain corresponding trajectory section otherwise tfly obtain trajectory based accordingly find optimal transmission resource allocation divide slot first duration downlink wpt duration user uplink transmit power wit accordingly uplink achievable data rate harvested energy user slot respectively given rbk accordingly problem reformulated max min rbk case successive trajectory modified accordingly transmission resource allocation obtained similarly discretizing time slots employing tdma within slot wpt wit respectively details transmission resource allocation omitted convenience summary present overall algorithm successive trajectory solving algorithm table cases tfly tfly remark worth discussing special case users provide design insights mentioned remark optimal solution case general three four hovering locations located line connecting two users therefore corresponding successive trajectory always flies line two users total flying duration visiting hovering locations tfly denoting distance two users due symmetry two users shown via contradiction case tfly successive trajectory correspondingly obtained transmission resource allocation indeed globally optimal solution problem nevertheless result hold general scenario users lternating ptimization based olution roblem section propose alternative solution problem based technique alternating optimization optimizes uav trajectory transmission resource allocation alternating manner towards locally optimal solution towards end reformulate problem discretizing whole period duration time slots duration finite number note duration chosen sufficiently small assume uav location approximately unchanged slot denoted similarly problem vmax constraints correspond discretized version uav maximum speed constraint following optimize uav trajectory transmission resource allocation respectively assuming one given first optimize uav trajectory given case problem convex optimization problem 
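After discretizing the period into slots of duration delta, the maximum-speed constraint becomes a per-slot condition on consecutive UAV locations, ||q[n+1] - q[n]|| <= vmax * delta. A small feasibility check, assuming the trajectory is stored as an array of per-slot horizontal waypoints; the numerical tolerance is an assumption.

```python
import numpy as np

def speed_feasible(traj, vmax, delta, tol=1e-9):
    """Check the discretized maximum-speed constraint
    ||q[n+1] - q[n]|| <= vmax * delta for every slot n."""
    steps = np.linalg.norm(np.diff(np.asarray(traj, float), axis=0), axis=1)
    return bool(np.all(steps <= vmax * delta + tol))
```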
respect rate function objective function energy function rhs tackle issue propose efficient algorithm using scp technique updates uav trajectory iterative manner transforming problem convex approximate problem suppose denotes initial uav trajectory corresponds obtained uav trajectory iteration lower bounds rbk following lemma lemma given uav trajectory follows rbk rbk rbk rbk rbk proof see appendix based lemma iteration optimize replacing rbk problem spective lower bounds rbk respectively obtained uav trajectory previous iteration specifically uav trajectory updated arg max min rbk note problem function rbk concave respect constraints convex result problem convex optimization problem thus optimally solved standard convex optimization techniques interior point method furthermore shown lemma objective function problem serves lower bound problem therefore iteration objective function problem achieved monotonically increases problem finite optimal value trajectory design converge locally optimal solution next optimize transmission resource allocation given uav trajectory although problem similarly problem transform convex optimization problem via change variables thus solved via standard convex optimization techniques finally optimize uav trajectory via based scp technique transmission resource allocation via convex optimization technique alternating manner alternating optimization ensure objective function monotonically increase optimal value bounded alternating optimization approach eventually converge locally optimal solution note performance approach solving critically depends initial point iteration paper choose successive trajectory based solution section initial point case approach section always achieve common throughput smaller successive common throughput defining euler number notice rbk concave functions respect inequalities tight successive trajectory trajectory hovering static hovering duration fig uplink common throughput versus flight duration case users 
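The SCP surrogate in the lemma relies on the rate being convex in s = ||q - w_k||^2, so its first-order expansion around the previous trajectory is a global lower bound that is tight at the expansion point. A sketch with gamma = p * beta0 / sigma^2 treated as a single assumed constant and H2 = H^2; the slope below is the derivative of log2(1 + gamma/(H2 + s)) with respect to s.

```python
import math

def rate(s, gamma, H2):
    """Rate as a function of s = ||q - w||^2: log2(1 + gamma / (H^2 + s))."""
    return math.log2(1.0 + gamma / (H2 + s))

def rate_lb(s, s0, gamma, H2):
    """First-order lower bound of rate() around s0. Since rate() is
    convex in s, the tangent at s0 bounds it from below everywhere and
    is tight at s = s0 -- the concave surrogate used in each SCP step."""
    slope = -gamma / (math.log(2.0) * (H2 + s0) * (H2 + s0 + gamma))
    return rate(s0, gamma, H2) + slope * (s - s0)
```

Because the surrogate is tight at the previous iterate, each SCP iteration can only increase the objective, which is why the trajectory updates converge monotonically.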
jectory based design validated numerical results next section umerical esults section present numerical results validate performance proposed joint trajectory transmission resource allocation design compared following banchmark scheme static hovering uav hovers fixed location whole period case uplink common throughput maximization problem reduced problem solved efficiently via exhaustive search solving transmission resource allocation given simulation uav flies fixed altitude receiver noise power uav set dbm channel power gain reference distance set energy harvesting efficiency set maximum speed uav vmax transmit power uav dbm first consider special case users fig shows uplink common throughput case versus flight duration distance two users set observed successive trajectory performance scpbased trajectory consistent remark shows successive trajectory indeed optimal tfly result case trajectory improve performance furthermore observed proposed successive trajectory trajectory achieve higher common throughput statichovering benchmark performance gain becomes substantial becomes larger last two proposed designs observed approach performance upper bound solution consistent remark next consider system ground users randomly distributed within area shown fig illustration fig also shows optimal hovering locations problem successive common throughput trajectory successive trajectory users location optimal hovering locations wpt successive trajectory trajectory hovering static hovering successive trajectory trajectory harvested energy user fig harvested energy users successive trajectory trajectory average achievable rate duration fig system setup simulation various trajectories obtained fig uplink common throughput versus flight duration corresponding transmission resource allocation efficiently balance communication rates among users generally may lead unbalanced energy harvesting distributed users shows difference design versus multiuser wpt system harvested energy users 
designed identical fig shows uplink common throughput users versus flight duration observed proposed successive trajectory trajectory jointly corresponding optimized transmission resource allocation achieve higher common throughput benchmark performance gain becomes substantial becomes larger particular trajectory outperforms successive trajectory especially flight duration small furthermore sufficiently large successive trajectory trajectory observed approach performance upper bound solution uav maximum speed constraint ignored consistent remark vii onclusion user fig average achievable rate users trajectory well trajectory setup observed opt optimal hovering locations wpt opt total optimal hovering locations problem fig fig show harvested energy ground user observed fig proposed designs harvested energy values different users generally different contrast observed fig achievable rates users order maximize common throughput inferred figs trajectory design paper investigate common throughput maximization problem new wpcn setup jointly optimizing uav trajectory transmission resource allocation downlink wpt uplink wit subject uav maximum speed constraint users energy neutrality constraints solve challenging problem first consider ideal case without uav maximum speed constraint solve relaxed problem optimally optimal solution shows uav successively hover two sets optimal ground locations downlink wpt uplink wit respectively next based optimal solution relaxed problem propose successive trajectory trajectory solve problem uav maximum speed constraint considered numerical results showed proposed wpcn achieves performance flight period sufficiently large significantly enhances uplink common throughput performance conventional wpcn static located even optimal location effectively resolving doubly fairness issue ppendix proof proposition suppose terms identical optimal general cases discussed following first case suppose smaller opt opt opt one follows proposition wpcn work 
downlink wpt mode throughout whole period duration case users harvest energy uav thus leading zero common throughput solution thus optimal one cases suppose opt opt opt one smaller terms opt opt opt follows proposition wpcn work uplink wit mode user throughout whole period duration case throughput user zero thus resulting zero common throughput solution thus optimal well combining cases order common throughput must hold opt opt opt opt opt opt therefore proposition proved proof lemma define convex respect taylor expansion convex function global function values given follows equivalently given substituting follow respectively furthermore note equality holds therefore equality holds therefore lemma proved eferences xie zhang throughput maximization wireless powered communication networks appear proc ieee vtc spring online available https wang niyato kim han wireless charging technologies fundamentals standards network applications ieee commun surveys vol zeng clerckx zhang communications signals design wireless power transmission ieee trans vol may zhang wireless powered communication opportunities challenges ieee commun vol apr zeng zhang wireless powered communication networks overview ieee wireless vol apr zhang throughput maximization wireless powered communication networks ieee trans wireless vol zhang placement optimization energy information access points wireless powered communication networks ieee trans wireless vol mar zhang energy beamforming feedback ieee trans signal vol liu zhang chua wireless powered communication energy beamforming ieee trans vol liu zhang multiuser miso beamforming simultaneous wireless information power transfer ieee trans signal vol zhang general design framework mimo wireless energy transfer limited feedback ieee trans signal vol may che duan zhang multiantenna wireless powered communication energy information transfer ieee commun vol zhang user cooperation wireless powered communication networks proc ieee globecom chen rebelatto 
filho vucetic cooperative communications ieee trans signal vol apr zeng zhang lim wireless communications unmanned aerial vehicles opportunities challenges ieee commun vol may zeng zhang lim throughput maximization uavenabled mobile relaying systems ieee trans vol lyu zeng zhang lim placement optimization mobile base stations ieee commun vol mar zhan zeng zhang data collection uav enabled wireless sensor network appear ieee wireless commun online available https yanikomeroglu efficient placement aerial base station next generation cellular networks proc ieee icc mozaffari saad bennis debbah efficient deployment multiple unmanned aerial vehicles optimal wireless coverage ieee commun vol zeng zhang joint trajectory communication design enabled wireless networks ieee trans wireless vol mar zhang capacity characterization uavenabled broadcast online available https qiu zhang capacity multicast channel joint trajectory design power allocation appear proc ieee icc online available https chen esrafilian gesbert mitra efficient algorithms channel reconstruction communications proc ieee globecom workshop mozaffari saad bennis debbah unmanned aerial vehicle underlaid communications performance tradeoffs ieee trans wireless vol jun zeng zhang wireless power transfer trajectory design energy optimization submitted ieee trans wireless commun online available https zeng zhang wireless power transfer trajectory design energy region characterization proc ieee globecom workshop zeng zhang joint trajectory communication design multiple access proc ieee globecom xie shi hou sherali making sensor networks immortal approach wireless power transfer trans vol shu yousefi cheng chen shin velocity control mobile charging wireless rechargeable sensor networks ieee trans mobile vol jul zhang chen near optimal data gathering rechargeable sensor networks mobile sink ieee trans mobile vol jun boshkovska zlatanov schober practical energy harvesting model resource allocation swipt systems ieee 
commun vol clerckx bayguzina waveform design wireless power transfer ieee trans signal vol zhang cui general utility optimization framework energy harvesting based wireless communications ieee commun vol apr lui dual methods nonconvex spectrum optimization multicarrier ieee trans vol jul lawler lenstra kan shmoys traveling salesman problem guided tour combinatorial optimization wiley boyd vandenberghe convex optimization cambridge cambridge univ press mar
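The lemma proved in the Appendix rests on the standard fact, central to the successive convex programming (SCP) trajectory designs above, that the first-order Taylor expansion of a convex function is a global under-estimator. A generic statement follows; the symbols are illustrative and are not the paper's notation:

```latex
% For f convex and differentiable, the first-order Taylor expansion
% about any point z_0 lower-bounds f everywhere:
f(\mathbf{z}) \;\ge\; f(\mathbf{z}_0)
  + \nabla f(\mathbf{z}_0)^{\top}(\mathbf{z}-\mathbf{z}_0),
\qquad \forall\, \mathbf{z},
% with equality at z = z_0.
```

Because the surrogate touches the true function at the expansion point, re-solving with the surrogate at each SCP iteration yields a non-decreasing sequence of objective values, which is the monotonicity property the lemma's proof concludes with.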
Automatic Segmentation of the Spine Using Convolutional Neural Networks via Redundant Generation of Class Labels (Nov)

Malinda Vania, Dawit, and Deukhee Lee
Center for Bionics, Korea Institute of Science and Technology, Seoul, Republic of Korea
Division of Science and Technology, KIST School, Korea University of Science and Technology, Seoul, Republic of Korea
Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea

Abstract

There has been a significant increase in the number of people suffering from spine problems. The automatic image segmentation of the spine obtained from a computed tomography (CT) image is important for diagnosing spine conditions and for performing surgery with computer-aided surgery systems. The spine has a complex anatomy that consists of vertebrae, intervertebral disks, the spinal cord, and connecting ribs. As a result, the spinal surgeon is faced with the challenge of needing a robust algorithm to segment and create a model of the spine. In this study, we developed an automatic segmentation method to segment the spine, and we compared the segmentation results with reference segmentations obtained by experts. We developed a fully automatic approach for spine segmentation based on a hybrid method: this method combines a convolutional neural network (CNN) and a fully convolutional network (FCN), and it utilizes class redundancy as a soft constraint to greatly improve the segmentation results. The proposed method was found to significantly enhance the accuracy of the segmentation results and the system processing time. The comparison was based on measurements of the Dice coefficient, Jaccard index, volumetric similarity, sensitivity, specificity, precision, over-segmentation, under-segmentation, accuracy, Matthews correlation coefficient, mean surface distance, Hausdorff distance, and global consistency error. We experimented with CT images from a set of patients, and the experimental results demonstrated the efficiency of the proposed method.

Keywords: automatic segmentation, computed tomography, spine segmentation, CNN, FCN

Corresponding author: Deukhee Lee (dyklee). These authors contributed equally to this work.
Preprint submitted to the Journal of Computational Design and Engineering, December.

1. Introduction

Technology plays an important role in defining how surgery is performed today. In computer-aided surgery (CAS), the surgeon uses a surgical navigation system to navigate an instrument in relation to the anatomy of the patient. The system uses medical
images computed tomography images patients extract relevant information create model patient model manipulated easily surgeon provide views angle depth within volume thus surgeon thoroughly assess situation establish accurate diagnosis approach utilized spinal diagnosis therapy support systems important element cas image segmentation process used construct accurate model patient image segmentation important extracting information image segmentation process subdivides image constituent parts objects depending problem solved segmentation stopped region interest specific application isolated one difficult tasks process autonomous segmentation method step determines eventual success failure image analysis organ visualization critical aspect today medical imaging modalities generate high resolutions large number images examined manually drives development efficient robust image analysis methods medical imaging automated image segmentation could increase precision eliminating subjectivity clinician also saves tremendous time effort eliminating exhaustive process results hardly repeatable automatic spine segmentation process used generate anatomically correct models challenges associated use challenges attributed anatomic complexity spine vertebrae intervertebral disks spinal cord connecting ribs etc image noise data images contain noise low intensity spongy bones softer bones partial volume effect many methods proposed alleviate challenges recent years recent spine segmentation research categorized two main approaches free estimation methods trainable methods methods require explicit model segmentations include following classical region growing watershed active contours methods however trainable methods central assumption structures repetitive geometry therefore utilizethe repetitive geometry probabilistic representation aimed toward explaining variation shape organ segmenting image uses information kang utilized adaptive thresholding combined region growing conduct 
bone segmentation mastmeyer used region growing method capable detecting disks vertebrae sambucetti proposed active contour segmentation basis construct bone volume methods require expert human intervention manual adjustment parameter settings several distinct steps several automatic methods proposed vertebral column segmentation images methods consist two steps identification spine separate individual segmentation spine vertebrae yao used watershed segmentation directed graph search methods locate vertebral body surfaces method performed several datasets leakages occurred cases furthermore identification segmentation vertebrae carried klinder used deformable model mesh adaptation disadvantages method lie dependency tremendous parameter setup recent advances medical image segmentation techniques employ machine learning techniques increase segmentation accuracy gradually reduce human intervention huang constructed vertebrae detectors using adaboost liu used edge descriptors vertebrae detection glocker detected vertebrae shapes labeled using model trained supervised classification forest however method required selecting appropriate feature relying priori knowledge spine shape therefore method less applicable general varying image data study propose utilization class redundancy combined improved hybrid convolutional neural network cnn fully convolutional network fcn methods overcome drawbacks previous methods provide practical solution present fully automatic approach spine segmentation based hybrid method cnn fcn following main contributions propose efficient hybrid training scheme utilizing mask sampled image segments analyze behavior adapting class imbalance segmentation problem hand demonstrate capabilities system using class redundancy soft constraints greatly improves segmentation results efficiency proposed method demonstrated experimental results study organized follows section introduce histogrambased segmentation next explain cnn describe proposed method 
detail section presents experimental results analysis evaluate method database cases compare results automatic methods order draw reliable conclusions section concludes study discusses results future work methodology level set segmentation let given data represented following function number sliced images volume data segmenting data preprocessed includes morphological image processing image processing contrast adjustment data preprocessing multiphase segmentation performed method first introduced used many previous works segmentation model deals histogram preprocessed data uses adaptive global maximum clustering agmc automatically obtain number significant local maxima histogram using clustering procedure provides number distinct regions corresponding subdomains segmentation performed using function label different regions data data obtained region interest chosen differently assigning different label accordance property desirable objects detailed segmentation desired region carried using active contour model suggested model performs segmentation based combination local global intensity enable divide object surrounded weak boundaries distinguish adjacent objects boundaries method generally yields good result several shortcomings first tedious original data pass various segmentation phases order obtain final desired result second setting optimal value parameters used segmentation models easy task third dependent specific dataset different spine datasets different numbers distinct regions based multiphase segmentation hence manual selection labels necessary different datasets parameters also different different datasets result segmentation model would potentially eliminate downsides current method necessary convolutional neural networks cnn type network utilizes topology analyze data use local receptive fields weight sharing subsampling mechanisms cnns proved successful various supervised tasks image classification object recognition image segmentation tasks models 
using cnns trained using image dataset associated certain class label regard perform spine segmentation using networks first transform data image dataset analyzed networks training frame ground truth figure preparing dataset take pixel inside box given training frame form patch size around pixel part spine ground truth label patch spine label otherwise labelled background label repeat procedure pixels box taking sliding interval test data also prepared similar manner called patching preparing training testing data spine data volumetric data processed frame frame however frame data necessary training network structure spine symmetric across interval frames disk order prepare label training data first segmented training frames using level set segmentation method discussed section also performed manually using software programs slicer ground truth obtained training data prepared using patching task involves segmenting spine located certain area frame need process entire frame computational simplicity formed rectangular box around area spine training frame note box size big enough tolerate spatial variation spine structure across frames data patching take pixel inside rectangular box given frame form sized patch around juxtaposition pixel ground truth part spine patch labeled class otherwise labeled class taking certain stride size repeated procedure across pixels inside box prepare training images patches padding required form patches boundary pixels box respective distances edges given frame boundaries box longer two classes training data class contains training patches obtained training frames testing data also prepared manner used rectangular box testing frames prepared test images patches using patching method testing model expected classify certain patch part spine based result reconstruct ground truth frame spine words segment spine fig scenario segmentation spine treated image classification task two classes results method shown experimental section although 
segmentation result encouraging accurate particularly boundaries spine shown qualitative evaluation section work found using two classes certain downsides first imbalance number training images two classes prepared patching spatial area spine figure testing frame given frame small compared area containing rib background training patches labeled hence model eventually learns background spine problem related pixel intensity testing frame contains parts rib see fig model likely classify part spine label pixel intensity two similar data moreover two classes trained model uncertain class certain patch belongs chance commit error proposed method redundant generation class labels order address problems propose new way preparing training data proposed method involves generating redundant class labels masking spine structure training frame ground truth training frames fig pixel value spine pixel value background generate first redundant class masking area spine different pixel value instance class important training model used mechanism punishing model accurately distinguish spine surrounding environment similar manner generate classes continuing masking different pixel values associated different class label see fig enables obtain proportional area different classes ground truth general proposed method following advantages proportional amount training patches prepared class model properly learns segment boundaries spine within given frame masking applied along spine boundary class class masking class class figure masking generate redundant classes masking spine ground truth given training frame work used four classes classification class represents spine class represents background two classes redundant first redundant class class generated masking ground truth pixel value second masking class done pixel value masking thickness chosen balance data distinct class anything inside rectangular box part spine specific label example testing frame contains parts rib model able 
identify parts labels avoid segmenting rib along spine trained properly probability model makes errors reduced classes model architecture work used architecture shown fig implemented simple neural architecture cnn layers fully connected layers compared deep networks used medical image segmentation cnn layers network works input image patch size first convolution local receptive field size outputs convolutional feature map layers neurons used boundary pixels convolution hence layer feature map size input image rectified linear unit relu activation layer computes element wise follows layer later max pooling layer size subsamples spatial size input patch output stage size second convolution also local receptive field size however outputs convolutional feature map layers neurons layer followed relu activation layer max pooling layer size output stage size figure convolutional network architecture output second stage flattened fed fully connected layer neurons layer relu activation layer following layer dropout applied every time input presented neural network drops different set neurons probability reduces network dependency presence particular neurons second fully connected layer neurons uses output previous stage input layer also relu activation layer dropout also implemented similar manner previous layer final layer softmax layer neurons layer outputs probabilistic prediction array classes used note even though treated spine segmentation classification problem one class important labeling spine three classes irrelevant redundant experimental design results several factors affected diversity medical image data imaging modalities type machine used radiation dose scan time patients order prove robustness method tested algorithm diverse array patient datasets data sources tested method several public datasets obtained spineweb website gangnam severance hospital datasets scan data pixel images patients validate segmentation result used ground truth gangnam severance 
hospital dataset contained pixel images thicknesses act gold standard project reason used data spineweb ensure diversity real clinical data respect patients imaging machine parameter selection training computational time patch size set sliding interval pixel network implemented python tensorflow experiments performed geforce gtx ram training stage cpu implementation compared network took approximately training approximately min process one image however stages completed faster network compared training network took approximately segmentation processing time took approximately min image training process performed patches obtained images frames data obtain large training dataset patches used instead entire image train network addition patches represent local structures higher quality level corresponding label patches spine data selected randomly training dataset ensure fairness analysis results testing dataset used different dataset used training set fairness necessary order show even system never seen input data produce good segmentation results based previous training data number measurement metrics used evaluation segmentation computed segmentation result metrics chosen quantitative analysis divided similarity classic distance measurements similarity metrics included dice coefficient jaccard index volumetric similarity classic measurements used sensitivity specificity precision segmentation segmentation accuracy distance measurement used mean surface distance msd hausdorff distance global consistency error gce also used matthews correlation coefficient mcc work prepared training testing dataset using method discussed section used simple cnn model discussed section spine segmentation also used prepared data rather deep neural network comparison purpose addition two different methods cnn two classes compared method method level set method level set method level set method level set figure qualitative comparison method cnn level set model experimental results 
qualitative evaluation selected representative slices results testing set fig presents results obtained using different methods results demonstrate proposed method segments accurately methods quantitative evaluation manually labeled images subject used gold standards results segmentation method converted binary images voxel resolution image dimensions query image following description measures segmentation result indicated gold standard similarity metrics dice coefficient measures extent spatial overlap two binary images values range overlap perfect agreement study values obtained using equation jaccard index jaccard index jaccard coefficient used measure spatial overlap intersection divided size union two label sets expressed shown equation obtained dice measure equation iii volumetric similarity defined absolute volume difference divided sum compared volumes obtained equation demonstrated table method shows greater improvement segmentation results compared methods based score obtain similarity ground truth result also supported improvement volumetric similarity table similarity measurements four different segmentation algorithms method method cnn level set mean jaccard mean index volumetric mean classic measurements utilize confusion matrix perform classic measurements utilizing four variables true positive false positive true negative false negative pixels correctly segmented spine ground truth algorithm pixels classified spine ground truth classified spine algorithm falsely segmented pixels classified spine ground truth algorithm correctly detected background pixels classified spine ground truth classified spine algorithm falsely detected background sensitivity sensitivity measures portion positive pixels ground truth also identified positive algorithm evaluated used check algorithm sensitivity detecting proper spine pixels sensitivity obtained equation sensitivity specificity specificity measures portion negative pixels ground truth also identified negative 
algorithm evaluated checks sensitive algorithm detection correct background pixels metric obtained equation specif icity iii segmentation segmentation characterize segmentation result obtained using equations compliments gold standard segmentation results respectively similarity table classic measurements four different segmentation algorithms method method cnn level set sensitivity mean specificity mean segmentation mean method method cnn level set segmentation mean accuracy mean mcc mean accuracy accuracy defined equation accuracy matthew correlations coefficient mcc mcc introduced matthews gives summary performance segmentation algorithm mcc analyzes segmentation result ground truth two sets takes account compute correlation coefficient ranges complete disagreement complete agreement value zero shows segmentation correlated ground truth mcc defined shown equation method shows greatest improvement sensitivity specificity categories sensitivity specificity accuracy similar method results also showed significant improvement categories compared classic cnn method results also supported improvement mcc showed segmentation results achieved good agreement ground truth distance measurements table distance measurements four different segmentation algorithms method method cnn level set msd mean mean gce mean mean surface distance msd msd mean absolute values surface distance surface voxels value obtained equation sdsg dsg metrics attempts estimate error surfaces using distances surface voxels define surface distance ith surface voxel dsg distance closest voxel surface distance values calculated described gerig distance transform image value voxel euclidean distance millimeters nearest surface voxel values surface voxels hausdorff distance measures distance ground truth surface segmented surface compute use surface models generated compute msd smaller indicates better segmentation accuracy metric defined equation max max min iii global consistency error gce gce defined 
error measurement two segmentations error averaged voxels given equation gce min total voxel set difference error voxel defined based distance metrics methods obtained better results classic cnn level set methods showed improvement achieved good results based gce distance metrics gce distribution proposed method close zero indicates low error segmentation result conclusion discussion study new approach medical image segmentation proposed approach uses class redundancy soft constraint cnn architecture proposed method achieved competitive result compared several widely used medical image segmentation methods proposed algorithm tested real medical data evaluated similarity metrics dice coefficient jaccard index volumetric similarity sensitivity specificity precision segmentation segmentation accuracy matthews correlation coefficient mean surface distance hausdorff distance global consistency error similarity metrics generated results respectively results demonstrate effectiveness method conclusion main contribution study presentation new approach prepare data use simple cnn model improve accuracy segmentation results experimental results quantitatively qualitatively showed proposed method improves accuracy corrects error segmentation exhibits better segmentation performance compared conventional methods results also showed high specificity sensitivity high overlap low distance manual annotation proposed method future intend assess method using deeper network broad training data low contrast data improve performance hope eventually capable handling broader range medical data another possible direction investigate improvements computation time training stage optimization currently patches spine data takes train using cpu processing time takes min image acknowledgment research supported project funded ministry smes startups mss korea project references references kumar analysis medical image processing applications healthcare industry international journal computer 
technology applications shoham lieberman benzel zehavi zilberstein roffman bruskin fridlander joskowicz knoller robotic assisted spinal surgery concept clinical practice computer aided surgery pham prince current methods medical image segmentation annual review biomedical engineering elsevier bankman handbook medical image processing analysis harris andreasen cizadlo bockholt magnotta arndt improving tissue classification mri threedimensional multispectral discriminant analysis method automated training class selection journal computer assisted tomography klinder ostermann ehm franz kneser lorenz automated vertebra detection identification segmentation images medical image analysis krcah szekely blanc fully automatic fast segmentation femur bone images shape prior proceedings ieee international symposium biomedical imaging hardisty gordon agarwal skrinskas whyne quantitative characterization metastatic disease spine part semiautomated segmentation using deformable registration level set method medical physics gordon hardisty skrinskas whyne automated atlasbased segmentation metastatic spine journal bone joint surgery kang engelke kalender new accurate precise segmentation method skeletal structures volumetric data ieee transactions medical imaging mastmeyer engelke fuchs kalender hierarchical segmentation method definition vertebral body coordinate systems qct lumbar spine medical image analysis sambuceti brignone marini fiz morbelli buschiazzo campi piva massone piana frassoni estimating whole asset humans computational approach integrated imaging european journal nuclear medicine molecular imaging yao oconnor summers automated spinal column extraction partitioning proceedings ieee international symposium biomedical imaging klinder ostermann ehm franz kneser lorenz automated vertebra detection identification segmentation images medical image analysis huang chu lai novak vertebra detection iterative segmentation spinal mri ieee transactions medical imaging 
hierarchical segmentation identification thoracic vertebra using edge detection coarsetofine deformable model computer vision image understanding glocker zikic konukoglu haynor criminisi vertebrae localization pathological spine via dense classification sparse annotations proceedings international conference medical computing computer assisted intervention kim kang segmentation without supervision adaptive global maximum clustering ieee transactions image processing kim kim park lee automatic segmentation leg bones using active contours conf proc ieee eng med biol vania kim lee automatic multisegmentation abdominal organs level set weighted global local forces student conference isc ieee embs international lecun boser denker henderson howard hubbard jackel backpropagation applied handwritten zip code recognition neural computation krizhevsky ilya hinton imagenet classification deep convolutional neural networks advances neural information processing systems long shelhamer darrell fully convolutional networks semantic segmentation proceedings ieee conference computer vision pattern recognition ronneberger fischer brox convolutional networks biomedical image segmentation medical image computing computerassisted intervention miccai taha hanbury metrics evaluating medical image segmentation analysis selection tool bmc medical imaging bmc medical imaging udupa leblanc zhuge imielinska schmidt currie hirsch woodburn framework evaluating image segmentation algorithms computerized medical imaging graphics gerig jomier chakos valmet new validation tool assessing improving object segmentation proceedings international conference medical image computing intervention lashari ibrahim framework medical images classification using soft set procedia technology yang zheng nofal deasy naqa techniques software tool multimodality medical image segmentation journal radiation oncology informatics goodfellow bengio courville deep learning mit press
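The overlap and confusion-matrix metrics used in the evaluation above (Dice, Jaccard, sensitivity, specificity, accuracy, MCC) are standard; a minimal sketch computes them from the TP/FP/TN/FN counts the paper defines. The counts used in `main` are made-up illustrative values, not the paper's data:

```rust
// Hedged sketch of the paper's confusion-matrix metrics for a
// binary segmentation, from true/false positive/negative counts.
fn dice(tp: f64, fp: f64, fnn: f64) -> f64 {
    2.0 * tp / (2.0 * tp + fp + fnn)
}
// The paper notes Jaccard can be obtained from Dice: J = D / (2 - D).
fn jaccard(tp: f64, fp: f64, fnn: f64) -> f64 {
    tp / (tp + fp + fnn)
}
fn sensitivity(tp: f64, fnn: f64) -> f64 { tp / (tp + fnn) }
fn specificity(tn: f64, fp: f64) -> f64 { tn / (tn + fp) }
fn accuracy(tp: f64, tn: f64, fp: f64, fnn: f64) -> f64 {
    (tp + tn) / (tp + tn + fp + fnn)
}
fn mcc(tp: f64, tn: f64, fp: f64, fnn: f64) -> f64 {
    (tp * tn - fp * fnn)
        / ((tp + fp) * (tp + fnn) * (tn + fp) * (tn + fnn)).sqrt()
}

fn main() {
    // Illustrative counts only.
    let (tp, fp, tn, fnn) = (8.0, 2.0, 9.0, 1.0);
    let d = dice(tp, fp, fnn);
    let j = jaccard(tp, fp, fnn);
    // Consistency check between the two overlap measures: J = D / (2 - D).
    assert!((j - d / (2.0 - d)).abs() < 1e-12);
    println!(
        "dice={:.4} jaccard={:.4} sens={:.4} spec={:.4} acc={:.4} mcc={:.4}",
        d, j, sensitivity(tp, fnn), specificity(tn, fp),
        accuracy(tp, tn, fp, fnn), mcc(tp, tn, fp, fnn)
    );
}
```

An MCC near 1 indicates near-complete agreement with the ground truth and near 0 indicates an uncorrelated segmentation, matching the interpretation given in the text.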
| 1 |
experience report developing servo web browser engine using rust may brian anderson lars bergstrom david herman josh matthews keegan mcallister jack moffitt simon sapin manish goregaokar indian institute technology bombay manishg mozilla research banderson larsberg dherman jdm kmcallister jack ssapin abstract modern web browsers internet explorer firefox chrome opera safari core rendering engine written language choice made affords systems programmer complete control underlying hardware features memory use provides transparent compilation model servo project started mozilla research build new web browser engine preserves capabilities browser engines also takes advantage recent trends parallel hardware use new language rust provides similar level control underlying system builds many concepts familiar functional programming community forming novelty useful safe systems programming language paper show language affine type system regions many syntactic features familiar functional language programmers successfully used build systems software also outline several pitfalls encountered along way describe potential areas future research introduction heart modern web browser browser engine code responsible loading processing evaluating rendering web content three major browser engine families engine internet explorer webkit web engine chrome chr opera ope safari saf gecko engine firefox fir engines core many millions lines code use enabled browsers achieve excellent sequential performance single web page mobile devices lower processor speed many processors browsers provide level interactivity desktop processors informal inspection critical security bugs gecko determined roughly bugs use free range access related integer overflow split errors tracing values javascript heap code errors related dynamically compiled code servo ser new web browser engine designed address major environment architectural changes last decade goal servo project produce browser enables new 
applications authored web platform run safety better performance better power usage current browsers address safety issues using new systems programming language rust rus parallelism power scale across wide variety hardware building either appropriate part web platform additionally improving concurrency reducing simultaneous access data structures using architecture components javascript engine rendering engine paints graphics screen servo currently lines rust code implements enough web platform render process many pages though still far cry million lines code mozilla firefox browser associated libraries however believe implemented enough web platform provide early report successes failures open problems remaining servo point view programming languages runtime research experience report discuss design architecture modern web browser engine show modern programming language techniques many originated functional programming community address design constraints also touch ongoing challenges areas research would welcome additional community input browsers modern web browsers load static pages also handle pages similar complexity native applications application suites google games based copyright notice appear preprint option removed https html css parsing dom styling flow tree display lists layout rendering layers compositing final output script script figure processing stages intermediate representations browser engine unreal modern browsers ability handle much simple static pages figure shows steps processing site naming specific servo browser similar steps used modern parsing html css url identifies resource load resource usually consists html parsed typically turned document object model dom tree programming languages standpoint several interesting aspects parser design html first though specification allows browser abort parse practice browsers follow recovery algorithms described specification precisely even illformed html handled interoperable way across browsers 
second due presence script tag token stream modified operation example example injects open tag header comment blocks works modern browsers rendering elements appear screen computed elements rendered painted memory buffers directly graphics surfaces compositing set memory buffers graphical surfaces called layers transformed composited together form final image presentation layerization used optimize interactive transformations like scrolling certain animations html script title scripting whether timers script blocks html user interactions event handlers javascript code may execute point parsing layout painting afterwards display scripts modify dom tree may require rerunning layout painting passes order update output modern browsers use form dirty bit marking attempt minimize recalculations process script commented rust rust statically typed systems programming language heavily inspired families languages rus like family languages provides developer fine control memory layout predictable performance unlike programs rust programs memory safe default allowing unsafe operations blocks rust features type system inspired region systems work mlkit project especially implemented cyclone language unlike related ownership system singularity rust allows programs transfer ownership also temporarily borrow owned values significantly reducing number region ownership annotations required large programs ownership model encourages immutability default allowing controlled mutation owned values complete documentation language reference rust available http requires parsing pause javascript code run completion since resource loading large factor latency loading many webpages particularly mobile modern parsers also perform speculative token stream scanning prefetch resources likely required layout flow tree processed produce set display list items list items actual graphical elements text runs etc final positions order elements displayed styling constructing dom browser uses styling 
information linked css files html compute styled tree flows fragments called flow tree servo process create many flows previously existed dom example list item styled associated counter glyph https http https parsing http site reddit cnn main owned pointer integer let mut data box servo thread servo threads table performance servo mozilla gecko rendering engine layout portion common sites times milliseconds lower numbers better thread move data data error accessing moved value print data relatively simple rules ownership rust enables foolproof task parallelism also data parallelism partitioning vectors lending mutable references properly scoped threads rust concurrency abstractions entirely implemented libraries though many advanced concurrent patterns implemented safe rust usually encapsulated figure code compile attempts access mutable state two threads main immutable borrowed pointer integer let data servo crucial test servo performance servo must least fast browsers similar tasks succeed even provides additional memory safety table shows preliminary comparison performance layout stage described section rendering several web sites mozilla firefox gecko engine compared servo taken modern macbook pro remainder section cover specific areas servo design implementation make use rust impacts limitations features thread println data print data figure safely reading immutable state two threads rust syntax rust struct enum types similar standard record types datatypes well pattern matching types associated language features provide two large benefits servo traditional browsers written first creating new abstractions intermediate representations syntactically easy little pressure tack additional fields classes simply avoid creating large number new header implementation files importantly pattern matching static dispatch typically faster virtual function call class hierarchy virtual functions inmemory storage cost associated virtual function tables sometimes many thousands 
importantly incur indirect function call costs browser implementations transform code either use final specifier wherever possible specialize code way avoid cost rust also attempted stay close familiar syntax require full fidelity easy porting programs languages approach worked well rust prevented complexity arose cyclone attempts build safe language required minimal porting effort even complicated code main heap allocated integer protected mutex let data arc mutex let thread move print figure safely mutating state two threads gecko ownership concurrency rust type system provides strong guarantees memory aliasing rust code memory safe even concurrent multithreaded environments beyond rust also ensures freedom concurrent programs data operated distinct threads also distinct rust ownership model data owned two threads time example code figure generates static error compiler first thread spawned ownership data transferred closure associated thread longer available original thread hand immutable value figure borrowed shared multiple threads long threads outlive scope data even mutable values shared long owned type preserves invariant mutable memory unaliased mutex figure compilation strategy many statically typed implementations polymorphic languages standard new jersey aml used compilation strategy optimizes representations data types polymorphic code monomorphically used defaults less efficient style otherwise order share code strategy reduces code size leads unpredictable performance code changes codebase either add new instantiation polymorphic function given type https language rust html javascript modular compilation setting expose polymorphic function externally change performance code local change made monomorphization mlton cfjw instead instantiates polymorphic code block types applied providing predictable output code developers cost code duplication strategy used virtually compilers implement templates proven within systems programming rust also follows 
approach although improves ergonomics templates embedding serialized generic function asts within compiled binaries rust also chooses fairly large default compilation unit size rust crate subject compilation optimized unit crate may comprise hundreds modules provide namespacing abstraction module dependencies within crate allowed cyclic large compilation unit size slows compilation especially diminishes ability build code parallel however enabled write rust code easily matches sequential speed analog without requiring servo developers become compiler experts servo contains modules within crates table lines code servo code cases servo uses wrapper code call code rust directly reach approach large problem functions defeats many places code crafted ensure code inlined caller resulting degraded performance intend fix inlining taking advantage fact rustc produce output llvm intermediate representation llv subject optimization demonstrated capability small scale yet deployed within servo libraries abstractions many languages provide abstractions threading parallelism concurrency rust provides functionality addresses concerns designed thin wrappers underlying services order provide predictable fast implementation works across platforms therefore much like modern browsers servo contains many specialized implementations library functions tuned specific cases web browsers example special small vectors allow instantiation default inline size use cases create many thousands vectors nearly none elements case removing extra pointer indirection particularly values less pointer size significant space savings also library tuned work dom flow trees process styling layout described section hope code might useful projects well though fairly today concurrency available rust form channels separation reader writer ends channel separation allows rust enforce constraint simplifying improving performance implementation one supports multiple readers structured entire servo browser engine 
series threads communicate channels avoiding unsafe explicitly shared global memory single case reading properties flow tree script operation whose performance crucially tested many browser benchmarks major challenge encountered approach one heard designers large concurrent systems reasoning whether protocols make progress threads eventually terminate manual quite challenging particularly presence arbitrary thread failures memory management described section rust affine type system ensures every value used one result fact two years since servo development encountered zero memory bugs safe rust code given bugs make large portion security vulnerabilities modern browsers believe even additional work required get rust code pass type checker initially justified one area future improvement related allocations owned rust today simply wrap raw pointers unsafe blocks need use custom memory allocator interoperate spidermonkey javascript engine gecko implemented wrapper types compiler plugins restrict incorrect uses foreign values still source bugs one largest areas unsafe code additionally rust ownership model assumes single owner piece data however many data structures follow model order provide multiple traversal apis without favoring performance one example list contains back pointer previous element aid traversals opposite direction many optimized hashtable implementations also access items linked list keys values servo use unsafe code implement data structures form though typically able provide safe interface users lines code language interoperability rust nearly complete interoperability code exposing code using code programs support allowed smoothly integrate many browser libraries critical bootstrapping browser without rewriting libraries immediately graphics rendering code javascript engine font selection code etc table shows breakdown current lines rust code including generated code handles interfacing javascript engine code table also includes test code though 
majority code html javascript two limitations language interoperability pose challenges servo today first rust currently expose functions code second rust compile macros rust provides hygienic macro system macros defined using declarative syntax macro system proven invaluable defined one hundred macros throughout servo associated libraries example html tokenizer rules shown figure written domain specific language closely matches format html https html tokenization rust community share lint checks found future plans include refining safety checks garbage collected values flagging invalid ownership transference introducing checks constructs nonoptimal terms performance memory usage match states loop match self self self tagopen self error emit self emit figure incremental html tokenizer rules written succinct form using macros macro invocations form identifier code javascript engines dynamically produce native code intended execute efficiently interpreted strategy unfortunately area large source security bugs bugs come two sources first potential correctness issues many optimizations valid certain conditions calling code environment hold ensuring specialized code called conditions hold second dynamically producing compiling native code patching memory respecting invariants required javascript runtime garbage collector barriers free registers also challenge macro handles incremental tokenization state machine pause time await input next character available macro cause early return function contains macro invocation careful use control flow together overall expressionoriented style makes servo html tokenizer unusually succinct comprehensible rust compiler also load compiler plugins written rust perform syntactic transformations beyond capabilities hygienic macros compiler plugins use unstable internal apis maintenance burden high compared macros nevertheless servo uses procedural macros number purposes including building perfect hash maps compile interning string 
literals trace hooks despite exposure internal compiler apis deep integration tooling makes procedural macros attractive alternative traditional systems metaprogramming tools preprocessors code generators open problems work discussed many challenges browser design current progress many interesting open problems integer still open problem provide optimized code checks overflow underflow without incurring significant performance penalties current plan rust checking integer ranges servo run debug builds test suite may miss scenarios occur optimized builds represented test suite unsafe code correctness today write unsafe code rust limited validation memory lifetimes type safety within code block however many uses unsafe code translations either pointer lifetimes data representations annotated inferred rust interested additional annotations would help prove basic properties unsafe code even annotations require theorem prover ilp solver check static analysis compiler plugins also provide lint use infrastructure compiler warnings allows safety style checks integrate deeply compiler lint plugins traverse typechecked abstract syntax tree ast within lexical scope way warnings lint plugins provide essential guarantees within servo dom objects managed javascript garbage collector must add roots dom object wish access rust code interaction written well outside scope rust guarantees bridge gap lint plugins capabilities enable safer correct interface spidermonkey garbage collector example enforce compile time tracing phase garbage collection rust objects visible report contained pointers values avoiding threat incorrectly collecting reachable values furthermore restrict ways wrappers around spidermonkey pointers manipulated thus turning potential runtime memory leaks ownership semantic api mismatches static compiler errors instead lint warning terminology suggests checks may catch possible mistakes extensions type system easily guarantee soundness rather lint plugins lightweight 
way catch mistakes deemed particularly common damaging practice plugins ordinary libraries members incremental computation mentioned section modern browsers use combination dirty bit marking incremental recomputation heuristics avoid reprocessing full page mutatation performed unfortunately heuristics frequently source performance differences browsers also source correctness bugs library provided form self adjusting computation suited incremental recomputation visible part page perhaps based adapton approach seems promising related browser research zoomm browser effort qualcomm research build parallel browser also focused multicore mobile devices browser includes many features yet implemented servo particularly around resource fetching also wrote javascript engine done servo zoomm share extremely concurrent architecture script layout rendering user interface operating concurrently one another order maximize interactivity user parallel layout one major area investigated zoomm browser focus servo major difference servo implemented https http https rust whereas zoomm written similarly modern browser engines ras bodik group university california berkeley worked parallel browsing project funded part mozilla research focused improving parallelism layout instead approach parallel layout focuses multiple parallel tree traversals modeled subset css using attribute grammars showed significant speedups system reimplementation safari algorithms used approach due questions whether possible use attribute grammars accurately model web implemented today support new features added servo uses similar css selector matching algorithm leroy efficient data representation polymorphic languages deransart eds programming language implementation logic programming vol lecture notes computer science springer leroy objective caml system release april available http llv llvm compiler infrastructure http meyerovich bodik fast parallel webpage layout proceedings international conference world 
wide web www raleigh north carolina usa acm milner tofte harper macqueen definition standard revised mit press cambridge mai tang king cascaval montesinos case parallelizing web pages proceedings usenix conference hot topics parallelism hotpar berkeley usenix association ope opera web browser http reppy cml concurrent language pldi acm june acknowledgments servo project started mozilla research benefitted contributions samsung igalia hundreds volunteer community members contributions amount half commits project grateful partners volunteers efforts make project success rus rust language http saf apple safari web browser http ser servo web browser engine https tofte birkedal region inference algorithm acm trans program lang july web webkit open source project http org weeks whole program compilation mlton invited talk workshop september wang lin zhong chishtie web browsers slow smartphones proceedings workshop mobile computing systems applications hotmobile phoenix arizona acm references arora blumofe plaxton thread scheduling multiprogrammed multiprocessors appel macqueen standard new jersey plip vol lncs new york august cfjw cejtin fluet jagannathan weeks mlton standard compiler available http org cascaval fowler piekarski reshadi robatmili weber bhavsar zoomm parallel web browser engine multicore mobile devices proceedings acm sigplan symposium principles practice parallel programming ppopp shenzhen china acm chr google chrome web browser http fir mozilla firefox web browser http grossman morrisett jim hicks wang cheney memory management cyclone proceedings acm sigplan conference programming language design implementation pldi berlin germany acm hunt larus abadi aiken barham fahndrich hawblitzel hodson levi murphy steensgaard tarditi wobber zill overview singularity project technical report microsoft research october hammer phang hicks foster adapton composable incremental computation proceedings acm sigplan conference programming language design implementation 
pldi edinburgh united kingdom acm microsoft internet explorer web browser http kohlbecker wand deriving syntactic transformations specifications proceedings acm symposium principles programming languages popl munich west germany acm
| 6 |
positivity denominator vectors cluster algebras dec peigen cao fang abstract paper prove positivity denominator vectors holds skewsymmetric cluster algebra introduction cluster algebras introduced fomin zelevinsky motivation create common framework phenomena occurring connection total positivity canonical bases cluster algebra rank subalgebra ambient field generated certain combinatorially defined generators cluster variables grouped overlapping clusters one important features cluster algebras laurent phenomenon thus denominator vectors cluster variables defined following phenomenon section fomin zelevinsky conjectured conjecture conjecture positivity denominator vectors given denominator vector cluster cluster algebra cluster variable respect sequel always write denominator vectors briefly positivity affirmed following cases cluster algebras rank theorem cluster algebras arising surfaces see iii cluster algebras finite type see caldero keller proved weak version conjecture cluster algebra seed acyclic cluster variable paper give positive answer conjecture cluster algebras theorem positivity holds cluster algebra paper organized follows section basic definitions notations introduced section proof theorem given preliminaries recall semifield abelian multiplicative group endowed binary operation auxiliary addition commutative associative distributive respect multiplication let rop free abelian group generated finite set index min rop define addition rop uai ubi semifield called tropical semifield mathematics subject classification keywords cluster algebra denominator vector positivity date version december peigen cao fang multiplicative group semifield multiplication hence group ring domain take ambient field field rational functions independent variables coefficients integer matrix bij called bij general called exists positive integer diagonal matrix matrix encode sign pattern matrix entries directed graph vertices directed edges bij matrix matrix called acyclic 
oriented cycles definition seed triplet called cluster free generating set called cluster variables subset called coefficients iii bij integer matrix called exchange matrix seed called acyclic acyclic denote max definition let seed define mutation seed new triple given bjk otherwise bik bik bkj otherwise seen also seed definition cluster pattern assignment seed every vertex tree edge always denote btij definition let cluster pattern cluster algebra associated given cluster pattern field generated cluster variables cluster pattern coefficients rop corresponding cluster algebra said cluster algebra geometric type cluster pattern coefficients rop exists seed corresponding cluster algebra called cluster algebra principal coefficients theorem let cluster algebra seed theorem laurent phenomenon cluster variable sum laurent monomials coefficients positivity denominator vectors cluster algebras positive laurent phenomenon cluster variable sum laurent monomials coefficients denote xai let cluster algebra seed laurent phenomenon cluster variable subset let minimal exponent form form appearing expansion xdn vector called denominator vector briefly cluster variable respect proposition denote standard basis vectors vectors uniquely determined initial conditions together recurrence relations max btik btik btik proposition know notion independent choice coefficient system studying cluster variable focus cluster algebras geometric type proposition proposition let cluster algebra cluster variable two seeds suppose respective respectively proof convenience readers give sketch proof btik thus know btik btik tik bik bik laurent phenomenon expansion respect following form xdn replacing equation one equation get expansion respect seen peigen cao fang proof theorem let cluster algebra cluster variable denote set clusters cluster variable two vertices let distance tree cluster define distance dist min following theorem theorem theorem proposition let cluster algebra geometric type cluster 
let cluster variable cluster containing dist let unique sequence relating seeds denote last two directions clearly let edges direction written moreover unique exists laurent monomial appearing expansion variable nonnegative exponent iii exist thus form unique lemma keep notations theorem exists laurent monomial laurent expansion respect appears nonnegative exponent proof theorem exists laurent monomial appearing expansion variable exponent know tjq substituting equality obtain laurent expansion respect easy see laurent expansion exists laurent monomial appear laurent monomial nonnegative exponent give proof theorem proof proposition know notion independent choice coefficient system hence assume cluster algebra geometric type dist let cluster cluster variable show respect let dist since tree unique sequence linking positivity denominator vectors cluster algebras choice know dist xtj prove induction dist clearly dist inductive assumption assume dist dist dist dist know inductive assumption proposition get component component equal thus order show need show component nonnegative prove two cases case case proof case positive laurent phenomenon expansion respect form subset component negative exponent must positive know btjk obtain laurent expansion substituting equation respect exponent laurent monomial appearing obtained laurent expansion negative contradicts lemma thus component nonnegative preparation proof case consider maximal rank two mutation subsequence directions end sequence connects vertex thus following two cases let first case let second case easy see first case second case components lemma first case respect nonnegative second case components respect nonnegative proof components cluster variable respect negative say xte since clusters xta xtr obtained cluster sequences mutations using two directions theorem know hence xte least common cluster variables theorem theorem know xte xte xte thus dist max dist xte dist xte dist xte hand sequence know max dist 
xte dist xte dist xte peigen cao fang thus obtain max dist xte dist xte dist xte result xte xtm contradicts xte xtm components respect nonnegative argument dist max dist xte dist xte dist xte hand max dist xte dist xte dist xte contradicts components respect nonnegative return proof theorem proof case applying theorem iii vertex respect directions case case get either lemma component nonnegative acknowledgements project supported national natural science foundation china references caldero chapoton schiffler quivers relations arising clusters case trans amer math soc caldero keller triangulated categories cluster algebras annales sci norm sup cao conjectures generalized cluster algebras via cluster formula pattern algebra ceballos pilaud denominator vectors compatibility degrees cluster algebras finite type trans amer math soc fomin shapiro thurston cluster algebras triangulated surfaces cluster complexes acta fomin zelevinsky cluster algebras foundations amer math soc fomin zelevinsky cluster algebras coefficients compos math gekhtman shapiro vainshtein properties exchange graph cluster algebra math res lett gross hacking keel kontsevich canonical bases cluster algebras lee schiffler positivity cluster algebras annals mathematics reading stella recursions dualities pacific math peigen cao department mathematics zhejiang university yuquan campus hangzhou zhejiang address peigencao positivity denominator vectors cluster algebras fang department mathematics zhejiang university yuquan campus hangzhou zhejiang address fangli
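The two recurrences that appear garbled in this row (the matrix mutation rule in the definition of seed mutation, and the denominator-vector recurrence in the proposition) are, in the standard Fomin–Zelevinsky notation, the following. This restates textbook formulas consistent with the surrounding definitions, not a reconstruction of any novel content of the paper:

$$
b'_{ij} \;=\;
\begin{cases}
-\,b_{ij} & \text{if } i = k \text{ or } j = k,\\[4pt]
b_{ij} + \dfrac{|b_{ik}|\,b_{kj} + b_{ik}\,|b_{kj}|}{2} & \text{otherwise,}
\end{cases}
$$

for mutation of the exchange matrix $B = (b_{ij})$ in direction $k$, and, writing $[a]_+ = \max(a,0)$,

$$
d_{j;t'} \;=\;
\begin{cases}
d_{j;t} & \text{if } j \neq k,\\[4pt]
-\,d_{k;t} + \max\!\Big(\sum_i [\,b^t_{ik}\,]_+\, d_{i;t},\; \sum_i [-b^t_{ik}]_+\, d_{i;t}\Big) & \text{if } j = k,
\end{cases}
$$

for the denominator vectors along an edge $t \xrightarrow{\;k\;} t'$ of the tree $\mathbb{T}_n$, with initial condition $d_{j;t_0} = -e_j$.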
| 0 |
horie oimc upper limit huffman code length jpeg compression kenichi horie division olympus imaging corporation ishikawa hachioji tokyo japan preprint oimc revised version abstract strategy computing upper limits huffman codes block jpeg baseline coding developed method based geometric interpretation dct calculated limits close maximum proposed strategy adapted transform coding methods mpeg video compressions calculate close upper code length limits respective processing blocks keywords bit rate requirement discrete cosine transforms huffman codes image coding jpeg upper bound bit length horie oimc introduction jpeg baseline coding far widely implemented method coding still images statistical behavior dct coefficients correlations runs zeros sizes coefficients studied resulted jpeg huffman coding furthermore distribution dct coefficients also investigated detail statistical behaviors jpeg well known little known mathematical bounds particular interest huffman codes exhibit large variations thereby determine important part total jpeg file mathematical study upper limits huffman academic interest stems needs consumer electronics digital cameras employ jpeg baseline sequential coding knowledge close upper limits block jpeg image file important assigning economic buffer memory spaces various stages image encoding digital camera furthermore digital camera usually displays number images fitting rest memory space ensure number jpeg images indeed stored quantization tables jpeg image optimized keep file size close targeted value however rate control achieved iterating quantization huffman encoding bit counting coded data length sophisticated file size prediction schemes require extensive processing alternatively rate control avoided calculating number images based average jpeg file size standard variation however statistical estimation method reliable especially repetitive shooting mode wherein successive images often statistically correlated knowledge close upper limit 
jpeg file size rate control scheme necessary anymore least use greatly reduced additionally reliability statistical methods calculating number images significantly improved taking upper limit account jpeg file may formatted according jfif exif consists huffman horie oimc codes huffman codes huffman quantization tables portions including header close upper limits even maximum portions except huffman codes easily calculated hand extremely difficult calculate close limit huffman one reason difficulty huffman codes assigned combination extends several dct coefficients rather individual coefficients subtle reason little known space dct coefficients functional behavior space work clarify geometrical aspect dct coefficients derive computational method calculating close upper limit huffman block calculation method derived dependence scaling example quantization tables annex jpeg specification although jpeg case described work general principles method applicable transform coding methods particular approaches adapted calculate close upper limits vlc mpeg mpeg proposed ideas might applicable even jpeg image compressions jpeg baseline coding geometry dct coefficients introduce terms definitions used work thereby briefly review aspects jpeg baseline encoding process also add mathematical comments observations necessary introducing developing calculation strategy jpeg baseline coding color image consists example one luminance two chrominance matrices pixel data matrices sectioned blocks pixel block pixel values shifted unsigned integers range signed integers range let denote shifted pixel values horie oimc pixel block viewed vector space set possible vectors furnishes cube coordinate origin cube turn contained ball around coordinate origin radius let cos discrete cosine transform dct matrix dct coefficients pixel block given expression fuv matrix notation reads superscript denotes transpose matrix among dct coefficients fuv coefficient times average value pixel data within 
pixel block called coefficient remaining coefficients called coefficients call fuv dct configuration simply vector although dct introduced literature mapping image domain spatial frequency domain interpret dct simply orthogonal mapping acting upon vectors within space see use fact dct matrix orthogonal matrix derived naturally noting column vectors eigenvectors real symmetric second difference matrix automatically ensures column vectors orthogonal order verify orthogonal feature dct transformation show natural inner product space preserved dct inner product two general vectors given trace wherein vectors assumed take form matrix let image vectors dct using orthogonality dct matrix invariance trace cyclic permutation matrices follows vkk vwt since dct orthogonal rotation rotates cube another cube leaves ball unchanged call configuration space definition contains possible dct configurations subset since dct configuration fuv lies ball coefficients satisfy relation particular coefficients must lie range focusing coefficients subsequent horie oimc tions need characterizations coefficients respect ball proposition regardless coefficient value coefficients fulfill following strict inequality ball condition proof wrong dct configuration would exist property fuv coefficient must necessarily zero lies sphere surface radius ball since dct rotation inverse image vector must also lie sphere however due nature cube sum squared amplitudes pixel data become square radius unless pixel data equal implies image vector flat spatial frequency dct coefficients would become zero coefficient would value contradicting assumption proving assertion inequality means geometrically dct configurations lie equator sphere defined set two equations fuv thus dct configurations lie ball equator taken away ball henceforth call ball simply ball expressing condition configuration fuv lies configuration space easy task rotated cube symmetric respect coordinate axes instead easier apply inverse dct fuv 
express inverse vector lies cube precisely inverse vector must lie integer coordinate lattice within cube fuv cube condition horie oimc fuv integer integer condition dct coefficients quantized quantization table quv quantized dct coefficients calculated duv fuv quv denotes integer portion one may also use rounding function instead due ball condition coefficients become quantized coefficient values constrained lie interval table luminance quantization table table chrominance quantization table jpeg baseline coding allows control file size varying quantization table frequently employed method consists multiplying scale factor example quantization tables jpeg specification obtain scaled quantization tables quv max smaller scale factor finer quantization larger jpeg file size quantization factors become one whereas original example quantization tables whose largest quantization factor work consider scale factors quantization quantized coefficients ordered zigzag scan order represented runlength size amplitude let denote quantized dct coefficients zigzag scan order jpeg specification index zigzag ordering coefficient follows also use notations fuv quv horie oimc unquantized coefficients quantization values respectively zigzag ordering nonzero quantized coefficient runlength consecutive number possibly zero coefficients precede zigzag sequence determined combination represented duplet integer numbers symbol together amplitude runlength size number bits used encode amplitude encoding method jpeg greater extension duplet interpreted succession zeros three consecutive extensions may precede duplet total runlength last run zeros includes last coefficient special duplet assigned run zeros indicate eob end block terminates block last coefficient zero eob symbol assigned work size smallest integer satisfying definition usual definition additionally introduces zero size implies quantized coefficient fact zero jpeg encoding process sequence quantized coefficients represented 
sequence pairs symbols amplitudes ascending order zigzag scan huffman codes huff length bits assigned symbol whereas amplitude encoded bits eob bits long luminance chrominance coefficients throughout work use typical huffman tables annex jpeg specification let len denote bits sequence coefficients represented pairs symbols amplitudes ascending zigzag scan order huffman codes huff since symbol already determines pair define len pair coefficient size preceded zeros relation len len len huff holds example zero runlength sequence chrominance coefficients results len len len huff len see table since horie oimc depends amplitudes respective sizes also use simplified notation len sequence coefficients coefficient sizes might zero furthermore len denotes luminance chrominance configuration typical huffman tables structured len len see tables therefore len configuration increases quantized amplitudes increase runlengths increase particular one might expect least general increases unquantized amplitudes increase configuration maximum would lie boundary configuration space precisely following fact proposition maximum configuration exists configuration boundary furthermore maximum configuration lies always boundary distance away change proof assume maximum configuration certainly coefficients become zero quantization least one coefficient fuv remains quantization already boundary gradually enlarge amplitude fuv hit boundary thereby obtain configuration boundary process len become smaller due mentioned monotony structure len also become larger since assumed maximum configuration thus proves assertion since arguments depend specific geometry statements true consider configurations ball discussed coding coefficients briefly review coefficients coefficients encoded separately coefficients take values range quantized coefficient encoded difference term previous block horie oimc order difference represented amplitude size size huffman coded amplitude encoded bits maximum amplitude 
difference two quantized coefficients given quantization value iii general strategy dct configuration certainly exceed times maximum huffman table plus length eob code maximum length huffman codes bits maximum size eob bits long thus crude upper bound bits given besides obviously high therefore useless number derived reference quantization table contrary case coefficients coefficients encoded context combination runlength size underlying geometry configuration space rather complicated make difficult find close upper limit codes let alone find maximum configuration order develop strategy deriving close upper limits simplify problem note coefficients disentangled coefficient cube condition integer condition contrast ball condition makes reference coefficient certainly much simpler system inequalities reasons drop cube condition integer condition work ball condition alone thus instead considering configuration space allow configurations within entire ball horie oimc need calculate upper limit entire ball since upper limit certainly also upper limit subset particular original configuration space defined integer condition fact need calculate upper limit configurations surface ball since shown previous section len takes maximum value surface reason henceforth consider configurations sphere follows characterize configuration sphere coefficients zigzag scan order view ball condition coefficient fixed sign square root fuv let arbitrary configuration let quantization values zigzag scan order len depends coefficient respective size quantized value smallest integer property therefore coefficient replaced smallest possible value zero without changing replacements huffman codes remain codes amplitudes change replaced coefficient smaller equal original coefficient amplitude configuration still satisfies ball condition reads follows note configuration still lies sphere although coefficient may changed order calculate upper limit configurations need calculate upper limit 
configurations smallest coefficients configurations shall called reduced configurations unless specified otherwise following configurations reduced configurations horie oimc outline strategy calculating upper limit first choose reference configuration sphere reference configuration large coefficient sizes compare arbitrary configuration introducing set operations replace coefficient sizes operations generally correspond jpeg symbols single operation may replace coefficients produce run zeros followed coefficient quantized size rather comparing terms individual coefficients compare terms symbols method enables direct calculation gain loss induced operation since huffman codes assigned symbols depending operations positive operations enlarge certain coefficient sizes generally lead longer whereas operations negative operations reduce coefficient sizes even introduce runs zeros generally lessen since already large coefficient sizes limited number positive operations possible general operations positive since otherwise ball condition would violated instead positive operations must accompanied negative operations order fufill ball condition derive interdependence rule positive negative operations ball condition carefully choosing reference introducing conditions interdependence rule made restrictive combinations positive negative operations allowed one key features method fact limited number combinations upper bound gain calculated simple table method finally upper bound added reference configuration yield upper limit configurations sphere order specify outlined strategy helpful straight away introduce appropriate reference configuration together necessary conditions lead restrictive interdependence rule end consider ball condition suppose moment quantization factors powers coefficients configuration either zero powers since reduced configuration introduce unquantized size horie oimc coefficient defining setting indices let positions zigzag order property likewise let 
positions ball condition converted following inequality inequality implies constraint restricts numbers size size indices combinations pair particular right hand side sum copies whereas left hand side sums indices hence conclude least terms left hand side must smaller terms respective sizes less maximum value expression observations suggest natural choice reference configuration namely set coefficients value size coefficients replaced size coefficients positive operations least coefficients must replaced smaller sized coefficients negative operations altogether coefficients replaced size coefficients amplitude coefficients replaced size coefficients since otherwise ball condition violated matter small coefficients might necessary condition restrictive interdependence rule quantization values powers order derive rule arbitrary quantization table replace quantization table defined follows horie oimc log allowed make replacement since need find upper limit quantization table rather original quantization table true view following proposition proposition sphere maximum larger equal maximum original quantization table proof prove assertion let maximum configuration based quantization table satisfying ball condition define new configuration since new configuration still fulfills ball condition wherein replaced construction quantizing yields result quantizing based definition maximum greater equal proves assertion assertion also true entire ball might true rotated cube difficulties involved reasoning visualized without going details inequalities let configuration near vertex cube cube rotated reducing coefficient amplitudes order define move entirely within cube however rotated cube like reductions amplitudes might move along coordinate direction outside cube new configuration may although disprove assertion see straightforward introduce concept quantization table without enlarging configuration space ball summary arguments set forth section proposition simplifies problem 
finding upper limit following way horie oimc theorem given quantization table dct configuration configuration space less equal maximum reduced configurations sphere using quantization table based therefore upper limit reduced configurations also upper limit dct configurations introduction quantization table via mandatory deriving restrictive interdependence rule however general quantization factors seems much work needed derive ball condition convenient interdependence rule example quantized coefficient sizes reference may defined log square term comes close however since reference values may constant positions difficult derive simple interdependence rule manner shown rest work restrict quantization table restriction leave study general case future work unquantized sizes determine coefficients configuration quantization quantization table amounts reducing sizes exponents quantization factors quantized sizes given max since coefficients chosen minimal zero zero detailed strategy calculating upper limit following outline strategy let reference configuration coefficients constant value size quantized sizes given max since largest quantization factor work largest quantization factor thus quantized sizes always larger horie oimc equal particular coefficients quantization let arbitrary configuration unquantized sizes quantized sizes max define following operations generally correspond symbols depending size target configuration different operation replaces size reference configuration size replace replace simultaneously sizes replace simultaneously sizes replace simultaneously replace performing replace proposition given reduced configuration quantization table set tions uniquely defined proof suffices show coefficient replaced corresponding coefficient exactly one operations consider several cases coefficient size either zero preceded zeros three cases may arise see none operations applied replaced horie oimc replaced hand preceded run zeros three cases may arise 
preceeding coefficient sizes replaced zeros change coefficient sizes replaced respective sizes coefficient sizes replaced zeros whereupon coefficient replaced assume zero either member zero run followed coefficient member zero run including last zigzag coefficient first case replaced zero either whereas latter case replaced zero summary coefficient correctly replaced respective coefficient one one operation following outline strategy examine changes induced operations end introduce notion local individual ference defined change per affected position operation replaces sizes indices positions leaves sizes indices unaffected local change defined len len quantized sizes positions corresponding replaced quantized sizes unaffected quantized sizes positions assign affected positions difference differences positions together yield total change operation example replaces coefficients simultaneously replaces coefficients since leave size unaffected proposition operations reduce negative operations spective local differences negative zero operations increase positive operations whereby local differences strictly positive horie oimc proof consider operation succession reduces unquantized size lower size induces change quantized size smaller size size since otherwise must zero would applied noted earlier runlength sizes relation len len holds local difference given len len set simplify notation thus negative operation reduces first note example quantization tables jpeg specification preceding quantization factors zigzag scan order always smaller twice current quantization factor scaling quantization factors according still weaker relation exponents quantization factors defined according deduce relation using fact quantization factors integers quantized sizes reference configuration relation implies follows len len len len len tables reveal right hand side expression always greater equal len greater len since summary len len shown let len len local difference assigned positions 
set differs size position changed repeating arguments set forth see reduced remains let len len denote local difference assigned positions consider inequality len len true since quantized sizes reference configuration greater equal luminance horie oimc cients equality len len whereas len len chrominance coefficients thus always reduces except luminance coefficients case change let len len local difference assigned affected positions enlarges size either possible cases since unquantized size target configuration two sizes available unquantized size let len len local gain case similarly let len len local gain case induces gain len len similar case two cases may occur let len len local gain similarly let len len local gain proof introduced notations local losses gains note indices assigned change uniquely determine respective operation example implies applied coefficient zigzag scan position quantized size configuration given depend quantized size losses indexed without size represented sum local differences positions affected operations help indices local differences express follows len len wherein horie oimc sum counts local gains induced positive operations case unquantized size whereas sum counts local gains case unquantized size indicated indices assume terms terms thus configuration assumed posses positions unquantized size positions unquantized size due ball condition must exist least positions unquantized size less position must created one negative operations either position within zero run position size smaller local losses positions summarized sum contributions four lines side equation sums losses generated negative operations respectively estimate upper limit expression end observe sum local losses generated operations uniquely defined particular configuration therefore loss certainly greater equal sum defined sum smallest local losses among possible local losses created possible operations target configurations precisely let set local losses possible 
pairs copies possible triplets copies possible pairs copies positions define loss function sum smallest values relation holds horie oimc similar fashion let set local gains positions possible pairs target configurations likewise let set local gains positions possible pairs target configurations define gain function sum largest values set gain function sum largest values clearly relations hold summary obtain inequality len len second term side certainly greater maximum expression among possible combinations pair thus proved following theorem given arbitrary reduced configuration satisfies inequality len len max formula represents upper limit reduced configurations sphere using quantization table theorem upper limit dct configurations len readily calculated using quantization values order calculate maximum term determine largest local gains set largest local gains set smallest local losses set principle local changes determined applying operations possible parameter values horie oimc possible positions calculating induced changes loss possible positions possible sizes may calculate loss values cases cases calculate accounted fact cases cases cases cases adding numbers obtain calculations easily done computer relatively small number tions sufficient completely determine loss gain functions hand consider combinations operations possible configurations calculations changes would impossible turns combinations parameter values need considered determine loss gain functions required ranges tables reveal len len len len large values using relations calculations large runlength values sizes may skipped way possible calculate upper limit value hand see calculation example next section easily improve upper limit see let assume quantized coefficient sizes value since sizes local losses set value say altogether copies since underlying operations affects two positions among smallest values loss function would add times hand positions available assigning copies superfluous words 
configuration times local loss eliminating superfluous copies set obtain subset practical application difficult find eliminate superfluous copies may delete ies obtain subset term summarizes local losses real configuration must still greater equal sum smallest local losses within subset wherein superfluous copies eliminated since horie oimc subset sum smallest values greater equal obtain relation arguing similar fashion eliminate least superfluous counts local gains sets obtain respective subsets let sums smallest local gains respectively relations valid summary obtain new possibly lower upper limit len max improve upper limit note upper limit needs calculated maximum configurations obviously sum maximum configuration must greater equal sum smallest local losses sible maximum configurations therefore eliminate subset losses arise maximum configuration obtain improved loss tion property similarly eliminating least gains occur maximum configuration sets obtain smaller gain functions properties summary obtain new possibly lower upper limit find local changes occur maximum configuration consider operations operations generate runs zeros say let symbol quantization len assume compare another possible ficients positions wherein replaced value preceding zeros replaced value replacements within configuration allowed since replaced configuration still satisfies ball condition due identity view relation exponents quantization factors new quantized sizes greater horie oimc equal followed len smaller replaced original sequence occur maximum configuration since replaceable longer tables summarize len zero run combinations shaded cells tables appear maximum configuration table huffman size chrominance coefficients size runlength table huffman size luminance coefficients size runlength calculation results demonstrate calculation upper limit chrominance coefficients using unscaled quantization table values given zigzag scan order quantization table calculated according exponents 
given horie oimc positions quantized sizes reference given remaining positions help table obtain len set local losses positions value len len smaller choice leads greater loss since know already set contains copies losses greater equal need considered anymore always example replacement quantized sizes yields local loss len len len another example len len len position position len len len choices simple lookup table yields result replaced zero obtain len len coefficients replace zeros larger local losses positions example len len similarly obtain result positions runlength obtain example len len combinations likewise summary smallest local losses one copy copies loss function given consider unquantized size coefficients largest local gains times value gain function given similarly unquantized size obtain gain function total function zero equals maximum lies upper limit chrominance coefficients given horie oimc length reference configuration len result means reference fact maximum configuration upper limits scale factors luminance coefficients calculated similar fashion table iii shows calculation results variety scale factors table iii upper limits bits different scale factors scale factor luminance chrominance discussion results table iii much smaller crude upper bound order illustrate closeness result case consider image block table table pixel values example block dct quantization configuration positions size positions size would yield bits coefficients luminance bits chrominance results table iii close must even closer corresponding unknown maximum despite enlargement configuration space ball derivation strategy used features specific quantization table huffman codes jpeg specification without features nice properties horie oimc calculation lost proposed method adapted generalized cope quantization table huffman codes particular transformations dct may handled well slightly adjusting ball ball condition furthermore outlined general strategy adapted dct 
coefficients mpeg video compressions use quantization vlc coding methods similar jpeg baseline coding although coding methods transform coefficients significantly different video compressions jpeg still image compression general ideas work might useful investigating behaviors compression methods details left future work references information technology digital compression coding continuous tone still images requirements guidelines iso jpeg international standard recommendation pennebaker mitchell jpeg still image data compression standard new york van nostrand reinhold kingsbury january image coding connexions web online available http smoot rowe study dct coefficient distributions proc spie symp electronic imaging vol san jose eude grisel cherifi debrie distribution dct coefficients proc ieee int conf assp adelaide australia april miller smidth coleman data compression using feedforward quantization estimator patent bruna smith vella naccari jpeg rate control algorithm multimedia proc ieee int symp consumer electronics reading horie oimc ohta ikuma electronic still camera method operating patent application publication may hamilton jpeg file interchange format version microsystems http digital still camera image file format proposal exif version mar jeida electronic still camera working group strang discrete cosine transform siam review vol
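The first paper's geometric argument rests on the 8x8 DCT being an orthogonal map: it rotates the cube of level-shifted pixel blocks inside a ball of radius sqrt(64 * 128^2) = 1024, so the squared amplitudes of the DCT coefficients equal those of the pixels and the AC coefficients obey the ball condition. A minimal numerical check of these two facts, assuming the standard JPEG DCT-II basis C[u][x] = c(u)/2 * cos((2x+1)u*pi/16) with c(0) = 1/sqrt(2) (pure-Python, no external libraries):

```python
import math, random

N = 8
R = 128 * N  # radius of the ball containing the shifted-pixel cube: sqrt(64 * 128^2) = 1024

def dct_matrix():
    # Orthogonal 8x8 DCT-II matrix: C[u][x] = c(u)/2 * cos((2x+1) u pi / 16)
    c = lambda u: math.sqrt(0.5) if u == 0 else 1.0
    return [[0.5 * c(u) * math.cos((2 * x + 1) * u * math.pi / 16)
             for x in range(N)] for u in range(N)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

def transpose(A):
    return [list(col) for col in zip(*A)]

C = dct_matrix()

# Orthogonality: C C^T = I, so the DCT is a rotation of R^64.
CCt = matmul(C, transpose(C))
ortho_err = max(abs(CCt[i][j] - (1.0 if i == j else 0.0))
                for i in range(N) for j in range(N))

# Norm preservation on a random level-shifted block P with entries in [-128, 127].
random.seed(0)
P = [[random.randint(-128, 127) for _ in range(N)] for _ in range(N)]
F = matmul(matmul(C, P), transpose(C))  # F = C P C^T
norm_P = sum(v * v for row in P for v in row)
norm_F = sum(v * v for row in F for v in row)

# Ball condition: the AC energy (total energy minus the DC term) stays
# strictly inside the ball of radius R.
ac_energy = norm_F - F[0][0] ** 2
```

Under this normalization F[0][0] is 8 times the block's average pixel value, matching the paper's remark that the DC coefficient is a multiple of the mean.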
| 5 |
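The AC-coefficient coding that the first paper analyzes pairs each nonzero quantized coefficient with the run of zeros preceding it in zigzag order into a (runlength, size) symbol, inserts a ZRL = (15, 0) extension for runs longer than 15, closes a trailing run of zeros with EOB = (0, 0), and charges huff(run, size) + size bits per symbol, so len is a sum of these terms. A sketch of that bookkeeping; the Huffman code lengths below are illustrative placeholders only (the real tables are in Annex K of the JPEG specification):

```python
def coeff_size(v):
    # "size" of an amplitude: smallest s with |v| <= 2^s - 1 (0 for v == 0)
    return abs(v).bit_length()

def ac_symbols(zz):
    # zz: the quantized AC coefficients in zigzag order.
    # Emit (runlength, size, amplitude) triples; runs > 15 use ZRL = (15, 0);
    # a trailing run of zeros is terminated by a single EOB = (0, 0).
    out, run = [], 0
    for v in zz:
        if v == 0:
            run += 1
            continue
        while run > 15:
            out.append((15, 0, 0))  # ZRL: stands for a run of 16 zeros
            run -= 16
        out.append((run, coeff_size(v), v))
        run = 0
    if run > 0:
        out.append((0, 0, 0))       # EOB terminates the block
    return out

def code_length(symbols, huff_bits):
    # len = sum over symbols of huff(run, size) + size (amplitude bits)
    return sum(huff_bits[(r, s)] + s for r, s, _ in symbols)

# Hypothetical Huffman code lengths, for illustration only.
huff_bits = {(0, 0): 2, (0, 1): 2, (0, 2): 2, (1, 1): 4, (15, 0): 11}

zz = [3, 0, -1] + [0] * 60   # a sparse example block
syms = ac_symbols(zz)
bits = code_length(syms, huff_bits)
```

Because huff(run, size) depends only on the symbol, replacing an amplitude by the smallest value of the same size (the paper's "reduced configuration" step) leaves this bit count unchanged, which is what makes the reduction argument work.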
asynchronous distributed optimization via randomized dual proximal gradient jun ivano notarnicola giuseppe notarstefano abstract paper consider distributed optimization problems cost function separable sum possibly functions sharing common variable split strongly convex term convex one second term typically used encode constraints regularize solution propose class distributed optimization algorithms based proximal gradient methods applied dual problem show choosing suitable primal variable copies dual problem separable written terms conjugate functions dual variables stacked blocks associated computing nodes first show weighted proximal gradient dual function leads synchronous distributed algorithm local dual proximal gradient updates node main paper contribution develop asynchronous versions algorithm node updates triggered local timers without global iteration counter algorithms shown proper randomized proximal gradient updates dual function ntroduction several estimation learning decision control problems arising networks involve distributed solution constrained optimization problem typically computing processors partial knowledge problem portion cost function subset constraints need cooperate order compute global solution whole problem key challenge designing distributed optimization algorithms ivano notarnicola giuseppe notarstefano department engineering del salento via monteroni lecce italy result part project received funding european research council erc european union horizon research innovation programme grant agreement march draft networks communication among nodes possibly asynchronous early references distributed optimization algorithms involved primal dual subgradient methods alternating direction method multipliers admm designed synchronous communication protocols fixed graphs recently versions algorithmic ideas proposed cope realistic network scenarios consensus strategy proposed solve unconstrained convex optimization problems asynchronous 
symmetric gossip communications primal synchronous algorithm called extra proposed solve smooth unconstrained optimization problems authors propose accelerated distributed gradient methods unconstrained optimization problems symmetric networks connected average order deal directed graph topologies algorithm average consensus combined primal subgradient method order solve unconstrained convex optimization problems paper extends algorithm online distributed optimization directed networks novel class distributed algorithms proposed fixed graphs conditions exponential convergence provided distributed primal method proposed solve balanced communication graphs optimization problems separable cost function including local differentiable components common nondifferentiable term experiments dual averaging algorithm run separable problems common constraint directed networks general constrained convex optimization problems authors propose distributed random projection algorithm used multiple agents connected balanced network author proposes primal randomized descent methods minimizing convex optimization problems linearly coupled constraints networks asynchronous distributed method proposed separable constrained optimization problem algorithm shown converge rate iteration counter admm approach proposed general framework thus yielding continuum algorithms ranging fully centralized fully distributed method called proposed solve separable convex optimization problems cost function written sum smooth march draft term successive updates proposed solve separable optimization problems parallel setting another class algorithms exploits exchange active constraints among network nodes solve constrained optimization problems idea combined dual decomposition methods solve robust convex optimization problems via polyhedral approximations algorithms work asynchronous direct unreliable communication contribution paper twofold first fixed graph topology develop distributed optimization 
algorithm based centralized dual proximal gradient idea introduced minimize separable strongly convex cost function proposed distributed algorithm based proper choice primal constraints suitably separating constraints gives rise dual problem separable structure expressed terms local conjugate functions thus proximal gradient applied dual problem turns distributed algorithm node updates primal variable local minimization dual variables suitable local proximal gradient step algorithm inherits convergence properties centralized one thus exhibits iteration counter rate convergence objective value point algorithm accelerated nesterov scheme thus obtaining convergence rate second main contribution propose asynchronous algorithm symmetric eventtriggered communication protocol communication node idle mode local timer triggers idle continuously collects messages neighboring nodes awake needed updates primal variable local timer triggers updates local primal dual variables broadcasts neighboring nodes mild assumptions local timers whole algorithm results uniform random choice one active node per iteration using property showing dual variables stacked separate blocks able prove distributed algorithm corresponds proximal gradient one proposed performed dual problem specifically able show dual variables handled single node represent single variableblock local update triggered node turns proximal gradient step node local result algorithm march draft inherits convergence properties proximal gradient important property distributed algorithm solve fairly general optimization problems including composite cost functions local constraints key distinctive feature algorithm analysis combination duality theory methods properties proximal operator applied conjugate functions summarize algorithms compare literature following way works handle constrained optimization use different methodologies primal approaches used also local constraints regularization terms handled simultaneously 
proximal operator used handle common cost function known agents directly primal problem algorithm uses idea similar one use paper works directly primal problem handle local constraints make use proximal operators paper propose flexible dual approach take account local constraints regularization terms problem similar one considered paper differently approach dual method algorithms proposed references difference results different algorithms well different requirements cost functions moreover compared algorithms able use constant local locally computed beginning iterations algorithm run asynchronously without coordination step regard propose algorithmic formulation asynchronous protocol explicitly relies local timers need global iteration counter update laws paper organized follows section optimization problem section iii derive equivalent dual problem amenable distributed computation distributed algorithms introduced section analyzed section finally section highlight flexibility proposed algorithms showing important optimization scenarios addressed corroborate discussion numerical computations notation given closed nonempty convex set indicator function defined otherwise let conjugate function defined supx let closed proper convex function positive scalar proximal operator march draft defined argminx also introduce generalized version proximal operator given positive definite matrix define proxw argmin roblem network structure consider following optimization problem min proper closed strongly convex extended functions strong convexity parameter proper closed convex extended functions note split may depend problem structure intuitively easy minimization step performed division free operations easy expression proximal operator remark problems setup require differentiable thus one also consider strongly convex function given fij fij nonempty collection strongly convex functions since work dual problem introduce next standard assumption guarantees dual feasible 
equivalent primal strong duality assumption constraint qualification intersection relative interior dom relative interior dom want optimization problem solved distributed way network peer processors without central coordinator processor local memory local computation capability exchange information neighboring nodes assume communication occur among nodes neighbors given fixed undirected connected graph set edges edge models fact node exchange information denote march draft set neighbors node fixed graph cardinality section propose distributed algorithms three communication protocols namely consider synchronous protocol neighboring nodes graph communicate according common clock two asynchronous ones respectively nodebased nodes become active according local timers exploit sparsity graph introduce copies coherence consensus constraint optimization problem equivalently rewritten min subj connectedness guarantees equivalence since propose distributed dual algorithms next derive dual problem characterize properties iii ual problem derivation derive dual version problem allow design distributed dual proximal gradient algorithms obtain desired separable structure dual problem equivalent formulation problem adding new variables min subj let lagrangian primal problem given march draft respectively vectors lagrange multipliers last line separated terms since undirected lagrangian rearranged dual function min min min min min used separability lagrangian respect using definition conjugate function given notation paragraph dual function expressed march draft dual problem consists maximizing dual function respect dual variables max assumption dual problem feasible strong duality holds equivalently solved get solution order compact notation problem stack dual variables vector whose associated neighbor thus changing sign cost function dual problem restated min istributed dual proximal algorithms section derive distributed optimization algorithms based dual proximal gradient 
methods synchronous distributed dual proximal gradient begin deriving synchronous algorithm fixed graph assume nodes share common clock time instant every node communicates neighbors graph defined section updates local variables first provide informal description distributed optimization algorithm node stores set local dual variables updated local proximal gradient step primal variable updated local minimization node uses properly chosen local proximal gradient step updated primal dual values exchanged neighboring nodes local dual variables node initialized local update node distributed algorithm given algorithm march draft algorithm distributed dual proximal gradient processor states initialization argminxi evolution receive update update prox receive compute argmin remark order start algorithm preliminary communication step needed node receives neighbor compute convexity parameter set clear analysis section step avoided nodes agree set know bound also worth noting differently algorithms general point run distributed dual proximal gradient algorithm nodes need common clock also worth noting clear analysis section set local node needs know number nodes network next sections present two asynchronous distributed algorithms overcome limitations march draft asynchronous distributed dual proximal gradient next propose asynchronous algorithm consider asynchronous protocol node concept time defined local timer randomly independently nodes triggers awake two triggering events node idle mode continuously receives messages neighboring nodes needed runs auxiliary computation trigger occurs switches awake mode updates local primal dual variables transmits updated information neighbors formally triggering process modeled means local clock randomly generated waiting time long node idle mode node switches awake mode running local computation resets draws new realization random variable make following assumption local waiting times assumption exponential local timers waiting times 
consecutive triggering events random variables exponential distribution node wakes updates local dual variables local proximal gradient step primal variable local minimization local stepsize proximal gradient step node denoted order highlight difference updated old variables node awake phase denote updated ones respectively node idle receives messages awake neighbors dual variable received computes updated value broadcasts neighbors worth noting algorithm asynchronous common clock synchronous version asynchronous algorithm section present variant asynchronous distributed dual proximal gradient edge becomes active uniformly random rather node words assume timers associated edges rather nodes waiting time tij extracted edge march draft algorithm asynchronous distributed dual proximal gradient processor states initialization argmin set get realization evolution idle receive received compute broadcast argmin awake awake update broadcast update prox compute broadcast argmin set get new realization idle processor states initialization stay except timers notice assuming nodes common waiting time tij consistently local timer waiting time tij satisfies assumption algorithm report evolution modified scenario algorithm dual variable associated constraint updated every march draft time edge becomes active otherwise would updated often variables thus identify one special neighbor update edge active algorithm formulation algorithm evolution idle tij nothing awake awake send receive update send update compute argmin set deal new realization tij idle nalysis distributed algorithms start analysis first introduce extended version centralized proximal gradient methods weighted proximal gradient methods consider following optimization problem min convex functions decompose decision variable consistently decompose space subspaces follows let column permutation identity matrix let decomposition submatrices thus vector uniquely written viceversa let problem satisfy following assumptions 
march draft assumption block lipschitz continuity gradient block lipschitz continuous positive constants rni holds ksi block component assumption separability function decomp posed rni proper closed convex extended function assumption feasibility set minimizers problem deterministic descent first show standard proximal gradient algorithm generalized using weighted proximal operator following line proof prove generalized proximal gradient iteration given proxw argmin converges objective value optimal solution rate order extend proof theorem need use generalized version lemma end use result nesterov given recalled following lemma completeness lemma generalized descent lemma let assumption hold diag satisfies nli tighter conditions one given found march draft theorem let assumption hold let sequence generated iteration applied problem diag nli initial condition minimizer problem proof theorem proven following arguments theorem using lemma place lemma randomized block coordinate descent next present randomized version weighted proximal gradient proposed algorithm uniform coordinate descent composite functions ucdc algorithm algorithm ucdc initialization choose probability compute argmin vit vit lit yit update uit convergence result ucdc given theorem reported completeness theorem theorem let assumptions hold let denote optimal cost problem exists random sequence generated ucdc algorithm applied problem march draft holds initial condition target confidence analysis synchronous algorithm start section recalling useful properties conjugate functions useful convergence analysis proposed distributed algorithms lemma let closed strictly convex function conjugate function argmax argmin moreover strongly convex convexity parameter lipschitz continuous lipschitz constant given next lemma establish important properties problem useful analyze proposed distributed dual proximal algorithms lemma let consistently notation problem appendix problem satisfies assumption block lipschitz 
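As a hedged illustration of the generalized iteration prox_W(y - W^{-1} grad Phi(y)) with W = diag(n L_i) analyzed above, the sketch below runs it on a small centralized lasso instance, taking each coordinate as a block so that L_i = ||a_i||^2 (the squared column norms of A); the problem data and iteration count are mine, not from the paper:

```python
import numpy as np

# Weighted proximal gradient on min 0.5*||A x - b||^2 + lam*||x||_1 with
# coordinate blocks, L_i = ||a_i||^2 and W = diag(n * L_i).  By Cauchy-Schwarz,
# x^T A^T A x <= n * sum_i L_i x_i^2, so this W satisfies the generalized
# descent lemma's condition.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 8))
b = rng.standard_normal(30)
lam = 0.5
n = A.shape[1]
w = n * np.sum(A ** 2, axis=0)            # diagonal of W
x = np.zeros(n)
for _ in range(5000):
    grad = A.T @ (A @ x - b)              # gradient of the smooth part
    z = x - grad / w                      # W^{-1}-scaled gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / w, 0.0)  # weighted prox of lam*||.||_1
```

Because the nonsmooth term is separable, the weighted prox splits into per-coordinate soft thresholds with thresholds lam / w_i, mirroring how the distributed algorithm splits the prox across nodes.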
continuity block lipschitz constants given assumption separability assumption feasibility proof proof split blocks one assumption block lipschitz continuity show gradient block lipschitz continuous block component march draft associated neighbor given equal lemma lipschitz continuous lipschitz constants respectively thus also lipschitz continuous constant lij using euclidean lipschitz continuous constant lij similarly gradient respect lipschitz continuous constant finally conclude lipschitz continuous constant separability definition component block thus denoting follows immediately feasibility assumption convexity condition strong duality holds dual problem feasible admits least minimizer next recall proximal operators function conjugate related lemma moreau decomposition let closed convex function conjugate lemma extended moreau decomposition let closed convex function conjugate holds march draft proof let moreau decomposition lemma holds proxh prove result simply need compute terms first definition conjugate function obtain using definition proximal operator standard algebraic properties minimization holds true proof follows next lemma shows weighted proximal operator split local proximal operators independently carried single node lemma let let diagonal weight matrix diag proximal operator evaluated given prox proof let variable block structure defined using definition weighted proximal operator separability norm function argmin argmin kui kvi march draft minimization splits component giving argmin argmin argmin argmin kvn proof follows definition proximal operator ready show convergence distributed dual proximal gradient introduced algorithm theorem let proper closed strongly convex extended function strong convexity parameter let proper convex extended function suppose algorithm local chosen nli given sequence generated distributed dual proximal gradient algorithm satisfies minimizer initial condition diag proof prove theorem proceed two steps first lemma 
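The extended Moreau decomposition of the lemma above, x = prox_{a h}(x) + a * prox_{h*/a}(x / a), can be verified numerically; the sketch assumes h = ||.||_1, whose conjugate h* is the indicator of the l_inf unit ball (so prox_{h*/a} is clipping onto [-1, 1]^n); the test point and a are arbitrary:

```python
import numpy as np

def soft_threshold(x, a):
    # prox of a*||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - a, 0.0)

# For h = ||.||_1 the conjugate h* is the indicator of the l_inf unit ball,
# so prox_{h*/a} is the Euclidean projection onto [-1, 1]^n, i.e. clipping.
x = np.array([2.0, -3.0, 0.4])
a = 1.5
# Extended Moreau decomposition: x = prox_{a h}(x) + a * prox_{h*/a}(x / a)
rhs = soft_threshold(x, a) + a * np.clip(x / a, -1.0, 1.0)
```

This identity is what lets the distributed algorithm trade a prox of a conjugate function for a prox of the original one.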
problem satisfies assumptions theorem thus deterministic weighted proximal gradient solves problem thus need show distributed dual proximal gradient algorithm weighted proximal gradient scheme weighted proximal gradient algorithm applied problem given march draft lemma given lipschitz constant block thus using hypothesis nli apply theorem ensures convergence rate objective value order disclose distributed update rule first split two steps compute explicitly component equations focusing considering diagonal write block component explicit update associated neighbor explicit update denoting lemma holds argmin march draft thus rewrite terms obtaining finally last step consists applying rule order highlight distributed update rewrite fashion proxd applying lemma lemma obtain prox prox prox proof follows remark nesterov acceleration include nesterov extrapolation step algorithm accelerates algorithm details attaining faster convergence rate objective value order implement acceleration node needs store copy dual variables previous iteration thus update law would changed following represents nesterov overshoot parameter march draft analysis asynchronous algorithm order analyze algorithm start recalling properties exponential random variables let sequence identifying generic tth triggered node assumption implies uniform process alphabet triggering induce iteration distributed optimization algorithm universal discrete time indicating iteration algorithm thus external global perspective described local asynchronous updates result algorithmic evolution iteration one node wakes randomly uniformly independently previous iterations variable used statement proof theorem however want stress iteration counter need known agents theorem let proper closed strongly convex extended function strong convexity parameter let proper convex extended function suppose algorithm local chosen sequence generated asynchronous distributed dual proximal gradient algorithm converges objective value 
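The extrapolation step of the Nesterov-acceleration remark can be sketched FISTA-style on a toy composite problem min_x 0.5*||x - c||^2 + lam*||x||_1, whose minimizer is the soft threshold of c; the overshoot sequence t_{k+1} = (1 + sqrt(1 + 4 t_k^2)) / 2 is one standard choice, and all constants are illustrative:

```python
import numpy as np

# FISTA-style sketch of the extrapolation (overshoot) step: keep the previous
# iterate, take the proximal gradient step at the extrapolated point y, then
# overshoot in the direction of the latest progress.
c = np.array([3.0, -0.2, 1.5])
lam, L = 1.0, 1.0                      # L = Lipschitz constant of the smooth part
x = np.zeros_like(c)
y = x.copy()
t = 1.0
for _ in range(200):
    z = y - (y - c) / L                                        # gradient step at y
    x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # proximal step
    t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
    y = x_new + (t - 1.0) / t_new * (x_new - x)                # Nesterov extrapolation
    x, t = x_new, t_new
```

As the remark notes, implementing this in the distributed algorithm only requires each node to additionally store its dual variables from the previous iteration.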
high probability initial condition target confidence exists holds optimal cost problem proof prove theorem proceed two steps first show apply uniform coordinate descent composite functions algorithm solve problem second show applied problem algorithm gives iterates asynchronous distributed dual proximal gradient first part follows immediately lemma asserts problem satisfies assumptions theorem algorithm solves march draft next show two algorithms update first lemma given lipschitz constant block thus rest proof following set maximum allowable value clearly convergence preserved smaller stepsize used consistently notation algorithm let denote selected index iteration thus argminsi vit sit defined written terms proximal gradient update applied block component fact definition function argmin lit ksk git yit order apply formal definition proximal gradient step add constant term introduce change variable given yit obtaining argmin yields yit lit thus update fact changes component yit updated yit yit prox yit lit ones remain unchanged following steps proof theorem split update gradient proximal steps gradient step given yit march draft respectively proximal operator step turns prox yit applying lemma block rewrite proxlit git lit component given lit lit argmin used property lemma assumption sequence nodes becomes active according uniform distribution node triggering associated iteration algorithm given update node active network performs update dual variables yit order perform local update selected node needs know updated information last triggering regards neighbors dual variables nit broadcast nit last time become active regarding primal variables nit situation little tricky indeed nit may changed past due either one neighbors become active cases broadcast updated dual variable either become active received idle updated dual variable one neighbors march draft remark differently synchronous algorithm asynchronous version nodes need know number nodes order set local fact 
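The property underpinning the analysis above — i.i.d. exponential local timers make each triggering event a uniform draw over the nodes — is easy to confirm by simulation; node and event counts are arbitrary:

```python
import numpy as np

# Each node holds an i.i.d. exponential waiting time; the node whose timer
# fires first is the one that becomes awake at that triggering event.
rng = np.random.default_rng(0)
n_nodes, n_events = 5, 20000
samples = rng.exponential(scale=1.0, size=(n_events, n_nodes))
winners = np.argmin(samples, axis=1)
freq = np.bincount(winners, minlength=n_nodes) / n_events   # ~1/n_nodes each
```

The empirical triggering frequencies concentrate around 1/n, which is exactly the uniform-block-selection model required by the randomized coordinate descent analysis.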
node set parameter knowing convexity parameters remark strongly convex separable penalty term added dual function becomes strongly convex stronger result theorem applies linear convergence high probability guaranteed note strong convexity dual function obtained primal function lipschitz continuous gradient chapter theorem analysis asynchronous algorithm convergence algorithm relies essentially arguments theorem different block partition fact split variable blocks number edges notice since dual variables need associated subset edges thus variable split blocks yij given yij yij otherwise algorithm updated neighbor becomes active otivating optimization scenarios numerical computations constrained optimization first concrete setup consider separable constrained convex optimization problem min subj strongly convex function closed convex set problem structure widely investigated distributed optimization shown literature review notice pointed discussion introduction assume strong convexity need assume smoothness march draft rewrite problem transforming constraints additional terms objective function using indicator functions ixi associated min ixi since convex set ixi convex function thus problem mapped distributed setup setting ixi treating local constraints way perform local unconstrained minimization step computing local feasibility entrusted proximal operator fact proximal operator convex indicator function reduces standard euclidean projection proxix argmin remark considering quadratic costs benefit greatly numerical point view fact unconstrained quadratic program solved via efficient methods often result algorithms possibly precomputations implemented arithmetic see details attractive feature setup one conveniently decide rewrite local constraints formulation suggested include local constraint also reasonable include constraint consider indicator function definition define otherwise thus identically zero still convex strategy results algorithm basically asynchronous 
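Since the prox of a convex indicator function reduces to the Euclidean projection, the local proximal step for a box constraint is plain clipping; a minimal projected-gradient sketch on the toy quadratic cost 0.5*||x - c||^2 over [-1, 1]^n (all data are illustrative):

```python
import numpy as np

# Proximal gradient where the nonsmooth term is the indicator of the box
# [-1, 1]^n: the prox reduces to clipping, so each step is projected gradient.
c = np.array([2.0, -0.4, -3.0])
x = np.zeros_like(c)
step = 0.5
for _ in range(100):
    x = np.clip(x - step * (x - c), -1.0, 1.0)
```

The iterate converges to the projection of the unconstrained minimizer c onto the box, illustrating how local feasibility is entrusted entirely to the proximal operator.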
distributed dual decomposition algorithm notice choice recursive feasibility obtained provided local algorithm solves minimization interior point fashion two extreme scenarios one could also consider possibilities indeed could case one benefit splitting local constraint two distinct contributions way indicator function positive orthant march draft could included allowing simpler constrained local minimization step constraint could mapped second term izi choice leads simple observation leaving equal zero seems waste degree freedom could differently exploited introducing regularization term regularized constrained optimization highlighted previous paragraph flexibility algorithmic framework allows handle together local constraints also regularization cost regularize solution useful technique many applications sparse design robust estimation statistics support vector machine svm machine learning total variation reconstruction signal processing geophysics compressed sensing problems cost loss function representing predictions based theoretical model mismatch real data next focus widespread choice loss function least square cost giving rise following optimization problem min kai typical challenge arising regression problems due fact problem often standard algorithms easily incur phenomena viable technique prevent consists adding regularization cost usual choices also referred tikhonov regularization ridge regression leads called lasso least absolute shrinkage selection operator problem min kai positive scalar cases distributed estimation one may interested solution bounded given box leaving reduced subspace gives rise called constrained lasso problem see march draft discussed setup simultaneously manage constrained regularized problem constrained lasso first way map problem setup defining otherwise setting proximal operator admits analytic solution well known soft thresholding operator applied vector component gives vector whose component thresholds components modulus 
greater see alternatively may include constraint regularization term obtaining unconstrained local minimization node choice particularly appealing constraint box case proximal operator becomes saturated version operator depicted figure fig saturated operator march draft numerical tests section provide numerical example showing effectiveness proposed algorithms test proposed distributed algorithms constrained lasso optimization problem min kai decision variable represent respectively data matrix labels associated examples assigned node inequality meant randomly generate lasso data following idea suggested element generated perturbing true solution xtrue around half nonzero entries additive noise matrix vector normalized respect number local samples node box bounds set regularization parameter match problem distributed framework introduce copies decision variable consistently define local functions costs box defined let regularization term local parameter initialize zero dual variables use expression smallest eigenvalue consider undirected connected graph parameter connecting nodes run synchronous asynchronous algorithms underlying graph stop difference current dual cost optimal value drops threshold figure shows difference dual cost iteration optimal value logarithmic scale particular rates convergence synchronous left asynchronous right algorithms shown asynchronous algorithm normalize iteration counter respect number agents march draft fig evolution cost error logarithmic scale synchronous left asynchronous right distributed algorithms concentrate asynchronous algorithm figure plot evolution three components primal variables horizontal dotted lines represent optimal solution worth noting optimal solution first component slightly constraint boundary second component equal zero third component constraint boundary optimal solution intuitively explained simultaneous influence box constraints restrict set admissible values regularization term enforces sparsity vector 
inset first iterations subset selected nodes highlighted order better show transient constant behavior due gossip update effect box constraint component component fig component component evolution three components primal variables asynchronous distributed dual proximal gradient march draft specifically first component seen temporary solution agents hits boundary iterations one hits boundary iteration iteration converges feasible optimal value second components always inside box constraint converge zero third components start inside box finite number iterations hit boundary vii onclusions paper proposed class distributed optimization algorithms based dual proximal gradient methods solve constrained optimization problems separable objective functions main idea construct suitable separable dual problem via proper choice primal constraints solve proximal gradient algorithms thanks separable structure dual problem terms local conjugate functions weighted proximal gradient update results distributed algorithm node performs local minimization primal variable local proximal gradient update dual variables main contribution two asynchronous distributed algorithms proposed local updates node triggered local timer without relying global iteration counter asynchronous algorithms shown special instances randomized proximal gradient method dual problem convergence analysis based proper combination duality theory methods properties proximal operators eferences zanella varagnolo cenedese pillonetto schenato asynchronous consensus distributed convex optimization ifac workshop distributed estimation control networked systems shi ling yin extra exact algorithm decentralized consensus optimization siam journal optimization vol jakovetic freitas xavier moura convergence rates distributed gradient methods random networks ieee transactions signal processing vol olshevsky distributed optimization directed graphs vol akbari gharesifard linder distributed online convex optimization directed 
graphs ieee annual allerton conference communication control computing allerton march draft kia distributed convex optimization via coordination algorithms communication automatica vol chen ozdaglar fast distributed method ieee annual allerton conference communication control computing allerton tsianos lawlor rabbat distributed optimization practical issues applications machine learning ieee annual allerton conference communication control computing allerton lee distributed random projection algorithm convex optimization ieee journal selected topics signal processing vol necoara random coordinate descent algorithms convex optimization networks ieee transactions automatic control vol wei ozdaglar convergence asynchronous distributed alternating direction method multipliers ieee global conference signal information processing globalsip iutzeler bianchi ciblat hachem explicit convergence rate distributed alternating direction method multipliers arxiv preprint bianchi hachem iutzeler stochastic coordinate descent algorithm applications ieee international workshop machine learning signal processing mlsp parallel coordinate descent methods big data optimization mathematical programming facchinei scutari sagratella parallel selective algorithms nonconvex big data optimization ieee transactions signal processing vol notarstefano bullo distributed abstract optimization via constraints consensus theory applications vol notarstefano polyhedral approximation framework convex robust distributed optimization ieee transactions automatic control vol beck teboulle fast iterative algorithm linear inverse problems siam journal imaging sciences vol nesterov gradient methods minimizing composite functions mathematical programming vol iteration complexity randomized descent methods minimizing composite function mathematical programming vol donoghue stathopoulos boyd splitting method optimal control ieee transactions control systems technology vol nesterov efficiency coordinate descent 
methods optimization problems siam journal optimization vol necoara clipici parallel random coordinate descent method composite minimization convergence analysis error bounds siam journal optimization vol boyd vandenberghe convex optimization cambridge university press beck teboulle fast dual proximal gradient algorithm convex minimization applications operations research letters vol march draft bauschke combettes convex analysis monotone operator theory hilbert spaces springer science business media convex analysis minimization algorithms advanced theory bundle methods grundlehren der mathematischen wissenschaften vol kar moura convergence rate analysis distributed gossip linear parameter estimation fundamental limits tradeoffs ieee journal selected topics signal processing vol james paulson rusmevichientong constrained lasso citeseer tech eis ramadge generalized lasso reducible subspace constrained lasso ieee international conference acoustics speech signal processing icassp parikh boyd proximal algorithms foundations trends optimization vol giuseppe notarstefano assistant professor ricercatore del salento lecce italy since february received laurea degree summa cum laude electronics engineering pisa degree automation operation research padova april visiting scholar university stuttgart university california santa barbara university colorado boulder research interests include distributed optimization motion coordination networks applied nonlinear optimal control modeling trajectory optimization aggressive maneuvering aerial car vehicles serves associate editor conference editorial board ieee control systems society european control conference ifac world congress ieee systems control coordinated team winning international student competition virtual formula recipient erc starting grant ivano notarnicola student engineering complex systems del salento lecce italy since november received laurea degree summa cum laude computer engineering del salento visiting student 
institute system theory stuttgart germany march june research interests include distributed optimization randomized algorithms
secure video streaming heterogeneous small cell networks untrusted cache helpers jan lin xiang student member ieee derrick wing kwan senior member ieee robert schober fellow ieee vincent wong fellow ieee paper studies secure video streaming cacheenabled small cell networks small cell base stations bss helping video delivery untrusted unfavorably caching improves eavesdropping capability untrusted helpers may intercept cached delivered video files address issue propose joint caching scalable video coding svc video files enable secure cooperative multipleoutput mimo transmission time exploit cache memory trusted untrusted bss improving system performance considering imperfect channel state information csi transmitters formulate robust optimization problem minimize total transmit power required guaranteeing quality service qos secrecy video streaming develop iterative algorithm based modified generalized benders decomposition gbd solve problem optimally caching cooperative transmission policies determined via offline online optimization respectively furthermore inspired optimal algorithm suboptimal algorithm based greedy heuristic proposed simulation results show proposed schemes achieve significant gains power efficiency secrecy performance compared several baseline schemes index layer security untrusted nodes wireless caching mimo optimization resource allocation ntroduction mall cells among promising solutions meeting enormous capacity requirements introduced video streaming applications wireless networks densely deploying base stations bss spectral energy efficiencies wireless communication systems improved significantly however achieve performance gains highcapacity secure backhaul links required transport video files internet small cell bss work supported australian research council discovery early career researcher award funding scheme work schober supported alexander von humboldt professorship program work wong supported natural sciences engineering research 
council canada part work accepted presentation ieee global commun conf globecom singapore xiang schober institute digital communications university erlangen germany email school electrical engineering telecommunications university new south wales sydney nsw australia email wong department electrical computer engineering university british columbia vancouver canada email vincentw wireless backhauling usually preferred small cells due low cost high flexibility deployment capacity provided wireless backhauling often insufficient deteriorates overall system performance limits maximum number concurrent streaming moreover since wireless transmission susceptible eavesdropping security wireless backhauling fundamental concern wireless networks recently wireless caching proposed viable solution enhance capacity small cell backhauling built upon networking paradigm wireless caching popular contents access points bss close proximity user equipments ues consequently backhaul traffic offloaded reusing cached content caching alternative small cell backhauling first investigated caching shown also substantially reduce average downloading delay besides caching improves energy efficiency wireless backhauling systems shown caching optimized facilitate powerefficient cooperative mimo transmission small cell networks joint caching buffering small cell networks proposed overcome backhaul capacity bottleneck transmission constraint simultaneously enable fast downloading video files coded caching introduced reduces backhaul load exploiting coded multicast transmission simultaneous delivery different files coded caching extended various network scenarios see references therein hand although communication secrecy high importance wireless networks providing security networks employing wireless caching major challenge current video streaming applications youtube netflix mainly rely encryption schemes hypertext transfer protocol secure https ensure communication security however schemes 
benefits networking vanish encrypted contents uniquely defined user request reused serve user requests reason caching mainly considered content without security restrictions literature overcome limitation physical layer security pls schemes wireless caching proposed pls techniques rely wiretap channel coding instead source encryption content still reused wireless edge secure wireless transmission hence caching pls compatible cooperative mimo transmission shown effective physical layer mechanism increasing secrecy rate video delivery homogeneous cellular networks however secure backhaul cache placement required always guaranteed wireless backhauling practice considering insecure backhaul secure cache placement strategy heterogeneous cellular networks hetnets developed whereby eavesdroppers tapping insecure backhaul prevented obtaining sufficient number coded packets successful recovery video file assuming caching end users authors proposed secure coded multicast scheme relay networks prevent end users external eavesdroppers intercepting nonrequested delivered files respectively however optimistically assumed cache helpers trusted cooperation cache exploited eavesdropping purposes assumptions may unrealistic hetnets particular due distributed network architecture small cell bss untrusted helpers may potential hence may cooperate altruistically examples untrusted helpers include small cell bss easily manipulated third parties eavesdrop premium video streaming services paid users private video files contrast trusted small cell bss deployed owned service provider untrusted small cell bss user data left unprotected prone eavesdropping small cell responsible encrypting decrypting user data forwarding macro intended users respectively moreover different case considered cache memory equipped untrusted helpers unfavorably enhances eavesdropping capability intercept cached delivered video data utilize cached video data side information improve reception two fundamental 
Hence, two fundamental questions need to be addressed when the cache helpers are untrusted: Can untrusted helpers still yield secrecy benefits, and can the caches deployed at untrusted helpers be utilized to improve system performance? And how should the caches and BSs cooperate intelligently to reap the possible performance gains? Small cell networks can perform authentication of the BSs and completely exclude untrusted BSs from participating in cooperative transmission; in this case, however, the resources of the untrusted BSs cannot be used to improve the system performance. Untrusted helpers without caching have been investigated for relay networks, where it was shown that cooperation with relays can yield a positive secrecy rate even when the relays are untrusted. However, the problem studied in this paper is more challenging, as the untrusted helpers cache content that enhances their eavesdropping capability; thus, the solutions proposed for untrusted relays are not applicable, and a new study is needed.

In this paper, to facilitate secure cooperative transmission with untrusted cache helpers, we propose an advanced caching scheme that combines scalable video coding (SVC) and cooperative MIMO transmission. Specifically, each video file is encoded by SVC into a base-layer subfile containing independently decodable video information and enhancement-layer subfiles containing video information that is decodable only after the base layer has been successfully decoded. By caching the enhancement-layer subfiles across all BSs but the base-layer subfiles only across the trusted BSs, secure cooperation of the BSs is enabled by exploiting the encoding and decoding structure of SVC. Thereby, a large virtual transmit antenna array is formed by the BSs that have cached a given subfile, which introduces additional degrees of freedom (DoFs) that may be utilized for secure video streaming. To reap these secrecy benefits, a centralized framework for caching and delivery optimization is adopted; hence, the proposed architecture follows the cloud radio access network (C-RAN) philosophy advocated for HetNets with cooperative MIMO transmission and advanced resource allocation. The conference version of this paper investigated secure transmission assuming perfect knowledge of the channels. In practice, however, the channel state information (CSI) gathered at the central controller (the macro BS) is imperfect due to quantization noise and feedback delay, which deteriorates the system performance and has to be taken into account in the system design. Moreover, to mitigate information leakage from the trusted BSs to the untrusted BSs, artificial-noise-based jamming is applied in this paper.
In the PLS literature, artificial noise has been employed to effectively reduce the receive signal-to-interference-plus-noise ratio (SINR) at the eavesdropper without interfering with the desired users. In this paper, we consider cooperative transmission of the trusted BSs together with jamming to combat eavesdropping by the untrusted BSs. Taking both the untrusted BSs and the imperfect CSI into account, we jointly optimize caching and cooperative data transmission for a secure and power-efficient system design. In particular, a robust optimization problem is formulated for the minimization of the transmit power required for secure video streaming under imperfect CSI. (In this paper, we consider passive eavesdroppers that remain silent during eavesdropping; studying the case of active eavesdroppers, i.e., jamming and spoofing attackers, is an interesting topic for future work. The case considered in this paper is general in the sense that conventional eavesdroppers can be viewed as untrusted helpers with zero cache capacity. Note also that, unlike with HTTPS, untrusted BSs do not only present a threat: they can also facilitate secure cooperative transmission.) The main contributions of this paper are summarized as follows:

- We study a new secrecy threat in small cell networks originating from untrusted cache helpers, i.e., cache-enabled eavesdropping small cell BSs. To facilitate secure cooperative MIMO transmission by trusted and untrusted small cell BSs, we propose a secure caching scheme based on SVC.
- We optimize the caching and cooperative delivery policies for the minimization of the transmit power while guaranteeing QoS and communication secrecy under imperfect CSI.
- We show that the optimal delivery decisions can be obtained via semidefinite programming (SDP) relaxation with probability one under mild conditions. For the optimal caching decisions, an optimal iterative algorithm is developed based on a modified generalized Benders decomposition (GBD); to reduce the computational complexity, a suboptimal greedy scheme is also proposed.
- Simulation results show that the proposed robust schemes efficiently exploit the cache capacities of both trusted and untrusted small cell BSs to enable power-efficient and secure video streaming in heterogeneous small cell networks.

The remainder of this paper is organized as follows. Section II presents the system model for cooperative secure video streaming in the presence of untrusted cache helpers. The formulation and the solution of the proposed optimization problem are provided in Sections III and IV, respectively. In Section V, we evaluate the performance of the proposed schemes and compare them with several baseline schemes.
Finally, Section VI concludes the paper.

Notation: R and C denote the sets of real and complex numbers, respectively; Re{·} denotes the real part; I is the identity matrix; (·)^T and (·)^H are the transpose and complex conjugate transpose operators, respectively; ||·||, tr(·), rank(·), and det(·) denote the Frobenius norm, trace, rank, and determinant of a matrix, respectively, and λ_max(·) the maximal eigenvalue of a square matrix; E[·] is the expectation operator. A circularly symmetric complex Gaussian distribution with mean vector μ and covariance matrix Σ is denoted by CN(μ, Σ), and ∼ stands for "distributed as"; diag(x) is a diagonal matrix whose diagonal elements are given by vector x; |·| represents the cardinality of a set; A ⪰ 0 and A ≻ 0 indicate that matrix A is positive semidefinite and positive definite, respectively. Finally, [x]^+ stands for max{x, 0}.

II. System Model

A. Network Topology

Consider downlink wireless video streaming in a heterogeneous small cell network, where small cell BSs, each equipped with a cache memory of size C_m^max bits, are distributed in the coverage area of a macro BS; see Fig. 1. For convenience, a list of the key notations used in this paper is provided in Table I. The index m = 0 refers to the macro BS. The macro BS is connected to the video server on the Internet via a dedicated secure backhaul link, e.g., optical fiber; for simplicity of notation, the backhaul of the macro BS is modeled as a cache of equivalent capacity C_0^max bits. In contrast, the small cell BSs are connected to the macro BS via wireless backhaul links. BS m is assumed to be equipped with N_m antennas, and the total number of transmit antennas is denoted by N.

The video server owns a library of F video files, indexed by f, to be streamed to K UEs, indexed by k; the size of file f is V_f bits. SVC coding, as utilized in the wireless video delivery profile of the moving picture expert group (MPEG) standard, is employed: each video file is encoded into one base-layer subfile and L − 1 enhancement-layer subfiles, where the information embedded in an enhancement layer refines the information contained in the previous layers. Let l index the set of layers; the size of subfile (f, l) is V_{f,l} bits. The base layer can be decoded independently of the enhancement layers; in contrast, an enhancement layer can only be decoded if all lower layers have already been decoded. Therefore, the layers have to be decoded in a sequential manner. Due to this specific encoding and decoding structure, only the base layer has to be protected in order to ensure communication secrecy: an eavesdropper that cannot decode the base layer is also unable to decode the enhancement layers. The small cell BSs serve as helpers of the macro BS in delivering the video files; however, a subset of the small cell BSs is untrusted.
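The sequential decodability of the SVC layers described above, which the proposed caching scheme exploits, can be sketched as follows; the function name and the 1-based layer indexing are illustrative assumptions, not notation taken from the paper.

```python
def decodable_layers(received, num_layers):
    """Count how many SVC layers a receiver can actually use.

    Layers are decoded sequentially: enhancement layer l is decodable
    only if the base layer (layer 1) and all layers 2..l-1 have been
    decoded. Hence, without the base layer, nothing is decodable.
    """
    count = 0
    for layer in range(1, num_layers + 1):
        if layer in received:
            count += 1
        else:
            break  # a missing layer blocks all higher layers
    return count
```

For instance, a receiver holding layers {2, 3} but not the base layer decodes nothing, which is exactly why it suffices to protect only the base layer.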
Untrusted BSs may leak the cached video data and eavesdrop on the transmitted video data, utilizing their cached data as side information. Let T and J denote the sets of trusted and untrusted BSs, with total numbers of antennas N_T and N_J, respectively. In this paper, we assume that the set of untrusted BSs is known. In practice, untrusted BSs may largely be small cell BSs with insufficient security protection that are easily compromised by third parties. Due to the eavesdropping-intensive processing, untrusted BSs may consume large power and experience long latency even when their uplink and downlink throughputs are small; hence, the power-versus-throughput pattern of untrusted BSs is statistically different from that of trusted BSs, and the untrusted BSs constitute outliers. Therefore, by exploiting the power, delay, and throughput records of the BSs collected by the service providers, the set of untrusted BSs can be estimated by applying outlier detection methods, e.g., supervised or unsupervised learning techniques.

In the considered system, time is slotted, with the duration of a time slot smaller than the channel coherence time. We consider two-timescale control of caching and delivery, as shown in Fig. 2: the video files in the caches are updated every T time slots, referred to as one period, based on historical profiles of user preferences and CSI; in contrast, the video delivery decisions are determined in each time slot based on the actual requests of the users and the instantaneous CSI. As the users' preferences vary on a much slower scale, e.g., day by day, than the user requests, for notational simplicity we consider the system during one typical period, with the corresponding T time slots indexed by t.

TABLE I: List of notations — sets of BSs, trusted BSs, and untrusted BSs; total numbers of antennas of the BSs, trusted BSs, and untrusted BSs; subset of BSs cooperating in the delivery of a subfile; sets of time slots, users, video files, and layer subfiles per file; request of a user for a file and set of user requests; requesting user and requested file corresponding to a request; binary cooperative delivery decisions for a subfile; source symbol of a subfile serving a request at time t; beamforming vectors for sending a symbol; artificial-noise covariance matrix at time t; cache size; achievable rate; achievable secrecy rate of a user; decoding capacity of an untrusted BS eavesdropping a symbol; offline caching control and online delivery control variables.
Fig. 1: System model for secure video delivery in a heterogeneous network with trusted and untrusted small cell BSs distributed in the coverage area of a macro BS; caching and delivery control are performed on two timescales, with offline caching control and online delivery control based on the CSI and the user requests.

B. Data Delivery

The set of BSs cooperating in the delivery of subfile (f, l) is denoted by M^coop. We assume that each user requests one file, possibly multiple layers thereof, at a time. Denote the request of user k for file f by r, and the set of user requests by R. For convenience, the requesting user and the requested file corresponding to request r are denoted by k(r) and f(r), respectively; moreover, a user may request the layers indexed up to L_r. We assume frequency-flat fading channels for the video data transmission. As a worst case, we assume that the untrusted BSs can simultaneously eavesdrop on the video information intended for the UEs and participate in the cooperative delivery of their cached files in the same time slot, i.e., we neglect the self-interference caused by simultaneous reception and transmission on the same frequency. Let x(t) denote the joint transmit signal; the received signals of user k and untrusted BS j are denoted by y_k(t) ∈ C and y_j(t) ∈ C^{N_j}, respectively.

C. Secure Video Caching and Delivery with Untrusted Cache Helpers

For the set of untrusted BSs, only enhancement layers are cached; hence, the cached subfiles cannot be used by the untrusted BSs to reconstruct the original video files as long as they cannot access the base-layer subfiles. Meanwhile, all BSs that have cached a subfile can employ cooperative transmission for the secure delivery of this subfile to the UEs. On the other hand, video files that are uncached at the small cell BSs have to be delivered by the macro BS. Let q_{m,f,l} ∈ {0, 1} indicate whether subfile (f, l) is cached at BS m or not; the cache placement has to satisfy the cache capacity condition of each BS. The channels between the set of BSs and user k and untrusted BS j are denoted by h_k ∈ C^N and G_j ∈ C^{N×N_j}, respectively; the definition of G_j accounts for the fact that BS j itself is not included. Furthermore, the noises at the users and the untrusted BSs are complex Gaussian with variance σ² and covariance matrix σ_J² I, respectively. The source symbols of the subfile serving request r in time slot t are modeled as complex Gaussian random variables. Let w(t) ∈ C^N denote the joint beamforming vector for a transmit symbol, stacking the individual beamforming vectors used by the BSs in time slot t; the joint transmit signal in time slot t is then given by the superposition of the beamformed symbols, where superposition coding is used to superimpose the layers intended for each user.
Moreover, artificial noise, also complex Gaussian distributed, is sent cooperatively by the trusted BSs in order to proactively interfere with the reception of the untrusted BSs. The covariance matrix V of the artificial noise, cooperatively injected by the trusted set, is required to be diagonal in the sense of diag(·), to ensure that the components corresponding to the untrusted BSs are equal to zero. Moreover, a BS can only participate in the cooperative transmission of a subfile if it has cached the requested subfile: a diagonal selection matrix enforces that the beamformer components of BS m are zero whenever m is not in M^coop, and a per-BS power constraint ensures that the maximum transmit power P_m^max is not exceeded.

Since the base-layer subfiles are never transmitted by the untrusted BSs, an achievable secrecy rate for each user can be established as follows. Each user employs successive interference cancellation (SIC) at the receiver: the base-layer subfile is decoded first, as it is required for decoding the higher layers; when decoding the subfile of layer l, the previously decoded lower layers are first removed from the received signal, and this interference cancellation process continues until layer L_r is decoded. Defining an interference cancellation coefficient accordingly, the instantaneous achievable rate of layer l at user k is given by a log-det expression in which the residual interference term accounts for the layers not yet canceled when decoding layer l.

On the other hand, the untrusted BSs may eavesdrop on the video information intended for the users. To guarantee communication secrecy, the proposed secure delivery scheme is designed to avoid information leakage even under worst-case conditions. Specifically, we assume that untrusted BS j can fully cancel its self-interference and hence achieves the capacity upper bound for eavesdropping the layer-l signal intended for user k, given by a log-det expression as well. Note that if subfile (f, l) is cached at untrusted BS j, then, in addition to SIC, BS j can also utilize the cached video data as side information to suppress the interference caused by this subfile. The secrecy rate achievable for user k when decoding layer l in time slot t is thus given by R^sec = [R_{k,l} − max_j C_j]^+.

Remark: A passive eavesdropper, as considered in networks without caching, can be cast as an untrusted BS with a cache memory in which no data is cached and which cannot participate in the cooperative transmission of the video files; thus, the untrusted cache helpers considered in this paper correspond to a more general eavesdropping model than the models investigated in the literature.

III. Problem Formulation

In this section, we first present the adopted imperfect CSI model for video delivery; then, a robust optimization problem is formulated for the minimization of the total transmit power required for secure video streaming.
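For scalar (single-stream) links, the secrecy rate above reduces to the positive part of the difference between the user's achievable rate and the strongest eavesdropping capacity. A minimal numeric sketch — hypothetical function names, SINRs as plain floats rather than the paper's log-det expressions — is:

```python
import math

def achievable_rate(sinr):
    """Shannon rate in bit/s/Hz for a given SINR (scalar link)."""
    return math.log2(1.0 + sinr)

def secrecy_rate(sinr_user, sinr_eves):
    """[R_user - max_j C_j]^+ : the rate that remains provably secret
    against the strongest of the untrusted receivers."""
    leak = max(achievable_rate(s) for s in sinr_eves)
    return max(achievable_rate(sinr_user) - leak, 0.0)
```

For example, with a user SINR of 3 and eavesdropper SINRs of 1 and 0.5, the secrecy rate is log2(4) − log2(2) = 1 bit/s/Hz; if any untrusted BS receives better than the user, the secrecy rate clips to zero.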
Here, an indicator coefficient specifies whether the transmission of one subfile interferes with another subfile or not when SIC decoding of the SVC video files is adopted. Also note that, in practice, the self-interference at the untrusted BSs cannot be perfectly canceled; the residual self-interference impairs eavesdropping at the untrusted BSs and hence improves communication secrecy. However, estimating the residual self-interference at the central controller (the macro BS), which is responsible for the resource allocation, may not be possible; hence, we make the worst-case assumption of zero self-interference in this paper, and the obtained results provide a lower bound on the performance achievable with imperfect self-interference cancellation.

A. QoS and Secrecy Constraints

Note that a low transmit power is desirable to minimize the interference caused to other cells and to reduce the network operation cost. Given the cache status, the cooperative transmission decisions in time slot t can be optimized online based on the instantaneous CSI estimates. However, due to time causality and computational complexity constraints, the cache placement for a period has to be optimized offline, based on the historical user requests and CSI collected before the considered typical period. The caching optimization problem is formulated as the minimization of the total transmit power U_TP subject to a per-BS power constraint C1, a QoS constraint C2, a secrecy constraint C3, and the cache capacity constraints.

B. Channel State Information

At the beginning of time slot t, the CSI estimates gathered at the central controller (the macro BS) for computing the resource allocation are imperfect in general: the actual channels are given by the sum of the estimates and the respective channel estimation errors, which are caused by quantization errors and imperfect feedback channels as well as outdated and noisy estimates. In fact, the estimation errors may even be enhanced by the actions of the untrusted BSs, which may not fully cooperate with the macro BS during channel estimation and feedback. Since the specific values of the errors are not known at the macro BS, to model the imperfect CSI we assume that their possible values lie in ellipsoidal uncertainty regions, whose radii and orientations characterize the estimation quality; in practice, these values depend on the channel coherence time and the adopted channel estimation methods.

In the problem formulation, U_TP denotes the total transmit power in time slot t; constraint C1 limits the maximal transmit power of each BS; C2 guarantees a minimum video delivery rate R^req in each time slot to provide QoS for delivering the layers serving request r; and C3 constrains the maximum tolerable data rate R^tol leaked to the untrusted BSs in each time slot to ensure communication secrecy.
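A robust constraint must hold for every error in the uncertainty region. For a spherical region of radius eps, a simple worst-case bound on the legitimate channel gain follows from the triangle inequality; the toy helper below is an illustrative sketch of this idea, not the paper's LMI machinery.

```python
def worst_case_gain(h_hat, eps):
    """Lower bound on ||h|| over the spherical uncertainty set
    {h = h_hat + e : ||e|| <= eps}, via the triangle inequality:
    ||h|| >= ||h_hat|| - eps (clipped at zero)."""
    norm = sum(abs(x) ** 2 for x in h_hat) ** 0.5
    return max(norm - eps, 0.0)
```

In the paper, the same "for all errors in the region" requirement is handled exactly via the S-procedure, which turns the infinitely many constraints into a single LMI; the helper above only illustrates why a larger uncertainty radius forces a more conservative (higher-power) design.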
Since the untrusted BSs are unable to decode the enhancement layers without the base-layer information, secrecy is ensured by imposing C3 only on the delivery of the base-layer subfiles. Due to the imperfect CSI, the data rate constraints have to hold for all possible estimation errors in the respective uncertainty sets; this robust optimization approach, which facilitates robustness with respect to communication secrecy, is commonly adopted for studying PLS in the literature; see also the references therein. Constraints C2 and C3 jointly guarantee a minimum achievable secrecy rate R^sec ≥ R^req − R^tol for delivering the base-layer subfiles of request r, provided that the problem is feasible. The formulated problem is a mixed-integer nonlinear program (MINLP) due to the binary caching decision vector and the nonconvex rate constraints; problems of this type are in general hard to solve. Yet, since the problem is solved offline and on a large timescale, we adopt a global optimization method to solve it optimally in Section IV; the obtained solution defines a performance benchmark for the suboptimal schemes also considered in Section IV.

C. Delivery Optimization

Let q and w be the caching and transmit beamforming optimization vectors, respectively. Considering the two-timescale control in Fig. 2, the caching decision is made at the end of every T time slots based on the historical profiles of user requests and CSI. Moreover, with the cache status determined at the end of the previous period and the instantaneous CSI estimates given, the cooperative transmission policy at time t is optimized online by solving a per-slot delivery problem: minimize U_TP subject to the power, QoS, and secrecy constraints for the given cache status. (For example, by exploiting channel reciprocity in time-division duplex systems, the channels can be estimated in the uplink at the small cell and macro BSs based on pilots emitted by the UEs and the untrusted BSs, respectively; the CSI estimated at the small cell BSs is then fed back to the macro BS via an interface. Also, prediction of the users' future requests based on historical user profiles can improve the cache placement at the cost of increased computational complexity. Finally, note that the MINLP remains difficult even when the binary constraints are relaxed to convex ones, as the problem is still nonconvex.) The delivery problem is nonconvex due to constraints C2 and C3; however, we show below that it can be optimally solved by employing SDP relaxation.

IV. Problem Solution

In this section, the caching problem is tackled first: we show that it can be transformed into a convex MINLP whose SDP relaxation can be solved optimally by an iterative algorithm. Inspired by the optimal algorithm, a suboptimal caching scheme is then developed to balance optimality and computational complexity. Moreover, we show that the delivery problem can be solved optimally and efficiently for a given caching solution.

A. Problem Transformation
We first reformulate the caching problem as a convex MINLP by transforming the nonconvex constraints into convex ones. Let W = w w^H be the beamforming matrix, subject to W ⪰ 0 and rank(W) ≤ 1. Substituting W and employing elementary arithmetic operations, C2 can be equivalently reformulated as an affine inequality constraint that is jointly convex in the optimization variables. However, since the estimation error ranges over a continuous set, C2 represents infinitely many inequalities and is hence still intractable for optimization. To overcome this issue, C2 is transformed into a finite number of convex constraints: to this end, we substitute the channel model into C2 and apply the S-procedure (see the Appendix), which leads to a linear matrix inequality (LMI) involving an auxiliary optimization variable.

Next, an auxiliary optimization matrix is introduced for C3. Since rank(W) ≤ 1 and the transmit power is bounded by the power budgets, bounding constraints on the auxiliary matrix hold, which guarantee the consistency of the reformulation. Substituting these into C3, the constraint is reformulated as an LMI; this holds due to the rank condition. Since the estimation error of the eavesdropper channels also ranges over a continuous set, C3 likewise represents infinitely many LMIs; although they are jointly convex, for tractability C3 is transformed into a finite number of convex constraints as well, which is accomplished by exploiting a robust quadratic matrix inequality theorem, thereby obtaining an LMI with further auxiliary optimization variables. Finally, defining the delivery variables and applying the above transformations, the original problem is equivalently reformulated as: minimize U_TP subject to C1, the LMI reformulations of C2 and C3, the cache capacity constraints, the binary constraint on q, W ⪰ 0, and rank(W) ≤ 1, where the rank constraint can be dropped, as justified below. Let the SDP relaxation of the problem be obtained by dropping the rank constraint; the resulting problem is a convex MINLP, and by additionally relaxing the binary constraints to convex ones, we arrive at a convex problem.

B. GBD Algorithm

A simple iterative method to handle convex MINLPs is the generalized Benders decomposition (GBD). In each GBD iteration, upper and lower bounds on the optimal value are generated by solving a primal subproblem and a master problem, respectively; to ensure convergence and optimality, feasibility and optimality cuts are successively added to tighten the bounds and to eliminate infeasible solutions possibly obtained in previous iterations. The GBD algorithm is attractive for solving our problem, as it can be implemented efficiently by exploiting the problem structure: in particular, the resulting primal subproblem is a convex problem for which strong duality holds, and the master problem is a mixed-integer linear program (MILP); both problems are easy to handle using numerical solvers such as CVX and MOSEK. However, the GBD algorithm typically suffers from slow convergence: if an infeasible solution is obtained in an iteration of GBD, the resulting feasibility cut is usually ineffective in improving the solution, and if the problem is infeasible, the classical GBD algorithm terminates only after having performed an exhaustive search over all possible candidate solutions.
To remedy these issues, an improved GBD is proposed in the following.

1) Problem Decomposition: The proposed modified GBD algorithm applies a decomposition of the problem, whereby the binary caching optimization problem is solved in the outer layer and the continuous delivery optimization problems are solved in the inner layer. However, the two layers are coupled via the constraints. To facilitate the decomposition, we perturb both sides of the coupling constraints by introducing slack variables, collected in a perturbation vector; moreover, to the objective function we add an exact penalty cost function f_pen with a penalty factor. Consequently, the problem decomposes into SDP subproblems in the inner layer, one subproblem per time slot, and an MILP in the outer layer, shown at the top of the next page and referred to as the master problem. Thereby, the perturbed and the original problems are equivalent, as stated in Proposition 1. Although the master problem still contains an infinite number of constraints with undetermined functions, it is readily solvable by an iterative relaxation method, as explained in the following.

2) Optimal Iterative Solution: The proposed iterative algorithm is given in Algorithm 1. Let i be the iteration index; we start with one constraint defining a cutting plane, also referred to as an optimality cut, whose number is increased sequentially as the iterations proceed. Specifically, given the dual variables, a relaxed master problem is solved in iteration i, subject to the accumulated optimality cuts.

Proposition 1: The perturbed and the original problems are equivalent whenever the original problem is feasible, and the perturbed SDP subproblem is always feasible. Moreover, the optimal solution of the perturbed problem solves the master problem; if the original problem is infeasible, the optimal solution of the perturbed problem satisfies a condition that reveals the infeasibility. Proof: please refer to the Appendix.

Remark: With the perturbation, the feasible set is extended. Consequently, based on Proposition 1, infeasible solutions are avoided whenever the problem is feasible, and infeasibility is easily identified whenever the problem is infeasible; these properties facilitate an efficient implementation of the GBD algorithm. In the sequel, to optimally solve the SDP subproblem, which is convex, interior-point methods and numerical solvers such as CVX are applicable; meanwhile, by exploiting convexity and strong duality, we can simplify the formulation of the master problem. Let the Lagrange multipliers associated with the coupling constraints be introduced, and define the Lagrangian accordingly. (A similar approach, proposed for the improved GBD algorithm, has also been successfully applied to accelerate the outer approximation algorithm.) Note that the penalty term f_pen is separable across time slots.
Since, for a given caching vector, each subproblem is convex and fulfills Slater's condition, the following result holds due to strong duality: the optimal value of each subproblem equals the max–min value of its Lagrangian. Consequently, the master problem can be reformulated in terms of the dual functions. The reformulated problem is a relaxation of the master problem due to the enlarged feasible set, and its optimal value gives a lower bound for the original problem. If the relaxed solution is feasible for the original problem, it is optimal; otherwise, another optimality cut is added to the feasible set in the next iteration to tighten the relaxation. This process continues, and we obtain a nondecreasing sequence of lower bounds, until the relaxed solution becomes feasible and solves the problem optimally, or the problem is identified as infeasible.

Algorithm 1 — Optimal iterative algorithm for solving the caching problem:
1: Initialization: set i = 1 and choose the tolerance.
2: Repeat:
3: Solve the relaxed master problem; given its solution, update the lower bound.
4: Solve the SDP subproblems for the current caching vector and determine the primal and dual solutions.
5: If infeasible, exit the loop; else, update the upper bound and the incumbent solution.
6: Generate a new optimality cut and update the iteration index i = i + 1.
7: Until the optimality condition is satisfied, i.e., the gap to the lower bound vanishes.
8: Return the optimal solutions; else, return "infeasible problem".

Two remarks regarding Algorithm 1 are in order. First, observe that in Algorithm 1 feasibility and optimality are verified by solving the SDP subproblems: if the candidate caching vector is optimal, solving the subproblems returns the optimal value owing to strong duality; otherwise, it gives an upper bound on the optimal value. Thus, we keep the lowest upper bound obtained so far, and the algorithm stops when the gap to the lower bound vanishes. Second, for computational convenience, the values of the cut functions in each iteration can be intelligently chosen based on the optimal dual solutions of the subproblems, in which case the constraint function is easily computed, as explained in the following proposition.

Proposition 2: Let the optimal primal and dual solutions in iteration i be given. Then they also solve the minimization problem defining the optimality cut of iteration i, i.e., they attain the minimum of the Lagrangian. With this choice, the cut function reduces to an affine function of the caching vector q. Proof: please refer to the Appendix.

Based on Proposition 2, the relaxed master problem is an MILP that can be solved optimally using numerical solvers such as MOSEK. Similar to the conventional GBD method, Algorithm 1 converges in a finite number of iterations and, as shown in Proposition 3, the obtained solution is globally optimal.
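The bounding loop of Algorithm 1 follows the generic GBD pattern: the subproblem evaluated at the current binary vector yields an upper bound and an optimality cut, while the master problem over the accumulated cuts yields a lower bound. The sketch below captures only this pattern — the solver callbacks are stubs, and the paper's perturbation and infeasibility handling are omitted.

```python
def gbd(solve_subproblem, solve_master, q0, tol=1e-6, max_iter=100):
    """Skeleton of a generalized Benders decomposition loop.

    solve_subproblem(q) -> (objective, cut): solve the inner (convex)
        problem for the fixed binary vector q and return its value
        together with an optimality cut derived from the duals.
    solve_master(cuts)  -> (lower_bound, q): minimize over the binary
        vector subject to all accumulated cuts.
    """
    cuts, q = [], q0
    upper, incumbent = float("inf"), q0
    for _ in range(max_iter):
        obj, cut = solve_subproblem(q)   # upper bound for this q
        if obj < upper:
            upper, incumbent = obj, q
        cuts.append(cut)                 # tighten the master problem
        lower, q = solve_master(cuts)    # relaxed lower bound
        if upper - lower <= tol:         # bounds meet: optimal
            break
    return incumbent, upper
```

In the paper, the subproblem is the per-slot SDP (solvable with CVX) and the master is an MILP (solvable with MOSEK); the skeleton only illustrates how the two exchange bounds and cuts until the gap vanishes.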
In general, a relaxed solution only provides a lower bound for the original problem; however, by inspecting the rank of the SDP solution, we can show that the SDP relaxation is tight.

Proposition 3: Algorithm 1 converges in a finite number of iterations. Moreover, assuming that the channel vectors can be modeled as statistically independent random vectors, the relaxed and the original problems are equivalent in the sense that, whenever one is feasible, its solution is also globally optimal for the other with probability one, and the optimal beamformer w is given by the scaled principal eigenvector of W. Proof: please refer to the Appendix.

Remark: Due to the perturbation, only optimality cuts need to be generated in Algorithm 1. Different from the classical GBD algorithm, feasibility cuts are not required to exclude infeasible solutions obtained in intermediate iterations. Since the optimality cuts successively improve the lower bounds, Algorithm 1 is expected to converge faster than the classical GBD algorithm whenever the problem is feasible. On the other hand, even if the original problem is infeasible, the perturbed problem is generally feasible, and optimality cuts can still be generated to iteratively improve the solutions, which reduces the required number of iterations with high probability.

3) Computational Complexity: Assume that an interior-point method is applied to solve the SDP subproblems. In each iteration of the GBD algorithm, the computational complexity of solving the SDP subproblems — for K users, M BSs, N antennas, and L SVC layers — can be approximated based on known complexity results for SDP as polynomial in K, L, and N times log(1/ε), where ε is the solution accuracy specified for the numerical solver (in big-O notation). Hence, the subproblems are solved in polynomial time. The overall computational complexity of Algorithm 1, however, grows with the size of the problem, and the MILP solver employed for the master problem may incur exponential-time computational complexity in the worst case, even though the likelihood that the worst case occurs is low due to the employed perturbation. Thus, optimal offline cache optimization may not be feasible for practical implementations.

C. Suboptimal Caching Scheme

For systems with limited computing resources, Algorithm 1 may not be applicable due to its computational complexity; instead, suboptimal schemes facilitating a trade-off between system performance and computational complexity may be preferable.
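Before turning to the suboptimal scheme, note that the beamformer recovery stated in Proposition 3 — extracting w from the (numerically rank-one) SDP solution W = w w^H — can be sketched with a dependency-free power iteration; this is an illustrative implementation of the extraction step, not the paper's solver output.

```python
def principal_eigenvector(W, iters=100):
    """Recover w with w w^H = W from a rank-one PSD matrix W
    (a list of lists of complex numbers) via power iteration."""
    n = len(W)
    v = [1.0 + 0.0j] * n
    for _ in range(iters):
        u = [sum(W[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(abs(x) ** 2 for x in u) ** 0.5
        v = [x / norm for x in u]
    # Rayleigh quotient v^H W v gives the eigenvalue, i.e. ||w||^2.
    lam = sum(
        (sum(W[i][j] * v[j] for j in range(n)) * v[i].conjugate()).real
        for i in range(n)
    )
    return [lam ** 0.5 * x for x in v]
```

The vector is recovered only up to a global phase, which is irrelevant since the transmit covariance w w^H, and hence all rates, are phase-invariant.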
Based on Proposition 3, the caching problem can be solved via the equivalent convex MINLP. It is also evident that, for a given caching vector, the problem reduces to an SDP that can be solved optimally in polynomial time. Therefore, by additionally adjusting the caching vector via a greedy iterative search, we obtain the suboptimal scheme in Algorithm 2. Let the set of requested files and the set of requests be given, and let q^i be the caching vector in iteration i. Define Q(q^i) as the set of binary caching vectors within Hamming distance one of q^i; besides, the indices in which the candidate vectors and q^i differ are determined accordingly. In iteration i, the vector in Q(q^i) that minimizes the objective value of the primal problem is chosen as the new caching vector, i.e., q^{i+1} is the arg min over Q(q^i) of the primal objective.

Algorithm 2 — Suboptimal iterative algorithm:
1: Initialization: given an initial caching vector, solve the SDP subproblem.
2: Repeat: given q^i, determine the candidate set Q(q^i); solve the SDP subproblem for each candidate satisfying the constraints, and select q^{i+1}.
3: Until no further improvement is possible.

D. Delivery Optimization

Applying the transformation techniques of Section IV-A and relaxing the rank constraint, the delivery problem is reformulated as an SDP that is equivalent to the original delivery problem: since, based on Proposition 3, the solution of the SDP fulfills the rank constraint with probability one, the delivery problem is solved optimally via the SDP subproblem, as stated in the following corollary.

Corollary 1: For the respective instantaneous CSI, the SDP subproblem and the delivery optimization problem are equivalent in the sense that the solution of the former is also optimal for the latter whenever it is feasible. Proof: The proof is similar to those of Propositions 1 and 3 and is omitted for brevity.

Therefore, the delivery optimization problem can be solved in polynomial time, a computational complexity desirable for online implementation. Moreover, the delivery optimization incurs only the signaling overhead for collecting the CSI at the macro BS and distributing the optimization results to the small cell BSs.

Remark: Although the total number of BSs in dense small cell networks may be large, the coverage areas of only a few small cell BSs overlap. Therefore, the proposed caching and delivery algorithms may be applied to several small groups of small cell BSs with overlapping coverage areas, rather than jointly to all small cell BSs, which considerably reduces the computational complexity and the signaling overhead.

Remark: The proposed caching and delivery optimization framework can be extended to integrate centralized and decentralized coded caching at the end users. For example, multicast codewords can be cooperatively transmitted by the subset of BSs that have cached the subfiles required for the coded multicast transmission; such a design reaps the performance gains of both coded caching and cooperative MIMO transmission. However, the resulting cache optimization problem would involve a large number of binary caching variables, defined per coded subfile, and the SDP relaxation of the delivery optimization problem may not yield the optimal solution anymore; hence, extending the proposed framework to coded caching is an interesting topic for future research.
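The Hamming-distance-one search of Algorithm 2 can be sketched as a plain greedy local search; here `cost` stands in for solving the SDP delivery subproblem for a fixed caching vector (a stub, not the paper's solver), and the cache capacity constraints are omitted for brevity.

```python
def greedy_cache_search(q0, cost, max_iter=100):
    """Greedy local search over binary caching vectors.

    In each pass, move to a strictly improving neighbour within
    Hamming distance one; stop when no single-bit flip improves the
    objective value returned by `cost`.
    """
    q, best = list(q0), cost(list(q0))
    for _ in range(max_iter):
        improved = False
        for i in range(len(q)):
            cand = q.copy()
            cand[i] ^= 1               # flip one caching decision
            c = cost(cand)
            if c < best:
                q, best, improved = cand, c, True
        if not improved:               # local optimum reached
            break
    return q, best
```

Each pass evaluates at most one subproblem per bit, which is what keeps the overall complexity of the suboptimal scheme polynomial in the problem size.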
At each iteration of Algorithm 2, the obtained solution is ensured to be feasible. Moreover, due to the iterative minimization, the optimal delivery solution and the cache vectors within Hamming distance one are updated so as to successively reduce the objective value; the iteration continues until no further reduction of the objective function is possible, which yields the final solution. Hence, the number of problem instances to be solved is bounded, and the overall computational complexity of Algorithm 2 grows only polynomially with the problem size, since, by adopting the greedy heuristic, searching the candidate set can be done in polynomial time.

V. Simulation Results

In this section, we evaluate the performance of the proposed optimal and suboptimal schemes. We consider a cell in which the macro BS is located at the center and the small cell BSs are uniformly distributed within the cell; the number of untrusted small cell BSs is fixed unless stated otherwise. To gain insight, in the first set of figures we consider a small network with three small cell BSs; a larger network is considered subsequently. The macro BS and the small cell BSs are each equipped with multiple antennas. We assume that the video files have a duration of several minutes and a given size in bytes; the adopted simulation parameters — including the system bandwidth in MHz, the durations of a time slot and of a delivery period, the macro and small cell BS transmit power budgets in dBm, the noise power density, and the cache capacities — are listed in Table II.

The video files are to be delivered to the UEs, and the user requests for the files are independent across users. The probability distribution of the requests for the different files is set according to the Zipf distribution: in particular, assuming that file f = 1 is the most popular file among the UEs, the probability that file f is requested is given by p_f = f^(−γ) / Σ_j j^(−γ). We adopt an SVC codec whereby each video file is encoded into a base-layer subfile and enhancement-layer subfiles; minimum streaming rate and secrecy rate thresholds R^req and R^tol in kbps are imposed for the subfiles. Therefore, whenever the problem is feasible, a minimum secrecy rate and a minimum streaming rate are guaranteed for secure and uninterrupted video streaming of each user. The users are randomly distributed in the system; based on the locations of the BSs and the users, the path loss is calculated using the model for the urban macro scenario, and the fading coefficients are independent and identically distributed Rayleigh random variables. We employ Euclidean spheres for modeling the uncertainty regions and define the maximum normalized channel estimation error variances accordingly.
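The Zipf request distribution used in the simulations can be computed directly; γ is the popularity skew parameter (its value from the paper's Table II is not reproduced here).

```python
def zipf_popularity(num_files, gamma):
    """Zipf file-request distribution: file f (1-indexed, most popular
    first) is requested with probability f**-gamma / sum_j j**-gamma."""
    weights = [f ** (-gamma) for f in range(1, num_files + 1)]
    total = sum(weights)
    return [w / total for w in weights]
```

With γ = 1 and three files, file 1 attracts 6/11 ≈ 55% of the requests; this concentration of popularity on the head of the catalogue is what preference-based caching (Baseline 2 below) exploits.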
For comparison, we consider several baseline schemes. Baseline 1 (random caching): the video subfiles are randomly cached until the cache capacity is reached. Baseline 2 (preference-based caching): the most popular subfiles are cached at the trusted BSs; since the base-layer subfiles are the most important ones, they are cached with higher priority than the enhancement-layer subfiles of a video file. For Baselines 1 and 2, the optimal delivery decisions are obtained by solving the delivery problem. Baseline 3 (no cooperation with untrusted BSs): no video files are cached at the untrusted BSs, which hence act as pure eavesdroppers, and the untrusted BSs are not allowed to cooperate in the delivery of the video files; this approach corresponds to the one adopted in conventional cellular networks. In addition, a non-robust baseline is considered in which, different from the proposed schemes, the macro BS treats the channel estimates as accurate; its caching and delivery decisions are obtained by solving the respective problems with the uncertainty radii set to zero. Unless otherwise specified, we assume the relevant system parameters given in Table II.

Fig. 3: Total transmit power and secrecy outage probability versus the cache capacity per small cell BS for the different caching and delivery schemes (optimal, suboptimal, Baselines 1–3).

Figs. 3 and 4 illustrate the performance of the considered caching and delivery schemes as functions of the cache capacity; herein, the system performance is evaluated for the online delivery of the video files. The secrecy outage probability, defined as P_out = Pr{R^sec < R^req − R^tol}, characterizes the likelihood that the delivery problem is infeasible, i.e., that either the QoS constraint or the secrecy constraint cannot be satisfied. We observe from Figs. 3 and 4 that, for all considered schemes, a larger cache capacity leads to a lower total transmit power and a smaller secrecy outage probability, as larger virtual transmit antenna arrays are formed among the trusted and untrusted BSs.
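In simulations, P_out is estimated empirically as the fraction of delivery instances in which the problem is infeasible; a minimal sketch of such an estimator (the helper name is hypothetical) is:

```python
def secrecy_outage_probability(slot_feasible):
    """Empirical P_out: the fraction of delivery instances for which
    the QoS or secrecy constraints could not be met (infeasible slots).
    `slot_feasible` is a sequence of booleans, one per instance."""
    if not slot_feasible:
        return 0.0
    return sum(1 for ok in slot_feasible if not ok) / len(slot_feasible)
```

For example, if the delivery problem is infeasible in one out of four simulated instances, the estimated outage probability is 0.25.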
Fig. 5: Average number of cooperating BSs versus the cache capacity per small cell BS.

The enlarged virtual antenna arrays benefit the cooperative beamforming transmission of the enhancement-layer and base-layer subfiles, respectively. The performance gap between the optimal scheme and Baseline 3 is particularly large in the high cache capacity regime, as the proposed caching scheme can exploit the cache resources of the untrusted helpers for delivering the enhancement-layer subfiles, which Baseline 3 cannot. The performance gaps between the proposed optimal scheme and Baselines 1 and 2 are small for both small and large cache capacities, due to insufficient and saturated cooperation, respectively; for medium cache capacities, however, the proposed optimal scheme achieves considerable performance gains owing to its ability to exploit the information regarding the user requests and the CSI for the cache placement. Note that the proposed suboptimal scheme attains a good performance in all regimes despite its low computational cost.

To provide insight into how the BSs cooperate, Fig. 5 shows the average numbers of small cell and macro BSs cooperating in the transmission of the base-layer and enhancement-layer subfiles, respectively, when Algorithm 1 is employed for the cache optimization. Recall that the proposed caching scheme does not cache base-layer subfiles at the untrusted BSs owing to the placement constraint; consequently, for a given cache capacity, fewer BSs cooperate in the delivery of the base-layer subfiles than in that of the enhancement-layer subfiles. Interestingly, the behavior of the average number of cooperating BSs is not monotonic. In the small-to-medium cache capacity regime, the numbers of cooperating BSs monotonically increase with the cache capacity: as more subfiles are cached at the small cell BSs, more BSs can participate in the cooperative transmission. In the large cache capacity regime, the performance gains saturate, as the available DoFs for the transmission of the subfiles saturate (cf. Figs. 3 and 4); the numbers of cooperating BSs even decrease before reaching a stationary cooperation topology, because once the DoFs are sufficient, the optimal caching scheme selects the preferred trusted cooperating small cell and macro BSs based on the channel conditions and the cache status, instead of exploiting all BSs available for cooperation.

Fig. 6: Total transmit power and secrecy outage probability versus the normalized channel estimation error variance for the proposed optimal scheme (solid lines), the suboptimal scheme (dotted lines), and the non-robust baseline (dashed lines).

Next, we evaluate the robustness of the proposed schemes to channel estimation errors.
Figs. 6 and 7 show the performance of the proposed schemes and the non-robust baseline as functions of the normalized channel estimation error variances, where C^max denotes the cache capacity per small cell BS. We observe that, compared to the baseline, the proposed schemes achieve a significantly lower secrecy outage probability at the cost of a slightly higher transmit power consumption. Specifically, to achieve robustness in meeting the QoS constraint under imperfect CSI, the proposed schemes employ wider beams for transmitting the subfiles, which may lead to information leakage to the untrusted BSs; hence, to ensure communication secrecy, the proposed schemes also transmit a larger amount of artificial noise to degrade the reception of the untrusted BSs. On the other hand, with wide beams, the interference caused to the legitimate users has to be compensated by increasing the transmit power of the beamforming vectors; therefore, the total transmit power increases as the CSI uncertainty increases. In contrast, by treating the imperfect CSI as perfect, the non-robust baseline employs narrow transmit beams to save transmit power, which leads to the highest secrecy outage probability.

The impact of the numbers of trusted and untrusted BSs on the secrecy performance is studied in Figs. 8 and 9. Fig. 8 reveals that the required transmit power increases with the number of untrusted BSs when the total number of BSs is kept constant: as more helpers become untrusted, fewer trusted BSs are available for the cooperative transmission of the base-layer subfiles, and at the same time, the trusted BSs have to transmit a larger amount of artificial noise to combat the increasing number of potential eavesdroppers. On the other hand, since the base layers cannot be cached at the untrusted BSs, their larger cache capacity can be utilized to transmit enhancement-layer subfiles; hence, the transmit power of the proposed scheme is enlarged only moderately as the number of untrusted BSs increases, and in the high cache capacity regime the increase of the transmit power is even negligible due to the cooperative transmission of the enhancement-layer subfiles. In Fig. 9, the secrecy outage probability monotonically decreases with the cache capacity; however, as the number of untrusted BSs increases, the transmit power and the secrecy outage probability increase significantly. In particular, for large numbers of untrusted BSs, the secrecy outage probability saturates at high levels even for large cache capacities: when the total number of antennas equipped at the untrusted BSs equals the total number of antennas equipped at the trusted BSs, the available DoFs for the secure transmission of the base-layer subfiles are limited irrespective of the cache capacities, since the caches of the untrusted small cell BSs facilitate only the cooperative transmission of the enhancement-layer subfiles.
the cooperative transmission of the subfiles, and hence their secure delivery; the system then has to allocate large amounts of power to the artificial noise, to degrade the reception quality at the untrusted BSs, and to the users' signals, to mitigate the impact of the interference caused at the legitimate users. Finally, Figs. show the performance of the proposed schemes and the baseline for larger networks, where the number of users, the number of small cells, and the number of untrusted small cell BSs satisfy the stated relations. We consider a system bandwidth of MHz to ensure that, despite the large number of users, the QoS requirements of each user are fulfilled with high probability. For the baseline, the caching decisions are determined by employing the algorithms, and the resulting schemes are referred to as baseline optimal and baseline suboptimal, respectively. Only the performance of the proposed suboptimal and baseline suboptimal schemes is shown, owing to the high computational complexity of the optimal schemes, which increases as more untrusted BSs are present in the network. The untrusted BSs can eavesdrop on a larger number of users; hence, the available DoFs per delivery [Figure: secrecy outage probability, optimal and suboptimal; total transmit power and secrecy outage probability of the proposed optimal (solid line) and suboptimal (dotted line) schemes versus cache capacity for different numbers of untrusted BSs] for the layer subfiles are reduced, and the likelihood of data leakage is increased. Therefore, to ensure secure video streaming in larger networks, a larger average transmit power per user is needed for the transmission. On the other hand, by exploiting both trusted and untrusted BSs for cooperative transmission, the proposed schemes significantly outperform the baseline, particularly for networks with large numbers of users and large cache capacities.

Conclusion. In this paper, secure video streaming was investigated for small cell networks in which untrusted small cell BSs may intercept the cached and delivered video data. SVC coding and caching were jointly exploited to facilitate secure cooperative MIMO transmission, to mitigate the negative impact of the untrusted BSs, and to exploit them for secrecy enhancement. A robust optimization problem was formulated to optimize caching and delivery for the minimization of the total transmit power required for secure video streaming under imperfect CSI knowledge. [Figure: average transmit power per user (dBm) and secrecy outage probability.] Given that the considered problems are equivalent up to a perturbation of the feasible set, and that the feasible set of the perturbed problem is a superset of the feasible set of the original problem, feasibility
holds. Moreover, the inequality constraint functions on the left-hand sides of the constraints are bounded from above; considering the right-hand sides, the feasibility statement of the first part of the proposition thus always holds. Next, we show the optimality statement of the second part by contradiction. Assume a point solves the problem and is feasible, and denote the objective function value accordingly; besides, let the optimal solution be given. It necessarily holds that the objective values coincide; however, since the penalized objective is strictly smaller, this contradicts optimality. Therefore, the second part is proved. Finally, we prove the last part: infeasibility of the perturbed problem obviously implies that the problem is infeasible, since any optimal solution of the problem would yield a feasible solution of the required form, which cannot exist, as otherwise the constraints would hold. Therefore, the problem is also infeasible, which completes the proof.

Proof of Proposition. [Figure: average transmit power per user and secrecy outage probability versus cache capacity, for the proposed optimal and suboptimal schemes and the baseline.] The large-timescale caching optimization problem was solved offline by a modified GBD algorithm; to reduce the computational complexity, a suboptimal caching algorithm was also studied. The short-timescale delivery optimization problem was, given the cache status, solved online by SDP. Simulation results revealed that, compared with several baseline schemes, the proposed optimal and suboptimal schemes significantly enhance the secrecy and power efficiency of video streaming in small cell networks, as long as the total number of antennas at the trusted BSs exceeds that at the untrusted BSs.

Appendix. Proof of Proposition. We begin the proof by defining an auxiliary optimization problem with a penalized objective, of the form: minimize the penalized cost subject to the constraints. As strong duality holds, we know that the optimal point minimizes the Lagrangian; since the remaining term of the Lagrangian is constant, the same point also minimizes the reduced Lagrangian, and the minimization is separable. Meanwhile, according to the KKT conditions, the optimal powers are obtained in closed form via max/min operations with pmax. Therefore, substituting the minimizers completes the proof.

Proof of Proposition. Consider the solution generated by the algorithm and the optimality cut, which is valid due to strong duality of the respective SDP subproblem. If another iteration were solved, the same cut would hold and lead to termination of the algorithm; since a line cannot repeat in intermediate iterations, and the underlying set is finite, the algorithm converges in a finite number of iterations. To prove the equivalence, we show that the solution of the relaxed problem satisfies the rank condition with probability one. Let the Lagrange multipliers associated with the constraints be given,
respectively. This implies that the minimum value of the Lagrangian of the problem is obtained accordingly; since the channel realizations are statistically independent of the remaining quantities, with probability one the dual problem is unbounded, and consequently the primal problem is infeasible — again a contradiction. Therefore, the claim is proved. Finally, the result follows from the basic rank inequality, rank(A + B) ≥ rank(A) − rank(B), applied to the stationarity condition; on the other hand, since the rank condition holds with probability one, this completes the proof. We denote the collections of terms relevant and irrelevant to the optimization, respectively; hence, the dual problem is given in min-max form. We define a quantity chosen to ensure that the problem is feasible. Moreover, as strong duality holds for the SDP subproblem, the optimal beamformers and the optimal dual solutions satisfy the KKT optimality conditions; in particular, we substitute the required expressions. Next, we show by contradiction that the desired condition holds with probability one. Assume the matrix has at least one nonpositive eigenvalue with a corresponding eigenvector; substituting this eigenvector, it necessarily holds that the condition is violated, yet this contradicts the constraint.
Berry–Esséen bounds for parameter estimation of general Gaussian processes

Soukaina Douissi, Khalifa Es-Sebaiy, Frederi Viens
Cadi Ayyad University, Marrakesh, Morocco; National School of Applied Sciences, Marrakesh, Cadi Ayyad University, Morocco; Department of Statistics and Probability, Michigan State University, East Lansing, USA (Viens: corresponding author)

Abstract. We study rates of convergence in central limit theorems for the partial sum of squares of general Gaussian sequences, using tools from analysis on Wiener space. No assumption of stationarity, asymptotic or otherwise, is made. The main theoretical tool is the so-called optimal fourth moment theorem, which provides sharp quantitative estimates of the total variation distance between an element of Wiener chaos and the normal law. The only assumptions made on the sequence are the existence of an asymptotic variance, so that an estimator of the variance parameter can be constructed, and conditions under which bias and variance are controlled; the sequence may exhibit long memory, comparable to the memory of fractional Brownian motion. The main result is explicit, exhibiting the tradeoff between bias, variance, and memory. One application of the method provides estimators for drift parameters of Ornstein–Uhlenbeck processes driven by subfractional and bifractional Brownian motions, based on discrete observations of these processes, which fail to be stationary. Detailed calculations result in explicit formulas for the estimators and their asymptotic normality.

Key words: central limit theorem; stationary Gaussian process; Nourdin–Peccati analysis; parameter estimation; fractional Brownian motion; long memory. Mathematics Subject Classification: primary; secondary.

1. Introduction. This paper presents the estimation of the asymptotic variance, when it exists, of a general Gaussian sequence, and applies it to stochastic processes based on fractional noises with long memory. We choose to work with a simple method based on empirical second moments, akin to a discretization of processes using discrete observations under the assumption of a fixed time step. This work responds to a recent preprint in which, for the first time, this type of estimation was performed, but under assumptions on the data which insisted on keeping the data stationary, at least asymptotically. That paper, available on arXiv, contains an extensive description of the literature on parameter estimation for Gaussian stochastic processes; we do not repeat that analysis here, and instead list a number of recent articles which deal with various versions of
parameter estimation for such processes, since the classes of processes they cover are similar to the various examples considered in this article; the context and motivations are described in the respective papers. To highlight the distinction with the earlier work: there, the assumption of stationarity of the data was weakened only to asking asymptotic stationarity, namely that the deviation from a stationary model converge exponentially fast. This strong assumption allowed the authors to work with specific examples which are close to stationary; the models covered there apply as long as the driving noise is stationary, as would be the case for standard fractional Brownian motion. That paper was the first instance in which the parameters of such processes could be estimated, with asymptotic normality, using discrete data with a fixed time step; however, the range of application extended only to processes which benefitted from an exponential ergodicity property, and one leaves the range of applicability when the driving noises of the equations fail to be stationary. To begin with, then, the primary motivation of this article is to develop a framework in which no assumption of stationarity is made at the level of the driving noise processes; instead, we investigate a set of arguably minimal assumptions under which to estimate the asymptotic variance, when it exists, of a discretely observed Gaussian process. In this article, we make no assumption on the speed of convergence of the data to a stationary model, asking only that the convergence occur. This implies that the main result of the paper, an estimator with asymptotic normality, applies to exotic covariance structures for which the estimator's properties cannot be handled by the classical or current methods. The main result isolates three terms of the orders needed to construct the estimates, and is fully quantitative: we show that the Wasserstein distance, in the CLT scaling, between the least-squares estimator of the asymptotic variance and the normal law is bounded above by the sum of a bias term, a variance term, and a term which comes from the application of the optimal fourth moment theorem of Nourdin and Peccati and depends on an upper bound on the covariance structure of the discrete observations. This allows us to formulate the clear assumptions needed to turn the estimate into quantitative asymptotic normality: the theorem bounds the Wasserstein distance between the estimator and the normal law. An advantage of working in this fashion is that one has an immediate way of identifying when a CLT is possible, since the corresponding three terms are explicit and can simply be added together to measure how fast the estimator converges to the normal. Additional features of the method include the ability to separate the minimal convergence assumptions needed for strong consistency from those needed, in addition, to obtain asymptotic
normality; see the hypotheses in Section 3 and the bullet points which follow the statements of the theorems there. Applying the general method of Section 3 to specific examples, in Section 4 we consider solutions of the Ornstein–Uhlenbeck equation driven by subfractional Brownian motion and by bifractional Brownian motion, somewhat exotic covariance structures whose increments are inherited by the corresponding OU processes, whose drift parameter is of interest. These processes have been well studied in the literature of the last decade, including efforts to understand statistical inference for them; we refer to the cited papers for information on these processes and additional references. We compute the asymptotic variances of the estimators for these processes as explicit functions of the parameters assumed known — in fact, power functions — and the normal asymptotics for the estimators of the two processes can be converted into similar results (see, for instance, the corresponding analysis, which we do not explore further here). Being able to apply the strategy of Section 3 successfully to the examples of Section 4 relies on three things, the first two of which cannot be avoided, while the third enables efficient estimation of the convergence speeds. (i) One must be able to identify, preferably in an explicit way, the variance of the data stream, thus giving access to its limit, or at least show that the data stream's variance converges; for our two processes, the expressions are simple (see the formulas for the subfractional and bifractional OU processes, respectively), which makes proving the convergence and evaluating its speed tractable. (ii) Slightly more surprisingly, the estimator of the asymptotic variance need not be found explicitly, as long as one can prove that the variances converge; we succeed in this task for both processes, and are moreover able to compute the limits using a series representation based on the covariance function of fractional Gaussian noise (fGn); see the corresponding formulas. These expressions come as something of a surprise, since a priori there is no reason to expect that the estimator's asymptotic variance could be expressed using the fGn covariance rather than the covariance structure of the increments of the processes themselves. (iii) Computing the speeds of convergence of the estimator's bias and asymptotic variance explicitly is possible, and is done for both processes in a rather explicit fashion, though it requires a certain amount of elbow grease; the calculations are relegated to lemmas in the Appendix. While we do not claim to have performed these estimations as efficiently as possibly could be done, we think that the power orders of the convergences are probably optimal, based on estimations via the optimal fourth moment theorem of Nourdin and Peccati, and as confirmed by a phenomenon observed in the simpler stationary context — the third moment theorem, in which the speed of convergence is given by the absolute third moment rather than the fourth cumulant.
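The fGn covariance invoked above has a standard closed form. The following sketch (the standard formula, not the paper's specific series representation) evaluates it and illustrates numerically the summability-of-squared-correlations behavior that separates the short- and long-memory regimes around the classical threshold H = 3/4.

```python
import numpy as np

def fgn_cov(H, k):
    """Autocovariance of fractional Gaussian noise with Hurst parameter H:
    rho_H(k) = 0.5 * (|k+1|^{2H} + |k-1|^{2H} - 2|k|^{2H})."""
    k = np.abs(np.asarray(k, dtype=float))
    return 0.5 * ((k + 1) ** (2 * H) + np.abs(k - 1) ** (2 * H) - 2 * k ** (2 * H))

# Sum of squared correlations: finite iff H < 3/4 (the CLT threshold for
# quadratic variations). Compare a moderate- and a strong-memory case.
k = np.arange(0, 100_000)
tail_low = np.sum(fgn_cov(0.6, k) ** 2)   # stabilizes as the cutoff grows
tail_high = np.sum(fgn_cov(0.9, k) ** 2)  # keeps growing with the cutoff
```

At H = 1/2 the increments are independent, so the covariance vanishes off the diagonal, which provides a quick correctness check of the formula.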
As a final result for the sub- and bifractional OU processes, we identify the threshold Hurst parameter, exceeding which asymptotic normality fails — the upper endpoint of the range in which quadratic variations are asymptotically normal. To finish this introduction, we note the general structure of the paper. The next section presents preliminaries regarding analysis on Wiener space and convenient facts needed in the technical parts of the paper; Section 3 presents the general framework for estimating the asymptotic variance of a Gaussian process observed in discrete time; Section 4 shows how the method can be implemented in the two cases of discretely observed OU processes driven by Gaussian processes with memory; the Appendix collects calculations useful in Section 4.

2. Preliminaries: elements of analysis on Wiener space. This section provides essential facts of basic probability and of analysis on Wiener space (Malliavin calculus); these facts and the corresponding notation underlie the results of the paper, and the reader interested in the results rather than the proofs can safely skip this section upon first reading; the interested reader can find more details in the chapters of the standard references. A convenient lemma. The following result is an elementary, direct consequence of a Borel–Cantelli-type lemma (see the cited reference for a proof); it is convenient for establishing almost sure convergences from Lp convergences. Lemma. Let (Z_n) be a sequence of random variables such that, for every p, there exists a constant bounding the Lp norms of the Z_n at a power rate; then there exists a random variable to which the sequence converges almost surely. Moreover, in a typical application, (Z_n) is a finite sum of Wiener chaos terms which converges in mean square; then the equivalence of norms on Wiener chaos — a consequence of the hypercontractivity of the Ornstein–Uhlenbeck semigroup on Wiener space (see below) — shows that the assumption of the lemma is automatically satisfied when the speed of convergence in mean square is of power form, and the limiting random variable can typically be chosen explicitly. The lemma guarantees, for instance, the well-known Gaussian-type tail behavior when the gauge function is chosen appropriately, so that the tails are no larger than Gaussian; a similar property holds for processes given by finite sums of Wiener chaoses, as can be inferred, but we do not pursue this refinement, since the lemma is sufficient for our purposes herein.

Young integral. Fix Hölder orders whose sum exceeds one. For continuous functions f and g of these orders, respectively, Young proved that the integrals of f against dg and of g against df exist as limits of the usual discretizations; moreover, the integration-by-parts product rule holds: f_t g_t − f_0 g_0 = ∫_0^t f du g + ∫_0^t g du f. The Young integral is convenient for defining stochastic integrals in a pathwise sense with respect to processes smoother than
Brownian motion, in the sense of almost sure Hölder continuity. A typical example is a process with the regularity of fractional Brownian motion with Hurst parameter exceeding 1/2; in this case, the other process may also be chosen with the regularity of fBm, which thus enables a stochastic calculus immediately, in which no correction terms occur. Owing to this, the chain-rule formula holds for a Gaussian process of this type; since, in particular, the first integral coincides with the Wiener integral and the second with an ordinary integral, no correction occurs, as one of the processes has finite variation.

Elements of analysis on Wiener space. Denoting by W the standard Wiener process on its Wiener space, for a deterministic function h in the associated Hilbert space H, the Wiener integral of h is also denoted W(h), and the inner product on H is denoted ⟨g, h⟩_H. Wiener chaos expansion: every square-integrable functional can be expanded over the Wiener chaoses H_q, where H_q denotes the qth Wiener chaos, defined as the closed linear subspace generated by the random variables H_q(W(h)) with ‖h‖_H = 1, H_q being the qth Hermite polynomial; a Wiener chaos expansion is orthogonal, and in this notation every term can be written accordingly. The relation between Hermite polynomials and multiple Wiener integrals is that the mapping I_q(h^{⊗q}) = q! H_q(W(h)) is a linear isometry between the symmetric tensor product of H, equipped with a modified norm, and H_q. To relate this to standard stochastic calculus, one first notes that I_1(h) can be interpreted as a Wiener integral: the mean-square approximation of this integral converges, an elementary fact of analysis on Wiener space which can also be proved using standard stochastic calculus for martingales. The multiple integral interpretation shows that I_q coincides with q! times the iterated integral over the first simplex; more generally, the qth term of a Wiener chaos expansion can be interpreted as a multiple Wiener integral. Product formula and isometry property: for every pair of symmetric kernels, E[I_p(f) I_q(g)] = 1_{p=q} q! ⟨f, g⟩; a similarly extended isometry property holds. This formula can be established using basic analysis on Wiener space, and can also be proved using standard stochastic calculus, owing to the coincidence of multiple and iterated integrals: one uses a version of integration by parts for iterated integrals, and the calculations show the coincidence in expectation, the bounded-variation term on one side being what is typically referred to as the product formula on Wiener space. We only need the version of this formula obtained by taking expectations; beyond that term, the formula coincides in expectation with the inner product ⟨g, h⟩_H.

Hypercontractivity on Wiener chaos. Multiple Wiener integrals exhaust the set of chaos random variables and satisfy the hypercontractivity property, or equivalence of norms, which
implies that, for a fixed sum of Wiener chaos terms, every Lp norm is bounded by a constant times the L2 norm; as noted, the constants are known with precision for a single chaos term — see, indeed, the corresponding corollary in the standard references.

Malliavin derivative. The Malliavin derivative operator D on Wiener space is not needed explicitly in this paper; however, it plays a fundamental role in evaluating distances between laws of random variables, so it is helpful to introduce it in order to justify the estimates. For a function with bounded derivative, the Malliavin derivative of the corresponding random variable is defined to be consistent with the following chain rule: D(φ(W(h))) = φ'(W(h)) h; a similar chain rule holds in the multivariate case. One extends D to a subset of L2 by closing it inside the norm so defined; for a Wiener chaos random variable, the domain can in fact be expressed explicitly, in terms of the summability of q · q! times the squared kernel norms. The generator L of the Ornstein–Uhlenbeck semigroup is the linear operator defined by being diagonal on the Wiener chaos expansion, H_q being the eigenspace with eigenvalue −q and ker L the constants; the operator is negative, and its pseudo-inverse is used below. Since the variables we are dealing with in this article are finite sums of chaos elements, these operators are easy to manipulate thereon; their use is crucial in evaluating the total variation distance between laws of random variables on Wiener chaos, as we will see shortly.

Distances between random variables. Recall that the total variation distance between the laws of two random variables is given by the supremum, over Borel sets, of the difference of the two probabilities, and that, for two integrable random variables, the Wasserstein distance between their laws is given by the supremum of the difference of expectations over Lip(1), the collection of Lipschitz functions with Lipschitz constant at most one. Let N denote the standard normal law.

Malliavin operators and distances of laws on Wiener space. Two key estimates linking the total variation distance and Malliavin calculus were obtained by Nourdin and Peccati. The first one is an observation relating an integration-by-parts formula on Wiener space to the classical result of Stein; the second is a quantitatively sharp version of the famous fourth moment theorem of Nualart and Peccati. Observation (Nourdin and Peccati; see the corresponding proposition and theorem): combining the identity E[F f(F)] = E[f'(F)⟨DF, −DL⁻¹F⟩_H] with properties of the solutions of Stein's equations, one gets a total variation bound in terms of E|1 − ⟨DF, −DL⁻¹F⟩_H|. One notes in particular that, on a fixed chaos, this inner product conveniently simplifies, so that the observation leads to a quantitative version of the fourth moment theorem of Nualart and Peccati: using Jensen's inequality, the right side is bounded by the standard deviation of the inner product, relating the variance of ‖DF‖² — in the case of a fixed Wiener chaos — to the fourth cumulant. This strategy is, however, superseded by the following result, which has its roots in the above.
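The simplification on a fixed chaos just mentioned is a standard two-line computation, worth recording explicitly (notation as in the usual references, not specific to this paper):

```latex
\[
F = I_q(f)\ \Rightarrow\ LF = -qF,\qquad L^{-1}F = -\tfrac1q F,
\]
\[
-DL^{-1}F = \tfrac1q DF
\quad\Longrightarrow\quad
\langle DF, -DL^{-1}F\rangle_{\mathfrak H} = \tfrac1q \,\|DF\|_{\mathfrak H}^2,
\]
so that the Nourdin--Peccati bound specializes on the $q$th chaos to
\[
d_{TV}\bigl(F,\mathcal N(0,1)\bigr)\ \le\ 2\,\mathbb E\Bigl|\,1-\tfrac1q\|DF\|_{\mathfrak H}^2\,\Bigr| .
\]
```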
Optimal fourth moment theorem. Fix an integer q ≥ 2 and let F_n be an element of the qth Wiener chaos with unit variance; assume F_n converges in distribution to the normal law N. It is known from the original proof of the fourth moment theorem that this convergence is equivalent to lim_n E[F_n⁴] = 3. The following optimal estimate, known as the optimal fourth moment theorem, was proved for such a sequence: assuming the convergence, there exist two constants 0 < c < C, depending on the sequence but not on n, such that

c · max(E[F_n⁴] − 3, |E[F_n³]|) ≤ d_{TV}(F_n, N) ≤ C · max(E[F_n⁴] − 3, |E[F_n³]|).

Given the importance of the centered fourth moment, also known as the fourth cumulant, of a standardized random variable, it warrants a special notation, κ₄; let us also recall the third cumulant, κ₃. Throughout the paper we use this notation; we also use the notation c for a positive real constant, independent of n, whose value may change from line to line when this does not lead to ambiguity.

3. General context. Let W be the underlying Gaussian process and let X be a mean-zero Gaussian sequence, measurable with respect to W; this means that the sequence of random variables can be represented, for every n, as a Wiener integral with respect to W of an element of the Hilbert space associated with W. In particular, we do not assume that X is a stationary process. Define the quadratic-variation statistic as explained in the introduction; the goal is to estimate the asymptotic variance of the sequence, when it exists. Assuming it exists, we first state the following four assumptions, of which the last three give various levels of additional regularity of the law of the sequence and quantify the speed of convergence of the estimator. For a symmetric positive function and a positive constant, consider the following estimator; we refer to this construction of the estimator because, for the classes of processes and sequences considered, it can be interpreted as the discretization of a least squares estimator. The next two subsections study strong consistency and the speed of convergence to asymptotic normality; here, we make some comments on the roles of, and motivations behind, the four assumptions. As said, the first assumption states that the asymptotic variance of the sequence exists; it helps establish strong consistency via the lemma and hypercontractivity on Wiener chaos, it states the proper standardization by the asymptotic variance, and it is also used to help establish a quantitative upper bound on the total variation distance between the standardized version of the statistic and the standard normal law. The next assumption formalizes the idea that, while the sequence need not be stationary, its covariance structure may be bounded above by one which is comparable to a stationary one; that assumption is largely a matter of convenience, since, formally, by the Schwarz inequality it can always be assumed to hold in a trivial case. However, when making quantitative estimates of the rate of decay, we find it convenient to have an explicit upper bound in the expressions for the total variation distance between the standardization and the normal law, and in the corresponding results of the theorem. In particular, the remaining assumption quantifies the speed of convergence of the mean of the statistic towards its limit.
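To make the empirical-second-moment construction concrete, here is a sketch in the simplest special case: an OU process driven by standard Brownian motion, for which the stationary variance is σ²/(2θ). The inversion step θ̂ = σ²/(2 Q_n) plays the role of the inverse function in the construction above; the simulation scheme and all names are illustrative assumptions, not the paper's general construction.

```python
import numpy as np

def simulate_ou(theta, dt, n, sigma=1.0, seed=0):
    """Euler-Maruyama simulation of dX_t = -theta*X_t dt + sigma dB_t, X_0 = 0."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    noise = rng.standard_normal(n - 1) * np.sqrt(dt)
    for k in range(n - 1):
        x[k + 1] = x[k] - theta * x[k] * dt + sigma * noise[k]
    return x

def estimate_theta(x, sigma=1.0, burn_in=1000):
    # Empirical second moment Q_n over the (approximately stationary) tail,
    # then invert the stationary variance sigma^2 / (2*theta).
    q_n = np.mean(x[burn_in:] ** 2)
    return sigma ** 2 / (2.0 * q_n)

theta_true = 1.0
x = simulate_ou(theta_true, dt=0.01, n=200_000)
theta_hat = estimate_theta(x)
```

With a fixed time step and a long observation window, the estimate is close to the true drift; the paper's point is that similar moment-based constructions remain tractable for non-stationary fractional driving noises, where no such closed-form stationary variance is available.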
Combined with the previous assumption, this determines the speed of convergence of the variance; using the estimates on Wiener space, one can then arrive at a fully quantified upper bound on the Wasserstein distance between the law of the standardized estimator and the normal law, stated in the corresponding results of the theorem. The last point is significant: the theorem does not rely on using the observed statistic to standardize. The theorem also decomposes the distance to the limiting normal law as a sum of three explicit terms: one to account for the bias, one to account for the speed of convergence of the asymptotic variance, and one coming from the analysis on Wiener space, which uses the speed of decay of the correlations.

Strong consistency. The following theorem provides sufficient assumptions under which to obtain the estimator's strong consistency (almost sure convergence). Theorem. Assume the relevant assumptions hold; then the estimator converges almost surely. Proof. It is clear that the hypothesis of convergence of the means implies the convergence of the deterministic part; in order to prove the statement, it remains to check that, according to the assumptions, the random part converges almost surely; using the hypercontractivity property and the lemma, the result is obtained.

Asymptotic normality. In this section we study the asymptotic normality. By the product formula, we write the standardized statistic as an element of the second chaos, where the second equality is an elementary covariance calculation involving n v_n. Let us estimate the third cumulant: for every n, by the observation of Nourdin and Peccati, using calculus on Wiener chaos and assuming the covariance-bound assumption holds, the third cumulant is controlled by a triple sum of inner products of the kernels, normalized by powers of n v_n; the last equality follows by the argument in the proof of the corresponding proposition, where the symbol means that we omit multiplicative universal constants. For the fourth cumulant, similarly, the last inequality comes from a similar argument. Remark. The propositions just invoked concern centered stationary Gaussian sequences; in our case the sequence is not necessarily stationary, and the covariance-bound hypothesis is sufficient to avoid the assumption of stationarity. Note also that we need not take the absolute value of the fourth cumulant: for a variable in a fixed chaos of even order — here the second chaos — the fourth cumulant is known to be positive. Theorem. Assume the hypothesis holds; then the total variation distance is bounded by the maximum of the cumulant terms; in particular, with a logarithmic factor in the critical case, for n large a quantitative rate exists. Proof. The estimate is a direct consequence of the optimal estimate and of the bounds on the absolute third and fourth cumulants computed above; since the logarithmic factor covers the critical case, the desired result is obtained. Remark. A phenomenon observed in the proof of the previous theorem is that the estimate of the third cumulant dominates that of the fourth cumulant; this was observed originally, in propositions for stationary sequences, and was shown to be completely general in the stationary case, where it is related to a power scale. Since stationarity was required there, this begs the question of whether
the phenomenon generalizes to sequences in the second chaos, for instance, without using the covariance-bound hypothesis; we do not investigate this question, since it falls well outside the scope of our motivating topic of parameter estimation. Theorem. Assume the hypotheses hold; then, for n large, the Wasserstein distance between the standardized estimator and the normal law is bounded by the sum of the bias, variance, and chaos terms, up to logarithmic factors; in particular, when the assumptions hold, for n large, the standardized estimator converges in law to N. Proof. The theorem is a direct consequence of the previous theorem, of standard properties of the Wasserstein distance, and of the decomposition of the estimator. Remark. The decay assumption corresponds to a notion of moderate long memory in the data: if, for instance, the data represents the increments of a process based on fBm with Hurst parameter H, we expect the restriction to mean H < 3/4, the well-known threshold limiting the validity of central limit theorems for quadratic variations of such processes (see, for instance, the excellent treatment of the classical Breuer–Major theorem in the standard references). The theorem shows that the speed of convergence in the CLT reaches the rate of the chaos term as long as the terms coming from the bias and variance estimates are of greater order; the threshold, once one translates to the memory scale, coincides with the one already identified in the canonical stationary case of fGn, in the paper which was a precursor to the optimal fourth moment theorem.

4. Application to Gaussian Ornstein–Uhlenbeck processes. In this section we apply the results of Section 3 to OU processes driven by a Gaussian process which is neither stationary nor has stationary increments; precisely, we study the cases in which the driving noise is a subfractional Brownian motion or a bifractional Brownian motion. Consider the Gaussian process X defined by the linear stochastic differential equation dX_t = −θ X_t dt + dG_t, X_0 = 0, where G is an arbitrary Gaussian process with continuous paths of strictly positive Hölder order and θ > 0 is the unknown parameter; the goal is to estimate θ from discrete observations. The equation has the explicit solution X_t = ∫_0^t e^{−θ(t−s)} dG_s, where the integral is understood in the Young sense; since the integrand e^{−θ(t−s)} is a Lipschitz function, the Young sense coincides in this case with the Wiener integral sense mentioned in the introduction. We consider two different cases: G a subfractional Brownian motion, and G a bifractional Brownian motion.

4.1. Subfractional Brownian motion. Consider the subfractional Brownian motion (sfBm) S^H with parameter H, the centered Gaussian process with covariance function E[S_t^H S_s^H] = s^{2H} + t^{2H} − (1/2)[(s+t)^{2H} + |t−s|^{2H}]; note that for H = 1/2 this is standard Brownian motion. By the Kolmogorov continuity criterion and the fact that E[(S_t^H − S_s^H)²] ≤ c|t−s|^{2H}, we deduce that S^H has continuous paths of Hölder order up to H. In this section we replace G by S^H; precisely, we estimate the drift parameter θ in the equation dX_t = −θ X_t dt + dS_t^H. Proposition. Under this setup, for every n a power bound holds, with a logarithmic factor; in particular, the bias hypothesis holds. Proof. Writing the second moment of X as a double integral against the covariance of S^H, it suffices to check that each term on the right side of the last inequality is of the claimed order; this is clear from direct estimates of the exponential kernels, and the last term is handled similarly for every n, which completes the proof. Proposition. For every admissible H, the covariance-bound hypothesis holds; precisely, the bound is satisfied for n large. Proof. It is easy to see that, thanks to the lemma in the Appendix, we get a double-integral bound; hence, defining for every pair of indices the double integral of the exponential kernels against the covariance increments, the covariance of the observed sequence is compared to that of a stationary Gaussian sequence built from a fractional Brownian motion with Hurst parameter H; writing the double integrals against the fBm and the auxiliary process Z^H, the last inequality comes from the fact that, for large arguments, the corresponding kernel decays (see the Appendix). We then define the rates of convergence used below.
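Two quick consistency checks on the special cases invoked for the driving noises in this section (standard formulas, stated here for the reader's convenience): sfBm at H = 1/2 reduces to standard Brownian motion, and bfBm at K = 1 reduces to fBm.

```latex
\[
H=\tfrac12:\quad
\mathbb E\bigl[S^{1/2}_t S^{1/2}_s\bigr]
= s+t-\tfrac12\bigl[(s+t)+|t-s|\bigr]
= \tfrac12\bigl[(s+t)-|t-s|\bigr]
= \min(s,t),
\]
\[
K=1:\quad
\mathbb E\bigl[B^{H,1}_t B^{H,1}_s\bigr]
= \tfrac12\bigl(t^{2H}+s^{2H}-|t-s|^{2H}\bigr),
\]
the latter being the covariance of fractional Brownian motion with Hurst parameter $H$.
```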
hence check term right side last inequality less write clear dse hand dse dre hence every also last term let completes proof proposition every hypothesis holds precisely large proof easy see suppose thanks lemma get hence define every dudv stationary gaussian process fractional brownian motion hurst parameter write dudv dby dbx zth zsh last inequality comes fact large zrh see define following rates convergence proposition let define zkh process given note particular hypothesis holds log log proof see appendix propositions lead assumptions applying theorem obtain strong consistency estimator form theorem let almost surely study asymptotic normality estimator using obtain following result proposition log log combining propositions deduce result theorem log log particular law law log bifractional brownian motion section suppose given bifractional brownian motion bifbm parameters bth gaussian process covariance function case corresponds fbm hurst parameter process verifies continuous paths thanks kolmogorov continuity criterion proposition assume large particular hypothesis holds proof fact dbsh write drds drds hence check term side less write hkah easy prove drds hand using get drds drds drds drds easy see deduce moreover similar argument proof proposition completes proof proposition fixed hypothesis holds precisely large proof let using lemma get dudv proof proposition dudv set dudv thus assume clear bounded implies finishes proof using arguments proposition obtain following result proposition fixed zkhk process given particular hypothesis holds log log similarly section obtain following asymptotic behavior results theorem let almost surely theorem let log log particular law appendix law log lemma let gaussian process continuous paths stictly positive ordre solution equation define assume exist every dudv proof fix dgu dgv dgu dgv moreover dgu dgv dudv applying several times get last term thus desired result obtained lemma let solution let constant defined 
the corresponding proposition; then the logarithmic rates hold. Proof. To bound the inner products of the kernels, we need the following lemmas. Lemma. Let X be the solution; the claim follows from the proof of the proposition, from which we deduce the statement. Lemma. Let X be the solution. Proof. Suppose the exponent condition holds; using the double-integral representation, we write the quantity, for every pair of indices, as a sum of double integrals; on the other hand, and furthermore, using a similar argument and straightforward calculus, we get the remaining bounds and thus conclude. Since for every n we may write the quantity explicitly, we first calculate the leading term; hence, we need to study the rates of convergence of the following terms; moreover, it is clear that, in addition, the remaining terms are of lower order; finally, combining the bounds, the proof is completed. Let us study the other case. Lemma. Let X be the solution; then the logarithmic rates hold. Proof. Using a similar argument, it is straightforward to check the logarithmic rates, which completes the proof. Lemma. Let X be the solution with G the bfBm process; then the logarithmic rates hold. Proof. We write the quantity as before; moreover, using arguments similar to the subfractional case, we obtain the first inequality of the lemma; using similar techniques, we also obtain the logarithmic rates, which finishes the proof.
Density-Aware Single Image De-raining Using a Multi-stream Dense Network. Zhang and Vishal Patel, Department of Electrical and Computer Engineering, Rutgers University, Piscataway. Abstract. Single-image rain-streak removal is an extremely challenging problem due to the presence of non-uniform rain densities in images. We present a novel density-aware densely connected convolutional neural-network algorithm, called DID-MDN, for joint rain-density estimation and de-raining. The proposed method enables the network to automatically determine the rain-density information and then efficiently remove the corresponding rain streaks, guided by the estimated rain-density label. To better characterize rain streaks of different scales and shapes, a multi-stream densely connected de-raining network is proposed that efficiently leverages features from different scales. Furthermore, a new dataset containing images with rain-density labels is created and used to train the proposed network. Extensive experiments on synthetic and real datasets demonstrate that the proposed method achieves significant improvements over recent methods. In addition, an ablation study is performed to demonstrate the improvements obtained by different modules of the proposed method. The code can be found at https. Figure 1 shows sample de-raining results: given an input rainy image, one method tends to over-de-rain the image while another tends to under-de-rain it, because the various shapes, scales, and densities of rain drops are not adequately considered by these algorithms. De-raining algorithms often tend to over- or under-de-rain the image when the rain condition present in the test image was not properly considered during training. For example, for the rainy image shown in the figure, one method tends to remove important parts of the image, such as the right arm of the person, while for another image a second method tends to under-de-rain and leaves rain streaks in the output image. Hence, adaptive and efficient methods that can deal with the different rain-density levels present in the image are needed. One possible solution to this problem is to build a large training dataset covering sufficient rain conditions, containing various rain-density levels with different orientations and scales; this was in part achieved by Yang et al., who synthesized a novel dataset consisting of rainy images under various conditions. Introduction. In many applications such as video surveillance and self-driving cars, one has to process images and videos containing undesirable artifacts such as rain, snow, and fog. Furthermore, the performance of many computer vision systems often degrades when they are presented with images containing such artifacts. Hence, it is important to develop algorithms that can automatically remove
artifacts paper address problem rain streak removal single image various methods proposed literature address problem one main limitations existing single image methods designed deal certain types rainy images train single network based dataset image however one drawback approach single network may capable enough learn types variations present training samples observed fig methods tend either results alternative solution problem learn model deraining however solution lacks flexibility practical density label information needed given rainy image determine network choose order address issues propose novel image method using multistream dense network automatically determine information heavy medium light present input image see fig proposed method consists two main stages classification rain streak removal accurately estimate level new classifier makes use residual component rainy image density classification proposed paper rain streak removal algorithm based network takes account distinct scale shape information rain streaks level estimated fuse estimated density information final denselyconnected network get final output furthermore efficiently train proposed network largescale dataset consisting images different heavy medium light synthesized fig present sample results network one clearly see image able provide better results compared paper makes following contributions world comparisons performed several recent approaches furthermore ablation study conducted demonstrate effects different modules proposed network background related work section briefly review several recent related works single image feature aggregation single image mathematically rainy image modeled linear combination component clean background image follows single image given goal recover observed image highly problem unlike methods leverage temporal information removing rain components methods proposed literature deal problem include sparse methods lowrank methods gaussian mixture model methods 
one limitations methods often tend image details recently due immense success deep learning vision tasks several methods also proposed image methods idea learn mapping input rainy images corresponding ground truths using cnn structure feature aggregation novel method automatically determines information efficiently removes corresponding guided estimated label proposed based observation residual used better feature representation characterizing raindensity information novel classifier efficiently determine given rainy image proposed paper new synthetic dataset consisting training images labels test images synthesized best knowledge first dataset contains label information although network trained synthetic dataset generalizes well rainy images extensive experiments conducted three highly challenging datasets two synthetic one observed combining convolutional features different levels scales lead better representation object image surrounding context instance efficiently leverage features obtained different scales fcn fully convolutional network method uses adds prediction layers intermediate layers generate prediction results multiple resolutions similarly architecture consists contracting path capture context symmetric expanding path enables precise localization hed model employs deeply supervised structures automatically learns rich hierarchical representations fused resolve challenging ambiguity edge object boundary detection features also leveraged various applications semantic segmentation visual tracking figure overview proposed method proposed network contains two modules classifier network goal classifier determine level given rainy image hand network designed efficiently remove rain streaks rainy images guided estimated information action recognition depth estimation single image dehazing also single image similar also leverage network capture components different scales shapes however rather using two convolutional layers different dilation factors combine 
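The additive formulation sketched above, in which a rainy image is modeled as a linear combination of a clean background image and a rain-streak component, is easy to state in code. A minimal NumPy sketch, with all array contents invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean background image x and a rain-streak layer r (toy 64x64 grayscale).
x = rng.uniform(0.0, 0.6, size=(64, 64))
r = np.zeros_like(x)
r[::8, :] = 0.3          # crude periodic "streaks", purely for illustration

# Observation model: rainy image y = x + r.
y = x + r

# A de-raining network estimates the residual r_hat; subtracting it from the
# input gives back a (coarse) de-rained image.
r_hat = r                # pretend the estimate is perfect in this sketch
x_hat = y - r_hat
```

In practice the residual estimate is the hard part; the subtraction step itself is exactly this simple, which is why residual-based pipelines recover the background once the rain component is well estimated.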
features different scales leverage block building module connect features block together final removal ablation study demonstrates effectiveness proposed network compared structure proposed der image mainly due fact single network may sufficient enough learn different occurring practice believe incorporating density level information network benefit overall learning procedure hence guarantee better generalization different rain conditions similar observations also made use two different priors characterize light rain heavy rain respectively unlike using two priors characterize different conditions label estimated cnn classifier used guiding process accurately estimate density information given rainy input image raindensity classifier proposed residual information leveraged better represent rain features addition train classier synthetic dataset consisting rainy images density labels synthesized note three types classes labels present dataset correspond low medium high density proposed method proposed architecture mainly consists two modules classifier densely connected network classifier aims determine level given rainy image hand densely connected network designed efficiently remove rain streaks rainy images guided estimated information entire network architecture proposed method shown fig one common strategy training new classifier model newly introduced dataset one fundamental reasons leverage strategy new dataset discriminative features encoded models beneficial accelerating training also guarantee better generalization however observed directly deep model task efficient solution classifier discussed even though previous methods achieve significant improvements deraining performance often tend mainly due fact features deeper part cnn tend pay attention localize discriminative objects input image hence relatively small may localized well highlevel features words information may lost features hence may degrade overall classification performance result important 
come better feature representation effectively characterize one regard residual component used characterize estimate residual component observation without label fusion part using new dataset trained estimated residual regarded input train final classifier way residual estimation part regarded feature extraction procedure discussed section classification part mainly composed three convolutional layers conv kernel size one average pooling layer kernel size two layers details classifier follows conv means input consists channels output consists channels note final layer consists set neurons indicating class input image low medium high ablation study discussed section conducted demonstrate effectiveness proposed classifier compared model figure sample images containing various scales shapes contains smaller contains longer images shown fig rainy image fig contains smaller captured smallscale features smaller receptive fields image fig contains longer captured features larger receptive fields hence believe combining features different scales efficient way capture various rain streak components based observation motivated success using features single image efficient network estimate components proposed stream built introduced different kernel sizes different receptive fields blocks denoted yellow green blue blocks respectively fig addition improve information flow among different blocks leverage features estimating rain streak components modified connectivity introduced features block concatenated together estimation rather leveraging two convolutional layers stream create short paths among features different scales strengthen feature aggregation obtain better convergence demonstrate effectiveness proposed network compared structure proposed ablation study conducted described section leverage information guide deraining process label map concatenated rain streak features three streams concatenated features used estimate residual information addition residual subtracted 
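The classifier described above ends in a set of neurons indicating the rain-density class of the input image (low, medium, or high) and is trained with a cross-entropy loss. A dependency-free sketch of that classification head's output and loss, with invented logits:

```python
import math

CLASSES = ["light", "medium", "heavy"]

def softmax(logits):
    # Turn the final layer's class scores into probabilities.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(logits, true_class):
    # Negative log-likelihood of the ground-truth rain-density label.
    probs = softmax(logits)
    return -math.log(probs[CLASSES.index(true_class)])

# Invented class scores for one rainy image.
logits = [0.2, 2.0, -0.5]
predicted = CLASSES[max(range(len(CLASSES)), key=lambda i: logits[i])]
loss = cross_entropy(logits, "medium")
```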
from the input rainy image to estimate a coarse de-rained image, which is then further refined. Loss for the classifier: to efficiently train the classifier, a two-stage training protocol is leveraged. The residual feature extraction network is first trained to estimate the residual part of a given rainy image; the classification part is then trained using the estimated residual as input, optimized via the ground-truth labels. Finally, the two stages, feature extraction and classification, are jointly optimized. The overall loss function used to train the classifier combines a loss on the estimated residual component and a cross-entropy loss for rain-density classification. Multi-stream dense network: different rainy images contain rain streaks with different scales and shapes. The estimated density label is expanded into a label map whose dimensions correspond to the output features of each stream. The classification network can thus be regarded as two parts, a feature extractor and a classifier. During testing, to make sure that details are well preserved in the coarse de-rained image, another two convolutional layers with ReLU are adopted for final refinement. Mathematically, a stream can be represented as s_j = cat(DB_1, ..., DB_6), where cat indicates concatenation, DB_i denotes the output of the i-th dense block, and s_j denotes the j-th stream. Furthermore, different transition layers and kernel sizes are adopted for each stream: each stream uses its own number of dense blocks and transition layers with its own kernel size, and each dense block is followed by a transition layer. The figure presents an overview of the first stream. Testing: the label information is estimated using the proposed classifier and, together with the corresponding input image, is fed into the network to get the final de-rained image. Experimental results: this section presents experimental details and evaluation results on synthetic and real-world datasets. Performance on synthetic data is evaluated in terms of PSNR and SSIM; the performance of different methods on real-world images is evaluated visually, since ground-truth images are not available. The proposed method is compared with the following recent methods: the discriminative sparse coding-based method (DSC, ICCV), the Gaussian mixture model based method (GMM, CVPR), the CNN method (CNN, TIP), the joint rain detection and removal method (JORDER, CVPR), the deep detail network method (DDN, CVPR), and the joint bi-layer optimization method (JBO, ICCV). Synthetic dataset: even though there exist
several synthetic datasets lack availability corresponding label information synthetic rainy image hence develop new dataset denoted consisting images image assigned label based corresponding level three labels present dataset light medium heavy roughly images per level dataset similarly also synthesize new test set denoted consists total images ensured dataset contains rain streaks different orientations scales images synthesized using photoshop modify noise level introduced step generate different images light medium heavy rain conditions correspond noise levels respectively sample synthesized images three conditions shown fig better test generalization capability proposed method also randomly sample images synthetic dataset provided another testing set denoted figure details first stream loss network motivated observation cnn loss better improve semantic edge information enhance visual quality estimated image also leverage weighted combination pixelwise euclidean loss loss loss training densely connected network follows represents euclidean loss function reconstruct image featurebased loss image defined represents cnn transformation recovered image assumed features size channels method compute feature loss layer model http reason use three labels experiments found three levels significantly improve performance hence use three labels heavy medium light experiments transition layer function transition downsample transition transition table quantitative results evaluated terms average ssim psnr input dsc iccv gmm cvpr cnn tip jorder cvpr ddn cvpr jbo iccv network trained without procedure label fusion densely connected network trained without procedure label fusion network trained procedure estimated label fusion heavy medium light figure samples synthetic images three different conditions table quantitative results compared three baseline configurations single average psnr ssim results evaluated tabulated table shown fig even though single stream network yang 
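The weighted combination of per-pixel Euclidean loss and feature-based loss described above can be written out explicitly. The weight symbol $\lambda_F$, the normalization constants, and the choice of a VGG-style network for the feature transform $F(\cdot)$ are assumptions reconstructed from the surrounding description rather than copied from the paper:

```latex
L = L_{E,r} + \lambda_F \, L_F ,
\qquad
L_{E,r} = \frac{1}{WH} \sum_{w=1}^{W} \sum_{h=1}^{H}
          \bigl\| \hat{x}^{\,w,h} - x^{\,w,h} \bigr\|_2^2 ,
\qquad
L_F = \frac{1}{C'W'H'} \sum_{c=1}^{C'} \sum_{w=1}^{W'} \sum_{h=1}^{H'}
      \bigl\| F(\hat{x})^{\,c,w,h} - F(x)^{\,c,w,h} \bigr\|_2^2 ,
```

where $\hat{x}$ is the recovered de-rained image of size $W \times H$, $x$ the ground truth, and $F(\cdot)$ the feature map of size $C' \times W' \times H'$ extracted at a fixed layer of a pretrained CNN.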
network able successfully remove rain streak components tend image blurry output network without label fusion unable accurately estimate level hence tends leave rain streaks derained image especially observed around light contrast proposed network label fusion approach capable removing rain streaks preserving background details similar observations made using quantitative results shown table psnr ssim table accuracy estimation evaluated accuracy training details training image randomly cropped input image horizontal flip size adam used optimization algorithm size learning rate starts divided epoch models trained iterations use weight decay momentum entire network trained using pytorch framework training set parameters defined via crossvalidation using validation set results two synthetic datasets compare quantitative qualitative performance different methods test images two synthetic datasets quantitative results corresponding different methods tabulated table clearly observed proposed able achieve superior quantitative performance visually demonstrate improvements obtained proposed method synthetic dataset results two sample images selected one sample chosen newly synthesized presented figure note selectively sample images three conditions show method performs well different variations jorder method able remove parts still tends leave images similar results also observed even though method ablation study first ablation study conducted demonstrate effectiveness proposed classifier compared model two classifiers trained using synthesized training samples tested set classification accuracy corresponding classifiers tabulated table observed proposed classifier accurate model predicting levels second ablation study demonstrate effectiveness different modules method conducting following experiments better demonstrate effectiveness proposed network compared structure proposed replace part structured keep parts due space limitations better comparisons show results 
corresponding recent methods main paper results corresponding methods found supplementary material single densely connected network without procedure label fusion psnr ssim psnr ssim input single psnr ssim psnr ssim psnr ssim psnr inf ssim ground truth figure results ablation study synthetic image psnr ssim ssim psnr psnr ssim psnr ssim psnr inf ssim ssim ssim psnr psnr psnr inf psnr ssim psnr inf input jorder cvpr ddn cvpr jbo iccv ground truth figure removal results sample images synthetic datasets able remove especially medium light rain conditions tends remove important details well flower details shown second row window structures shown third row details better observed via figure overall proposed method able preserve better details effectively removing components previous methods either tend images contrast proposed method achieves better results terms effectively removing rain streaks preserving image details addition observed proposed method able deal different types rain conditions heavy rain shown second row fig medium rain shown fifth row fig furthermore proposed method effectively deal containing different shapes scales small round rain streaks shown third row fig second row fig overall results evaluated images captured different rain conditions demonstrate effectiveness robustness proposed results images performance proposed method also evaluated many images downloaded internet also images published authors results shown fig input jorder cvpr ddn cvpr jbo iccv figure removal results sample images mdn method results found supplementary material work jointly estimation deraining comparison existing approaches attempt solve problem using single network learn remove rain streaks different densities heavy medium light investigated use estimated label guiding synthesis derained image efficiently predict label classier proposed paper detailed experiments comparisons performed two synthetic one datasets demonstrate proposed method significantly outperforms 
many recent methods additionally proposed method compared baseline configurations illustrate performance gains obtained module running time comparisons running time comparisons shown table observed testing time proposed didmdn comparable ddn method average takes image size table running time seconds different methods averaged images size dsc gmm cnn gpu jorder gpu ddn gpu jbo cpu gpu conclusion references paper propose novel image deraining method densely connected chen chen kang visual depth guided color image rain streaks removal using sparse coding tan guo brown rain streak removal using layer priors ieee conference computer vision pattern recognition cvpr pages june long shelhamer darrell fully convolutional networks semantic segmentation proceedings ieee conference computer vision pattern recognition pages luo removing rain single image via discriminative sparse coding iccv pages peng feris wang metaxas recurrent network sequential face alignment european conference computer vision pages springer international publishing peng sohn metaxas chandraker reconstruction feature disentanglement face recognition iccv ren liu zhang pan cao yang single image dehazing via convolutional neural networks eccv pages springer ren tian han chan tang video desnowing deraining based matrix decomposition proceedings ieee conference computer vision pattern recognition pages ronneberger fischer brox convolutional networks biomedical image segmentation international conference medical image computing intervention pages springer santhaseelan asari utilizing local phase information remove rain video international journal computer vision simonyan zisserman deep convolutional networks image recognition arxiv preprint sindagi patel generating crowd density maps using contextual pyramid cnns iccv wang bovik sheikh simoncelli image quality assessment error visibility structural similarity ieee tip wei xie zhao meng encode rain streaks video deterministic stochastic proceedings ieee 
conference computer vision pattern recognition pages xie edge detection proceedings ieee international conference computer vision pages zhang huang zhang gan huang attngan text image generation attentional generative adversarial networks cvpr xue zhang dana nishino differential angular imaging material recognition cvpr yang tan feng liu guo yan deep joint rain detection removal single image proceedings ieee conference computer vision pattern recognition pages ieee transactions circuits systems video technology chen hsu generalized appearance model correlated rain streaks ieee iccv pages eigen krishnan fergus restoring image taken window covered dirt rain iccv pages eigen puhrsch fergus depth map prediction single image using deep network nips pages huang ding liao paisley clearing skies deep network architecture rain removal ieee transactions image processing huang zeng huang ding paisley removing rain single images via deep detail network ieee conference computer vision pattern recognition cvpr pages july zhang ren sun spatial pyramid pooling deep convolutional networks visual recognition european conference computer vision pages springer zhang ren sun deep residual learning image recognition proceedings ieee conference computer vision pattern recognition pages huang kang wang lin based image decomposition applications single image denoising ieee transactions multimedia huang kang yang lin wang single image rain removal multimedia expo icme ieee international conference pages ieee huang liu weinberger van der maaten densely connected convolutional networks arxiv preprint drozdzal vazquez romero bengio one hundred layers tiramisu fully convolutional densenets semantic segmentation computer vision pattern recognition workshops cvprw ieee conference pages ieee johnson alahi perceptual losses style transfer european conference computer vision pages springer kang lin automatic rain streaks removal via image decomposition ieee tip ledig theis caballero cunningham acosta 
aitken tejani totz wang single image using generative adversarial network proceedings ieee conference computer vision pattern recognition pages kong deep similarity learning networks visual tracking ijcai zhang dana generative network transfer arxiv preprint zhang patel convolutional sparse lowrank rain streak removal applications computer vision wacv ieee winter conference pages ieee zhang sindagi patel image using conditional generative adversarial network arxiv preprint zhang sindagi patel joint transmission map estimation dehazing using deep networks arxiv preprint zhang xie xing mcgough yang mdnet semantically visually interpretable medical image diagnosis network cvpr zhao shi wang jia pyramid scene parsing network proceedings ieee international conference computer vision pages zhou khosla lapedriza oliva torralba learning deep features discriminative localization proceedings ieee conference computer vision pattern recognition pages zhu lischinski heng joint bilayer optimization rain streak removal proceedings ieee international conference computer vision pages zhu lan newsam hauptmann hidden convolutional networks action recognition arxiv preprint
Conversion Rate Optimization through Evolutionary Computation. Risto, Neil, Aaron, Ron, Sam, Cory, Myles, Jonathan, Randy, Gurmeet; Sentient Technologies and The University of Texas at Austin. Abstract. Conversion optimization means designing a web interface so that as many users as possible take a desired action, such as registering or purchasing. Such design is usually done by hand, testing one change at a time through A/B testing, or a limited number of combinations through multivariate testing, making it possible to evaluate only a small fraction of designs in a vast design space. This paper describes Sentient Ascend, an automatic conversion optimization system that uses evolutionary optimization to create effective web interface designs. Ascend makes it possible to discover and utilize interactions between design elements that are difficult to identify otherwise. Moreover, evaluation of design candidates is done in parallel online, with a large number of real users interacting with the system. A case study on an existing media site shows that significant improvements beyond human design are possible. Ascend can therefore be seen as an approach to massively multivariate conversion optimization, based on massively parallel interactive evolution. Keywords: conversion optimization, evolutionary computation, online evolution, design. Introduction. Designing web interfaces, that is, web pages and their interactions, so that they convert as many users as possible from casual browsers to paying customers is an important goal. While there are design principles, including simplicity and consistency, there are often also unexpected interactions between elements of a page that determine how well it converts. An element such as a headline, image, or testimonial may work well in one context but not in another; it is often hard to predict the result, and even harder to decide how to improve a given page. An entire subfield of information technology has emerged in this area, called conversion rate optimization, or conversion science. The standard method is A/B testing: designing two different versions of a page, showing them to different users, and collecting statistics on how well each converts. This process allows incorporating human knowledge about the domain and about conversion optimization into the design, and then testing its effect. After observing the results, new designs can be compared and gradually improved. The testing process is difficult, however: only a small fraction of page designs can be tested this way, and subtle interactions in the design are likely to go unnoticed and unutilized. An alternative is
multivariate testing value combinations elements tested process captures interactions elements small number elements usually included rest design space remains unexplored paper describes new technology conversion optimization based evolutionary computation technology implemented ascend conversion optimization product sentient technologies deployed numerous websites paying customers since september ascend uses search space starting point consists list elements web page changed possible alternative values header text font color background image testimonial text content order ascend automatically generates candidates tested improves candidates evolutionary optimization sites often high volume traffic fitness evaluations done live large number real users parallel evolutionary process ascend thus seen massively parallel version interactive evolution making possible optimize web designs weeks application point view ascend novel method massively multivariate optimization designs depending application improvements human design observed approach paper describes technology underlying ascend presents example use case outlines future opportunities evolutionary computation optimizing background explosive growth recent years entirely new areas study emerged one main ones conversion rate optimization study web interfaces designed effective possible converting users casual browsers actual customers conversion means taking desired action web interface making purchase registering marketing list clicking desired link email website desktop mobile social media application conversions usually measured number clicks also metrics resulting revenue time spent site rate return site conversions currently optimized manual process requires significant expertise web design expert marketer first creates designs believes effective designs tested testing process directing user traffic measuring well convert conversion rates statistically significantly different better design adopted design 
improved using domain expertise change another rounds creation testing conversion optimization component ecommerce companies spent billion drive customers websites much investment result sales conversion rates typically users come site convert within days top sites conversion optimization january growth largely due available conversion optimization tools optimizely visual website optimizer mixpanel adobe target tools make possible configure designs easily allocate users record results measure significance process several limitations first tools make task designing effective web interfaces easier design still done human experts tools thus provide support confirming experts ideas helping explore discover novel designs second since step process requires statistical significance designs tested third improvement step amounts one step hillclimbing process get stuck local maxima fourth process aimed reducing false positives therefore increases false negatives designs good ideas may overlooked fifth tools provide support multivariate testing practice combinations tested five possible values two elements three possible values three elements result difficult discover utilize interactions design elements evolutionary optimization well suited address limitations evolution efficient method exploration weak statistical evidence needed progress stochastic nature avoids getting stuck local maxima good ideas gradually become prevalent importantly evolution searches effective interactions instance ascend may find button needs green transparent header small font header text aligned interactions difficult find using testing requiring human insight results evolution makes discovery process automatic ascend thus possible optimize conversions better larger scale technically ascend related approaches interactive evolution crowdsourcing evaluations candidates done online human users usual interactive evolution paradigm however employs relatively small number human evaluators task select 
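On one reading of the example above, with five possible values for each of two elements and three possible values for each of three elements, the number of combinations a full multivariate test would have to cover follows from the product rule; this is a worked illustration, not a figure quoted from the text:

```python
import math

# Two elements with five possible values each, three elements with three each.
values_per_element = [5, 5, 3, 3, 3]

# Elements vary independently, so the combination counts multiply.
combinations = math.prod(values_per_element)   # 5*5 * 3*3*3 = 675
```

Even this modest setup yields hundreds of combinations, far more than can be tested to statistical significance one at a time.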
good candidates evaluate fitness pool candidates explicitly contrast ascend massive number human users interacting candidates fitness derived actions convert implicitly defining search space starting point ascend search space defined web designer ascend configured optimize design single funnel consisting multiple pages landing page selections shopping cart space designer specifies elements page values take instance landing page example figure logo size header image button color content order elements take values ascend searches good designs space possible combinations values space combinatorial large example interestingly exactly combinatorial nature makes optimization good application evolution even though human designers insight values use combinations difficult predict need discovered search process evolution initializing evolution typical setup already current design web interface goal ascend improve performance current design web interface designated control improvement measured compared particular design fitness evaluated real users exploration incurs real cost customer therefore important candidates perform reasonably well throughout evolution especially beginning initial population generated randomly many web interfaces would perform poorly instead initial population created using control starting point candidates created changing value one element systematically small search space initial population thus consists candidates one difference control large search space population sample set candidates initialization candidates perform similarly control candidates also cover search dimensions well thus forming good starting point evolution evolutionary process page represented genome shown two example pages figure left side usual genetic operations crossover elements two genomes middle mutation randomly changing one element offspring right side performed create new candidates current implementation selection used generate offspring candidates current population 
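The initialization and genetic operations described above (one-element variations of the control, followed by crossover and mutation over element values) can be sketched in plain Python. The element names and values below are illustrative stand-ins, not taken from the paper:

```python
import random

rng = random.Random(0)

# Search space: for each page element, the list of values it can take.
SPACE = {
    "header_text": ["Sign up now", "Get started", "Request info"],
    "button_color": ["green", "blue", "orange"],
    "logo_size": ["small", "large"],
}
ELEMENTS = list(SPACE)

control = {"header_text": "Sign up now", "button_color": "blue",
           "logo_size": "small"}

def initial_population():
    # Start from the control and change exactly one element value at a time,
    # covering every search dimension with designs close to the control.
    population = []
    for element, values in SPACE.items():
        for value in values:
            if value != control[element]:
                candidate = dict(control)
                candidate[element] = value
                population.append(candidate)
    return population

def crossover(parent_a, parent_b):
    # Child takes each element's value from one parent chosen at random.
    return {e: rng.choice([parent_a[e], parent_b[e]]) for e in ELEMENTS}

def mutate(candidate, rate=0.1):
    # Occasionally replace an element's value with a random alternative.
    child = dict(candidate)
    for e in ELEMENTS:
        if rng.random() < rate:
            child[e] = rng.choice(SPACE[e])
    return child
```

Because genomes are just element-to-value maps, crossover and mutation always produce valid page designs within the declared space.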
current population candidates another new candidates generated way evaluations expensive consuming traffic customers pay useful minimize evolution page needs tested extent possible decide whether promising whether serve parent next generation discarded process similar therefore used allocate fitness evaluations generation new candidate old candidate evaluated small number maturity ascend method ascend consists defining space possible web interfaces initializing population good coverage space allocating traffic candidates intelligently bad designs eliminated early testing candidates online parallel steps described detail section figure genetic encoding operations web interface candidates pages represented concatenations element values encoding crossover mutation operate vectors usual creating new combinations values figure elements values example web page design example elements possible values resulting combinations alternative designs evaluated learning process discovering optimal designs best design avg return figure overall architecture online evolution system outcome interaction whether user converted constitutes one evaluation design many evaluations run parallel different users averaged estimate good design designs evaluated adaptation process discards bad designs generates variations best designs process generation testing selection repeated sufficiently good design found time allocated process spent best design found far output result learning process system thus discovers good designs web interfaces live online testing age user interactions top candidates retained bottom discarded manner bad candidates eliminated quickly good candidates receive progressively evaluations confidence fitness estimate increases process ascend learns combinations elements effective gradually focuses search around promising designs thus sufficient test tiny fraction search space find best ones thousands pages instead millions billions learning approach particularly effective 
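The allocation scheme described above, in which candidates receive small batches of user interactions, the bottom performers are discarded, and survivors accumulate progressively more evaluations, resembles successive halving. A toy sketch with invented conversion rates (all numbers synthetic):

```python
import random

rng = random.Random(1)

# Invented "true" conversion rates for eight candidate designs.
true_rates = {f"design_{i}": 0.01 + 0.002 * i for i in range(8)}

def batch_conversions(rate, users):
    # Simulate how many of `users` visitors convert for a design.
    return sum(rng.random() < rate for _ in range(users))

candidates = {name: [0, 0] for name in true_rates}   # [conversions, users]
while len(candidates) > 1:
    for name, stats in candidates.items():
        stats[0] += batch_conversions(true_rates[name], 2000)
        stats[1] += 2000
    # Rank by estimated conversion rate and keep only the top half.
    ranked = sorted(candidates,
                    key=lambda n: candidates[n][0] / candidates[n][1],
                    reverse=True)
    candidates = {name: candidates[name] for name in ranked[:len(ranked) // 2]}

best = next(iter(candidates))
```

Weak evidence is enough to drop a candidate early, while the eventual survivor accumulates enough traffic for a confident fitness estimate.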
One reason is that evaluations can be done in parallel: the entire population is tested at once as different users interact with the site simultaneously. It is also unnecessary to test each design to statistical significance; weak statistical evidence is sufficient for the search process to proceed. Thousands of page designs can therefore be tested in a short time, which would be impossible with standard multivariate testing.

The figure shows the overall architecture of the system: a population of alternative designs (center) is adapted (right) based on evaluations by actual users (left). The population is evaluated by many users in parallel, the evolutionary process generates new designs, and the best design is output at the end; the system also keeps track of the best design found so far.

Why online evolution? In simple cases where the space of possible designs is small, the optimization could potentially be carried out by simpler mechanisms such as systematic search or reinforcement learning.

Figure: The control design and the three best designs evolved over days of evolution and many user interactions. In this design search, a widget was found that converted substantially better than the control. Much of the improvement was based on discovering a combination of colors that draws attention to the widget and makes the call to action clear; conversion was measured by whether the user who was shown the design returned within a certain time limit (a day). This is an excellent result, given that the control candidate had already been built using state-of-the-art tools. Unlike the control, the top candidates utilize bright background colors to draw attention to the widget. Another important interaction is with the background of the blue banner, whose color was fixed: in the best two designs (middle), the background is distinct from the banner rather than competing with it. Moreover, given a colored background, a white button with black text provides a clear call to action that would otherwise be difficult to recognize. Such interactions are hard to identify ahead of time, yet evolution discovered them early, and many later candidates built on these factors. An active call to action ("Get Started", "Find a Program" rather than "Request Info") was also amplified over time. By the time evolution was turned off, better designs were still being discovered, suggesting that a prolonged evolution in a larger search space, including for example banner-color choices, could have improved the results further.

Case study. As an example of how Ascend works, consider a case study of optimizing the web interface of a media site that connects users with online education programs. The experiment was run from September to November on the desktop traffic of the site. The initial design of the page, shown on the left side of the figure, was hand-designed using standard tools such as Optimizely, and its conversion rate at the time of the experiment was found to be typical of such web interfaces. Based on this page, the web designers came up with nine elements with two to nine values each, resulting in a large number of potential combinations. While much larger search spaces are possible, this example represents a space common to many current sites. The initial population of candidates was formed by systematically replacing values of the control page with one of the alternative values, as described above. Evolution was then run for a number of days over four generations, altogether testing many candidates with a large total number of user interactions. The estimated conversion rates of the candidates over time, and those of the top candidates, are shown in the figures below; they show that evolution was successful in discovering candidates significantly better than the control. As an independent verification, the three top candidates were subjected to an A/B test against the control using Optimizely, and the best candidate was confirmed to increase the conversion rate with high statistical significance.

Discussion and future work. Ascend has been applied to numerous web interfaces and has consistently improved conversion rates compared to hand-designed controls. The main limitation is often the human element: web designers used to multivariate testing often try to minimize the search space, i.e., the number of elements and values, as much as possible, thereby giving evolution less space to explore and discover powerful solutions. Often evolution discovers a significant improvement within a couple of generations, and the designers are eager to adopt it right away instead of letting evolution optimize the designs fully. Population-based optimization requires different thinking: designers need to become comfortable with letting evolution take its course toward refined results.

Figure: Screenshot of the user interface for designing Ascend experiments, showing the elements and values of the case study. Nine elements with two to nine different values each result in a large number of potential web-page designs; the first value of each element is designated as the control, which is typical of current web-interface designs.

Currently, Ascend delivers one best design, or a small number of good ones, as the end result, in keeping with the A/B testing tradition. In many cases, however, seasonal variations and changing trends make the performance of even good designs gradually decay. It is possible to counter this problem by re-running the optimization every few months.
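The size of the case-study search space and of its one-change-from-control initial population follow directly from the per-element value counts. The counts below are hypothetical placeholders consistent with "nine elements with two to nine values each"; the paper's exact counts were lost.

```python
from math import prod

# Hypothetical per-element value counts (nine elements, 2..9 values each).
values_per_element = [2, 3, 4, 5, 6, 7, 8, 9, 9]

# All combinations of values form the full search space.
search_space_size = prod(values_per_element)

# Initial population: every candidate that differs from the control
# in exactly one element value.
initial_population_size = sum(v - 1 for v in values_per_element)
```

This is why only a tiny fraction of the space can ever be tested directly: the space grows multiplicatively while the initial population grows only additively.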
A different paradigm, however, may be more appropriate: evolutionary optimization run continuously at low volume, keeping up with changing trends. In such dynamic evolutionary optimization, new designs would be adopted periodically, whenever their performance exceeds that of the old designs significantly. Furthermore, Ascend currently optimizes a single design to be used for all future users, whether on mobile or desktop. An interesting extension would be to take user segmentation into account and evolve different pages for different kinds of users. Moreover, the mapping from user characterizations to page designs could be made automatic. A mapping system such as a neural network could take user variables (location, time, device, past history on the site) as its inputs and generate the vector of element values as its outputs. Neuroevolution could discover optimal such mappings and, in effect, evolve a dynamic, continuous segmentation of the user space: users would be shown designs that are likely to convert well based on experience with users with similar characteristics, continuously and automatically. It would also be possible to analyze the evolved neural networks to discover which variables are predictive and to characterize the main user segments, thereby developing an understanding of the opportunity. Finally, the Ascend approach is not limited to optimizing conversions: any outcome that can be measured, such as revenue or user retention, can be optimized. The approach can also be used in a different role, to optimize the amount of resources spent on attracting users, e.g., ad placement and selection, AdWords bidding, and email marketing. The approach can thus be seen as a fundamental step in bringing machine optimization to e-commerce, demonstrating the value of evolutionary computation in real-world problems.

Conclusion. Sentient Ascend demonstrates that interactive evolution can be scaled to testing a large number of candidates in parallel with real users. It includes technology for keeping the cost of exploration reasonable by minimizing the number of evaluations needed. From the application point of view, Ascend is the first automated system for massively multivariate conversion optimization. Instead of replacing the web designer, it finds subtle combinations of variables that lead to conversion increases, letting the designer spend more time trying out ideas and less time on statistics, and giving them the freedom they need to make a difference.

References
Tim Ash, Rich Page, and Maura Ginty. Landing Page Optimization: The Definitive Guide to Testing and Tuning for Conversions, 2nd ed. Wiley, Hoboken.
Daren Brabham. Crowdsourcing. MIT Press, Cambridge.
J. Branke.
Evolutionary Optimization in Dynamic Environments. Springer, Berlin.
BuiltWith. A/B testing usage. https:// (retrieved).
eMarketer. Digital spending to surpass ... this year. https:// (retrieved).
Dario Floreano, Peter Dürr, and Claudio Mattiussi. Neuroevolution: From architectures to learning. Evolutionary Intelligence.
Babak Hodjat and Hormoz Shahrzad. Introducing an age-varying fitness estimation function. In Genetic Programming Theory and Practice (Rick Riolo, Ekaterina Vladislavleva, Marylyn Ritchie, and Jason Moore, eds.). Springer, New York.

Figure: Estimated conversion rates during the days-long online evolution run. The conversion rate is shown on the y-axis. Dark blue dots (top) indicate the current best candidate, light blue dots (middle) the average of the currently active candidates, and orange dots (bottom) the estimated performance of the control design. The shaded areas display confidence intervals of the binomial distribution around the observed mean. The dark blue peaks indicate the start of each new generation: such peaks emerge because, during their first days, new candidates have been evaluated only a small number of times and can have high estimated rates by random chance; once they have been evaluated to maturity, i.e., a given number of user interactions, the estimates become lower and the confidence intervals narrower. Elite candidates are tested across several generations, as described earlier, resulting in the narrow intervals towards the end. The estimated conversion rates of the best candidates in the later generations are significantly higher than that of the control, suggesting that evolution was effective in discovering better candidates. Interestingly, the average of the active population is also higher than the control, indicating that the experiment did not incur a cost in performance.

Ron Kohavi and Roger Longbotham. Online controlled experiments and A/B tests. In Encyclopedia of Machine Learning and Data Mining (Claude Sammut and Geoffrey Webb, eds.). Springer, New York.
Joel Lehman and Risto Miikkulainen. Boosting interactive evolution using human computation markets. In Proceedings of the International Conference on the Theory and Practice of Natural Computation. Springer, Berlin.
Joel Lehman and Risto Miikkulainen. Neuroevolution. Scholarpedia. http://
Khalid Saleh and Ayat Shukairy. Conversion Optimization: The Art and Science of Converting Prospects to Customers. O'Reilly Media, Sebastopol.
Jimmy Secretan, Nicholas Beato, David D'Ambrosio, Adelein Rodriguez, Adam Campbell, and Kenneth Stanley. Picbreeder: A case study in collaborative evolutionary exploration of design space. Evolutionary Computation.
Sentient Technologies. Ascend. http:// (retrieved).
Hormoz Shahrzad, Babak Hodjat, and Risto Miikkulainen. Estimating the advantage of age-layering in evolutionary algorithms. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO). ACM, New York, NY, USA.
H. Takagi. Interactive evolutionary computation: Fusion of the capacities of EC optimization and human evaluation. Proceedings of the IEEE. http://
Daniel Yankelovich and David Meer. Rediscovering market segmentation. Harvard Business Review.

Figure: Estimated conversion rates of the top candidates and the control. Candidates are identified by numbers corresponding to the previous figure, ordered according to their estimated conversion rate. All these candidates are significantly better than the control, and the best one (right) even more so, as confirmed by an independent A/B test that showed a substantial improvement.
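The binomial confidence bands shown in the conversion-rate figures can be computed with a normal approximation; a minimal sketch, where z = 1.96 for a 95% band is an assumed choice:

```python
import math

def conversion_ci(conversions, trials, z=1.96):
    """Point estimate and normal-approximation confidence interval
    for a binomial conversion rate."""
    p = conversions / trials
    half = z * math.sqrt(p * (1.0 - p) / trials)
    return p, max(0.0, p - half), min(1.0, p + half)
```

More evaluations narrow the interval, which is why mature candidates show tighter bands than newly introduced ones.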
Using Artificial Intelligence for Data Reduction in Mechanical Engineering

Heyns, Dynamic Systems Group, Department of Mechanical and Aeronautical Engineering, University of Pretoria, Pretoria, South Africa; School of Electrical and Information Engineering, University of the Witwatersrand, Private Bag, Wits, South Africa

Abstract. In this paper, artificial neural networks and support vector machines are used to reduce the amount of vibration data required to estimate the time domain average of a gear vibration signal. Two models for estimating the time domain average of a gear vibration signal are proposed. The models are tested with data from an accelerated gear life test rig. Experimental results indicate that the data required for calculating the time domain average of a gear vibration signal can be reduced when the proposed models are implemented.

Introduction. Calculating the time domain average (TDA) of a gear vibration signal by direct averaging on digital computers requires large amounts of data. This requirement makes it difficult to develop online gearbox condition-monitoring systems that utilize time domain averaging, calculated by direct averaging, to enhance diagnostic capability. This study presents a novel approach to estimating the TDA of a gear vibration signal using less data than would be used when calculating the TDA by direct averaging. Artificial neural networks (ANNs) and support vector machines (SVMs) are used for estimating the TDA of the gear vibration signal, and two models are presented. The input data comprise rotation-synchronized gear vibration signals, and the output is the TDA of the gear vibration signal. When Model 1 is used, the results indicate that the amount of gear vibration data required for calculating the TDA can be reduced by a large percentage compared with the amount required for calculating the TDA by direct averaging. When Model 2 is used, the amount of data stored in the data acquisition system can be reduced to a small percentage of the data that would be stored when calculating the TDA by direct averaging. The ANNs are implemented as multi-layer perceptrons (MLPs) and radial basis function (RBF) networks. Two parameters are selected to verify whether the TDA estimated by the models retains the original diagnostic capability of the TDA: the kurtosis, which detects impulses, and the peak value of the overall vibration. Computational times are also compared to determine the suitability of the proposed models.

Analysis. ANNs and SVMs may be viewed as parameterized mappings from input data
to output data. Learning algorithms can be viewed as methods of finding parameter values that look probable in the light of the data. The learning process takes place by training the ANNs and SVMs with supervised learning. In supervised learning, both the input data set and the output data set are known, and the ANNs and SVMs are used to approximate the functional mapping between the two data sets.

Multi-layer perceptron. A two-layered MLP architecture was used. This selection was made on the grounds of the universal approximation theorem, which states that a two-layered architecture is adequate for the MLP. The MLP provides a distributed representation with respect to the input space, due to its hidden units. The output of the perceptron can be expressed by the equation

y_k = f_outer( Σ_j w_kj^(2) f_inner( Σ_i w_ji^(1) x_i + b_j ) + b_k )

where f_inner and f_outer are activation functions, w_ji^(1) denotes a weight in the first layer going from input i to hidden unit j, b_j denotes the bias for hidden unit j, and w_kj^(2) denotes a weight in the second layer. The function f_inner was selected as the hyperbolic tangent function tanh, and f_outer was selected as a linear function; the hyperbolic tangent maps the interval (−∞, ∞) onto (−1, 1), and the linear activation maps (−∞, ∞) onto (−∞, ∞). The approach used in training the MLP was weight-decay regularization of the cost function: weight decay penalizes large weights and ensures that the mapping function is smooth, avoiding over-fitting of the mapping from input data to output data. In this study a suitable regularization coefficient was found, and the weights and biases of the layers were varied using scaled conjugate gradient (SCG) optimization until the cost function was minimized. It was determined empirically that an MLP network with five hidden units was best suited to this application.

Radial basis functions. An RBF network approximates functions with a linear combination of radial basis functions in a linear output layer. RBF neural networks provide a smooth interpolating function, in which the number of basis functions is determined by the complexity of the mapping to be represented rather than by the size of the data set. The RBF neural network mapping is given by

y_k(x) = Σ_j w_kj φ_j(x) + b_k

where b_k are the biases of the output layer, w_kj the output-layer weights, x the input vector, and φ_j the j-th basis function; a thin-plate spline basis function was used in this study. The radial basis function network is trained in two stages. In the first stage, the input data set alone is used to determine the basis-function parameters; after this first training stage, the basis functions are kept fixed while the second-layer weights are determined in the second training phase. Since the basis functions are considered fixed, the network is equivalent to a single-layer network that can be optimized by minimizing a suitable
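The two-layer MLP mapping above, with tanh hidden units and a linear output, can be sketched as follows; the five-hidden-unit size mirrors the configuration reported in the text, while the weights here are placeholders:

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    """y = W2 @ tanh(W1 @ x + b1) + b2: f_inner = tanh, f_outer = identity."""
    hidden = np.tanh(W1 @ x + b1)   # five hidden units in the reported setup
    return W2 @ hidden + b2
```

In training, these weights and biases would be adjusted by an optimizer such as scaled conjugate gradient on a weight-decay-regularized cost.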
error function. The sum-of-squares error function was used to train the RBF networks. This error function is a quadratic function of the weights, and its minimum can therefore be found as the solution of a set of linear equations. For regression, the basis-function parameters can alternatively be found by treating the basis-function centers and widths, along with the second-layer weights, as adaptive parameters determined by minimizing the error function. In this study, it was determined empirically that an RBF network with five basis functions was suitable.

Support vector machines. SVMs, developed by Vapnik, have gained much popularity in recent years. The SVM formulation embodies the structural risk minimization (SRM) principle, which has been shown to be superior to the traditional empirical risk minimization (ERM) principle employed by conventional neural networks: SRM minimizes an upper limit on the expected risk, as opposed to ERM, which minimizes the error on the training data, and this difference gives SVMs a greater ability to generalize. SVMs can be applied to regression problems with loss functions that include a distance measure. The ε-insensitive loss function was selected for this study; it is defined as

L_ε(y) = 0 if |y − f(x)| ≤ ε, and |y − f(x)| − ε otherwise.

In support vector regression, a mapping is used to map the data into a higher-dimensional feature space in which linear regression is performed, and the kernel approach is employed to address the resulting curse of dimensionality. The support vector regression solution using the ε-insensitive loss function is obtained by maximizing a dual objective subject to constraints; solving this constrained optimization determines the Lagrange multipliers, and the regression function is then given by a kernel expansion over the support vectors (SVs).

Network configuration. The rotation-synchronized gear vibration signals are the inputs. Different kernels were investigated for mapping the data into the higher-dimensional feature space in which the linear regression is performed; an exponential radial basis function kernel was found suitable for this application.

Proposed models. Two different models are proposed. Model 1 maps the input space to the target using a simple feed-forward network configuration. The size of the input space was systematically reduced to find the optimal number of input vectors that can be used to estimate the target vector correctly. Once the ANNs and SVMs are properly trained, Model 1 is capable of mapping the input space to the target using less data than would otherwise be used when calculating the TDA by direct averaging. It was determined empirically that a reduced set of rotation-synchronized gear vibration signals is suitable for predicting the TDA with Model 1. Consequently, the amount of data required
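The ε-insensitive loss just defined is simple to state in code; the default tube width ε = 0.1 below is an arbitrary illustrative choice:

```python
def eps_insensitive_loss(y, f_x, eps=0.1):
    """Vapnik's epsilon-insensitive loss: zero inside the eps tube
    around the prediction, linear outside it."""
    return max(0.0, abs(y - f_x) - eps)
```

Errors smaller than ε are ignored entirely, which is what produces the sparse support-vector solution in the dual problem.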
for calculating the TDA is reduced by a large percentage when the reduced set of gear vibration signals is used, since far more gear vibration signals are needed when calculating the TDA by direct averaging. The figure shows a schematic diagram of the proposed methodology; the output is the TDA signal.

Model 2 estimates the TDA from the input space in small sequential steps, analogous to taking a running average over the input space. Model 2 consists of a number of networks: instead of using one network to estimate the TDA from the entire input data, Model 2 first sequentially estimates the averages of subsections of the input data, and the output of this first stage is used as the input to a second network, which estimates the TDA of the entire input data. The networks are trained off-line to reduce computation time. With Model 2, data can be discarded immediately after use, which means that Model 2 does not require large amounts of data to be stored in the data logger, even while data are being collected. In this study, a small number of gear vibration signals was found suitable for estimating the instantaneous average in the first stage of the estimation. As a result, the amount of vibration data stored in the data logger is reduced to a small percentage of the amount that would be stored when calculating the TDA by direct averaging.

Estimation results. Models 1 and 2 were used to estimate the TDA of the gear vibration signal, so that the TDA estimated by the proposed models could be compared with the TDA calculated by direct averaging. The data used were from an accelerated gear life test rig, and comparisons were made in the time and frequency domains. To quantify the accuracy with which the TDA is estimated by the proposed models, a fit parameter ε is defined in terms of y_desired, the TDA signal calculated by direct averaging, y_achieved, the TDA signal estimated by the models, and the number of data points used. The fit parameter gives a single value per simulation and is therefore used to compare the performance of the different formulations over the entire gear life: a high value of the fit parameter implies a bad fit, whereas a low value implies a good fit. A suitable upper limit on ε for acceptable simulation accuracy was established by trial and error.

For Model 1, gear vibration signals from the first test, sampled at a given number of points per revolution, were used for training the ANNs, and a reduced number of points per revolution was used to train the SVMs; this resulted in training sets of the corresponding dimensions. Fifteen test data sets, measured over the life of the gear, were used as validation sets, resulting in validation data sets of corresponding dimensions for the ANNs and SVMs, respectively.
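Model 2's running-average idea — average small blocks of revolutions first, so that the raw data can be discarded immediately, and then average the block means — can be sketched as below. With equal-size blocks the two-stage result matches direct averaging exactly; in Model 2 the networks estimate these averages rather than compute them.

```python
import numpy as np

def tda_direct(revolutions):
    """Direct time domain average of rotation-synchronized revolutions."""
    return np.mean(revolutions, axis=0)

def tda_two_stage(revolutions, block=4):
    """First stage: average each block of revolutions (raw data can then
    be discarded). Second stage: average the block means."""
    r = np.asarray(revolutions, dtype=float)
    means = [r[i:i + block].mean(axis=0) for i in range(0, len(r), block)]
    return np.mean(means, axis=0)
```

The memory advantage is that only one block of raw revolutions plus the accumulated block means ever need to be held at once.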
For Model 2, the whole data set of rotation-synchronized gear vibration signals from the first test was used for training the ANNs and SVMs; the rest of the data were used as validation data for the ANNs and SVMs, respectively.

The figure shows the estimation results obtained with Model 1 and the MLP network, simulated with unseen validation data sets as input signals at different stages of the gear's life: the dotted line is the TDA estimated by Model 1 and the solid line the TDA calculated by direct averaging. The first plot is the time-domain representation of the results and the second plot the frequency-domain representation. It can be observed that the time- and frequency-domain representations are almost exact fits, which shows that Model 1 with MLP networks retains the time- and frequency-domain properties of the original time domain averaging process when using gear vibration data from the accelerated gear life test rig. Similar performance was obtained throughout the life of the gear with both models using RBFs and SVMs, even though there were significant changes in the vibration signatures as the condition of the monitored gear deteriorated; these changes are due to changes in meshing stiffness caused by cracks in the gear teeth. The good performance is due to the mapping and generalization capabilities of the ANNs and SVMs.

Figure: Model 1 (MLP) estimation using data from a test conducted under constant load conditions.

The next figure shows the simulation accuracy plotted over the gear life for Model 1. It can be observed that Model 1 with the RBF network and Model 1 with SVMs give similar performance, slightly better than Model 1 with MLP networks. The performance of all three formulations is acceptable, with fit values below the established threshold: the formulations are not over-trained and consequently generalize well to the changes in the measured vibration as gear failure progressed. Similar results were obtained using Model 2.

Figure: Model simulation accuracy over the gear life under constant load conditions.

In addition to simply considering goodness of fit, the diagnostic capabilities of the TDA estimated by the models were assessed using the peak value of the vibration, x_max, in a given interval, and the kurtosis. The peak value is used to monitor the overall magnitude of vibration and to distinguish acceptable from unacceptable vibration levels; the kurtosis is useful for detecting the presence of impulses within the vibration signal. The peak value x_max and the kurtosis of the TDA calculated by direct averaging were compared with those of the TDA estimated by the proposed models. The figure plots x_max and the kurtosis calculated from the TDA estimated by Model 1, superimposed on x_max and the kurtosis calculated from the TDA obtained by direct averaging. The figure indicates that the kurtosis is an exact fit for all three formulations, which implies that the TDA predicted by Model 1 can be used to monitor the presence of impulses in the measured gear vibration signal. It can also be observed that the kurtosis is high in the early stages of the gear's life, which is characteristic of these stages, where the vibration signature tends to be random. A similar trend is observed in the final stage, with strong impulses caused by the reduction in stiffness of cracked or broken gear teeth. The peak values obtained with Model 1 using the MLP and SVM are close fits and can be used to monitor the amplitude of the overall vibration, whereas Model 1 with the selected RBF network achieved unsatisfactory performance on the peak values.

Figure: Comparison of the kurtosis and peak values of the TDA calculated by direct averaging and the TDA predicted by Model 1 with MLP, RBF, and SVMs.

To put the proposed models into perspective, the computation times were compared with the existing time domain averaging method on a Pentium computer with a GHz-class processor. The computation times are listed in the table.

Table: Computation time in seconds for Model 1 training, Model 1 simulating, Model 2 training, Model 2 simulating, and the TDA by direct averaging, for each of MLP, RBF, and SVM.

It is clear that Model 1 requires less time than calculating the TDA by direct averaging, since Model 1 uses only a small percentage of the vibration data, whereas the original TDA process uses all of it. The required pre-processing time for Model 2 is equal to that of calculating the TDA by direct averaging, since both use the same amount of vibration data. Of the two models, those using RBF and MLP give the best performance in terms of simulation time, and SVMs give the poorest. The models are trained off-line, and the training time therefore does not influence the performance of applications in which the models are used. The poor performance of the SVMs is due to their optimization problem: it is a quadratic problem in a number of variables equal to the number of training data points, so longer training times are needed, and the kernel operations required make processing much slower than for the MLP and RBF neural networks, whose weights, biases, and basis centers are obtained by minimizing error functions.

Conclusion. In this paper, a novel approach that uses artificial neural networks and support vector machines to reduce the amount of data required to calculate the time domain average of a gear vibration signal was presented, and two models were proposed.
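The two diagnostic parameters used above, the peak value and the kurtosis (the normalized fourth central moment), can be computed as:

```python
import numpy as np

def peak_value(x):
    """Peak absolute value of the vibration signal in the interval."""
    return float(np.max(np.abs(x)))

def kurtosis(x):
    """Normalized fourth central moment; large values indicate impulses."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    return float(np.mean(d ** 4) / np.mean(d ** 2) ** 2)
```

An isolated impulse, such as the signature of a cracked tooth, inflates the fourth moment far more than the second, which is why the kurtosis is the impulse indicator of choice here.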
Using Model 1, the data required for calculating the time domain average was reduced by a large percentage relative to the data required to calculate the time domain average by direct averaging. Model 2 was found to be excellent at estimating the time domain average, with the disadvantage of requiring more operations to execute; using Model 2, only a small percentage of the data needed to be stored in the data logger at any given time. Furthermore, the suitability of the developed models for diagnostic purposes was assessed. It was observed that the performances of Model 1 and Model 2 are similar over the entire life of the gear. The good performance of Model 1 is attributed to the fact that Model 1 uses the whole data set for training and simulation, whereas Model 2 uses one section of the data set at a time; using the whole data set exposes the Model 1 formulations to the transient effects in the data, resulting in an accurate estimate of the time domain average. The performance of Model 2 relies on the generalization capabilities of the formulation used.

References
Hongxing, Hongfu, Chengyu, and Liangheng. An improved algorithm for direct time domain averaging. Mechanical Systems and Signal Processing.
Trimble. Signal averaging. Journal of ...
Braun. The extraction of periodic waveforms by time domain averaging. Acustica.
Braun and Seth. Analysis of repetitive mechanism signatures. Journal of Sound and Vibration.
McFadden. A revised model for the extraction of periodic waveforms by time domain averaging. Mechanical Systems and Signal Processing.
McFadden. Interpolation techniques for time domain averaging of gear vibration. Mechanical Systems and Signal Processing.
Samanta. Gear fault detection using artificial neural networks and support vector machines. Mechanical Systems and Signal Processing.
Bishop. Neural Networks for Pattern Recognition. Clarendon Press, Oxford.
Gunn. Support vector machines for classification and regression. Technical report, Department of Electronics and Computer Science, University of Southampton.
Haykin. Neural Networks, 2nd ed. New Jersey, USA.
Vapnik. An overview of statistical learning theory. IEEE Transactions on Neural Networks.
Vapnik. The Nature of Statistical Learning Theory. New York, USA.
Stander and Heyns. Instantaneous shaft speed monitoring of gearboxes under fluctuating load conditions. In Proceedings of the International Congress on Condition Monitoring and Diagnostic Engineering Management, Birmingham, September.
Stander and Heyns. Using vibration monitoring for local fault detection on gears operating under fluctuating load conditions. Mechanical Systems and Signal Processing.
Raath. Structural dynamic response reconstruction in the time domain. Thesis, Department of Mechanical and Aeronautical Engineering, University of Pretoria.
Norton. Fundamentals of Noise and Vibration Analysis for Engineers. Cambridge University Press, New York.
Feature Weight Tuning for Recursive Neural Networks

Jiwei Li, Computer Science Department, Stanford University, Stanford, USA (jiweil@)

Abstract. This paper addresses how a recursive neural network model can automatically leave out useless information and emphasize important evidence — in other words, how to perform feature-weight tuning for representation acquisition. We propose two models, the weighted neural network (WNN) and the BENN, which automatically control how much one specific unit contributes to the higher-level representation. The proposed models can be viewed as incorporating a more powerful compositional function for embedding acquisition in recursive neural networks. Experimental results demonstrate significant improvements over standard neural models.

Introduction. Recursive neural network models constitute one type of neural structure for obtaining higher-level representations beyond words, e.g., for phrases and sentences. They work in a bottom-up fashion along tree structures such as parse or dependency trees, so that syntax is captured to some extent. The figure gives a brief illustration of how recursive neural models work to obtain the distributed representation of the short sentence "the movie is wonderful". Suppose h_the, ..., h_wonderful denote the embeddings of the tokens. The representation of a parent node in the second layer, e.g. h_VP, is given by

h_VP = f(W [h_is ; h_wonderful] + b)

where W and b denote the parameters of the compositional (convolutional) function and f is an activation function, usually tanh, sigmoid, or a rectified linear function. For NLP tasks, the obtained embeddings can be fed into a machine learning model whose parameters are optimized for the task; taking sentiment analysis as an example, we could feed the aforementioned sentence embedding into a logistic regression model to classify it as either positive or negative. (The embeddings could also be optimized with the task objective function.)

Figure: Illustration of a standard recursive neural network and its representation calculation.

Table: A brief comparison of a unigram SVM and a standard recursive neural network on sentiment classification accuracy on the Pang et al. dataset. The neural network models were trained with L2 regularization using AdaGrad with mini-batches (for implementation details of the recursive networks, see the experiments section). Parameters were tuned by cross-validation on the training data, and we report the best performance after searching for the optimal regularization parameter, batch size, and compositional function. Word embeddings were borrowed from GloVe, whose dimensionality generated better performance than the SENNA or RNNLM families in our setting.

Embeddings are sometimes capable of capturing latent semantic meanings and syntactic rules within text that manually developed features cannot, and many NLP tasks benefit from this type of structure. However, it suffers from some intrinsic drawbacks. Revisiting the example: common sense tells us that tokens like "the" and "movie" do not contribute much to the sentiment decision, while the word "wonderful" is the key part. A good machine learning model should have the ability to learn such rules. Unfortunately, the intrinsic structure of recursive neural networks makes it less flexible at discarding the influence of less informative tokens, especially when the keyword hides deep in the parse tree; in a long sentence where "wonderful" appears deep in the tree, it takes quite a few composition steps before the keyword comes to the surface, and as a consequence its influence on the final sentence representation can be trivial. This issue is usually referred to as gradient vanishing in deep learning architectures.

Comparing neural models with SVMs: one notable weakness of a unigram-based SVM is its inability to consider how words are combined to form meanings, since word-order information is lost. Interestingly, this downside comes with an advantage: resilience to uninformative features. During optimization, low weights are assigned to less informative evidence, and these can be pushed toward zero by regularization. As the table shows, for this task the standard neural network models underperform the unigram SVM.

There are thus two straws to grasp at to deal with the aforementioned problem: expecting the learned feature embeddings of less useful words to exert little influence (a near-zero vector would be best), or expecting the compositional parameters to be extremely powerful. The former is sometimes hard to achieve: word embeddings are mostly borrowed from models trained on a large corpus (e.g., SENNA or the RNNLM family) for initialization, rather than trained from the objective function, since neural models are easily over-fitted given a small amount of training data. Regarding the latter, several alternative compositional functions have been proposed to enable richer varieties of composition. Recently proposed approaches include, for example, the matrix-vector RNN (MV-RNN), which represents every word with both a vector and a matrix, and the recursive neural tensor network (RNTN).
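The composition step described above — a parent representation computed from the concatenation of its children — is a few lines of code; the dimensions and weights here are illustrative:

```python
import numpy as np

def compose(h_left, h_right, W, b):
    """h_parent = tanh(W [h_left ; h_right] + b), with W of shape (K, 2K)."""
    return np.tanh(W @ np.concatenate([h_left, h_right]) + b)
```

Applying this bottom-up over a parse tree yields the sentence representation at the root, which is then fed to the classifier.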
The RNTN allows greater interactions between the input vectors through tensor-based composition. (Note that our results are not directly comparable with Socher et al.'s work, which obtains state-of-the-art performance for sentiment classification using phrase-level labels; such labels constitute an additional source of supervision not available to the SVM and neural network models compared here. In that work, every single node along the parse tree is labeled, yielding a large total number of labeled phrases; other variants associate different labels, POS tags, or relation tags with different sets of compositional parameters. These approaches, to some extent, enlarge the power of the compositional functions, but word embeddings learned in this way require sufficient training data to avoid over-fitting. As an illustration, a word like "best" is practically a good sentiment indicator, as superlatives usually are, though not in all cases.)

In this paper, we borrow the idea of weight tuning from feature-based SVMs and try to incorporate this idea into neural architectures. To achieve this goal, we propose two recursive neural architectures, the weighted neural network (WNN) and the BENN. The major idea involved in the proposed approaches is to associate each node in the recursive network with additional parameters indicating how important it is to the final decision. For sentiment analysis, for example, we would expect this type of structure to dilute the influence of tokens like "the" and "movie" and magnify the impact of tokens like "wonderful" and "great". The parameters associated with the proposed models are automatically optimized from the objective function as manifested by the data. The proposed models combine the capability of neural models to capture local compositional meaning with the weight-tuning approach's ability to reduce the influence of undesirable information, and at the same time yield better performances on a range of different NLP tasks compared with standard neural models. The rest of this paper is organized as follows: related work is described next, then the details of WNN and BENN, then experimental results, followed by a brief conclusion.

Related work. Distributed representations calculated by neural frameworks have been extended beyond the token level to represent phrases, sentences, discourse, paragraphs, and documents. Recursive and recurrent models constitute two types of commonly used frameworks, and different variations of these models have been proposed to cater to different scenarios. Recently proposed approaches include the sentence compositional approach proposed by
Le and Mikolov, in which vector representations are optimized for predicting the words within the sentence. Because a neural network architecture sometimes requires a vector representation for each input token, various deep learning architectures have been explored to learn such embeddings in an unsupervised manner from a large corpus; these may have different generalization capabilities and capture different semantic properties depending on the specific task at hand.

The proposed architectures in this work are inspired by the long short-term memory (LSTM) model, first proposed by Hochreiter and Schmidhuber, to process time-sequence data in which important events are separated by time lags of unknown size. LSTM associates the time series with gates that determine whether information from early in the sequence should be forgotten and whether current information should be allowed to flow into the memory. LSTM can thereby partially address the gradient vanishing problem, and such recurrent neural models are widely used, e.g., in machine translation.

Weight-tuning neural networks. Let S denote a sequence of tokens w_1, ..., w_NS; S can be a phrase, a sentence, etc. Each word w is associated with a specific vector embedding e_w in R^K, where K denotes the dimensionality of the word embedding. We wish to compute the vector representation h_S for the sentence, based on its parse tree, using recursive neural models; parse trees are obtained from the Stanford Parser.

WNN. In a standard recursive neural network, each node in the parse tree is associated with a representation h. The basic idea of WNN is to associate each node with an additional weight variable m in the range [0, 1] to denote the importance of the current node. Technically, m is used to push the output representation of the node toward zero or to retain it, so that only relatively important information is preserved. We expect the information regarding the importance of the current node, e.g., whether it is relevant to sentiment, to be embedded in its representation h, and we use a convolution to let this type of information emerge to the surface:

m = sigmoid(U_m h + b_m)

where U_m is a matrix and b_m a bias vector producing a low-dimensional intermediate vector; this implementation can be viewed as a small neural model with latent neurons whose output is projected to a scalar in [0, 1]. Let h^output denote the output a node passes to its parent. In WNN, the output combines the current information embedded in the node's embedding with its importance:

h^output = m · h

Recalling the example, h^output_the = m_the · h_the and h^output_movie = m_movie · h_movie. If the model thinks not much relevant information is embedded in h_the, the value of m_the will be small, pushing the
output vector toward zero. The representations of the parent nodes in the example are therefore computed as follows:

h_VP = tanh(W [h^output_is ; h^output_wonderful] + b)
h_NP = tanh(W [h^output_the ; h^output_movie] + b)

where W denotes a K × 2K dimensional matrix and [· ; ·] the concatenation of two vectors. In the optimal situation, m_the and m_movie take values around zero, leading the representation of the NP node toward the zero vector.

Training WNN. For illustration purposes we use a binary classification task to show how to train WNN (the described training approach applies to other situations, e.g., classification or regression, with minor adjustments). In the binary classification task, each sequence is associated with a label t, taking value 1 if positive and 0 otherwise. As is standard, to determine the value we feed the representation h_S into a logistic regression model, p = sigmoid(v · h_S + b), where v is a K-dimensional vector and b a bias. Adding an L2 regularization part, the parameterized loss function over the training dataset is the regularized negative log-likelihood. Revisiting the example, for any parameter to optimize, the calculation of the gradient of h_S decomposes over the gated child outputs m_VP · h_VP and m_NP · h_NP, each of which is obtained from the gradients of the corresponding h and m. Note that since h_VP is embraced within m_VP, both components of the product are continuous, and the gradient can be efficiently obtained by standard backpropagation.

Figure: Illustration of WNN and BENN.

BENN. In this recursive neural network, each node is associated with a binary variable l sampled from a binary distribution parameterized by a scalar q in the fixed range [0, 1], indicating the probability that the current node passes its information to its ancestors. q is obtained in a similar way to the WNN weight, by using a convolution to project the current representation to a scalar lying within [0, 1]:

q = sigmoid(U_b h + b)

For smoothing purposes, in BENN the current node outputs to its parent the expectation of the embedding given the gates. Taking the case of the example, the vector h_NP therefore follows the distribution

h_NP = tanh(W [h_the ; h_movie] + b)  with probability q_the · q_movie
h_NP = tanh(W [0 ; h_movie] + b)      with probability (1 − q_the) · q_movie
h_NP = tanh(W [h_the ; 0] + b)        with probability q_the · (1 − q_movie)
h_NP = tanh(W [0 ; 0] + b)            with probability (1 − q_the) · (1 − q_movie)

and the value passed upward is the expectation E[h_NP] based on this distribution. Note that leaf nodes output their embeddings directly.

Training BENN. Training again uses binary sentiment classification for illustration: each sentence has a label, and the prediction is a sigmoid applied to the sentence representation. With respect to a given parameter, the derivative is given by
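A minimal sketch of the two gating mechanisms described above. For simplicity the gates are computed by a single projection to a scalar, whereas the paper allows an intermediate hidden layer; all weights here are placeholders.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def wnn_output(h, U, b):
    """WNN: scalar importance m = sigmoid(U.h + b) scales the node output."""
    m = sigmoid(float(U @ h) + b)
    return m * h

def benn_parent(h_l, h_r, q_l, q_r, W, b):
    """BENN: expected parent over the four on/off configurations of the
    children's binary gates, with P(gate on) = q."""
    zero = np.zeros_like(h_l)
    configs = [
        (q_l * q_r, h_l, h_r),
        ((1 - q_l) * q_r, zero, h_r),
        (q_l * (1 - q_r), h_l, zero),
        ((1 - q_l) * (1 - q_r), zero, zero),
    ]
    return sum(p * np.tanh(W @ np.concatenate([l, r]) + b)
               for p, l, r in configs)
```

Because both gates are smooth functions of the node representations, gradients flow through them and standard backpropagation applies.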
sentiment classification supervision sentence level word embeddings initialized dimensional embeddings borrowed glove experiment perform experiments better understand behavior proposed models compared standard neural models variations achieve implement model problems require vector representations phrases sentences sentiment analysis labels first perform experiments dataset setting binary labels top sentence constitute resource supervision note different settings described neural models adopt settings fair comparison regularization gradient descent based adagrad mini batch size tuned parameters regularization cross validation standard neural models implement two settings standard glove word embeddings directly fixed glove standard learned word embeddings treated parameters optimize framework additionally implemented recent popular variations recursive models sophisticatedly designed compositional functions including rnn proposed represents every node parse tree vector matrix given vector representation matrix representation child node child node vector representation matrix representation parent given fix word vector embeddings using senna treat matrix representations parameters optimize rntn recursive neural tensor network proposed given children nodes rntn computes parent vector following way associate sentence roles specific composition matrix report results table discussed earlier standard neural models underperform bag word models note derivations standard neural models standard learned mvrnn many parameters learn performance even worse due wnn benn although significantly output bag words svm generates better results yielding significant improvement standard neural models existing revised versions figure illustrates automatic learned muted factor regarding different nodes parse tree based recursive network observe model capable learning proper weight vocabularies assigning larger weight values important sentiment indicators wonderful silly tedious suppressing
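The RNTN parent computation mentioned above is not reproduced by the extraction. The sketch below follows the standard recursive-neural-tensor-network formulation (tanh of a per-dimension bilinear tensor term plus a linear term over the stacked children), which is what the passage appears to reference; the function name and variable shapes are assumptions.

```python
import numpy as np

def rntn_compose(a, b, V, W):
    """Parent vector of an RNTN node (sketch of the standard formulation).

    a, b : child vectors of dimension d
    V    : tensor of shape (d, 2d, 2d), one bilinear slice per output dim
    W    : matrix of shape (d, 2d), as in a plain recursive network
    """
    c = np.concatenate([a, b])                   # stacked children, (2d,)
    bilinear = np.einsum('i,kij,j->k', c, V, c)  # c^T V[k] c for each k
    return np.tanh(bilinear + W @ c)
```

Setting the tensor V to zero recovers the plain recursive composition tanh(W[a;b]), which is one way to see the RNTN as a strict generalization of the standard model.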
influence less important ones attribute better performance proposed models standard neural models automatic ability figure visual illustration automatic learning weight associated node wmm note scenario claiming generate results using proposed model sophisticated models example generate better performance proposed models achieve point wish illustrate proposed models provide promising perspective standard neural models due weight tuning property cases detailed data available capture compositionality proposed models hold promise generate compelling results illustrate socher setting sentiment analysis socher settings consider socher dataset sentiment analysis contains labels every phrase node parse tree task could considered either classification task labels based labeled dataset follow experimental protocols described word embeddings treated parameters learn rather fixed externally borrowed embeddings work consider labeling full sentences addition varieties neural models mentioned socher work also report performance recently proposed paragraph vector model first obtains sentence embeddings unsupervised manner predicting words within context feeds embeddings logistic regression model paragraph vector achieves current performance regarding socher dataset performances reported table seen proposed approach slightly underperforms current performance achieved paragraph vector outperforms versions recursive neural models indicating adding weight tuning parameters indeed leads better compositionality note comprehensive dataset rely obtain favorable word embeddings compositionality plays important role deciding whether review positive negative harnessing local word order information case neural models exhibit power capturing local evidence composition leading significantly better performance based models svm bigram naive bayes sentiment analysis imdb dataset move sentiment analysis document level use imdb dataset proposed maas dataset consists movie reviews taken imdb movie
review contains several sentences follow experimental protocols described first train word vectors using training documents next train compositional functions using labeled documents keeping word embedding fixed first obtain representations using recursive review contains multiple sentences convolute sentence representations one single vector using model svm bigram naive bayes recursive rntn paragraph vector wnn benn table performance proposed approaches compared methods stanford sentiment treebank dataset baseline performances reported model wnn benn precision table performance proposed model compared approaches binary classification imdb dataset results baselines reported note reported results underperform current performances paragraph vectors reported accuracy terms imdb dataset recurrent network cross validate parameters using labeled documents test models testing reviews results approach baselines reported table seen long documents unigram bigram perform quite well difficult beat standard neural models generate competent results compared bag word models task incorporating weighted tuning mechanism got much better performance roughly compared standard neural models although wnn benn still underperform current model paragraph vector produces better performance models sentence representations coherence evaluation sentiment analysis forces semantic perspective meaning next turn syntactic oriented task obtain representations based proposed model decide coherence given sequence sentences use corpora widely employed coherence prediction one contains reports airplane accidents national transportation safety board contains reports earthquakes associated press standardly use pairs articles one containing original document order assumed coherent used positive examples random permutation sentences document treated examples follow protocols introduced considering window approach feeding concatenation representations adjacent sentences logistic regression model classified
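The window approach described above (concatenate the representations of each adjacent sentence pair, feed the pair to a logistic-regression classifier, and score a document against its random permutations) can be sketched as follows. The product-of-window-probabilities document score and all names are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def window_coherence_score(sentence_vecs, w, b):
    """Score a document by its per-window coherence probabilities.

    Each adjacent pair of sentence representations is concatenated and
    fed to a logistic classifier; at test time a permuted document is
    expected to score lower than the original ordering.
    """
    probs = [
        sigmoid(w @ np.concatenate([s1, s2]) + b)
        for s1, s2 in zip(sentence_vecs[:-1], sentence_vecs[1:])
    ]
    return float(np.prod(probs))
```

A document counts as correctly handled when the original ordering receives a higher score than each of its random permutations, matching the evaluation rule described in the passage.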
either coherent test time assume model makes right decision original document gets score higher one random permutations current performance regarding task obtained using standard recursive network described table illustrates performance different models model generates performance among network models seen neural models perform pretty well task compared existing feature based algorithm reported results better sentence representations obtained incorporating weighted tuning properties pushing state art task accuracy model recursive accuracy table comparison different coherence models reported baseline results reprinted conclusion paper propose two revised versions neural models wnn benn obtaining higher level feature representations sequence tokens proposed framework automatically incorporates concept weight tuning svm architectures lead better representations generate better performance standard neural models multiple tasks still underperforms models cases newly proposed paragraph vector approach provides alternative existing recursive neural models representation learning note limit attentions recursive models work idea weight tuning wnn benn associates nodes neural models additional weighed variables general one extended many deep learning models minor adjustment references ronald williams david zipser learning algorithm continually running fully recurrent neural networks neural computation richard socher alex perelygin jean jason chuang christopher manning andrew christopher potts recursive deep models semantic compositionality sentiment treebank proceedings conference empirical methods natural language processing emnlp pages citeseer ozan irsoy claire cardie bidirectional recursive neural networks labeling structure arxiv preprint pang lillian lee shivakumar vaithyanathan thumbs sentiment classification using machine learning techniques proceedings conference empirical methods natural language pages association computational linguistics john duchi elad hazan 
yoram singer adaptive subgradient methods online learning stochastic optimization journal machine learning research richard socher jeffrey pennington christopher manning glove global vectors word representation ronan collobert jason weston bottou michael karlen koray kavukcuoglu pavel kuksa natural language processing almost scratch journal machine learning research tomas mikolov stefan kombrink anoop deoras lukar burget cernocky rnnlm recurrent neural network language modeling toolkit proc asru workshop pages yoshua bengio patrice simard paolo frasconi learning dependencies gradient descent difficult neural networks ieee transactions raymond mooney razvan bunescu subsequence kernels relation extraction advances neural information processing systems pages tomas mikolov martin lukas burget jan sanjeev khudanpur recurrent neural network based language model interspeech pages richard socher brody huval christopher manning andrew semantic compositionality recursive spaces proceedings joint conference empirical methods natural language processing computational natural language learning pages association computational linguistics jiwei rumeng eduard hovy recursive deep models discourse parsing richard socher jeffrey pennington eric huang andrew christopher manning recursive autoencoders predicting sentiment distributions proceedings conference empirical methods natural language processing pages association computational linguistics phil blunsom edward grefenstette nal kalchbrenner convolutional neural network modelling sentences proceedings annual meeting association computational linguistics yangfeng jacob eisenstein representation learning discourse parsing quoc tomas mikolov distributed representations sentences documents arxiv preprint misha denil alban demiraj nal kalchbrenner phil blunsom nando freitas modelling visualising summarising documents single convolutional neural network arxiv preprint mike
schuster kuldip paliwal bidirectional recurrent neural networks signal processing ieee transactions ilya sutskever james martens geoffrey hinton generating text recurrent neural networks proceedings international conference machine learning pages nal kalchbrenner edward grefenstette phil blunsom convolutional neural network modelling sentences proceedings annual meeting association computational linguistics june yoshua bengio holger schwenk morin gauvain neural probabilistic language models innovations machine learning pages springer ronan collobert jason weston unified architecture natural language processing deep neural networks multitask learning proceedings international conference machine learning pages acm andriy mnih geoffrey hinton three new graphical models statistical language modelling proceedings international conference machine learning pages acm tomas mikolov kai chen greg corrado jeffrey dean efficient estimation word representations vector space arxiv preprint sepp hochreiter schmidhuber long memory neural computation felix gers schmidhuber fred cummins learning forget continual prediction lstm neural computation martin sundermeyer tamer alkhouli joern wuebker hermann ney translation modeling bidirectional recurrent neural networks proceedings conference empirical methods natural language processing october kyunghyun cho bart van merrienboer caglar gulcehre fethi bougares holger schwenk yoshua bengio learning phrase representations using rnn statistical machine translation arxiv preprint richard socher john bauer christopher manning andrew parsing compositional vector grammars proceedings acl conference citeseer christoph goller andreas kuchler learning distributed representations backpropagation structure neural networks ieee international conference volume pages ieee richard socher christopher manning andrew learning continuous phrase representations syntactic parsing recursive neural networks proceedings deep learning unsupervised feature 
learning workshop pages andrew maas raymond daly peter pham dan huang andrew christopher potts learning word vectors sentiment analysis proceedings annual meeting association computational linguistics human language pages association computational linguistics regina barzilay lillian lee catching drift probabilistic content models applications generation summarization pages regina barzilay mirella lapata modeling local coherence approach computational linguistics annie louis ani nenkova coherence model based syntactic patterns proceedings joint conference empirical methods natural language processing computational natural language learning pages association computational linguistics jiwei eduard hovy model coherence based distributed sentence representation
nearest neighbor information estimator adaptively near minimax feb jiantao weihao yanjun february abstract analyze nearest neighbor estimator differential entropy obtain first uniform upper bound performance balls torus without assuming conditions close density could zero accompanying new minimax lower bound ball show estimator achieving minimax rates logarithmic factors without cognizance smoothness parameter ball arbitrary dimension rendering first estimator provably satisfies property introduction information theoretic measures information entropy divergence mutual information quantify amount information among random variables many applications modern machine learning tasks classification clustering feature selection information theoretic measures variants also applied several data science domains causal inference sociology computational biology estimating information theoretic measures data crucial aforementioned applications attracted many interest statistics community paper study problem estimating shannon differential entropy basis estimating information theoretic measures continuous random variables suppose observe independent identically distributed random variables drawn density function consider problem estimating differential entropy empirical observations fundamental limit estimating differential entropy given minimax risk inf supf department electrical engineering stanford university email jiantao yjhan department electrical computer engineering coordinated science laboratory university illinois email infimum taken estimators function empirical data denotes nonparametric class density functions problem differential entropy estimation investigated extensively literature discussed exist two main approaches one based kernel density estimators based nearest neighbor methods pioneered work problem differential entropy estimation lies general problem estimating nonparametric functionals unlike parametric counterparts problem estimating nonparametric 
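The minimax-risk expression above ("inf supf") is truncated by the extraction. The following LaTeX block is a hedged reconstruction of the standard definition it appears to state; symbol names follow the usual convention rather than the paper's exact notation.

```latex
% Hedged reconstruction of the minimax risk for differential entropy
% estimation over a nonparametric class \mathcal{F} of densities:
\[
  R^*(n,\mathcal{F})
  \;=\;
  \inf_{\hat h}\,\sup_{f \in \mathcal{F}}
  \mathbb{E}_f\!\left[\bigl(\hat h(X_1,\dots,X_n) - h(f)\bigr)^2\right],
\]
% where the infimum is taken over all estimators \hat h of the sample,
% and h(f) = -\int f \ln f denotes the differential entropy of f.
```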
functionals challenging even smooth functionals initial efforts focused inference linear quadratic cubic functionals gaussian white noise density models laid foundation ensuing research attempt survey extensive literature area instead refer interested reader references therein functionals entropy recent progress designing theoretically minimax optimal estimators estimators typically require knowledge smoothness parameters practical performances estimators yet known nearest neighbor differential entropy estimator estimator computed following way let distance nearest neighbor among let denote closed ball centered radius lebesgue measure differential entropy estimator defined tdt constant lebesgue measure unit ball exist intuitive explanation behind construction differential entropy estimator writing informally first approximation based law large numbers second approximation replaced nearest neighbor density estimator nearest neighbor density estimator follows intuition final additive bias correction term follows detailed analysis bias estimator become apparent later exist extensive literature analysis differential entropy estimator refer recent survey one major difficulties analyzing estimator nearest neighbor density estimator exhibits huge bias density small indeed shown bias nearest neighbor density estimator fact vanish even deteriorates gets close zero literature large collection work assume density uniformly bounded away zero others put various assumptions quantifying average close density zero paper focus removing assumptions close density zero main contribution let hds ball unit cube formally defined later definition smoothness parameter worst case risk nearest neighbor differential entropy estimator hds controlled following theorem theorem let samples density function differential entropy estimator satisfies sup constant depends estimator fact nearly minimax logarithmic factors quote following theorem theorem let samples density function exists constant 
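The estimator definition above is garbled by the extraction ("defined tdt constant lebesgue measure unit ball"). As an illustration, here is a minimal numpy sketch of the classical Kozachenko-Leonenko 1-nearest-neighbor entropy estimate that this passage describes. The log(n-1) + Euler-gamma bias correction below is one common finite-sample variant and may differ from the paper's exact correction term; all names are mine.

```python
import numpy as np
from math import gamma, log, pi

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def kl_entropy(X):
    """Kozachenko-Leonenko 1-NN differential entropy estimate, in nats.

    X is an (n, d) array of i.i.d. samples. Uses brute-force pairwise
    distances for clarity; a KD-tree would be used in practice.
    """
    n, d = X.shape
    diff = X[:, None, :] - X[None, :, :]
    D = np.sqrt((diff ** 2).sum(axis=-1))    # pairwise Euclidean distances
    np.fill_diagonal(D, np.inf)              # exclude the point itself
    rho = D.min(axis=1)                      # 1-NN distance R_i for each X_i
    V_d = pi ** (d / 2) / gamma(d / 2 + 1)   # Lebesgue measure of the unit ball
    return d * np.mean(np.log(rho)) + log(V_d) + log(n - 1) + EULER_GAMMA
```

For the uniform density on the unit interval the true differential entropy is 0, so the estimate should concentrate near zero as n grows, matching the intuition sketched in the passage (the 1-NN distance acts as a variable-width density estimate, and the additive constant corrects its bias).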
depending inf sup constant depends remark emphasize one remove condition theorem indeed ball small width density bounded away zero makes differential entropy smooth functional minimax rates theorem imply estimator achieves minimax rates logarithmic factors without knowing implies near minimax within logarithmic factors dimension expect vanilla version estimator adapt higher order smoothness since nearest neighbor density estimator viewed variable width kernel density estimator box kernel well known literature see chapter positive kernel exploit smoothness refer detailed discussion difficulty significance work obtain first uniform upper bound performance differential entropy estimator balls without assuming close density could zero emphasize assuming conditions type density bounded away zero could make problem significantly easier example density assumed satisfy constant differential entropy becomes smooth functional consequently general technique estimating smooth nonparametric functionals directly applied achieve minimax rates main technical tools enabled remove conditions close density could zero besicovith covering lemma generalized maximal inequality recent paper assumption based maximal function considered instead simply assuming density away zero paper different since completely removed assumption maximal function make result potentially easier generalize problems show entropy estimator nearly achieves minimax rates without knowing smoothness parameter functional estimation literature designing estimators theoretically proved adapt unknown levels smoothness usually achieved using lepski method known performing well general practice hand simple approach achieves rate known estimator well known exhibit excellent empirical performance existing theory yet demonstrated optimality smoothness parameter known work makes step towards closing gap provides theoretical explanation wide usage estimator practice notations sequences use notation denote exists universal 
constant depends equivalent notation equivalent write constant universal depend parameters notation means lim inf equivalent write min max organizations rest paper organized follows section formally present definition ball unit cube section prove upper bound error consider bias variance separately refer proof lower bound section discuss related work future directions definition ball order define ball unit cube first review definition ball definition ball ball hds specified parameters order smoothness dimension argument smoothness constant follows positive real uniquely represented nonnegative integer definition hds comprised times continuously differentiable functions continuous exponent constant derivatives order lkx euclidean norm differential taken point along directions paper consider functions lie balls ball compact set defined follows definition ball unit cube function said belong ball hds exists another function hds function variable hds introduced definition words standard basis definition appeared literature motivated observations sliding window kernel methods usually deal boundary effects without additional assumptions indeed near boundary sliding window kernel density estimator may significantly larger bias interior points nonparametric statistics literature usually assumed density value derivatives vanishing boundary assumptions fact weaker proof upper bound section prove hds proof consists two parts upper bound bias upper bound variance show proof bias main text proof variance appendix firstly introduce following notation probability measure specified density function torus lebesgue measure lebesgue measure unit ball euclidean space hence average density neighborhood near first state two main lemmas used later proof lemma hds dlts defined lemma hds max defined furthermore show relegate proof lemma lemma appendix investigate bias following argument reduces bias analysis function analytic problem notation simplicity introduce new random variable study 
every denote nearest neighbor distance torus first show second term universally controlled regardless smoothness indeed random variable beta shown theorem exists universal constant left show split analysis two parts section shows completes proof section shows upper bound fact expectation taken respect randomness define function sup split three terms handle three terms separately goal show every taking integral respect leads desired bound term whenever lemma dlr implies term whenever satisfies definition implies nvd nvd follows lemma case hence term lemma hence last step used fact since finally note beta beta notice hence upper bound splitting term two parts denote simplicity notation term proof upper bound shown similarly proof upper bound every therefore consider conjecture case able prove prove define function sup since supt denote max therefore last inequality uses fact since beta beta density therefore last inequality used fact digamma function hence introduce following lemma lemma let two borel measures finite bounded borel sets borel set sup constant depends dimension applying second part lemma lebesgue measure measure specified torus view function sup taking know deal recall lemma know therefore therefore provedhthat completes proof upper bound proof lemma first introduce besicovitch covering lemma plays crucial role analysis nearest neighbor methods lemma theorem besicovitch covering lemma let suppose collection balls assume bounded exist countable collection balls constant depending dimension ready prove lemma let sup let hence exists follows besicovitch lemma applying set exists set countable cardinality let therefore every discussions tempting question ask whether one close logarithmic gap theorem believe neither upper bound lower bound tight fact conjecture upper bound theorems could improved lower bound could improved believe logarithmic gap inherent estimator since general phenomenon effective sample size enlargement demonstrated shows performance 
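The Besicovitch covering lemma invoked above survives only as fragments. The LaTeX block below gives the standard bounded-overlap form of the lemma as a hedged reconstruction; the paper's precise statement (e.g., whether it is phrased via disjoint subfamilies) may differ.

```latex
% Standard form of the Besicovitch covering lemma (hedged reconstruction).
% Let A \subset \mathbb{R}^d be bounded and let \mathcal{F} be a
% collection of closed balls such that each x \in A is the center of
% some ball in \mathcal{F}. Then there exists a countable subcollection
% \mathcal{G} \subseteq \mathcal{F} with
\[
  A \;\subseteq\; \bigcup_{B \in \mathcal{G}} B
  \qquad \text{and} \qquad
  \sum_{B \in \mathcal{G}} \mathbf{1}_{B}(x) \;\le\; c(d)
  \quad \text{for every } x \in \mathbb{R}^d,
\]
% where the overlap constant c(d) depends only on the dimension d.
```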
minimax estimators samples usually estimator samples estimator could viewed kernel density estimator variable width kernel density estimator based differential entropy estimator constructed achieves upper bound hds one future interesting direction analyze minimax risk nearest neighbor kullback-leibler divergence estimator using similar technique divergence relative entropy defined dkl used asymmetric distance two distributions training objective machine learning nearest neighbor estimators divergence samples proposed recently minimax estimator divergence studied analyzing nearest neighbor divergence estimator minimax optimality possible future direction another related problem minimax risk nearest neighbor mutual information estimator mutual information defined dkl pxy widely used many machine learning data science applications mutual information written distribution purely continuous therefore one apply differential entropy estimator three times obtain estimator mutual information however nearest neighbor estimator ksg estimator proposed adaptively choose hyperparameter cancel distance terms convergence rate ksg estimator studied recently closing gap upper bound lower bound still open problem also ksg estimator proved consistent even underlying distribution purely continuous purely discrete convergence rate case still missing acknowledgement grateful biau shashank singh terence tao insightful discussions proof variance section prove var prove utilizes inequality control variance define min kxj distance nearest neighbor among samples except define let index nearest neighbor note shown section independent constant depends therefore follow analysis section var first inequality uses inequality second inequality uses inequality prove using inequality since therefore know beta equal certain constants lni smaller need prove recall defined maximal function follows sup similarly define sup therefore max similarly inequality holds replace lemma rewrite term notice implies
either moreover since lim lim lim exists every every implies every implies constant therefore implies least one following three statement holds iii therefore hence proof completed proof lemmas section provide proofs lemmas used paper proof lemma consider cases separately following definition smoothness lku xks denoting considering unit sphere rewrite integral using polar coordinate system obtain dvd dlts dvd consider case rewrite difference fixed bound using gradient theorem definition smoothness follows lkvks lkvk plug using similar method case lkvks dvd dlts dvd last inequality uses fact proof lemma consider following two cases lemma dlts hence case define nonnegativity therefore case obtain desired statement combining two cases furthermore taking applying lemma immediately obtain references battiti using mutual information selecting features supervised neural net learning neural networks ieee transactions biau luc devroye lectures nearest neighbor method springer jan beirlant edward dudewicz edward van der meulen nonparametric entropy estimation overview international journal mathematical statistical sciences lucien pascal massart estimation integral functionals density annals statistics pages peter bickel yaacov ritov estimating integrated squared density derivatives sharp best order convergence estimates indian journal statistics series pages thomas berrett richard samworth ming yuan efficient multivariate entropy estimation via neighbour distances arxiv preprint yuheng shaofeng zou yingbin liang venugopal veeravalli estimation divergence distributions ieee international symposium information theory isit pages ieee chan ebrahimi kaced liu multivariate mutual information inspired agreement proceedings ieee tony cai mark low note nonparametric estimation linear functionals annals statistics pages tony cai mark low nonquadratic estimators quadratic functional annals statistics pages sylvain delattre nicolas fournier entropy estimator journal statistical planning 
inference david donoho michael nussbaum minimax quadratic estimation quadratic functional journal complexity lawrence craig evans ronald gariepy measure theory fine properties functions crc press fidah haje hussein golubev entropy estimation method journal mathematical sciences jianqing fan estimation quadratic functionals annals statistics pages fleuret fast binary feature selection conditional mutual information journal machine learning research weihao gao sreeram kannan sewoong pramod viswanath conditional dependence via shannon capacity axioms estimators applications international conference machine learning pages weihao gao sreeram kannan sewoong pramod viswanath estimating mutual information mixtures advances neural information processing systems pages evarist richard nickl simple adaptive estimator integrated square density bernoulli pages weihao gao sewoong pramod viswanath breaking bandwidth barrier geometrical adaptive entropy estimation advances neural information processing systems pages weihao gao sewoong pramod viswanath demystifying fixed neighbor information estimators information theory isit ieee international symposium pages ieee peter hall limit theorems sums general functions mathematical proceedings cambridge philosophical society volume pages cambridge university press yanjun han jiantao jiao tsachy weissman yihong optimal rates entropy estimation lipschitz balls arxiv preprint yanjun han jiantao jiao rajarshi mukherjee tsachy weissman estimation gaussian white noise models arxiv preprint yanjun han jiantao jiao tsachy weissman minimax estimation divergences discrete distributions arxiv preprint peter hall james stephen marron estimation integrated squared density derivatives statistics probability letters peter hall sally morton estimation entropy annals institute statistical mathematics jiantao jiao yanjun han tsachy weissman minimax estimation distance ieee international symposium information theory isit pages ieee harry joe estimation 
Complete 3D Scene Parsing from Single RGBD Image

Chuhang Zou, Zhizhong, Derek Hoiem
Department of Computer Science, University of Illinois, USA

(The copyright of this document resides with its authors. It may be distributed unchanged freely in print or electronic forms.)

Abstract

Inferring the location, shape, and class of each object in a single image is an important task in computer vision. In this paper, we aim to predict a full parse of both the visible and the occluded portions of a scene from one RGBD image. We parse the scene by modeling objects as detailed CAD models with class labels and layouts as 3D planes. Such an interpretation is useful for visual reasoning and robotics, but difficult to produce due to the high degree of occlusion and the diversity of object classes. We follow recent approaches that retrieve shape candidates for each RGBD region proposal, then transfer and align the associated models to compose a scene that is consistent with observations. We propose to use support inference to aid the interpretation, along with a retrieval scheme that uses convolutional neural networks (CNNs) to classify regions and retrieve objects with similar shapes. We demonstrate the performance of our method compared with prior work on new NYUd dataset annotations in which objects are labelled with detailed shapes.

1 Introduction

In this paper, we aim to predict complete 3D models of indoor objects and layout surfaces from single RGBD images. The prediction is represented as detailed CAD objects with class labels and 3D planes for layouts. Such an interpretation is useful for robotics, graphics, and human activity interpretation, but is difficult to produce due to the high degree of occlusion and the diversity of object classes.

Approach. We propose an approach to recover a 3D model of room layout and objects from an RGBD image. A major challenge is to cope with the huge diversity of layouts and objects. Rather than restricting ourselves to a parametric model of detectable object classes, as in previous reconstruction work, our models represent every layout surface and object with a 3D mesh that approximates the original depth image under projection. We take an approach that proposes a set of potential object regions, matches each region to a similar region in training images, and transfers and aligns the associated labeled models while encouraging agreement with observations. In the matching step, we use CNNs to retrieve objects with similar class and shape, and we incorporate support estimation.
Figure 1: Overview of our approach. Given one RGBD image, our approach performs a complete 3D parse: we generate layout proposals and object proposals, predict each proposal's support height and class, retrieve similar object shapes, and select a subset of layout and object shapes to compose the scene into a complete interpretation. (The figure shows the pipeline: input RGB-D, feature extraction and region proposals, support/RGB/depth CNNs, classification CNNs and shape retrieval CNNs, shape candidates, and scene composition.)

We hypothesize, and confirm in experiments, that support height information helps in interpreting occluded objects: the full 3D extent of an occluded object can be inferred from its support height. The subset of proposed objects and layouts that best represents the overall scene is selected with an optimization method based on consistency with the observed depth, coverage, and constraints on occupied space. The flexibility of our models enables our approach (Fig. 1) to propose a large number of likely layout surfaces and objects and to compose a complete scene out of a subset of those proposals, accounting for occlusion, image appearance, depth, and layout consistency.

Our detailed labeling approach requires a dataset labeled with shapes for region matching, shape retrieval, and evaluation. We make use of the NYUd dataset, which consists of indoor scene RGBD images in which each image is segmented and labeled with object instances and categories; for each segmented object, a corresponding annotated 3D model is provided by Guo and Hoiem. This labeling provides a groundtruth scene representation: layout surfaces as planar regions, furniture as CAD exemplars, and other objects as coarser polygonal shapes. However, the polygonal shapes are too coarse to enable comparison of object shapes. We therefore extend the labeling of Guo and Hoiem with detailed annotations of object shape and scale; the annotations are labeled automatically and adjusted manually, as described in Sec. 3. We evaluate our method on the newly annotated groundtruth and measure success according to the accuracy of depth prediction of the complete layout surfaces, voxel occupancy, and accuracy of semantic segmentation. In experiments on object retrieval, we demonstrate better region classification and shape estimation compared with prior work, and our scene composition shows better semantic segmentation results and competitive 3D estimation results. Our contributions are: first, we refine
the NYUd dataset with detailed shape annotations of the objects in each image, and the labelling process performed with our approach can be utilized for labelling other RGBD scene datasets; second, we apply support inference to aid region classification; third, we use CNNs to classify regions and retrieve objects with similar shapes; and we demonstrate better performance on full scene parsing from single RGBD images compared with prior work.

This paper is an extension of our previous work, available on arXiv, which predicts a full 3D scene parse from an RGBD image. The main new contributions are the refinement of the NYUd dataset with detailed shape annotations, the use of CNNs to classify regions and retrieve object models with shapes similar to each region, and the use of support inference to aid region classification. We also provide more detailed discussion and conduct extensive experiments demonstrating qualitative and quantitative improvement.

2 Related work

The most relevant work to this paper is that of Guo et al., which predicts complete 3D models of an indoor scene from a single RGBD image and introduced the approach that we build on. Guo et al. recover complete models from a limited viewpoint; in contrast, multiview and whole-room context interpretation methods such as Zhang et al. advocate making use of panoramas. We introduce other related topics as follows.

Inferring 3D shape from a single RGB image: within RGB images, Lim et al. find furniture instances, and Aubry et al. recognize chairs using part detectors. In RGBD images, Song and Xiao search for main furniture with sliding 3D shapes, enumerating possible poses; similarly, Gupta et al. fit shape models of known classes and poses to improve object detection. Our approach finds an approximate shape for each object and the layout of the scene: we take a retrieval-and-transfer approach that transfers a similar shape from a training region to the query region.

Semantic segmentation of single RGBD images: Silberman et al. use image and depth cues to jointly segment objects into categories and infer support relations. Gupta et al. apply generic features to assign region labels; follow-up work encodes the depth as an HHA descriptor (height above ground, angle with gravity, horizontal disparity) and uses a CNN structure for better region classification. Long et al. introduce the fully convolutional network structure for learning better features. Our method makes use of RGB and HHA features to categorize objects into a larger variety of classes, distinguishing infrequent objects in scenes rather than restricting ourselves
to a parametric model of detectable objects. Though semantic segmentation is not the main purpose of our method, we can infer region labels by projecting our 3D scene models back into the images.

Support height estimation: Guo and Hoiem localize the height and full extent of support surfaces from one RGBD image. In addition, object height priors have been shown to be crucial geometric cues for better object detection, and Deng et al. apply height above ground to distinguish objects. We propose to use an object's support height to aid region class interpretation: it helps the classification of occluded regions and distinguishes objects that appear at different height levels (e.g., a chair on the floor vs. an alarm clock on a table).

3 Detailed 3D annotations for indoor scenes

We conduct experiments on a dataset that provides a complete labeling of objects and layouts in indoor images. For each object and layout segment, a corresponding annotated 3D model is provided by Guo and Hoiem. These annotations use CAD models to represent common categories of furniture and extruded polygons to label the other objects. The polygonal models provide a good approximation of object extent but are often poor representations of object shape, as shown in Fig. 2. We therefore extend the NYUd dataset by replacing the extruded polygons with CAD models collected from ShapeNet and ModelNet; to align the datasets, we manually map the model class labels to the object labels in the NYUd dataset. The shape retrieval and alignment process is performed automatically and adjusted manually, as follows.

Figure 2: Samples of our object annotations on NYUd dataset images. Each group shows the input RGB image, the annotations of Guo and Hoiem, and our refined detailed annotations; our annotations of the scene are much more detailed in object shape and scale.

Coarse alignment. For each groundtruth region in the NYUd dataset, we retrieve the 3D models from our set of collected models with the same class label. We also include the region's original coarse annotation from Guo and Hoiem in the model set, to preserve the original labeling when none of the provided CAD models better fits the depth. We initialize the model's location in world coordinates at the center of the annotation labeled by Guo and Hoiem and resize it to the height of that annotation.

Fine alignment. Next, we align the retrieved object model to fit the available depth map of the corresponding region in the target scene. The initial alignment is often not correct in scale or orientation: a region of a chair often resembles
a rotated chair and needs to be rotated. We found that using iterative closest point (ICP) to solve for all parameters does not yield good results. Instead, we enumerate orientations in view and allow a minor scale revision by a fixed ratio; for each such initialization of scale and rotation, we perform ICP to solve for translation only, and pick the best ICP result based on the following cost function:

    FittingCost(S, R, T) = C_depth + C_missing + C_occ,

where (S, R, T) represents scale, rotation, and translation. Let M be the mask of the rendered aligned model, let the rendered depth of the object and the observed depth at each pixel be compared elementwise. The first term encourages depth similarity with the groundtruth RGBD region; the second term penalizes pixels of the proposed region that are not rendered; and the third term penalizes pixels where the rendered model is closer than the observed depth image, i.e., where the model sticks out into space known to be empty. Based on the fitting cost, the algorithm picks the model with the best translation, orientation, and scale. We set the term weights c_depth, c_missing, c_occ based on grid search on the validation set. For efficiency, we first obtain the top models based on the fitting cost maximized over the initial orientations without ICP; then, for those models, we solve for the best translation, scale, and rotation with ICP, and finally select the aligned model with the lowest fitting cost.

The automatic fitting may fail due to high occlusion or missing depth values, so we manually conduct a check and refine bad-fitting models; this affects only a small portion of the models. Using a GUI, an annotator checks each automatically produced shape; if the region result is not satisfactory, the user compares the top model fits, and if none is a good match, the fitting optimization is applied to the original polygonal labeling. This helps ensure that the detailed shape annotations are a strict improvement over the original coarse annotations.

Validation. Figure 3 reports the cumulative relative error of the rendered depth of our detailed annotations and of the annotations of Guo and Hoiem, compared with the groundtruth sensor depth on the NYUd dataset. The relative error is computed over the RGBD images of the dataset: for each pixel of an image with a groundtruth depth from the sensor and a rendered depth from the labeled annotation, we compare the two. Our annotations have more points with low relative depth error and thus achieve better modeling of the depth image.
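As a concrete illustration, the three-term fitting cost could be computed along the following lines. This is a sketch under stated assumptions: the truncated-L1 depth term, the threshold `tau`, and the default weights are illustrative choices, not the paper's exact ones.

```python
import numpy as np

def fitting_cost(rendered_depth, observed_depth, region_mask,
                 c_depth=1.0, c_missing=1.0, c_occ=1.0, tau=0.3):
    """Illustrative three-term fitting cost for an aligned model.

    rendered_depth: HxW depth of the rendered model (np.inf where not rendered)
    observed_depth: HxW sensor depth (np.nan where missing)
    region_mask:    HxW boolean mask of the proposed region
    """
    rendered = np.isfinite(rendered_depth)
    observed = np.isfinite(observed_depth)

    # Term 1: depth similarity where both the model and the sensor give depth
    both = rendered & observed & region_mask
    err = np.abs(rendered_depth[both] - observed_depth[both])
    c1 = c_depth * np.minimum(err, tau).sum()          # truncated L1 (assumed)

    # Term 2: region pixels that the rendered model fails to cover
    c2 = c_missing * np.count_nonzero(region_mask & ~rendered)

    # Term 3: the model sticks out into space known to be empty
    # (rendered closer than the observed depth, outside the region)
    stick = rendered & observed & ~region_mask \
            & (rendered_depth < observed_depth - tau)
    c3 = c_occ * np.count_nonzero(stick)

    return c1 + c2 + c3
```

Minimizing this cost over the enumerated (scale, rotation) initializations with ICP-solved translations reproduces the alignment loop described above.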
4 Generating object candidates

Given an RGBD image as input, we aim to find a set of layout and object models that fit the RGB and depth observations and provide a likely explanation of the unobserved portion of the scene. To produce object candidate regions, we use the method of Gupta et al. for RGBD images and extract the top-ranked region proposals for each image. Our experiments show that this region retrieval is more effective than the method based on Prim's algorithm used in previous work. The likely object categories and shapes are then assigned to each candidate region.

Figure 4: CNN for predicting the candidate object support height. The inputs are crops of the depth and height maps of the region proposal and of the region extended from the bottom of the proposal to the floor; ReLU follows each convolutional and max-pooling layer, local response normalization is performed after the first, and a dropout layer is added after the fully connected layer during training.

Predicting region support height and class. We train and use CNN networks to predict the object category and support height for each region, as shown in Figs. 4 and 5. The support height is used as a feature for object classification. We also train and use a CNN with a siamese network design to find the training object with the most similar shape, based on region depth features.

Support height prediction. To predict the support height of an object, with the aim of better predicting its class and position, we first find candidate support heights using the method of Guo and Hoiem, then use a CNN to estimate how likely each candidate is given the object region. The network input is based on crops of the depth maps and height maps of the region proposal and of the region that extends from the bottom of the region proposal to the estimated floor position. The support height network also predicts whether the object is supported from behind. To create the feature vector, we subtract the candidate support height from the heights in the cropped images and concatenate the four crops (patches), as illustrated in Fig. 4. At test time, we identify the candidate support height closest to the prediction. We use support height relative to the camera height, which leads to slightly better performance than using support height relative to the estimated ground, likely because the dataset images are taken at consistent camera heights while the estimated ground height may be mistaken.

Categorization. The classification network gets as input
the region proposal, its support height and support type, along with CNN features of RGB and depth, and predicts the probability of each class, as shown in Fig. 5.

Figure 5: CNNs for the region classification network, which predicts class probabilities (left), and for similar shape retrieval (right). Both models take support, RGB, and depth CNN features of the region proposal (and, for retrieval, of an exemplar RGBD region); in each model we perform ReLU and dropout after the first fully connected layer; classification ends in a softmax, and shape retrieval compares embedded features by cosine distance.

We classify regions into the various classes and shapes of indoor scenes: the common classes with at least a minimum number of training samples, while less common objects are classified as "prop", "furniture", or "structure" based on the rules of Silberman et al. In addition, we identify whether a region proposal is representative of the object shape: a piece (e.g., a chair leg region), the whole (e.g., a mostly visible chair region), or a bad region of the class. The classifier inputs for support height and type come directly from the predictions of the support height prediction network; to create the classification features, we copy the two predicted support values many times, a useful trick to reduce sensitivity to local optima for important features, and concatenate them with the region proposal's RGB and HHA features of Gupta et al. for both the bounding box and the masked region. Our experiments show that using the predicted support type and support height improves classification accuracy, with a larger improvement for occluded objects.

Predicting region shape. Using a siamese network (Fig. 5), we learn a similarity measure that predicts the shape similarity of the corresponding objects. The network embeds the RGB and HHA features used in the classification network into a space where cosine distance correlates with shape similarity. In training, we use the 3D distance between mesh pairs as the groundtruth similarity and train the network to penalize errors in shape similarity orderings: for each region pair, the shape similarity score is compared with that of the next pair among randomly sampled pairs in the batch of the current epoch, and the network is penalized when the ordering disagrees with the groundtruth similarity. We attempted sharing the embedding weights with the classification network but observed a drop in classification performance; we also found the predicted class probability unhelpful for predicting shape similarity.

Candidate region selection. We apply our retrieval scheme to the region proposals of each image, obtaining a shape similarity ranking against the training samples of each class and a class probability for each region proposal.
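Once the siamese embedding is trained, retrieval reduces to ranking training exemplars by cosine similarity to the query embedding. A minimal sketch (function names and the brute-force scan are illustrative, not the paper's implementation):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve_similar(query_emb, exemplar_embs, exemplar_ids, k=5):
    """Rank training exemplars by cosine similarity to the query embedding
    and return the ids of the top-k most similar shapes."""
    sims = [cosine_similarity(query_emb, e) for e in exemplar_embs]
    order = np.argsort(sims)[::-1]          # most similar first
    return [exemplar_ids[i] for i in order[:k]]
```

In the full pipeline, the exemplar pool would be restricted to the training samples of the classes predicted for the region proposal.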
To reduce the number of retrieved candidates for scene interpretation (Sec. 5), we first reduce the number of region proposals using non-maximum suppression based on class probability and then threshold on the class probability; the threshold is set to obtain a set of region proposals per image on average. We select the two most probable classes for each remaining region proposal and select the five most similar shapes for each class, leading to ten shape candidates per region proposal.

To refine the retrieved shapes, we align each shape candidate to the target scene by translating the model using the offset between the 3D depth point mass centers of the region proposal and the retrieved region; we then perform ICP using a grid of initial values for rotation and scale, and pick the best one per shape based on the fitting energy, with term weights c_missing, c_depth, c_occ set by grid search on the validation set. We tried using the estimated support height of the region proposal when aligning the retrieved shape models but observed worse performance in the scene composition result: a relatively small error in object support height estimation can cause a larger error in fitting. We finally select the two most promising shape candidates based on the following energy function, which combines the fitting energy with weighted log class-probability terms.
similarity network translate model origin resize cuboid computing distance use adam train network learning rate network support height prediction classification siamese network scene composition candidate shape given set retrieved candidate shapes layout candidates guo select subset candidates closely reproduce original depth image rendered correspond minimally overlapping aligned models composition hard high degree occlusions large amount objects scene apply method proposed guo perform scene composition leads final scene interpretation experiments experiments setting evaluate method detailed annotated nyud dataset also report result guo new annotations gupta tune parameters validation set training train set report result test set training train validation set zou hoiem complete scene parsing single rgbd image table classification accuracy groundtruth regions test set compare two methods classification network estimating support height compute average accuracy class average precision based predicted probability accuracy averaged instances classification networks trained evaluated times means standard deviations reflecting variation due randomness learning reported common object class results also listed bold numbers signify better performance method avg per class avg precision avg instance picture chair cabinet pillow bottle books paper table box window door sofa bag lamp clothes support height support height table quantitative evaluation retrieval method compared guo method method guo avg class accuracy top top top top avg iou top top avg surface distance top top top evaluation region classification shape retrieval classification report region classification table classification accuracy accuracy groundtruth regions test set different occlusion ratios classification network shown table groundtruth regions test set overall including support height prediction imocclusion ratio prove classification results certain classes support height objects often appear floor 
level chair support height desk better classification accuracy given object height objects often appear several heights picture benefit table shows per instance classification accuracy different occlusion ratios groundtruth regions test set improvement classification accuracy larger highly occluded area conforms claim estimating object support height help classify occluded regions retrieval evaluate candidate shape retrieval method compared guo given groundtruth regions dataset shown table since use groundtuth regions perform candidate selection sec evaluation evaluate top retrieved class accuracy top retrieved shape similarity based shape intersect union iou distance experiment set avoid rotation ambiguity shape similarity measurement rotate retrieved object find best shape similarity score groundtruth shape retrieval method outperforms evaluation criteria evaluation scene composition semantic segmentation evaluate semantic segmentation object classes layout classes wall ceiling floor rendered image scene composition result compare method guo automatically generated region proposals groundtruth regions table shows average class accuracy avacc average class accuracy weighted frequency fwavacc average pixel zou hoiem complete scene parsing single rgbd image table results semantic segmentation automatic region proposals groundtruth regions class support accuracy minus support accuracy automatic region proposals overall results method mean coverage weighted mean coverage unweighted guo support support detailed groundtruth labeling table results instance segmentation automatic region proposals groundtruth regions method mean coverage weighted mean coverage unweighted guo support support detailed groundtruth labeling curacy pixacc method better semantic segmentation result report result support height prediction result given groundtruth support height upper bound see improvements compared guo due classification method cnn cca region proposal method mcg gupta prim guo 
also report semantic segmentation compared pixacc avacc fwavacc note method focuses predicting geometry rather semantic segmentation instance segmentation report instance segmentation result experiment setting semantic segmentation evaluation follows protocol rmrc method outperforms slightly note lower results meancovu caused large diversity classes less frequency estimation evaluate estimation guo layout depth prediction freespace occupancy estimation shown fig method competitive results estimation compared qualitative results sample qualitative results shown fig fig method better region classification shape estimation property results slightly better scene composition result failure cases caused bad pruning region proposals last two column fig confusion similar class top right fig conclusions paper predict full parse visible occluded portions scene one rgbd image propose use support inference aid interpretation protable results layout freespace occupancy estimation method guo layout pixel error overall visible occluded layout depth error overall visible occluded freespace precision recall precision occupancy recall zou hoiem complete scene parsing single rgbd image input rgb guo input rgb guo figure qualitative results scene composition automatic region proposals randomly sample images top first four rows medium row worst last four rows based semantic segmentation accuracy zou hoiem complete scene parsing single rgbd image input rgb guo input rgb guo figure qualitative results scene composition given groundtruth labeling region proposals randomly sample images top first two rows medium row worst last two rows based semantic segmentation accuracy pose retrieval scheme uses cnns classify regions find objects similar shapes experiments demonstrate better performance method semantic segmentation instance segmentation competitive results scene estimation acknowledgements research supported part onr muri grant onr muri grant thank david forsyth insightful comments 
discussion.

References

M. Aubry, D. Maturana, A. Efros, B. Russell, and J. Sivic. Seeing 3D chairs: exemplar part-based 2D-3D alignment using a large dataset of CAD models. In CVPR.
Angel Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, Jianxiong Xiao, and Fisher Yu. ShapeNet: an information-rich 3D model repository. Technical report, Stanford University, Princeton University, Toyota Technological Institute at Chicago.
Zhuo Deng, Sinisa Todorovic, and Longin Jan Latecki. Semantic segmentation of RGBD images with mutex constraints. In Proceedings of the IEEE International Conference on Computer Vision.
Ruiqi Guo and Derek Hoiem. Support surface prediction in indoor scenes. In ICCV.
Ruiqi Guo, Chuhang Zou, and Derek Hoiem. Predicting complete 3D models of indoor scenes. arXiv preprint.
Saurabh Gupta, Pablo Arbelaez, and Jitendra Malik. Perceptual organization and recognition of indoor scenes from RGB-D images. In CVPR.
Saurabh Gupta, Ross Girshick, Pablo Arbelaez, and Jitendra Malik. Learning rich features from RGB-D images for object detection and segmentation. In European Conference on Computer Vision. Springer.
Saurabh Gupta, Pablo Arbelaez, Ross Girshick, and Jitendra Malik. Aligning 3D models to RGB-D images of cluttered scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
Bharath Hariharan, Pablo Arbelaez, Ross Girshick, and Jitendra Malik. Simultaneous detection and segmentation. In ECCV. Springer.
Derek Hoiem, Alexei Efros, and Martial Hebert. Putting objects in perspective. International Journal of Computer Vision.
Diederik Kingma and Jimmy Ba. Adam: a method for stochastic optimization. arXiv preprint.
Joseph Lim, Hamed Pirsiavash, and Antonio Torralba. Parsing IKEA objects: fine pose estimation. In ICCV.
Joseph Lim, Aditya Khosla, and Antonio Torralba. FPM: fine pose parts-based model with 3D CAD models. In European Conference on Computer Vision. Springer.
Dahua Lin, Sanja Fidler, and Raquel Urtasun. Holistic scene understanding for 3D object detection with RGBD cameras. In ICCV.
Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
Santiago Manen, Matthieu Guillaumin,
and Luc Van Gool. Prime object proposals with randomized Prim's algorithm. In Proceedings of the IEEE International Conference on Computer Vision.
Jason Rock, Tanmay Gupta, Justin Thorsen, JunYoung Gwak, Daeyun Shin, and Derek Hoiem. Completing 3D object shape from one depth image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
Nathan Silberman, Derek Hoiem, Pushmeet Kohli, and Rob Fergus. Indoor segmentation and support inference from RGBD images. In ECCV.
Shuran Song and Jianxiong Xiao. Sliding Shapes for 3D object detection in depth images. In European Conference on Computer Vision. Springer.
R. Urtasun, R. Fergus, D. Hoiem, A. Torralba, A. Geiger, P. Lenz, N. Silberman, J. Xiao, and S. Fidler. Reconstruction meets recognition challenge.
Stefan Walk, Konrad Schindler, and Bernt Schiele. Disparity statistics for pedestrian detection: combining appearance, motion and stereo. In ECCV. Springer.
Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and Jianxiong Xiao. 3D ShapeNets: a deep representation for volumetric shapes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
Wen-tau Yih, Kristina Toutanova, John Platt, and Christopher Meek. Learning discriminative projections for text similarity measures. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning. Association for Computational Linguistics.
Yinda Zhang, Shuran Song, Ping Tan, and Jianxiong Xiao. PanoContext: a whole-room 3D context model for panoramic scene understanding. In European Conference on Computer Vision. Springer.
Hashing over Predicted Future Frames for Informed Exploration of Deep Reinforcement Learning

Haiyan Yin, Sinno Jialin Pan
Nanyang Technological University, Singapore
haiyanyin, sinnopan

Abstract

In reinforcement learning (RL) tasks, an efficient exploration mechanism should be able to encourage an agent to take actions that lead to less frequent states, which may yield a higher accumulative future return. However, both knowing about the future and evaluating the frequentness of states are non-trivial tasks, especially for deep RL domains, where a state is represented by high-dimensional image frames. In this paper, we propose a novel informed exploration framework for deep RL tasks, in which we build the capability for the agent to predict over future transitions and evaluate the frequentness of the predicted future frames in a meaningful manner. To this end, we train a deep prediction model to generate future frames given a state-action pair, and a convolutional autoencoder model to generate deep features for conducting hashing over the seen frames. In addition, to utilize the counts derived from the seen frames to evaluate the frequentness of the predicted frames, we tackle the challenge of making the hash codes for the predicted future frames match those of the corresponding seen frames. In this way, we can derive a reliable metric for evaluating the novelty of the future direction pointed to by each action, and hence inform the agent to explore the least frequent one. We use Atari games as the testing environment and demonstrate that the proposed framework achieves significant performance gain over a state-of-the-art informed exploration approach in most of the domains.

1 Introduction

Reinforcement learning (RL) involves an agent progressively interacting with an initially unknown environment in order to learn an optimal policy, with the objective of maximizing the cumulative rewards collected from the environment. Throughout the learning process, the agent alternates between two primal behaviors: exploration, where it tries novel states that could potentially lead to high future rewards, and exploitation, where it performs greedily according to the knowledge learned in the past. While exploitation of learned knowledge has been well studied, how to efficiently explore the state space remains a critical challenge, especially for deep RL domains, where states are represented by high-dimensional sensory inputs, e.g., image pixels, and are often continuous; the state space for deep RL is thus huge, and it is often intractable to perform exploration by searching over it. Most existing deep RL works adopt a simple exploration heuristic, the ε-greedy
strategy, in which the agent takes a random action with some probability, via uniform sampling in discrete-action domains or by corrupting the action with Gaussian noise in continuous-action domains. In this way, the agent explores the state space without being conscious of it, i.e., without incorporating meaningful knowledge about the environment. The ε-greedy exploration heuristic turns out to work well in simple problem domains but fails to handle more challenging domains with extremely sparse rewards, an exponentially increasing state space, or a large action space.

Unlike the unconscious exploratory behavior of agents using the ε-greedy strategy, when human beings intend to explore an unfamiliar task domain, one often actively applies domain knowledge about the task, accounts for the part of the state space that has been less frequently visited, and intentionally tries the actions that lead to novel states. In this work, we aim to mimic such exploratory behaviors to improve upon the ε-greedy strategy of random action selection and come up with an efficient informed exploration framework for deep RL agents. On one hand, we develop the agent's knowledge about the environment to make it able to predict future trajectories; on the other hand, we integrate the developed knowledge with hashing techniques over the state space, in order to make the agent able to realistically evaluate the novelty of the predicted future trajectories. Specifically, in the proposed informed exploration framework, we first train a prediction model to predict future frames given a state-action pair. Second, to perform hashing over the state space seen by the agent, we train a deep convolutional autoencoder to generate features for each state and apply locality-sensitive hashing (LSH) over the state features to generate binary codes representing each state. However, the learned hashing function is for counting the actually seen states, whereas we need to query the counts of predicted future frames to compute novelty. Hence, we introduce an additional training phase for the autoencoder to match the hash codes of predicted frames to those of the corresponding actually seen frames. In this way, we are able to utilize the environment knowledge and the hashing techniques over seen states to generate a reliable novelty evaluation metric for the future direction pointed to by each action given a state.

Conference on Neural Information Processing Systems (NIPS), Long Beach, USA.

2 Related work

Recently, works on enhancing the exploration behavior of deep RL agents have demonstrated great potential in improving performance in various deep RL task domains. Asynchronous training techniques have been adopted, where multiple agents
are created to perform learning and update the model parameters separately in the Atari domain; an ensemble has been trained to reduce the bias of the approximated values, which increases the exploration depth in the Atari domain; an exploration strategy based on maximizing the information gain about the agent's belief of the environment dynamics has been adopted for tasks with continuous states and actions; the probability that a learned dynamics model assigns to the actual trajectory has been used to measure the surprise of an experience; and the novelty of states has been measured based on count-based mechanisms, with reward shaping performed by adding a reward bonus term computed from the counts. In all these approaches, the exploration strategy is incorporated into either the function approximation or the optimization process, and the agent still needs to randomly choose an action to explore, without relying on the knowledge of a model. In this work, we aim to conduct informed exploration that utilizes such knowledge to derive a deterministic choice of the action for the agent to explore.

Exploration with deep prediction models. Recent works aiming to incentivize exploration via deep prediction models have shown promising results in deep RL domains. An autoencoder model has been trained jointly with the policy model, with the reconstruction error of the autoencoder used to determine the rareness of a state. A PixelCNN has been trained jointly with the policy model as a density model over states, with the prediction gain of a state measured as the difference in the state density given by the PixelCNN before and after observing the state. In those approaches, the novelty of a state is measured by the loss or output of another model rather than by exact statistics; in this work, we use the counts derived from hashing over the state space to reliably infer the novelty of a state. The most related work to ours trains a prediction model to predict the future frame given a state-action pair and computes the Gaussian kernel distance between the predicted future frame and a set of history frames, informing the agent to take the action that leads to the frame most dissimilar to a window of recent frames. In this work, we use the same architecture to construct the prediction model; however, we involve a hashing mechanism to count over the state space and determine the novelty of a state based on counting statistics, and we enable the model to query the exact counting statistics of predicted frames to compute the novelty of each action direction based on the prediction.

Hashing in deep RL domains. Running RL algorithms over discretized features can yield faster learning and promising performance, as shown with latent features learned by an autoencoder trained
in an unsupervised manner, which shows great promise for efficiently discretizing the state space. The state space of the Atari domain has first been discretized by using latent features derived from an autoencoder model, with hashing performed to encourage exploration by computing a reward bonus term of the form β/√n(s). Like that work, we also introduce hashing over the state space based on latent features trained with a deep autoencoder model, but our exploration mechanism is significantly different. First, that work counts only the actually seen image frames, whereas we also query the hash codes of predicted frames. Second, in that approach a reward bonus is added to the function approximation target, so the counts of future states influence previous states backwards via the Bellman equation, whereas in our work the count is used for informed exploration and directly influences the action choice rather than the approximated value function.

3 Methodology

Notations. In this paper, we consider a discounted Markov decision process (MDP) with discrete actions, formally defined by the tuple (S, A, P, r, γ): S is the set of states, which could be continuous; A is the set of actions; P(s′ | s, a) is the state transition probability distribution, specifying the probability of transiting to state s′ when issuing action a in state s; r(s, a) is the reward function mapping a state-action pair to a reward; and γ is the discount factor. The goal of the agent is to learn a policy that maximizes the expected cumulative future rewards collected under that policy. In the context of deep RL, at each step t the agent receives a state observation s_t, in which a number of consequent image frames of fixed dimension represent the state; the agent then selects an action a_t among the possible choices and receives a reward r_t.

3.1 Informed exploration framework

We propose an informed exploration framework to mimic the exploratory behavior of human beings in an unfamiliar task domain. Generally, the agent no longer randomly selects an action to explore without incorporating domain knowledge; instead, we aim to let the agent intentionally select the action that leads to the least frequent future states and thus explore the state space in an informed and deterministic manner. To this end, we build the capability for the agent to perform the following two tasks: predicting future transitions, and evaluating the visiting frequency of the predicted future frames.

Figure 1: The deep neural network architectures adopted for informed exploration: (left) the action-conditional prediction model for predicting future transition frames; (right) the autoencoder model for conducting hashing over the state space.

3.2 Learning the transition model

Prediction network architecture. The action-conditional
prediction shown figure left specific train deep prediction model make agent able predict future transitions given pair state input set recent image frames action input represented vector number actions task domain predict new state model predicts one single frame time denoted new state formed concatenating predicted new frame recent frames adopt transformation proposed form joint feature state input action input specifically state input first passed three stacked convolutional layers form feature vector hst state feature hst action feature perform linear transformation multiplying corresponding transformation matrix wts wta linear transformation features shaped dimensionality features state action linear transformation performs multiplicative interaction form joint feature follows wts hst wta hat afterwards joint feature passed stacked deconvolutional layers sigmoid layer form final prediction output predict multiple future steps prediction model progressively composes new state using prediction result predict transition hashing state space autoencoder lsh evaluate novelty state adopt hashing model count state space first train autoencoder model frames unsupervised manner reconstruction loss follows lrec log reconstructed pixel row column architecture autoencoder model shown figure right specific convolutional layer followed rectifier linear unit relu layer max pooling layer kernel size discretize state space hash last frame state adopt output last relu layer encoder state features denote corresponding feature map generates feature vector state discretize state feature hashing lsh adopted upon end projection matrix randomly initialized entries drawn standard gaussian projecting feature sign outputs form binary code introduced discretization scheme able count state space problem domain process hash table created count state denoted stored queried updated hash table overall process counting state expressed following formulas sgn azt matching prediction reality 
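The discretization step described above — passing a state feature through a fixed random Gaussian projection, taking signs to get a binary code, and counting codes in a hash table — can be sketched as follows. This is a minimal sketch; the class and parameter names (`SimHashCounter`, `k`, `dim`) are illustrative, not from the paper.

```python
import numpy as np
from collections import defaultdict

class SimHashCounter:
    """Counts state visits via locality-sensitive hashing (SimHash).

    A feature vector z is mapped to a k-bit binary code
    phi(z) = sgn(A z), where A is a fixed random projection with
    entries drawn from a standard Gaussian; a hash table keeps a
    visit count per code.
    """

    def __init__(self, dim, k=32, seed=0):
        rng = np.random.default_rng(seed)
        self.A = rng.standard_normal((k, dim))  # entries ~ N(0, 1)
        self.table = defaultdict(int)

    def code(self, z):
        # sign of the projection -> tuple of bits usable as a dict key
        return tuple((self.A @ np.asarray(z) > 0).astype(np.uint8))

    def update(self, z):
        # seeing a state increments the count of its hash code
        self.table[self.code(z)] += 1

    def count(self, z):
        # query the visit count of the code (0 if never seen)
        return self.table[self.code(z)]
```

Because nearby feature vectors tend to collide into the same code, this scheme provides the degree of generalization the paper asks of the hashing mechanism, while dissimilar frames are distinguished.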
derive novelty predicted frames updating hash table seen frames need match predictions realities make hash codes predicted frames corresponding seen frames training end introduce additional training phase autoencoder model make hash codes derived feature vectors predicted frames seen fames need close introduce additional loss function pair seen frame predicted frame follows lmat finally combing define following overall loss function lrec lrec parameter autoencoder note even though prediction model could generate almost identical frames training autoencoder reconstruction loss may lead distinct state codes task domains details shown section therefore effort matching codes necessary however matching state code guaranteeing satisfying reconstruction behavior extremely challenging fine tuning autoencoder fully trained lrec code matching loss lmat would fast disrupt reconstruction behavior code loss could decrease expected level training autoencoder scratch lrec lmat also difficult lmat initially low lrec high network hardly find direction consistently decrease lrec imbalance therefore work propose train autoencoder two phases first phase uses lrec train convergence second phase uses composed loss function proposed address requirement matching prediction reality computing novelty states prediction model autoencoder model trained agent could perform informed exploration following manner step agent performs exploration probability less perform greedy action selection otherwise given state performing exploration agent predicts future trajectories length possible actions formally novelty score action given state denoted computed count future state derived predicted frames predefined prediction length discount rate evaluating novelty possible actions agent selects one highest novelty score explore overall policy agent proposed informed exploration strategy defined arg min arg max function random value sampled uniform experiments empirical evaluation use arcade learning 
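The informed action selection described above — with probability epsilon, roll out each action with the prediction model, score the predicted trajectory by how rarely its hashed states have been visited, and take the most novel action, otherwise act greedily — can be sketched as below. The exact bonus shape (a discounted 1/sqrt(count + 1) here) is our assumption; the paper only specifies that novelty is derived from counts of the predicted future states with a discount rate over a fixed prediction length.

```python
import numpy as np

def novelty_score(state, action, predict, counter, horizon=3, discount=0.9):
    """Discounted novelty of the trajectory predicted for `action`.

    predict(state, action) -> next predicted state;
    counter.count(state)   -> visit count of the hashed state.
    The 1/sqrt(n + 1) bonus shape is an assumption, not the paper's.
    """
    score, s = 0.0, state
    for t in range(1, horizon + 1):
        s = predict(s, action)
        score += discount ** t / np.sqrt(counter.count(s) + 1.0)
    return score

def informed_action(state, actions, q_values, predict, counter, eps, rng):
    if rng.random() < eps:  # explore: pick the most novel predicted future
        scores = [novelty_score(state, a, predict, counter) for a in actions]
        return actions[int(np.argmax(scores))]
    return actions[int(np.argmax(q_values))]  # otherwise exploit greedily
```

Note the exploration branch is deterministic given the model: the agent no longer samples a random action but follows the predicted least-frequent direction.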
environment ale consists atari video games testing domain choose representative games require significant exploration learn policy breakout freeway frostbite among five games frostbite large action space consists full set actions breakout state distribution changes significantly ability policy network changes others sparse rewards tasks use state representation concatenates consequent image frames size evaluation prediction model architecture prediction model identical one shown figure left train prediction model create training dataset consists transition records generated fully trained dqn agent performing uniform random action selection set equal training adopt adam optimization algorithm use learning rate size moreover discount gradient scale multiplying gradient value state input normalized dividing pixel values show pixel prediction loss mean square error mse future prediction table task domains prediction errors within small scale prediction error increases increase prediction length demonstrate trained prediction models able generate realistic future frames visualized close frames results shown figure game breakout freeway frostbite table prediction loss mse trained prediction models evaluation hashing autoencoder lsh architecture autoencoder model identical shown figure right autoencoder trained dataset collected identical manner prediction model trained two phases first phase trained reconstruction loss use adam optimization algorithm learning rate size discount gradient multiplying second phase train autoencoder based loss second phase use adam optimization algorithm learning rate size value discount gradient multiplying gradient value figure prediction reconstruction result task domain task present sets frames set consists four frames sets frames domain put row four frames set organized follows frame seen agent predicted frame prediction model reconstruction autoencoder trained reconstruction loss reconstruction autoencoder trained second phase trained 
reconstruction loss code matching loss overall extremely challenging match state codes predicted frames corresponding seen frames maintaining satisfying reconstruction performance demonstrate figure showing code loss measured terms number mismatch binary codes pair predicted frame corresponding frame presented result derived averaging pairs codes first result shows without second phase impossible perform hashing autoencoder trained reconstruction loss since average code losses domains distinct hash codes count values returned querying hash table meaningless second result shows training second phase code loss significantly reduced freeway frostbite pacman freeway frostbite pacman code loss reconstruction loss reconstruction loss mse code error num bits figure comparison code loss reconstruction loss mse autoencoder model training phase phase also show reconstruction errors measured terms mse training two phases domain figure incorporating code matching loss reconstruction behavior autoencoder receives slightly negative effect comparison frame reconstruction effect training two phases shown figure shown training match state codes reconstructed frames slightly blurred still able reflect essential features problem domain except moreover use breakout illustrative example demonstrate presented hashing framework generate meaningful hash codes predicted future frames figure given frame show predicted frames length taking action found trajectories ball positions predicted regardless action choice different actions lead different board positions hash codes first two actions fire lead little change frames almost future frames hashed code last two actions right left lead significant changes board position codes future frames much distinct result also shows hash code less sensitive ball position compared board position game domain figure left predicted future trajectories action breakout row first frame frame following five frames predicted future trajectories length row agent 
takes one following actions constantly fire right left right hash codes frames row ordered manner save space four binary codes grouped one hex code range color map normalized linearly hex value evaluation informed exploration framework evaluate efficiency proposed informed exploration framework integrate dqn algorithm compare two baselines dqn performs uniform random choice action exploration denoted informed exploration approach proposed denoted proposed informed exploration approach paper denoted note proposed approach adopt dqn base algorithm run experiment following standard setting used methods train agent million frames evaluate agent episodes game play use length hashing future frames report result table model breakout freeway frostbite scores obtained scores obtained implementing tasks used table performance score proposed approach baseline approaches performance scores obtained except game breakout frostbite included work among test domains outperforms baseline approaches significant performance gains observed domain note breakout agent fails progress always scores almost may due pixel distance evaluation metric used encourages agent explore states dissimilar recent history insufficient let agent explore note demonstrates superior performance deterministic exploration mechanism indicates counting predicted future frames could provide meaningful direction exploration conclusion paper propose informed exploration framework deep tasks discrete action space incorporating deep convolutional prediction model future transitions hashing mechanism based deep autoencoder model lsh enable agent predict future trajectories intuitively evaluate novelty future action direction based hashing result empirical result atari domain shows proposed informed exploration framework could efficiently encourage exploration several challenging deep domains references joshua achiam shankar sastry intrinsic motivation deep reinforcement learning bellemare naddaf veness bowling arcade 
learning environment evaluation platform general agents journal artificial intelligence research jun marc bellemare sriram srinivasan georg ostrovski tom schaul david saxton remi munos unifying exploration intrinsic motivation advances neural information processing systems pages charles blundell benigno uria alexander pritzel yazhe avraham ruderman joel leibo jack rae daan wierstra demis hassabis episodic control arxiv preprint moses charikar similarity estimation techniques rounding algorithms proceedings annual acm symposium theory computing pages acm rein houthooft chen yan duan john schulman filip turck pieter abbeel vime variational information maximizing exploration advances neural information processing systems pages diederik kingma jimmy adam method stochastic optimization corr diederik kingma max welling variational bayes arxiv preprint sergey levine chelsea finn trevor darrell pieter abbeel training deep visuomotor policies jmlr timothy lillicrap jonathan hunt alexander pritzel nicolas heess tom erez yuval tassa david silver daan wierstra continuous control deep reinforcement learning arxiv preprint mnih kavukcuoglu silver rusu veness bellemare graves riedmiller fidjeland ostrovski petersen sadik beattie antonoglou kumaran king wierstra legg hassabis control deep reinforcement learning nature volodymyr mnih adria puigdomenech badia mehdi mirza alex graves timothy lillicrap tim harley david silver koray kavukcuoglu asynchronous methods deep reinforcement learning international conference machine learning vinod nair geoffrey hinton rectified linear units improve restricted boltzmann machines proceedings international conference machine learning pages junhyuk valliappa chockalingam satinder singh honglak lee control memory active perception action minecraft icml junhyuk xiaoxiao guo honglak lee richard lewis satinder singh video prediction using deep networks atari games advances neural information processing systems pages curran associates ian osband 
charles blundell alexander pritzel benjamin van roy deep exploration via bootstrapped dqn advances neural information processing systems pages georg ostrovski marc bellemare aaron van den oord remi munos exploration neural density models arxiv preprint emilio parisotto jimmy ruslan salakhutdinov deep multitask transfer reinforcement learning iclr tom schaul john quan ioannis antonoglou david silver prioritized experience replay iclr satinder singh andrew barto nuttapong chentanez intrinsically motivated reinforcement learning nips volume pages bradly stadie sergey levine pieter abbeel incentivizing exploration reinforcement learning deep predictive models arxiv preprint richard sutton andrew barto reinforcement learning introduction volume mit press cambridge haoran tang rein houthooft davis foote adam stooke chen yan duan john schulman filip turck pieter abbeel exploration study exploration deep reinforcement learning arxiv preprint hado van hasselt arthur guez david silver deep reinforcement learning double aaai pages ziyu wang tom schaul matteo hessel hado van hasselt marc lanctot nando freitas dueling network architectures deep reinforcement learning icml pages appendix informed exploration algorithm algorithm informed exploration via hashing predicted future frames train prediction model autoencoder model initialize entries drawn standard gaussian initialize empty hash table episode terminal receive state observation environment compute value draw uniform informed exploration let novelty let predict compute latent feature compute hash code sgn derive count update novelty based update let argmaxa end end let argmaxa deterministically explore least frequent action else let argmaxa greedy end take action observe reward compute hash code observed state sgn hashing update hash table update sampled transitions frequently end end appendix result prediction future frames section show prediction result future frames task domain demonstrate predicted future frames 
prediction length task illustrate sets prediction results set consists rows first frame first row corresponds frame following frames first row correspond frames frames second row correspond predicted future frames figure prediction result breakout figure prediction result freeway figure prediction result frostbite figure prediction result figure prediction result qbert appendix result hashing image frames demonstrate effect hashing image frames figure domain show set image frames hashed hash code row overall adopting counting approach expect hashing mechanism could consist certain degree generalization ability similar frames could hashed hash code dissimilar frames could distinguished end using features derived autoencoder layer apparently better using raw pixels since image frames distinct pixel level however using autoencoder features extremely hard control degree generalization achieved output feature layer moreover binarilization approach via lsh additional training phase autoencoder match codes predicted frames real seen frames makes even harder ensure degree generalization hash code within expected level results shown figuer observe hashing mechanism could identify significant features instance breakout find hash code much sensitive shape bricks board positions ball position freeway intensity cars position cars matters frostbite hash codes distinguish well layout platform pattern however also find current limitation representing multiple patterns one code challenge distinguishing frames based desired feature ball position breakout may fail captured hashing mechanism without intelligently tune hashing mechanism based desired purpose result hashing image frames domain row corresponds frames lead hash code corresponding game domain figure
A Noncentralized Cryptocurrency with No Blockchain

Qian Xiaochao
fxxfzx btc final

Abstract. We give an explicit definition of decentralization and show that decentralization is almost impossible at the current stage. Bitcoin is the first truly noncentralized currency in currency history. We propose a new framework for a noncentralized cryptocurrency system under the assumption of the existence of a weak adversary: a bank alliance abandons the mining process and the blockchain, and removes history transactions from data synchronization. We also propose a consensus algorithm, named converged consensus, for the noncentralized cryptocurrency system.

Keywords: cryptocurrency, converged consensus, bitcoin, noncentralization.

1. Introduction

Bitcoin is a peer-to-peer distributed digital currency system whose implementation is mainly based on cryptography. As a decentralized cryptocurrency, it has attracted lots of attention and has been widely adopted around the whole world. Proposed by Satoshi Nakamoto, it was created with the goal of building a practical decentralized cryptocurrency system. To achieve this goal, it adopts a proof-of-work mechanism to distribute coins and validate transactions; the mechanism is implemented through a sophisticated design of blocks and the blockchain. The system adopts the longest-blockchain principle to represent the consensus of users on the general ledger.

In the bitcoin system, every transaction is stored in the blockchain, and over time the size of the blockchain grows so fast that ordinary users hate to verify transactions by synchronizing the whole blockchain. When trading bitcoin for fiat currency, it is inevitable to rely on an exchange hub, which leads to an additional safety dependence and severely compromises the safety of bitcoins. An event of this kind broke out against this background, and many innocent users lost their bitcoins. In addition, the intensive mining process results in a colossal waste of electricity and computing resources. The proof-of-work mechanism also exposes the bitcoin system to the potential superiority in computing power of an attacker who may not even hold any bitcoin. The bad news for the bitcoin system is that in June a major mining pool once controlled a dominant share of the computing power, a dangerous sign that is unacceptable for an alleged decentralized cryptocurrency system. All these phenomena express that the feasibility of a decentralized cryptocurrency is questionable.

The main content of this paper is divided into two parts. In the first part, we try to show that decentralization is almost impossible at the current stage. In the second part, we propose a new framework for a
noncentralized cryptocurrency system bank alliance part noncentralization definition centralization cryptocurrency system totally controlled one single entity definition decentralization cryptocurrency system one safety dependency private key definition noncentralization cryptocurrency system neither centralized decentralized simply speaking decentralization means trust noncentralization means partial trust centralization means fully trust mining centralization longest chain principle bitcoin provides competition mechanism miner best strategy try best get stronger merge miners get stronger long one miner mining pool deemed miner fierce competition cease thus mining system strong incentive get centralized equilibrium result one miner left although still one major mining pools present weak pools going disappear able get enough rewards cover cost running mining pool number mining pools going even though number reduce one could know major mining pools controlled single underground big boss called sybil attack means mining pools different names merely sybils underground big boss according previous definition decentralization noncentralization bitcoin obviously decentralized fans claimed running noncentralized system seen essentially satoshi competitive blockchain solution decentralization centralization trivial solution decentralized cryptocurrency bitcoin system clearly failed however failed experiment provide much valuable information first noncentralized currency although mining centralization inevitable bitcoin still truly noncentralized cryptocurrency system fact mining center control part bitcoin system whole system typical centralized currency system central bank super power transferring money account account mining center super power possible stakeholders fork bitcoin system whenever find mining center longer trustworthy bitcoin totally controlled mining center according previous definition bitcoin system noncentralized cryptocurrency first truly noncentralized 
currency currency history virtual general adversary vga decentralized system assume one single virtual general adversary trying attack system virtual general means includes factor including communication problem abstract attack behaviors may lead failure decentralization virtually adversary whatever attack system consider major attack behaviors maliciously construct messages maliciously schedule messages sybil attack centralization bitcoin system manages build strong resistance constructing block high difficulty value expensive nevertheless weak resistance mining pool may withhold blocks discovered know whether underground boss strong incentive get centralized mean blockchain mechanism merely partially meets challenge overcome one attack behaviors big challenge let alone overcome simultaneously build truly decentralized cryptocurrency decentralization good true current stage probably belongs far future say century truly revolutionary breakthrough communication technique computational theory may occurred take second best try propose new framework noncentralized cryptocurrency system assumption existence weak adversary may practical current stage talk adversary consider adversary quadruple basic resources computing power money adversary possesses ability adversary scheduling messages ability adversary gathering information system attraction degree system getting centralized hard accurately quantify value simply assume super high value always get negative result hence assume weak virtual general adversary low value new system weak exactly honestly idea like bitcoin system need experiments gather information finetune parameters suppose system found bank alliance includes member banks whole world banks highly trusted completely trusted though accords assumption weak adversary name new system every expected event system timeout value set globally others set locally balanceview since need mining anymore replace concept block blockchain normal data list definition 
balanceview data list compounding balance record list bank list definition baseview balanceview based important actions taken balance record pubkey balance balanceview bank list height number balanceview increasing continuously baseview hash refers baseview balanceview package hash refers package actually calculated transaction height baseview hash unique code sender pubkey receiver pubkey volume transaction fee sender signature transaction list bank signature end user sets max transaction fee pay every new transaction sends selected bank bank pick transaction provides enough transaction fee every bank verifies collects transactions makes transaction list broadcasts list banks definition agent agent bank chooses collect transaction lists banks make transaction package broadcasts package verified granted banks bank grants package shares baseview signing package sends granter item back agent every bank freely sets granting rule select packages wants grant definition agent collects granter items banks collects grants combines transactions package granter items list form broadcasts announce new candidate balanceview new balanceview calculated according last baseview total transaction fees divided ratio agent deputy bank equally divided banks granter item granter pubkey granter signature signs package granter item list transaction package height baseview hash pubkey source node transaction list signature package granter item list signature agent consensus possible one package based one baseview get grants consequently one broadcasted consensus new balanceview split circumstance analogous blockchain forks according longest chain principle bitcoin system guarentee termination consensus process means whenever see longer valid blockchain decide blockchain consistency gaurenteed multiple valid blocks one height possible virtual general adversary may mailously schedule spreading blocks prevent uncertainty checkpoint mechanism confirmation delay blocks introduced 
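The records defined above — the balance record, the balanceview with its height and hash references, and the transaction with its fee chosen by the sender — can be sketched as plain data structures. The field names below are illustrative choices, not taken verbatim from the paper.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BalanceRecord:
    pubkey: str            # account public key
    balance: int

@dataclass
class BalanceView:
    height: int            # increases by one per consensus round
    baseview_hash: str     # hash of the baseview this view is based on
    package_hash: str      # hash of the transaction package applied
    records: List[BalanceRecord] = field(default_factory=list)
    bank_list: List[str] = field(default_factory=list)

@dataclass
class Transaction:
    sender_pubkey: str
    receiver_pubkey: str
    volume: int
    fee: int               # must meet the collecting bank's minimum fee
    sender_signature: str
```

A bank collecting transactions would simply filter incoming `Transaction` objects by `fee`, verify signatures, and bundle them into a list for broadcast.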
decentralization guarantees termination consistency bitcoin obviously decentralized taking consensus standard bitcoin reference guarantees termination consistency high probability main idea consensus algorithm roughly simple following suppose red balls black balls bag one randomly selects balls red balls black balls one balck ball turn red ball vise versa time time eventually colors balls converge one single color ether black red suppose balls bag one consecutively times randomly selects ball red balls probability balls red balls extremly high practice present balanceview indirectly denoted ibv save space necessary reconstruct balanceview according ibv indirect presented balanceview height hash refers actually calculated bank receives first valid current height makes ibv new balanceview current ibv converged consensus algorithm consecutive identical observed randomly sample ibv timeout gives null ibv frequent ibv current height discard never backward decide null baseview collision detected alert termination every honest node eventually decides probability since adversary limited ability scheduling messages termination reached sooner later bank reconstructs baseview according bank current balanceview volatile bank keeps updating baseview rather stable still permanent algorithm guarantee completely consistent note distribution ibv gets consistent sampling updating speed converging process gets faster faster due positive feedback effect good merit practical application avoid double spending referring bitcoin strategy may delay confirmation transactions several baseviews baseview collision means bank decides different ibv one height bank detects baseview collision theoretically possible rare one height decides null ibv human intervention needed meta data adjust major parameters system requires grants take effect adjust ratio dividing transaction fees controls intensity competition among banks give strong incentive banks choosing agent increase initial consistency much 
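The converging dynamic underlying the consensus algorithm above — each bank repeatedly samples random peers' indirectly-presented balanceviews (IBVs) at the current height and adopts the most frequent one — can be simulated as below. This is a sketch under simplifying assumptions (synchronous sampling, no adversary); the paper's per-bank termination rule (decide after observing some number of consecutive identical samples) is omitted for brevity.

```python
import random
from collections import Counter

def converged_consensus(views, s=5, max_steps=10000, seed=0):
    """Simulate banks converging on one IBV by repeated sampling.

    views: list of each bank's current IBV (any hashable value).
    At every step one random bank samples s peers (with replacement)
    and adopts the most frequent IBV among them; the simulation stops
    when all banks agree.  Returns (consensus_value, steps), or
    (None, max_steps) if the chain has not yet been absorbed.
    """
    rng = random.Random(seed)
    views = list(views)
    n = len(views)
    for step in range(max_steps):
        if len(set(views)) == 1:
            return views[0], step          # absorbed: single value left
        i = rng.randrange(n)
        sample = [views[rng.randrange(n)] for _ in range(s)]
        views[i] = Counter(sample).most_common(1)[0][0]
    return None, max_steps
```

As the text notes, the process has a positive feedback effect: the more consistent the sampled distribution becomes, the faster the remaining banks converge.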
possible set small positive number ratio controls competition intensity among banks serving end users ratio provides basic support banks make public contribution verification granting keeping sampling according moore law cost making public contribution trivial commercial bank banks longer completely selfish entity hidden dark corner internet handled decentralization noncentralization banks legal public companies care public reputation image system weak resistance small number dishonest banks adjust sampling termination parameters according conditions environment even inflate money supply necessary switch system leader election mode specifies fixed bank agent increase consistency efficiency fix emergent problems make statistic tool monitor model standard client banks monitor behavior bank extremely abnormal behaviors reported use remove highly suspicious bank bank list long banks convinced banks sufficient incentive social pressure behaving normally little incentive behave badly conclusion agree maintaining trustable financial system expensive satoshi original idea trade expensive electricity computing resources dencentralization strategy work worth cost unfortunately shown bitcoin actually running noncentralized system obvious trend evolve centralized system unnecessary waste much electricity computing resources meaningless hashing decentralized system bitcoin obviously failed decentralization blockchain mechanism work noncentralization blockchain mechanism unnecessary blockchain mechanism proven reality trivial wasteful design assert blockchain mechanism flash pan valuable information get failed bitcoin experiment bitcoin shown noncentralized cryptocurrency system possible shown decentralization almost impossible current stage researchers claim decentralization believe claimed one noncentralization one question claim decentralization whether system one safety dependency answer unknown unclear undefined safety dependencies could ensure safety without even knowing 
safety dependencies actually could give accurate quantitified definition adversary know adversary exactly strong adversary exactly calculations safety analysis questionable decentralization simply good true time decentralization suggest give noncentralized system based converged consensus try although extremely hard build truly decentralized cryptocurrency system human beings gotten one truly decentralized currency system gold system one safety dependency gold safe long keep gold well safety dependency owner gold think gold system natual decentralized currency system gold system special balanceview represents natual consensus distribution fortune mankind finally conjecture timeline currency mankind like natural decentralization gold artificial centralization fiat currency artificial noncentralization bitcoin artificial decentralization ultimate goal references satoshi nakamoto bitcoin electronic cash system michael fischer nancy lynch michael paterson impossibility distributed consensus one faulty process journal acm april pease shostak lamport reaching agreement presence faults journal lamport shostak pease byzantine generals problem acm transactions programming languages systems july http revision march utc appendix solution ball problem changing state balls forms markov chain obviously absorbing markov chain suppose transition matrix transient states two colors absorbing states single color matrix nonzero matrix zero matrix identity matrix thus describes probability transitioning transient state another describes probability transitioning transient state absorbing state steps transition matrix probability process absorbed
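The appendix's absorbing-Markov-chain argument can be made concrete: with the transition matrix partitioned into transient-to-transient block Q and transient-to-absorbing block R, the absorption probabilities are B = (I - Q)^{-1} R. The sketch below assumes the special case m = 1 of the ball dynamics (pick one ball uniformly; a ball of the opposite colour is recoloured to match it), so from i red balls out of n the red count moves to i+1 with probability i/n and to i-1 with probability (n-i)/n.

```python
import numpy as np

def absorption_probabilities(n):
    """Absorption probabilities for the one-ball recolouring chain.

    Transient states are i = 1..n-1 red balls; states 0 and n are
    absorbing.  Returns B with row idx = i-1 giving
    (P[absorb all-black], P[absorb all-red]) starting from i.
    """
    t = n - 1                       # number of transient states
    Q = np.zeros((t, t))            # transient -> transient
    R = np.zeros((t, 2))            # transient -> {all black, all red}
    for idx, i in enumerate(range(1, n)):
        up, down = i / n, (n - i) / n
        if i + 1 <= n - 1:
            Q[idx, idx + 1] = up
        else:
            R[idx, 1] = up          # absorbed in all-red
        if i - 1 >= 1:
            Q[idx, idx - 1] = down
        else:
            R[idx, 0] = down        # absorbed in all-black
    N = np.linalg.inv(np.eye(t) - Q)    # fundamental matrix
    return N @ R                        # B = (I - Q)^{-1} R
```

Each row of B sums to one (absorption is certain), and starting with more red balls makes the all-red outcome more likely, matching the intuition that the colours eventually converge to a single colour.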
Perfect Privacy and Maximal Correlation

Borzoo Rassouli and Deniz Gündüz

Abstract—The problem of private data disclosure is studied from an information-theoretic perspective. Considering a pair of correlated random variables (X, Y), where Y denotes the observed data and X denotes the private latent variables, the following problem is addressed: what is the maximum information that can be revealed about Y while disclosing no information about X? Assuming that a Markov kernel maps Y to the revealed information U, it is shown that the maximum mutual information I(Y; U) can be obtained as the solution of a standard linear program when X and U are required to be independent, a condition called perfect privacy. This solution is shown to be greater than or equal to the non-private information carried by Y. Maximal information disclosure under perfect privacy is shown to be the solution of a linear program also when the utility is measured by the reduction in mean square error or in probability of error. For jointly Gaussian (X, Y), it is shown that perfect privacy is not possible if the kernel is applied to Y alone, whereas perfect privacy can be achieved if the mapping takes both X and Y as input, that is, if the private latent variables are also observed at the encoder. Next, measuring utility and privacy by I(Y; U) and I(X; U), respectively, the slope of the optimal utility-privacy trade-off curve is studied. Finally, through a similar analysis with X and U independent, an alternative characterization of the maximal correlation between two random variables is provided.

Index Terms—Privacy, perfect privacy, information, maximal correlation, mutual information.

I. INTRODUCTION

With the explosion of machine learning algorithms and their applications in many areas of science, technology and governance, data is becoming an extremely valuable asset. However, with the growing power of machine learning algorithms in learning individual behavioral patterns from diverse data sources, privacy is becoming a major concern, calling for strict regulations on data ownership and distribution. On the other hand, many recent examples of attacks on publicly available anonymized data show that regulation is not sufficient to limit access to private data. An alternative approach, also considered in this paper, is to process the data at the time of its release so that no private information is leaked, which is called perfect privacy. Assuming that the joint distribution of the observed data and the latent variables to be kept private is known, the study carried out in this paper characterizes the fundamental limits of perfect privacy.

Consider a situation in which Alice wants to release useful information to Bob, represented by a random variable Y, and receives some utility from the disclosure of this information. Y may represent

Borzoo Rassouli and
deniz information processing communications lab department electrical electronics engineering imperial college london united kingdom emails data measured health monitoring system smart meter measurements sequence portion dna detect potential illnesses time wishes conceal bob private information depends represented end instead letting bob direct access mapping applied whereby distorted version denoted revealed bob context privacy utility competing goals distorted version revealed privacy mapping less information bob infer less utility obtained result dependencies extreme point scenario termed perfect privacy refers situation nothing allowed inferred bob disclosure condition modelled statistical independence concern privacy design mappings focus broad area research view privacy gained increasing attention recently general statistical inference framework proposed capture loss privacy legitimate transactions data cost function considered called privacy funnel closely related information bottleneck introduced sharp bounds optimal privacy funnel derived alternative characterization perfect privacy condition see proposed measuring privacy utility terms mutual information perfect privacy fully characterized binary case furthermore new quantity introduced capture amount private information latent variable carried observable data study information theoretic perfect privacy paper main contributions summarized follows adopting mutual information utility measure show maximum utility perfect privacy solution standard linear program obtain similar results measures utility minimum error probability error considered show jointly gaussian pair correlation coefficient privacy mapping perfect privacy feasible words independent also independent maximum privacy obtained expense zero utility however case mapping form encoder access private latent variables well data denoting maximum perfect privacy characterize relationship information carried defined considering mutual information 
…privacy and the utility measure, since the optimal trade-off curve is characterized as a supremum that is not straightforward to evaluate, we instead investigate the slope of this curve; this linear approximation of the curve provides the maximum utility rate when a small amount of private data leakage is allowed. We obtain the slope when perfect privacy is not feasible, and propose a lower bound when perfect privacy is feasible. Finally, through a similar analysis, we provide an alternative characterization of the maximal correlation between two random variables.

Notations. Random variables are denoted by capital letters, their realizations by lower-case letters, and their alphabets by capital letters in calligraphic font. Matrices and vectors are denoted by bold capital and bold lower-case letters, respectively. For integers m ≤ n, the discrete interval and the corresponding tuple are written in short as [m:n]; for an integer n ≥ 1, 1_n denotes the n-dimensional all-one column vector. For a random variable X with finite alphabet, the probability simplex P(X) is the standard simplex. Furthermore, the probability mass function (pmf) of X is denoted by p_X, which corresponds to the matrix diag(p_X) and to the probability vector p_X whose i-th element is p_X(x_i). For a pair of random variables (X, Y) with joint pmf p_{X,Y}, P_{X,Y} denotes the matrix with (i,j)-th entry equal to p_{X,Y}(x_i, y_j); likewise, P_{X|Y} denotes the matrix with (i,j)-th entry equal to p_{X|Y}(x_i | y_j). F_X denotes the cumulative distribution function (cdf) of X, and if X admits a density, its probability density function (pdf) is denoted by f_X. H_b(·) denotes the binary entropy function and its inverse is a function denoted accordingly. Throughout the paper, a random variable and its corresponding probability vector are written interchangeably in information quantities.

II. SYSTEM MODEL AND PRELIMINARIES

Consider a pair of random variables (X, Y) distributed according to the joint distribution p_{X,Y} with finite alphabets. We assume p_X(x) > 0 and p_Y(y) > 0 for all x and y, since otherwise the supports could be modified accordingly; equivalently, this means that the probability vectors p_X and p_Y lie in the interior of their corresponding probability simplices. Let X denote the data the user wants to conceal and Y the useful data the user wishes to disclose. We assume that the privacy release mechanism takes Y as input and maps it to the released data, denoted by U; in this scenario, X, Y, U form a Markov chain X − Y − U, and the privacy mapping is captured by the conditional distribution p_{U|Y}. Let

  g_ε(X, Y) = sup I(Y;U), over p_{U|Y} with X − Y − U and I(X;U) ≤ ε.

In words, with mutual information adopted as the measure of both utility and privacy, g_ε gives the best utility that can be obtained by privacy mappings that keep the sensitive data private to within the level ε. This can, with a slight abuse of notation, also be written as an optimization over conditional pmfs.

Proposition 1. It is sufficient to consider |U| ≤ |Y| + 1, and the supremum can also be written as a maximum.

Proof: The proof is provided in Appendix A. Later we show that for ε = 0 it is sufficient to restrict attention to |U| ≤ |Y|.

III. PERFECT PRIVACY

Definition 1. For a pair of random variables (X, Y), we
say that perfect privacy is feasible if there exists a random variable U that satisfies the following conditions: i) X − Y − U forms a Markov chain, ii) U is independent of X, and iii) U is not independent of Y.

Definition 2. Equivalently, we say that perfect privacy is feasible if and only if g_0(X, Y) > 0.

Proposition 2. Perfect privacy is feasible if and only if dim Null(P_{X|Y}) ≥ 1.

Proof: In an earlier theorem, Berger and Yeung showed that for a given pair of random variables (X, Y), there exists a random variable U satisfying the conditions of perfect privacy if and only if the columns of P_{X|Y} are linearly dependent; equivalently, there must exist a nonzero vector in the null space, which is equivalent to dim Null(P_{X|Y}) ≥ 1. Note that Null(P_{X,Y}) and Null(P_{X|Y}) correspond up to the positive diagonal scaling by p_Y; the last claim of the proposition is due to the fact that p_Y lies in the interior of the simplex.

Theorem 1. For a pair of random variables (X, Y), g_0(X, Y) is the solution of a standard linear program.

Proof: Let Q be the matrix with (i,j)-th entry equal to p_{X,Y}(x_i, y_j)/√(p_X(x_i) p_Y(y_j)), with singular value decomposition whose right eigenvectors corresponding to the vanishing singular values span (after scaling) the null space of P_{X|Y}. The requirement that U be independent of X while X − Y − U forms a Markov chain is equivalent to the condition that, for every u, the vector p_{Y|U=u} − p_Y lies in Null(P_{X|Y}); in other words, each conditional p_{Y|U=u} must belong to the convex polytope

  S = { y in the simplex : P_{X|Y}(y − p_Y) = 0 }.

Note that every element of S is a probability vector, and by Proposition 2, S contains points other than p_Y exactly when perfect privacy is feasible. Hence, we can write

  g_0(X, Y) = max I(Y;U) = H(Y) − min Σ_u p_U(u) H(p_{Y|U=u}),

where the minimization is over conditionals in S, and the constraint Σ_u p_U(u) p_{Y|U=u} = p_Y is added to preserve the marginal distribution of Y.

Proposition 3. In the above minimization, it is sufficient to consider only the extreme points of S.

Proof: The proof is provided in Appendix B.

By Proposition 3, the problem is divided into two phases: in phase one, the extreme points of the set S are identified; in phase two, the proper weights over these extreme points are obtained to minimize the objective function. For the first phase we proceed as follows. The extreme points of S are its basic feasible solutions (in the sense of linear programming). The procedure for finding them is: pick a set of indices that corresponds to linearly independent columns of the constraint matrix; since the matrix is full rank (note that its rows can be taken mutually orthonormal), there are finitely many ways of choosing such linearly independent columns. Let the basis matrix consist of the columns indexed by these indices, and split the candidate vector into its basic and non-basic parts accordingly. A basic feasible solution exists for a set of indices corresponding to linearly independent columns if and only if the corresponding
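The feasibility test in Proposition 2 reduces to a rank computation. The sketch below, in numpy, checks whether dim Null(P_{X|Y}) ≥ 1 for a given joint pmf; the function name and the two toy joint distributions are ours, not from the paper.

```python
import numpy as np

def perfect_privacy_feasible(P_xy, tol=1e-10):
    """Proposition 2 (sketch): perfect privacy is feasible iff the
    columns of P_{X|Y} are linearly dependent, i.e. the null space of
    P_{X|Y} is nontrivial.  P_xy holds the joint pmf, rows indexed by
    x and columns by y."""
    p_y = P_xy.sum(axis=0)            # marginal of Y (assumed > 0)
    P_x_given_y = P_xy / p_y          # column-stochastic P_{X|Y}
    rank = np.linalg.matrix_rank(P_x_given_y, tol=tol)
    return P_x_given_y.shape[1] - rank >= 1

# X independent of Y: every column of P_{X|Y} equals p_X, so the
# null space has dimension |Y| - 1 and perfect privacy is trivial.
P_indep = np.outer([0.5, 0.5], [0.3, 0.3, 0.4])

# X = Y (identity channel): the columns are distinct standard basis
# vectors, the null space is {0}, and perfect privacy is infeasible.
P_ident = np.diag([0.2, 0.3, 0.5])
```

The same rank computation also yields the dimension of the search polytope S used in Theorem 1.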
vector obtained by solving the basic system satisfies nonnegativity; on the other hand, if a set of indices corresponds to linearly independent columns but the resulting vector has a negative element, it is not a basic feasible solution. Hence, all extreme points are obtained in this way; as mentioned, their number is finite.

For the second phase we proceed as follows. Assume the extreme points found in the previous phase are denoted by p_1, …, p_K. The minimization is then equivalent to

  min_w Σ_k w_k H(p_k)  subject to  Σ_k w_k p_k = p_Y,  w ≥ 0,

where the weight vector w plays the role of the pmf of U; it can be verified that once the marginal constraint is satisfied, the constraint Σ_k w_k = 1 is automatically met. This problem is a standard linear program and can be efficiently solved. The following example clarifies the optimization procedure in the proof of Theorem 1.

Example 1. Consider a pair (X, Y) with joint distribution specified by an explicit matrix for which dim Null(P_{X|Y}) = 1. From the singular value decomposition it is obvious which columns of the matrix of right eigenvectors span the null space, and hence the constraint matrix is given. In the first phase, it is clear that there are finitely many possible ways of choosing linearly independent columns; for each index set we either obtain a basic feasible solution or discard the candidate because it does not satisfy nonnegativity, and in this way the extreme points are obtained. In the second phase, the standard LP is written as min_w Σ_k w_k H(p_k); the minimum value of the objective function (in bits) is achieved at a particular weight vector, and therefore g_0(X, Y) = H(Y) minus this minimum. Note, finally, that the optimal weights correspond to the pmf of U and the extreme points to the conditionals p_{Y|U=u}, which together determine the matrix P_{Y|U}.

Remark 1. It can be verified that in the degenerate case where the extreme points of S have zero entropy, the minimum value of the objective is zero, attained by placing weight w_i on the i-th extreme point; this result is also consistent with the fact that when X and Y are independent, a U independent of X maximizes the utility.

A. MMSE under perfect privacy

Assume that the goal is instead to minimize the MMSE under the perfect privacy constraint, formulated as follows:

  min E[(Y − f(U))^2],  over p_{U|Y} with X − Y − U and X ⊥ U,

where the expectation is according to the joint distribution of (Y, U). Obviously, Var(Y) is an upper bound, since one could choose U independent of Y and f(u) = E[Y]. We show that this problem has a similar solution. The difference here is that the realizations of Y — the particular values of its elements — are no longer irrelevant to the solution, since the mass probabilities are not the only quantities that play a role in evaluating the objective function, which also takes into account the realizations in the pmf. Write, by the classical result on MMSE estimation,

  min_f E[(Y − f(U))^2] = E_U[Var(Y | U)],

and therefore the problem becomes min Σ_u p_U(u) Var(Y | U = u), where equality holds with f(u) = E[Y | U = u].

Proposition 4. Var(Y), viewed as a function of the probability vector p_Y for fixed realizations, is strictly concave.

Proof: The proof is provided in Appendix C.

From the concavity of the variance in Proposition 4, we can apply the same reasoning as in the proof of Proposition 3 and conclude that it is sufficient to consider only the extreme points of S. Hence, the problem again has two phases: in phase one the extreme points are found, and in the second phase, denoting the extreme points by p_1, …, p_K, the problem boils down to the standard linear program

  min_w Σ_k w_k Var_k  subject to  Σ_k w_k p_k = p_Y,  w ≥ 0,

where Var_k denotes the variance of Y under p_k. Finally, the realizations of the random variable U are set to
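The two-phase procedure can be walked through numerically in the degenerate case noted in Remark 1 (X independent of Y, binary Y), where S is a full segment of the simplex and the extreme points have zero entropy, so g_0 = H(Y). This numpy sketch is ours; the helper names and the specific marginals are illustrative, not from the paper.

```python
import numpy as np

def entropy(p):
    p = p[p > 1e-12]
    return -np.sum(p * np.log2(p))

# Joint pmf with X independent of Y, so dim Null(P_{X|Y}) = 1.
p_x = np.array([0.5, 0.5])
p_y = np.array([0.25, 0.75])
P_xy = np.outer(p_x, p_y)
P_x_given_y = P_xy / p_y

# Null-space direction of P_{X|Y}: right singular vector for the
# (numerically) zero singular value.
_, sv, Vt = np.linalg.svd(P_x_given_y)
d = Vt[-1]

# Phase 1: walk from p_Y along +/- d until a coordinate hits zero;
# for binary Y the two endpoints are the extreme points of S.
def endpoint(p, d):
    steps = [-p[i] / d[i] for i in range(len(p)) if abs(d[i]) > 1e-12]
    t = min(s for s in steps if s > 0)
    return p + t * d

p1, p2 = endpoint(p_y, d), endpoint(p_y, -d)

# Phase 2: with two extreme points the weights solving
# w1*p1 + w2*p2 = p_Y are unique (a trivial LP).
w1 = (p_y[0] - p2[0]) / (p1[0] - p2[0])
w2 = 1.0 - w1
g0 = entropy(p_y) - (w1 * entropy(p1) + w2 * entropy(p2))
```

Here p1 and p2 come out as the simplex corners, their entropies vanish, and g0 equals H(Y), matching Remark 1; in non-degenerate cases the phase-two step is a genuine linear program over many extreme points.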
equalize the expectations under the corresponding distributions, i.e., each realization of U is assigned the conditional expectation of Y under the corresponding extreme point, with the corresponding mass probability.

Example 2. For this problem with the pair (X, Y) given in Example 1, the extreme points are already known from Example 1. The minimum value of the standard LP is achieved at a weight vector, and therefore the MMSE under perfect privacy equals Σ_k w_k Var_k; setting f(u_k) to the mean of Y under p_k, we obtain this minimum.

B. Minimum probability of error under perfect privacy

When the objective of the optimization is the error probability, we have

  min Pr{Y ≠ f(U)},  over p_{U|Y} with X − Y − U and X ⊥ U, and over f.

Obviously, 1 − max_y p_Y(y) is an upper bound, since one could choose U independent of Y and f(u) = arg max_y p_Y(y). For an arbitrary joint distribution of (Y, U), we can write

  Pr{Y = f(U)} = ∫ p_{Y|U}(f(u) | u) dF_U(u) ≤ ∫ max_y p_{Y|U}(y | u) dF_U(u),

which holds with equality when f(u) = arg max_y p_{Y|U}(y | u). Hence,

  min Pr{Y ≠ f(U)} = 1 − max ∫ max_y p_{Y|U}(y | u) dF_U(u).

It can be verified that max_y p(y), as a function of the probability vector, is convex; hence, following the reasoning in the proof of Proposition 3, it is sufficient to consider only the extreme points of S in this optimization. Therefore, the problem again has two phases: in phase one the extreme points are identified, and in the second phase, denoting the extreme points by p_1, …, p_K, the problem boils down to the standard linear program

  max_w Σ_k w_k m_k  subject to  Σ_k w_k p_k = p_Y,  w ≥ 0,

where m_k denotes the maximum element of the vector p_k. Once it is solved, the optimal conditionals and the optimal mass probabilities are obtained; finally, the realizations of the random variable U are set so that f(u_k) = arg max_y p_k(y).

Example 3. For this problem with the pair (X, Y) given in Example 1, the extreme points are already known from Example 1; the minimum probability of error is obtained from the maximum of Σ_k w_k m_k achieved at the optimal weights.

Thus far, we have investigated the constraint of perfect privacy for finite alphabets. In the next theorem and the succeeding example, we consider two cases where at least one of the alphabets is infinite. The following theorem shows that perfect privacy is not feasible for a correlated jointly Gaussian pair.

Theorem 2. Let (X, Y) be a pair of jointly Gaussian random variables with correlation coefficient ρ ≠ 0, since otherwise the pair is independent. Then g_0(X, Y) = 0.

Proof: If there exists a random variable U such that X − Y − U forms a Markov chain and X is independent of U, then for every u we must have (since Y has a density)

  ∫ f_{X|Y}(x | y) dF_{Y|U}(y | u) = f_X(x)  for all x;

also, for U not to be independent of Y, there must exist at least one u for which F_{Y|U}(· | u) ≠ F_Y(·). In what follows we show that this cannot hold when the above condition is satisfied, and therefore perfect privacy is not feasible for a jointly Gaussian pair. It is known that, conditioned on Y = y, X is also Gaussian. Writing the condition equivalently as ∫ f_{X|Y}(x | y) d(F_{Y|U}(y | u) − F_Y(y)) = 0, multiplying both sides by a suitable exponential kernel and taking the integral with respect to x, by Fubini's theorem and a few manipulations we obtain that the left-hand side is, up to a nonzero factor, the Fourier transform of the signed measure F_{Y|U}(· | u) − F_Y(·); due to the invertibility of the Fourier transform, this measure must be zero, and hence F_{Y|U}(· | u) = F_Y(·) for every u. Therefore, perfect privacy is not feasible for a correlated jointly Gaussian pair.

In the following example we consider an infinite alphabet for Y and observe that g_0 can be unbounded: infinite utility can be obtained under perfect privacy, without even revealing the undistorted Y. This renders the usage of mutual information as a measure of dependence counterintuitive for continuous alphabets, and is related to the fact that differential entropy cannot be interpreted as a measure of
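The two endpoints of the error-probability analysis above, the blind guess 1 − max_y p_Y(y) and the MAP rule f(u) = arg max_y p_{Y|U}(y|u), are easy to compare numerically. The joint pmf below is a hypothetical example of ours, used only to illustrate the two quantities.

```python
import numpy as np

# Joint pmf of (Y, U): rows indexed by y, columns by u (toy example).
P_yu = np.array([[0.30, 0.10],
                 [0.05, 0.25],
                 [0.15, 0.15]])
p_y = P_yu.sum(axis=1)

# Blind baseline: U independent of Y, guess the mode of p_Y.
err_blind = 1.0 - p_y.max()

# MAP rule f(u) = argmax_y p(y|u): error = 1 - sum_u max_y p(y, u).
err_map = 1.0 - P_yu.max(axis=0).sum()
```

Under perfect privacy the achievable error lies between these two values, and the second-phase LP in the text selects the mixture of extreme points that pushes it as close as possible to the MAP end.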
the information content of a random variable, as it can take negative values.

Example 4. Let Y be uniform on an interval and let X be generated from Y through a given conditional pmf whose support depends on y. Note that since the support of the conditional is constrained, the support of any admissible F_{Y|U}(· | u) must be a subset of the support of F_Y. Independence of X and U implies ∫ p_{X|Y}(x | y) dF_{Y|U}(y | u) = p_X(x) for every u, and in order to preserve the pdf of Y, the conditional cdfs must satisfy ∫ F_{Y|U}(y | u) dF_U(u) = F_Y(y). Let the set of cdfs so constrained be denoted by D; then we can write g_0 as a supremum over D, and in what follows we show that this supremum is unbounded. For an arbitrary positive integer n, construct U uniform over n values together with a family of conditionals satisfying the constraints, obtained by an appropriate partition of the support of Y; it can be verified that all constraints are satisfied, and by this construction I(Y;U) grows without bound in n. Since this is true for every positive integer n, we conclude that g_0 is not finite; letting U be a quantized version of such a construction, this results in unbounded utility.

Now consider the MMSE utility function for the same pair, i.e., solve min ∫ Var(Y | U = u) dF_U(u) over D. Since Var is a strictly concave function of the conditional distribution, it is sufficient to consider the optimization over the extreme points: distributions concentrated on two mass points, one in each subinterval of the support, with complementary mass probabilities. In other words, writing

  F_{Y|U}(y | u) = α s(y − y_1(u)) + (1 − α) s(y − y_2(u)),

where s(·) denotes the unit step function and y_1, y_2 denote the two aforementioned mass points, a simple analysis gives Var(Y | U = u) in closed form, and therefore the minimum of ∫ Var(Y | U = u) dF_U(u) can be lower-bounded using the convexity of the square, Jensen's inequality, and the fact that ∫∫ y dF_{Y|U}(y | u) dF_U(u) = ∫ y dF_Y(y). In order to achieve this minimum, let U be uniform, which fixes the two mass points and their probabilities; it can be verified that this choice satisfies the constraints and meets the bound with equality, and therefore gives the MMSE under perfect privacy. Next, consider the probability of error as the utility function, i.e., solve the corresponding max ∫ max_y dF_{Y|U}(y | u) dF_U(u); similarly to the analysis for the MMSE utility function, we can restrict attention to the extreme points, and we obtain that the minimum error probability is again achieved by a uniform U.

IV. PRIVATE INFORMATION

For a pair of random variables (X, Y), the private information about X carried by Y is defined as the minimum of H(W) over random variables W such that X and Y are conditionally independent given W and H(W | Y) = 0; since the latter implies that W is a deterministic function of Y, this means that, among the functions of Y that make X and Y conditionally independent, we want to find the one with the lowest entropy. It can be verified that the relevant inequalities hold: the first is due to the data processing inequality applied to the Markov chain, and the second is a direct result of the fact that such a W satisfies the constraints in the definition of the information carried. Let the associated mapping on the probability simplex be defined as in the earlier work, where it is shown that a particular functional is the minimizer; furthermore, it is proved in a lemma there that pairs exist for which the two quantities are only loosely connected. The latter represents,
roughly, the amount of information contained in Y that is correlated with X. Three examples were provided in that work, and in the last one a question was raised regarding the condition on the joint distribution under which equality holds. In what follows, we characterize this relation. Let P_{X|Y} denote the matrix corresponding to the conditional distribution.

Lemma 1. For equality to be possible, P_{X|Y} must have at least two corresponding columns equal. Let B denote a set of at least two indices whose corresponding columns are equal; we generalize this definition to a matrix with several subsets of identical columns, with corresponding index sets denoted by B_1, …, B_m. In other words, for a matrix with index sets B_1, …, B_m, we construct the corresponding reduced matrix by eliminating, within each B_j, all columns except one. (An example of such a pair, with an explicit matrix, illustrates the construction.)

Theorem 3. For a pair of random variables (X, Y) distributed according to p_{X,Y}, equality holds if and only if either of the following holds: i) perfect privacy is not feasible, i.e., dim Null(P_{X|Y}) = 0; or ii) perfect privacy is feasible and the null space is generated exactly by the redistribution directions within the sets B_j — in words, dim Null of the reduced matrix is 0.

Proof: It is obvious that when no two equal columns exist, no W coarser than Y can make X and Y conditionally independent, and the claim holds. Now assume there exist index sets B_1, …, B_m corresponding to equal columns as defined above; since within each B_j the columns coincide, the random variable W whose support has the reduced cardinality and whose mass probabilities are the element sums over the B_j makes X and Y conditionally independent. For a given tuple of within-block redistributions, let S̃ be the set of probability vectors in the simplex obtained by redistributing mass within the sets B_j while keeping the block sums fixed.

Proposition 5. The set S̃ is a subset of S; furthermore, every probability vector in S̃ can be written as a convex combination of the points of a canonical finite subset of S̃.

Proof: The proof is provided in Appendix D.

Example 5. For the pair of the running example, S̃ can be written out explicitly. Finally, we can write the chain of inequalities relating H(Y) − g_0 to the minimization over S̃, justified as follows: according to Proposition 5, p_Y is preserved, hence the vectors of S̃ belong to S and satisfy the constraint of the minimization, and the inequality follows from Proposition 5; this is because the minimization over the larger set S can only be smaller.

Proof of the necessary and sufficient condition for equality. We first prove the second direction (sufficiency): if perfect privacy is not feasible, equality is immediate. So assume perfect privacy is feasible and there exist index sets corresponding to equal columns as defined.

Proposition 6. If dim Null of the reduced matrix is 0, the extreme points of the convex polytope S are exactly the canonical elements of S̃; if dim Null of the reduced matrix is at least 1, none of those elements is an extreme point.

Proof: The proof is provided in Appendix E.

When dim Null of the reduced matrix is 0, Proposition 6 together with Proposition 3 shows that the minimum in Theorem 1 is attained within S̃, which is equivalent to equality. For the first direction (necessity), assume equality holds; if perfect privacy is not feasible, there is nothing to prove. If perfect privacy is feasible, there must exist index sets of equal columns, and we prove dim Null of the reduced matrix is 0 by contradiction: if it were at least 1, by Proposition 6 we could conclude that none of the
elements of S̃ is an extreme point; in words, we could find a strictly better triplet of points with the same marginal, and therefore the minimum of Theorem 1 would be strictly smaller than the value over S̃, due to the strict concavity of entropy (the fact that the corresponding mass probabilities belong to the constraint set of the minimization is used here). This results in a contradiction; hence we must have dim Null of the reduced matrix equal to 0.

V. FULL DATA OBSERVATION AND OUTPUT PERTURBATION

Thus far, we have assumed that the privacy release mechanism takes Y as input and maps it to the released data, denoted by U, so that X − Y − U forms a Markov chain and the privacy mapping is captured by p_{U|Y}. In a more general scenario, the privacy mapping can take as input a (possibly noisy) observation W; in this case, the privacy mapping is denoted by p_{U|W}, the chain (X, Y) − W − U holds, and the triplet (X, Y, W) is distributed according to a given joint distribution. In this model, perfect privacy is feasible for the triplet if there exists a privacy mapping whose output depends on the useful data Y while being independent of the private data X.

Proposition 7. Perfect privacy is feasible in this model if and only if the corresponding null-space condition holds for P_{X|W} together with the requirement that the admissible directions change p_Y.

Proof: The proof follows similarly to that of Proposition 2, noting that X − W − U and Y − W − U form Markov chains; in words, there must exist a vector along which p_{W|U=u} can change such that the change keeps p_X unchanged while altering p_Y. It can be verified that in this general scenario, the mapping denoted by p_{U|W} achieves the perfect-privacy utility obtained as in Theorem 1 with the following modifications: the convex polytope is modified to the set of probability vectors w in the simplex with P_{X|W}(w − p_W) = 0, and, denoting the extreme points by w_1, …, w_K, the objective changes accordingly (from maximizing H(Y) minus the weighted entropies of the induced conditionals).

Two special cases, full data observation and output perturbation, refer to the scenarios where the privacy mapping has direct access to both the private and useful data, W = (X, Y), or only to the useful data, W = Y, respectively; the definitions of the earlier sections considered the particular case of output perturbation. In what follows, we consider the full data observation scenario briefly.

Proposition 8. If Y is not a deterministic function of X, perfect privacy is always feasible in the full data observation model.

Proof: If Y is not a deterministic function of X, there must exist x*, y_1, y_2 with p(x*, y_1), p(x*, y_2) > 0. Choose a sufficiently small δ and let U perturb the mass between (x*, y_1) and (x*, y_2), leaving everything else untouched. It can be verified that p_X is preserved and U is independent of X, while U is not independent of Y; the former indicates perfect privacy, and the latter shows positive utility.

Considering the output perturbation model, Theorem 2 proved that perfect privacy is not feasible for a correlated jointly Gaussian pair. The following theorem states the opposite for the full data model.

Theorem 4. For a jointly Gaussian pair (X, Y) with correlation coefficient ρ ≠ 0, perfect privacy is feasible in the full data observation model, with a utility that is a logarithmic function of ρ.

Proof: Denote the variances of X and Y by σ_X² and σ_Y², respectively. It is already known that we can write Y = ρ(σ_Y/σ_X)X + N, with N independent of X; letting U = N, the independence and Markov conditions are satisfied, and the claimed utility follows from I(Y;U) = h(Y) − h(Y | N), which is a function of the correlation coefficient only.

VI. MAXIMAL CORRELATION

Consider a pair of random variables (X, Y) distributed
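The Gaussian full-data construction in the proof of Theorem 4 — releasing the innovation U = Y − E[Y|X], which is uncorrelated (and, being jointly Gaussian, independent) of X yet correlated with Y — can be checked empirically. The Monte Carlo sketch below is ours and only illustrates the construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho = 200_000, 0.6

# Jointly Gaussian (X, Y) with unit variances and correlation rho.
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)

# Full-data mechanism: release the innovation U = Y - E[Y|X] = Y - rho*X.
u = y - rho * x

corr_ux = np.corrcoef(u, x)[0, 1]   # ~ 0: U reveals nothing about X
corr_uy = np.corrcoef(u, y)[0, 1]   # = sqrt(1 - rho^2): U is useful
```

For rho = 0.6 the theoretical correlation between U and Y is sqrt(1 − 0.36) = 0.8, so the released variable carries genuine utility while remaining independent of the private X — exactly the gap between the output-perturbation and full-data models.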
according to p_{X,Y}. Let S_X denote the set of real-valued functions f with E[f(X)] = 0 and E[f²(X)] = 1, and define S_Y similarly for Y. The maximal correlation of the pair is defined as

  ρ_m(X; Y) = max over f in S_X, g in S_Y of E[f(X) g(Y)],

and is defined to be zero when either set is empty (i.e., when X or Y is constant almost surely). An alternative characterization of maximal correlation was given by Witsenhausen as follows: let Q be the matrix defined by

  Q(i, j) = p_{X,Y}(x_i, y_j) / √(p_X(x_i) p_Y(y_j));

it is shown that the maximal correlation is equal to the second largest singular value of the matrix Q. In what follows, we propose an alternative characterization of maximal correlation that also helps to interpret the remaining singular values of the matrix Q. The following preliminaries are needed in the sequel.

A. Preliminaries

Assume a real symmetric matrix M and a vector c satisfying cᵀc = 1, and assume we are interested in finding the stationary values of xᵀMx subject to the constraints xᵀx = 1 and cᵀx = 0 (for such characterizations see, e.g., Golub). Letting λ and μ be Lagrange multipliers and differentiating the Lagrangian with respect to x, we obtain Mx − λx − μc = 0; multiplying both sides by cᵀ and noting cᵀx = 0, then substituting the resulting value of μ, we are led to

  (I − ccᵀ) M x = λ x.

Since I − ccᵀ is a projection matrix, the stationary values are the eigenvalues of (I − ccᵀ)M and occur at the corresponding eigenvectors. Finally, if the vector constraint is replaced by a matrix C whose columns are orthonormal, it can be verified that the results remain valid with the projection modified to I − CCᵀ.

B. An alternative characterization

Consider a pair of random variables (X, Y) distributed according to p_{X,Y} with marginals p_X and p_Y. The matrix P_{X|Y} can be viewed as a channel with input Y and output X: if the input of the channel is distributed according to q_Y, the output is distributed according to q_X = P_{X|Y} q_Y. We are interested in the stationary values of the ratio D(q_X ∥ p_X)/D(q_Y ∥ p_Y) as q_Y approaches p_Y.

Theorem 5. The stationary values of this ratio are the squared singular values of the matrix Q; in particular,

  lim sup over q_Y → p_Y of D(P_{X|Y} q_Y ∥ p_X)/D(q_Y ∥ p_Y) = ρ_m²(X; Y).

Proof: Write q_Y = p_Y + ε φ for an auxiliary perturbation vector φ; the relationship 1ᵀφ = 0 is assumed so that both are probability vectors (in the interior of the simplex). Denoting the corresponding probability mass functions accordingly, write the Taylor series expansion of the relative entropy:

  D(p + εφ ∥ p) = (ε²/2) φᵀ diag(p)^{-1} φ + higher-order terms,

where the gradient vanishes, the Hessian is diag(p)^{-1} (log terms evaluated at p), and the higher-order terms, denoted by dots, can be shown to be negligible in the limit; hence the ratio boils down to a Rayleigh-type quotient (we used the fact that the factor ε² cancels between numerator and denominator). Equivalently, substituting ψ = diag(p_Y)^{1/2}-scaled perturbations and, without loss of generality, normalizing ψᵀψ = 1, we are led to finding the stationary values of ψᵀ QᵀQ ψ subject to the constraints ψᵀψ = 1 and cᵀψ = 0 with c = √p_Y. Note that this is exactly the problem of the preliminaries with a real symmetric matrix and a vector satisfying the stated conditions; therefore, the stationary values are the eigenvalues of the matrix (I − ccᵀ)QᵀQ, and they occur at the corresponding
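Witsenhausen's characterization quoted above makes maximal correlation a one-line SVD computation. The numpy sketch below is ours; the doubly symmetric binary source used as a check is a standard example (for X ~ Bern(1/2) and Y = X xor Bern(a), the maximal correlation is 1 − 2a).

```python
import numpy as np

def maximal_correlation(P_xy):
    """Witsenhausen: rho_m(X;Y) is the second largest singular value
    of Q with Q[i,j] = p(x_i, y_j) / sqrt(p(x_i) * p(y_j)).
    The largest singular value of Q is always 1, with singular
    vectors (sqrt(p_X), sqrt(p_Y))."""
    p_x = P_xy.sum(axis=1)
    p_y = P_xy.sum(axis=0)
    Q = P_xy / np.sqrt(np.outer(p_x, p_y))
    sv = np.linalg.svd(Q, compute_uv=False)   # sorted descending
    return sv[1]

# Doubly symmetric binary source with crossover a = 0.1.
a = 0.1
P_dsbs = 0.5 * np.array([[1 - a, a],
                         [a, 1 - a]])
rho = maximal_correlation(P_dsbs)             # expected: 1 - 2a = 0.8
```

The remaining entries of `sv` are the other stationary values of the divergence ratio in Theorem 5, which is what the alternative characterization interprets.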
eigenvectors; the vector c = √p_Y, as defined, is also an eigenvector, corresponding to the eigenvalue 0 of the projected matrix, while it follows that the remaining eigenvalues of (I − ccᵀ)QᵀQ are the squared singular values of the matrix Q (excluding the largest, which equals 1). Hence, this leads to the equality with the maximal correlation stated in the theorem. The remaining eigenvalues — equivalently, the singular values of the matrix Q except the largest one — can be interpreted in a similar way. Assume the maximizer, i.e., the eigenvector of (I − ccᵀ)QᵀQ corresponding to the second largest eigenvalue, is ψ₂; equivalently, the ratio is maximized when q_Y converges to p_Y along the corresponding direction. If, besides the existing constraints, we also impose the constraint of being orthogonal to ψ₂ — replacing c by the matrix whose first and second columns are c and ψ₂, respectively — the maximum would be achieved at the next eigenvector; equivalently, the ratio is maximized when q_Y converges to p_Y along the corresponding direction. This procedure can be continued to cover all the singular values of the matrix Q, from the second largest to the smallest.

Remark 2. A natural question that arises is whether the largest singular value (equal to 1) has a similar interpretation. If the constraint cᵀψ = 0 were omitted, the maximum would occur at ψ = c; but this constraint is due to 1ᵀφ = 0, which in turn results from the fact that q_Y is a probability vector. Therefore, if the definition of relative entropy is extended to vectors with positive elements (not necessarily summing to one), the ratio is maximized, with value 1, when the vector converges to p_Y along the direction of p_Y itself.

In Sections III to V of this paper, the problem of perfect privacy, i.e., the quantity g₀(X, Y), was studied. Although evaluating the optimal trade-off curve g_ε(X, Y) is not straightforward analytically, it is of interest to check the behaviour of this curve when a small amount of private data leakage is allowed. To this end, we study the slope of this curve in the following section.

VII. SLOPE OF THE UTILITY-PRIVACY TRADE-OFF REGION

In this section, we consider the region defined by the achievable pairs (I(X;U), I(Y;U)), and we are interested in evaluating the rate of increase of utility with leakage; in other words, we focus on the slope of the optimal curve at ε = 0. In the case g₀ = 0, the slope is obtained exactly; in the case g₀ > 0, a lower bound is proposed. Let the quantity s* be defined as

  s*(X, Y) = sup I(Y;U)/I(X;U), over p_{U|Y} with X − Y − U and I(X;U) > 0,

with the convention that the supremum over an empty set is zero.

Proposition 9. s*(X, Y) is finite if and only if perfect privacy is not feasible.

Proof: The proof is provided in Appendix G.

A. Slope when perfect privacy is not feasible

Proposition 10. If perfect privacy is not feasible, then

  lim over ε → 0⁺ of g_ε(X, Y)/ε = s*(X, Y).

Proof: When perfect privacy is feasible, it is already known that the slope at the origin is infinite, in line with Proposition 9; this proves the degenerate case, and in what follows we consider the situation where perfect privacy is not feasible. By definition of s*, fix an arbitrary p_{U|Y} with the induced ratio, and let δ be sufficiently small; scaling the perturbation by a sufficiently small factor keeps every conditional a probability vector. We are interested in inspecting the behaviour of the ratio: by the Taylor series expansion of I(Y;U), the leading term is quadratic with a constant coefficient (higher-order terms denoted by dots), and a similar expansion can be written for I(X;U), whose leading term in the denominator has a corresponding constant. Hence the limit of the ratio equals the ratio of these constants; since p_{U|Y} was chosen arbitrarily, we can write lim inf g_ε/ε ≥ s*. On the other hand, lim sup g_ε/ε ≤ s* follows from the definition of the supremum, under the assumption that splitting the
summations causes no loss of generality, as it merely excludes zero terms. This proves the claim.

Remark 3. For the general observation model of Section V, with mapping p_{U|W}, it can be similarly verified that the slope of the optimal curve is characterized by the analogous supremum of I(Y;U)/I(X;U), with the same convention; the definition holds for the probability vectors and matrices derived from the joint distribution of (X, Y, W).

Remark 4. Let X be Bernoulli(1/2), connected to Y through a binary symmetric channel with crossover probability δ; hence Y is distributed as Bernoulli(1/2), and * denotes binary convolution. By a lemma in prior work, a bound of the form H_b(δ * H_b^{-1}(·)) was claimed for this trade-off, and, taking the limit with an application of l'Hôpital's rule, a corresponding bound on the slope follows. According to that claim, since perfect privacy is not feasible here, we would have s* bounded by an infimum of such ratios (a step that is permissible since the ratios involved are bounded away from zero and infinity, as a direct result of adding a constraint to the infimum in the analysis); note that in the specific case of this example, the inequality would even be replaced by equality, since the problem is one-dimensional — in words, the infimum and supremum of the ratio would have to coincide and equal the value computed from H_b(δ * H_b^{-1}(·)). On the other hand, a simple calculation of the second largest singular value of the matrix Q gives (1 − 2δ)². It is obvious that, for fixed δ, the two cannot agree in general; moreover, as δ → 1/2 the pair moves towards independence and the correlation vanishes, so the left hand side (LHS) becomes unbounded while the right hand side (RHS) tends to zero. At specific values of δ, the LHS is strictly larger than the RHS, which makes the claimed bound invalid. The reason for this phenomenon is that the upper bound does not hold in general, due to a subtle error in employing Mrs. Gerber's lemma: in that argument, the conditional entropy of Y is bounded via the crossover probability captured by an additive binary Bernoulli noise, and, invoking Mrs. Gerber's lemma, H(Y|U) is replaced by a term of the form H_b(δ * H_b^{-1}(H(X|U))) to obtain the bound. However, the statement of Mrs. Gerber's lemma requires the additive noise to be independent of the input pair, which is not necessarily the case here: assume the Markov chain X − Y − U is obtained by passing an independent pair through the channel; it can actually be verified that, for one realization of U, Y becomes a deterministic function of X, and therefore the application of Mrs. Gerber's lemma is not permissible.

Remark 5. It can be readily verified from Theorem 5 and Proposition 10 that the slope s* is lower-bounded by the limiting divergence ratio, using the convention sup ∅ = 0; a term of the ratio form I(Y;U)/I(X;U) can be written in a way comparable to the constraint in the supremization of Theorem 5 — an average constraint versus a per-realization constraint.

B. Slope when perfect privacy is feasible

In the previous subsection the slope was obtained for g₀ = 0; now consider the case g₀ > 0, where we are interested in finding the limit of (g_ε − g₀)/ε as ε → 0⁺. In the sequel, we propose a lower bound. Assume g₀ is obtained by the formulation of Theorem 1, achieved by weights on vectors belonging to the extreme points of the set S as defined there. Define the quantity s₀ as the corresponding supremum of perturbation ratios, which happens to be finite exactly when the optimal extreme points avoid the corner points of the probability simplex. We then have the following proposition, whose proof is provided in
Appendix H. Let the first bound be obtained as a maximum over channels with binary input and output alphabets and conditional pmf as follows; denote the maximizer for a fixed extreme point, and construct the pair (U, Y) as follows. Perturb the conditional at one realization of U by a small amount δ; note that for sufficiently small δ the perturbed vector is still a probability vector, since, in words, an entry of the perturbed vector is zero only where the corresponding entry of the extreme point is also zero. Finally, it can be verified that the marginal probability vector p_Y is also preserved by the construction, and the Markov chain holds; the numerator and denominator can then be expanded (the fact that the perturbation lies in the null space is used here). Writing the Taylor series expansions — the second term of the numerator involves log terms through diag matrices, and, after replacing, the numerator becomes a quadratic form; following similar steps, the expansion of the second term of the denominator is obtained — and letting δ → 0 while ignoring the higher-order terms (denoted by dots), we get a limit equal to the constructed ratio. Since the perturbation was chosen arbitrarily, we get a lower bound as a supremum. On the other hand, construct a second pair (U, Y) as follows: perturb the weights towards the i-th extreme point of the probability simplex; it can be verified that the marginal probability vector is preserved by this construction and the Markov chain holds, where we used the support condition and the cancellation of log terms. Therefore, combining the two constructions,

  lim over ε → 0⁺ of (g_ε − g₀)/ε ≥ max of the two resulting bounds.

Example 6. Assume g₀ is achieved by the formulation of Theorem 1 at particular weights; we simply obtain the two suprema, achieved at the respective directions, which results in the maximum of the two bounds for the slope. Figure 1 gives a pictorial representation of this example: the triangle represents the probability simplex with its corner points; the third right eigenvector, whose span is the null space, determines a line segment passing through p_Y in that direction; and the convex polytope S has two extreme points.

[Fig. 1. Pictorial representation of Example 6.]

VIII. CONCLUSIONS

This paper addresses the problem of perfect privacy, where the goal is to find the maximum utility that can be obtained by a distorted disclosure of the available data, while guaranteeing maximum privacy for the private latent variable. It is shown that this problem boils down to a standard linear program when utility is measured by the mutual information between the disclosed information and the data; similar results are obtained when utility is measured by the reduction in mean-square error or in probability of error. It is shown that when the private variable and the observed data form a jointly Gaussian pair, no utility can be obtained except at the expense of privacy if the release mechanism only has access to the observed data; on the other hand, if the privacy mapping has direct access to both the data and the latent variable, perfect privacy is feasible. Measuring utility and privacy by mutual information, we investigated the slope of the optimal utility-privacy curve at the point where no private data leakage is allowed. Finally, we proposed an alternative characterization of the maximal correlation between two random variables.

APPENDIX A

Let an arbitrary U achieving the constraints be given, and
let P denote the set of probability mass functions on the alphabet of Y, with the mapping taking each conditional pmf to the quantities appearing in the constraints and objective. Since P corresponds to a standard, closed and bounded (hence compact) subset of Euclidean space, and the mapping is continuous, the support lemma applies: for every U as defined, there exists a random variable U′ with |U′| ≤ |Y| + 1 and a collection of conditional pmfs indexed by its realizations, such that the arbitrary terms in question are preserved when U is replaced by U′; hence there is no loss of optimality in considering |U| ≤ |Y| + 1. The set of such collections of conditionals is standard and therefore compact (a finite product of compact sets is still compact); finally, the constraint set is a closed subset, due to the continuity of mutual information and the closedness of the interval [0, ε], and is therefore also compact. Since I(Y;U) is a continuous mapping on it, the supremum is achieved, and is therefore a maximum; this proves the first equality. The second equality follows from the convexity of the objective function, whose maximum occurs at an extreme point.

APPENDIX B

It is already known, by the reasoning of Appendix A (modified by replacing the mapping, removing the last constraint, and restricting each element to S), that the minimum is achieved. For sufficiency, assume the minimum is achieved at certain points; we prove that these points must belong to the extreme points of S. Let an arbitrary point among them be written as a convex combination of points belonging to the extreme points of S; by the concavity of entropy, the entropy of the point is at least the corresponding convex combination of entropies, with equality only if all but one weight is zero. By the definition of an extreme point, a non-extreme point can be written with at least two nonzero weights, which makes the inequality strict; however, this violates the assumption that the points achieve the minimum. Hence, the points of a minimizer must belong to the set of extreme points of S.

APPENDIX C

Let the realizations of Y be given. Obviously, the mean is an affine function of the pmf, and therefore, for any convex combination of pmfs,

  Var under λp + (1 − λ)q ≥ λ Var under p + (1 − λ) Var under q,

with strict inequality unless the means coincide, due to the strict convexity of the square. This proves the strict concavity of the variance as a function of the pmf. (That every point of the convex polytope S can be written as a convex combination of its extreme points is a standard fact used together with this.)

APPENDIX D

By construction, the entries of any probability vector of S̃ redistribute mass within the index sets while keeping the block sums fixed; therefore, it suffices to exhibit the canonical finite set. Let the canonical probability vectors be defined by induction on the blocks; it can be verified that the two constructions are equivalent, and writing each element blockwise, the required weights are obtained, which leads to the claim.

APPENDIX E

Without loss of generality (by appropriate labelling of the elements), assume the index sets B_1, …, B_m are contiguous blocks, and let the common column vector corresponding to B_j be denoted accordingly; write the matrix in block form and define the within-block difference vectors e_{i,j} (supported on B_j, with entries summing to zero).

Proposition 12. Null(P_{X|Y}) contains span{e_{i,j}}, with equality if and only if dim Null of the reduced matrix is 0.

Proof: The proof is provided in Appendix F. Assuming dim Null of the reduced matrix is 0, every canonical element of S̃ is an extreme point of S; the reasoning follows by noting that it remains to show no such point can be written as a convex combination of two different points of S. Pick an arbitrary such point; it can be verified that two points averaging to it must remain
probability vectors, and any negative element rules a candidate out; either way, by Proposition 12, the difference of the two points must lie in span{e_{i,j}}, which in turn means that the point could only be written as a convex combination of two different points by moving mass within the blocks — impossible for a canonical point, whose blocks each carry a single mass point. Therefore, the canonical elements belong to the extreme points. On the other hand, when dim Null of the reduced matrix is at least 1, assume one of them were an extreme point; we show this leads to a contradiction. Firstly, note that among the elements corresponding to a block, an extreme point can have at most one nonzero element; this is justified as follows: if there were two, construct the vector supported on those two coordinates (remaining terms zero), which obviously lies in Null(P_{X|Y}); letting δ be the minimum of the two entries, the point can be written as a convex combination of two distinct feasible vectors, which contradicts the assumption that it is an extreme point. Hence, among the elements corresponding to each block there is exactly one nonzero element, and as a result we can find a canonical point whose positions of nonzero elements match. Since the two must differ in at least one element, assume there exists a block whose elements are all zero except one; it can be verified that a difference written as a linear combination of the vectors e_{i,j} can only produce a vector whose block components cancel appropriately. Since dim Null of the reduced matrix is at least 1, Proposition 12 gives a vector of Null(P_{X|Y}) outside span{e_{i,j}}; in either case a contradiction arises, and therefore we conclude: when dim Null of the reduced matrix is 0, the extreme points are exactly the canonical elements; when dim Null of the reduced matrix is at least 1, by Proposition 12 there must exist a vector ν in Null(P_{X|Y}) outside span{e_{i,j}}. Pick an arbitrary canonical point; in order to make the analysis simple, from the picked vector ν construct a vector ν̃ in Null(P_{X|Y}) whose positions of zero elements match the point (this can be done), and note that, obviously, for sufficiently small δ the point plus and minus δν̃ remain in S; the point can then be written as a convex combination of two different points, which shows it is not an extreme point. A similar argument is applied to show the claim for any other canonical point, the only difference lying in constructing the vector along which the point is perturbed while still remaining in S: it is sufficient to construct a ν̃ whose positions of zero elements match the arbitrary point, in a similar way, using the orthogonal complement vectors instead of the e_{i,j}. Since no such point can then belong to the set of extreme points, we conclude that when dim Null of the reduced matrix is at least 1, none of the canonical elements is an extreme point.

APPENDIX F

The fact that Null(P_{X|Y}) contains span{e_{i,j}} is verified by observing that each e_{i,j} combines equal columns with coefficients summing to zero. If dim Null of the reduced matrix is 0, we must have Null(P_{X|Y}) = span{e_{i,j}}: if this were not true, there would exist a vector in the null space outside the span; writing its i-th elements blockwise, its coefficients with respect to the mutually orthogonal vectors e_{i,j} can be obtained uniquely, and since the e_{i,j} lie in the null space, the residual also lies in the null space. Note that this residual is a nonzero vector (since otherwise the original vector would be in the span); finally, from its structure, observe that the vector obtained by eliminating the duplicated coordinates is a nonzero vector in the null space of the reduced matrix. Hence dim Null of the reduced matrix is at least 1 — a contradiction with dim
Null of the reduced matrix being 0; therefore, the equality Null(P_{X|Y}) = span{e_{i,j}} must hold. Conversely, if the equality holds, then dim Null of the reduced matrix must be 0: if this were not true, there would exist a vector in the null space of the reduced matrix, and, correspondingly, a lifted vector satisfying a relation similar to the previous paragraph; therefore, the lifted vector would lie in Null(P_{X|Y}). However, it can be verified that, due to the structure of the vectors e_{i,j} and the positions of their zero elements, this lifted vector cannot be written as a linear combination of the vectors e_{i,j}, which results in a contradiction with the assumed equality of the null spaces. This proves the condition on dim Null.

APPENDIX G

First, if perfect privacy is feasible, s* cannot be finite, since otherwise we could construct a random variable as follows: for sufficiently small δ, perturb an optimal perfectly private U slightly so that I(X;U) > 0 while the conditionals remain probability vectors; it can be verified that this construction contradicts any finite bound on the ratio. Hence, feasibility leads to the existence of a sequence of distributions q_Y^{(n)} whose ratio diverges. For the other direction, assume perfect privacy is not feasible; then any unbounded ratio would require a sequence q_Y^{(n)} along which D(q_X^{(n)} ∥ p_X) vanishes much faster than D(q_Y^{(n)} ∥ p_Y). We know that the ratio is bounded below in terms of the minimum eigenvalue of the relevant matrix: for an arbitrary perturbation vector, this must be possible only if the vector lies (in the limit) in the null space of P_{X|Y}; but since perfect privacy is not feasible, the null space is trivial, and therefore the minimum eigenvalue of the matrix restricted to admissible directions is bounded away from zero — equivalently, its inverse is bounded by the inverse of the minimum eigenvalue. Hence s* is finite. The proof of the second direction is immediate, since feasibility leads to the existence of a U with I(X;U) = 0 and I(Y;U) > 0, which in turn violates finiteness.

APPENDIX H

Firstly, note that every optimal extreme point satisfies the condition defining s₀, since otherwise, for sufficiently small δ, a perturbation of it would still be a probability vector that also belongs to S; however, this violates the fact that it is an extreme point, since it could then be written as a convex combination of two points. Alternatively said, an extreme point satisfies the null-space condition in a rigid way. Hence, the possibly unbounded behaviour would require the existence of a sequence of distributions q_Y^{(n)} converging to a point with additional zero coordinates. Let Z denote the set of indices corresponding to the zero elements of the extreme point, and let T denote the set of probability vectors whose elements corresponding to the indices in Z are zero; in words, T is the face of the simplex determined by Z. Since T is a closed set, we conclude that if q_Y^{(n)} converges to a point, that point must also satisfy the mentioned null-space condition, which contradicts the extremality unless the limit is the extreme point itself. Hence, it suffices to consider the following problem: a lim inf of the divergence ratio over sequences converging to the extreme point. Similarly to the earlier analysis, this becomes a quadratic-form ratio governed by the minimum eigenvalue of the matrix obtained by eliminating the columns corresponding to the indices in Z, together with the diagonal matrix whose diagonal elements are the corresponding nonzero elements. In what follows, we show that the minimum eigenvalue of this matrix is bounded away from zero: if this were not the case, there must exist a vector annihilated by it; since the matrices involved are full rank, the vector must lie in the null space of the reduced system. Therefore, according to Proposition 12, construct the full-dimensional vector as follows: let the elements corresponding to the indices in Z be zero and the remaining terms equal the elements of the vector; it is obvious that this lifted vector lies in the null space, since its elements corresponding to the eliminated columns are zero, and, by
Proposition 12 applied to the null spaces, this results, for sufficiently small δ, in a feasible perturbation of the extreme point: let the perturbed point be the extreme point plus δ times the lifted vector; it remains in the simplex since, moreover, its elements corresponding to the indices in Z stay zero. Therefore, by the reasoning at the beginning of this appendix, the extreme point could be written as a convex combination — however, since the lifted vector is in the null space, this is a contradiction. Therefore, the minimum eigenvalue of the matrix is bounded away from zero, which in turn means its inverse is bounded away from zero as well, and the claim follows.

REFERENCES

[1] A. Narayanan and V. Shmatikov, "Robust de-anonymization of large sparse datasets," IEEE Symp. on Security and Privacy.
[2] Ding, Zhang, and Zhiguo, "A brief survey on de-anonymization attacks in online social networks," Int. Conf. on Computational Aspects of Social Networks.
[3] Cason, Kumar, Nilsen, Pavel, and Srivastava, "Mobile health: Revolutionizing healthcare through transdisciplinary research," Computer.
[4] "Smart meter privacy for multiple users in the presence of an alternative energy source," IEEE Trans. on Information Forensics and Security.
[5] Motahari, Bresler, and Tse, "Information theory of DNA shotgun sequencing," IEEE Trans. on Information Theory.
[6] C. Dwork, F. McSherry, K. Nissim, and A. Smith, "Calibrating noise to sensitivity in private data analysis," Theory of Cryptography, Springer.
[7] L. Sweeney, "A model for protecting privacy," Intl. Journal of Uncertainty, Fuzziness and Knowledge-Based Systems.
[8] Machanavajjhala, Kifer, Gehrke, and Venkitasubramaniam, "Privacy beyond k-anonymity," ACM Trans. on Knowledge Discovery from Data.
[9] Li and Venkatasubramanian, "Privacy beyond k-anonymity and l-diversity," IEEE Intl. Conf. on Data Engineering.
[10] Calmon and Fawaz, "Privacy against statistical inference," Annual Allerton Conference, Illinois, USA.
[11] Makhdoumi, Salamatian, and Fawaz, "From the information bottleneck to the privacy funnel," IEEE Information Theory Workshop (ITW).
[12] Tishby, Pereira, and Bialek, "The information bottleneck method," arXiv preprint.
[13] Calmon and Makhdoumi, "Fundamental limits of perfect privacy," IEEE Int. Symp. on Inf. Theory (ISIT).
[14] Berger and Yeung, "Multiterminal source encoding with encoder breakdown," IEEE Trans. on Inf. Theory.
[15] Asoodeh, Alajaji, and Linder, "Notes on information-theoretic privacy," Annual Allerton Conference, Illinois, USA.
[16] Hirschfeld, "A connection between correlation and contingency," Proc. Cambridge Philosophical Soc.
[17] Gebelein, "Das statistische Problem der Korrelation als Variations- und Eigenwertproblem und sein Zusammenhang mit der Ausgleichungsrechnung," Zeitschrift für angew. Math. und Mech.
[18] "On measures of dependence," Acta Math.
[19] Bertsimas and Tsitsiklis, Introduction to Linear Optimization, Athena Scientific.
[20] Murty, Linear Programming, John Wiley and Sons.
[21] Levy, Principles of Signal Detection and Parameter Estimation, Springer.
[22] Wang, Basciftci, and Ishwar, "Privacy-utility tradeoffs under constrained data release mechanisms," online.
[23] Witsenhausen, "On sequences of pairs of dependent random variables," SIAM Journal on Applied Mathematics.
[24] Anantharam, Gohari, Kamath, and Nair, "On maximal correlation, hypercontractivity, and the data processing inequality studied by Erkip and Cover," online, Apr.
[25] Golub, "Some modified matrix eigenvalue problems," SIAM Review, Apr.
[26] El Gamal and Kim, Network Information Theory, Cambridge University Press.
Deep Interactive Evolution

Philip Bontrager (New York University, New York, United States), Wending Lin (Beijing University of Posts and Telecommunications, Beijing, China), Julian Togelius (New York University, New York, United States), and Sebastian Risi (IT University of Copenhagen, Copenhagen, Denmark)

Abstract. This paper describes an approach that combines generative adversarial networks (GANs) with interactive evolutionary computation (IEC). GANs can be trained to produce lifelike images, but these are normally sampled randomly from the learned distribution, providing only limited control over the resulting output. On the other hand, interactive evolution has shown promise in creating various artifacts such as images, music and 3D objects, but it traditionally relies on a hand-designed evolvable representation of the target domain. The main insight in this paper is that a GAN trained on a specific target domain can act as a compact and robust mapping, such that the produced phenotypes resemble valid domain artifacts. Once the GAN is trained, the latent vector given as input to the GAN's generator network can be put under evolutionary control, allowing controllable image generation. In this paper we demonstrate the advantage of this novel approach through a user study in which participants were able to evolve images that strongly resemble specific target images.

1 Introduction

This paper addresses the question of how to generate artifacts — including but not limited to images — through indirect interaction with computer software. This means that instead of effecting change directly on the artifact, for example by drawing a picture with virtual brushes in traditional image editing software, the user communicates with the software through, for example, evaluative feedback or suggestions. Interactive creation of this kind has a number of potential uses. Perhaps most obviously, combining the generative capacity of computer software with human input could allow humans to create artifacts, such as images, of a quality and richness they could not achieve using direct methods, because they do not possess the requisite technical skills (such as drawing). These methods could also be used to explore or enforce a particular aesthetic, allowing users to create art of an aesthetic they would not normally adhere to. Use cases for such creation methods include creating avatars for games or virtual environments, creating facial composites to help in criminal cases, and product design and customization.

In interactive evolution, a particular form of indirect creation, the human user functions as the fitness function of an evolutionary algorithm. The system serves a number of artifacts to
human; every generation, the human responds by indicating which artifacts they prefer. Through crossover and mutation, the system then generates another generation of artifacts, and so on, until the user selects a favorite artifact. While the conceptual appeal of interactive evolution is strong, there are several problems hampering the practical usefulness of the technique. One issue is the large number of evaluations that may be necessary to find a desired artifact, leading to the issue known as user fatigue. It might also not be possible to find the desired artifact by optimizing for traits that seem correlated with it; studies suggest that such evolution often fails. Part of the problem might be that the underlying representation and the mapping employed by the interactive evolution system are not conducive to search, or thread the wrong balance between generality and domain-specificity.

One potential solution is to find better artifact representations that map genotype to phenotype within a particular domain, preserving the property that most points in search space correspond to reasonable artifacts. We suggest that this could be done using the generator parts of generative adversarial networks (GANs), trained to produce images or other artifacts from a particular domain. We hypothesize that using a generative representation acquired through adversarial training will give us a searchable space, and we call this approach deep interactive evolution. In the rest of the paper we discuss previous work in interactive evolution and generative adversarial networks, present our specific approach, and show results of letting a set of users use deep interactive evolution with a generator trained on a shoe dataset and one trained on a face dataset.

Background

This section reviews interactive evolutionary computation and generative adversarial networks, both of which are fundamental to the combined approach presented in this paper.

Interactive Evolutionary Computation (IEC)

In interactive evolutionary computation (IEC), the traditional objective fitness function is replaced by a human selecting the candidates for the next generation. IEC is traditionally used for optimization tasks with subjective criteria or in domains where the objective is undefined; the field was surveyed by Takagi. IEC has been applied in several domains, including image generation, games, music, industrial design, and data mining. Evolutionary art and music, in particular, often hinge on human evaluation of content due to the subjective nature of aesthetics and beauty. These IEC approaches follow
the paradigm of Dawkins' original Blind Watchmaker: artifacts are evolved by users through an interactive interface that allows them to iteratively select among a set of candidate designs. However, a recurring problem with the IEC approach is user fatigue, and many attempts have been made to minimize it: because evolution takes time, humans tend to suffer fatigue after evaluating relatively few generations. One approach to limiting user fatigue is collaborative interactive evolution, in which users can continue evolution from promising starting points generated by other users. For example, the Picbreeder system allows users to collaboratively evolve images online, building on intermediate results published by others. Another approach is to seed the initial population of IEC with meaningful individuals. In this paper we introduce a novel approach that suffers less from user fatigue by restricting the space of possible artifacts to a certain class of images (faces, shoes, etc.). The hypothesis is that this way the user needs to spend less time exploring the space of possible solutions.

Generative Adversarial Networks (GANs)

Algorithm: training a generative adversarial network
    initialize G and D
    for a number of training iterations do
        for each of several discriminator updates do
            sample a minibatch x from the real data
            sample a minibatch z of latent variables
            L <- loss(D(x), real) + loss(D(G(z)), generated)
            update D on L
        sample a minibatch z of latent variables
        L <- loss(D(G(z)), real)
        update G on L

The space of potential images in our approach is represented by a generative adversarial network (GAN). GANs are a new class of deep learning algorithms that have extended the reach of deep learning methods to unsupervised domains, as they do not require labeled training data. The basic idea behind GANs is to have two networks compete with each other: one network is generative and the other discriminative. The generative network tries to create synthetic content (e.g. images of faces) that is indistinguishable, to the discriminative network, from real data (e.g. images of real faces). The generator and the discriminator are trained in turns, with the generator becoming better at forging content and the discriminator becoming better at telling real and synthetic content apart. In more detail, the generator network, conditioned on latent variables, generates images that should convince a separate discriminator network that the generated images are as authentic as the ones from the training set. The algorithm above outlines the basic steps of training a generator using this adversarial technique; typically, the discriminator is updated several times for every generator update. A lot of research on GANs has
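The alternating discriminator/generator updates just outlined can be made concrete with a deliberately tiny, self-contained sketch. This is my own toy construction for illustration (1-D data, an affine generator, a logistic discriminator, hand-derived gradients), not the network or training code used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def real_batch(n):
    # Toy "real" data distribution: N(4, 0.5).
    return rng.normal(4.0, 0.5, n)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# Generator G(z) = a*z + b; discriminator D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0           # generator parameters
w, c = 0.1, 0.0           # discriminator parameters
lr, batch = 0.02, 64

for it in range(3000):
    # --- discriminator update: push D(real) -> 1, D(fake) -> 0 ---
    x_r = real_batch(batch)
    z = rng.normal(0.0, 1.0, batch)
    x_g = a * z + b
    s_r = sigmoid(w * x_r + c)
    s_g = sigmoid(w * x_g + c)
    grad_w = np.mean(-(1 - s_r) * x_r) + np.mean(s_g * x_g)
    grad_c = np.mean(-(1 - s_r)) + np.mean(s_g)
    w -= lr * grad_w
    c -= lr * grad_c
    # --- generator update (non-saturating loss): push D(fake) -> 1 ---
    z = rng.normal(0.0, 1.0, batch)
    s = sigmoid(w * (a * z + b) + c)
    a -= lr * np.mean(-(1 - s) * w * z)
    b -= lr * np.mean(-(1 - s) * w)

fake_mean = np.mean(a * rng.normal(0.0, 1.0, 10000) + b)
print(round(fake_mean, 2))  # should drift toward the real mean of 4
```

The alternation is the whole point: each player's loss is computed against the other's current parameters, and only one side is updated per half-step.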
focused on loss functions that allow for more stable training. The latent variables are random variables, typically sampled from a normal distribution. The generator is trained with a fixed number of latent variables, and the stochasticity of their sampling forces the generator to generalize.

Approach: Deep Interactive Evolution

[Figure: the DeepIE pipeline. Normally distributed latent variables are fed to a trained generator network (transposed convolutions with batch normalization and ReLU, tanh output) to produce images; the user selects images and sets a variation preference; the selected chromosomes are kept, standard uniform crossover recombines the selections, mutation is applied with some probability and a deviation set by the user, and fresh random samples are added to the latent variables for the next generation.]

The figure illustrates the four steps of DeepIE. First, latent variables are passed through a trained image generator to produce images. Next, the user selects images based on their interests. Third, new sets of latent variables are chosen based on the selected images. Last, the latent variables are mutated with an intensity set by the user. The figure thus depicts the whole deep interactive evolution (DeepIE) approach.

The main differentiating factor between DeepIE and other interactive evolution techniques is the employed generator: a content generator is trained on a dataset to constrain and enhance what is being evolved. In the implementation in this paper we trained a non-specialized network on images in general, but a number of other goals could be optimized during the training process. For example, for generating art, a network that specializes in creative output could be used; recently, Elgammal et al. proposed a variation on GAN training that prioritizes creative output. In other work, Kim et al. train a network to map latent variables to discrete outputs such as text. A generator works by transforming a number of latent variables into content; in this work the generator takes latent variables as input and outputs content in the form of images from various domains. Typically, the latent variables are chosen at random to produce random but pleasing images. In DeepIE, we initially sample the latent variables randomly and then evolve them to get the generator to produce the user's optimal image. The randomly sampled latent variables are used to generate images, images are selected by the user, and the latent variables that produced those images are evolved to create a set of new variables. There are many ways this could be approached: the user could simply have the ability to select images, or the complexity could be increased and the user could be allowed to provide more information, e.g. the ability to rank
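The four-step loop (sample latents, generate, let the user select, recombine and mutate) can be sketched end-to-end. In the minimal sketch below, which is my own illustration rather than the paper's implementation, the trained generator is stood in for by a fixed random linear map, and the human is simulated by a selection rule that picks the candidates closest to a target image.

```python
import numpy as np

rng = np.random.default_rng(1)

LATENT, POP = 8, 10
G = rng.normal(size=(16, LATENT))          # stand-in for a trained generator

def generate(zs):
    return zs @ G.T                        # "images" of 16 pixels each

def select(images, target, k=2):
    # Simulated user: keep the k candidates closest to the target.
    d = np.linalg.norm(images - target, axis=1)
    return np.argsort(d)[:k]

def crossover(p, q):
    mask = rng.integers(0, 2, p.shape)     # uniform crossover mask
    return np.where(mask == 1, p, q)

def mutate(z, std):
    return z + rng.normal(0.0, std, z.shape)

target = generate(rng.normal(size=(1, LATENT)))[0]
Z = rng.normal(size=(POP, LATENT))         # step 1: random latent population

best = []
for gen in range(30):
    imgs = generate(Z)                     # step 1: generate images
    idx = select(imgs, target)             # step 2: user selection
    best.append(np.linalg.norm(imgs[idx[0]] - target))
    parents = Z[idx]                       # keep the selected chromosomes
    children = np.array([mutate(crossover(parents[0], parents[1]), 0.2)
                         for _ in range(POP - len(idx) - 2)])
    foreign = rng.normal(size=(2, LATENT)) # fresh chromosomes for diversity
    Z = np.vstack([parents, children, foreign])  # steps 3 and 4

print(round(best[0], 2), round(best[-1], 2))
```

Because the selected parents survive unchanged, the best distance to the target never increases; crossover and mutation around them do the exploring.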
the images or to score them. Depending on the interface, the chosen evolutionary strategy can take advantage of the provided information. It is important to use crossover, as it gives the user the ability to guide the process by picking images to later be combined; the user also determines the amount of mutation to apply. While this can be implemented in a number of ways, the setup is general: to create images for a specific domain, a person only needs a sufficient amount of data to train the generator on. The generator is automatically trained on the domain, and the user can then evolve inputs to it in exactly the way it is always done. To use DeepIE for content outside of images, the generator architecture might need to be modified, but the algorithm still remains the same.

Experimental Setup

In this work we implement a vanilla version of DeepIE, which allows us to observe the effectiveness of the technique in a general sense. We use a fairly simple setup for the evolutionary process, and the interface is also kept minimalistic to try to keep the focus there; a different interface design could affect the usability of the system. More involved interfaces could allow more information to be shared with the evolutionary algorithm, allowing it to tune the process to different people and domains, but providing more information may also increase user fatigue; testing this is beyond the scope of this paper.

Generator

Recent advances in neural network image generators are what enable this new take on interactive evolutionary computation. A number of techniques exist for designing and training a data-driven generator, including GANs, variational autoencoders, and autoregressive techniques, as well as hybrids of these techniques. In our setup we decided to implement a GAN, as GANs have seen a lot of recent success in generating realistic-looking, relatively high-resolution images. At the time of publishing this paper, one of the state-of-the-art techniques for training a GAN on images is the Wasserstein GAN with gradient penalty, a process that uses the Wasserstein distance function as the loss for the discriminator, with gradient penalties to keep the gradients in an acceptable range. This loss function has been shown to produce even better results when the discriminator is also made to classify the data; in this work we trained the network with the default setup, without classification, to keep the setup as general as possible. This training method is typically combined with a deep convolutional network architecture (DCGAN); our implementation of the network is shown in the figure.

Algorithm: deep interactive evolution
    defaults: population size n, number of foreign chromosomes n_foreign
    G <- trainGAN(data)
    Z <- matrix of n random normal latent vectors
    repeat
        images <- G(Z)
        show images in the interface; wait for the user's selection
        selection <- Z[indices of the selected images]
        cross <- matrix where cross_i <- mutate(uniform(selection)),
                 for i up to n - |selection| - n_foreign
        new <- matrix of n_foreign fresh random latent vectors
        for each i: selection_i <- apply mutate(selection_i)
        Z <- selection, cross, new
    until the user stops

    function uniform(population):
        a <- random individual from population
        b <- random individual from population
        mask <- vector of length |a| with mask_i ~ Bernoulli(0.5)
        return mask * a + (1 - mask) * b

    function mutate(individual, std):
        if Bernoulli(p_mutation):
            noise <- vector of length |individual| with noise_i ~ N(0, std)
            return individual + noise
        return individual

Shown at the left end of the figure, the generator network is made of repeating modules that consist of a transposed convolution, a batch normalization layer, and a ReLU nonlinearity function. In the discriminator architecture, each module consists of a convolutional layer and a LeakyReLU nonlinearity, with strided convolutions used for downsampling. To get good consistency from the batch normalization layers between training and evaluation, it is better to train with large batch sizes. To minimize the number of variables to evolve, the network uses a small number of latent variables, which means the input is a random normal vector of that size. Berthelot et al., in tests for their work on boundary equilibrium GANs, found that changing the number of latent variables had no noticeable impact on the diversity of the network's output.

Evolution

The evolutionary process we implemented consists of mutation and crossover as the two primary operators for optimization. Mutation allows local variation on the user's selections, while crossover is what really allows the user to guide the process. If the user selects images that do not match what they are looking for but contain a feature they want in the final design, crossover allows the possibility of that feature being added to the current best design. To keep the number of generations low, it is important to have frequent mutations to quickly sample the design space, so mutation is applied with a set probability. To mutate, we generate a random normal vector of the size of the latent variable vector and add it to the latent variables. The standard deviation is set by the user: at the low end it is set to zero, and at the high end the results of mutation are on the order of the magnitude of the latent variables themselves.

We use uniform crossover to combine the selected parents: a new chromosome is created by randomly sampling each variable from one of the two parents, child_i = randomchoice(parent_a_i, parent_b_i). Since the variables are not independent in their effect on the generator, this form of crossover can lead to unexpected results given that independence; in testing this does not show, though, and the features of two images do combine in practice. Since the
user selects images want keep system way evaluating relative quality ranking selected images selected images kept population rebuilt random crossover two new foreign chromosomes new chromosomes added continually provide user new features select population size keep overwhelming aware using mutation crossover techniques result latent variables match prior distribution variables network trained likely cause increased image artifacts longer images evolved options mutation crossover could involve interpolating vectors along hypersphere maintain expected distribution inputs data decided produce images lot recent advances using networks generate images looked datasets represent domains interesting work initial setup use three datasets celeba face dataset shoes chairs images separate network trained domain add new domain one apply new dataset theory setup translate data alterations interface figure shows screenshot implementation deepie dropdown menu top left allows user select domain design presented entire generation user simply selects images next generation bottom slider allows user set standard deviation noise added mutation user know right slider increases variability left decreases fig web interface deepie framework user testing get feedback implementation deepie volunteers try interface gave two main tasks reproduce image shoe created system reproduce image face choosing intention see much control user evolution process major use case tool help someone communicate image exists head purpose testing required target image chosen advance volunteers asked familiarize system figuring evolve chair comfortable asked select shoe image list generated images asked use minimum generations try recreate image volunteers given freedom second task could choose public domain face image recreate asked use minimum generations completed tasks asked rate ability two tasks scale also asked describe strategies creating image well experience finished answering questions showed randomized 
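One way to act on the caveat above, that crossover and mutation can push latent vectors away from the normal prior the generator was trained on, is to recombine by interpolating along the hypersphere. Below is a minimal sketch of such spherical interpolation (slerp); it is my own illustration, not part of the paper's implementation.

```python
import numpy as np

def slerp(t, a, b):
    """Spherical interpolation between vectors a and b for t in [0, 1]."""
    a_n = a / np.linalg.norm(a)
    b_n = b / np.linalg.norm(b)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if np.isclose(omega, 0.0):          # nearly parallel: fall back to lerp
        return (1.0 - t) * a + t * b
    return (np.sin((1.0 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

rng = np.random.default_rng(2)
a = rng.normal(size=64)
b = rng.normal(size=64)
mid = slerp(0.5, a, b)
```

For high-dimensional Gaussian latents, whose mass concentrates on a shell around a fixed norm, slerp keeps interpolants near that typical norm, whereas the straight line between two samples cuts through the low-density interior.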
mix shoes selected faces selected asked pick best domain way could get crude measure whether images improving time results target fig demonstrates authors attempt evolve image looks like nicolas cage column contains selected images given generation example three images selected step possible select less images last column shows best image generations output trained network visible images provided due difficulties hosting pytorch enabled website currently publicly hosting site figure shows process evolving image nicholas cage selection column represents selected images marked generation example given selection made based particular feature possesses face shape hair pose etc time two separate tracks evolved merge acceptable way merged original tracks let seen generation figure top middle image generation created middle image generation user testing test approach depth volunteers tried system previously described outcomes efforts seen figure target image user appears left respective domain user looked two different outcomes final evolved image user favorite evolved image ideally would image due stochastic nature optimization process best image sometimes lost times image really improve last generations making hard discern best generation shoes particular observed image could recreated many users within generations since asked users generations resulted lot images essentially results figure suggest wide diversity well people able recreate target shoes tended pretty close though people less picky distinction two similar looking shoes face recreation tasks people usually able focus characteristics quantitative analysis mentioned earlier measured far evolutionary process best image occurred taking value total number iterations user searched provides ratio evolutionary process best image found figure histogram distribution best image found shoes faces shoes represented orange faces blue clear right away best faces appeared much closer final iteration best shoes looking 
significant difference ratios faces shoes confirms observed volunteers said found shoe looking first generations continued complete required generations reason best shoe image scattered different generations seen figure often little difference best image final image shoes implies search converged early shoes mostly looked user best pick end mostly random guess similar looking shoes data faces hand tells different story individual ratios clustered close good sign image face improving throughout process best image selected later time randomized selected images user still chose one later designs average number generations users stuck requested generations able measure long faces would continue improve important note progress based user reporting best image means objective measure measure improvement measures user able better express want express important metric since communication tool metric confused indicator image quality experience based numbers users felt got much closer reproducing shoes face could predicted figure average users reported ability reproduce faces ability reproduce shoes standard deviation users wrote experience split amusement frustration people embrace chaotic nature system others frustrated enough control part frustration could caused nature task assigned providing image gave rigid goal different intended use case course current implementation limitations quality images produce acceptable user faces target final shoes best target final best fig evolved images user study target images intended target row final images last selected image best images images user selected best compilation selections entire evolutionary process best image iteration histogram subjects best image iteration total iterations fig histogram depicts point evolutionary process best image found participants data shoe domain shown orange results face domain shown blue strategies reviewing user feedback strategies used able put together three main strategies selection 
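The ratio statistic used here, the iteration at which a user's best image appeared divided by that user's total number of iterations, is straightforward to compute. The sketch below uses made-up per-user numbers purely for illustration; they are not the study's measurements.

```python
import numpy as np

def best_image_ratios(best_iters, total_iters):
    """Ratio of the iteration where the best image was found to total iterations."""
    return np.asarray(best_iters, dtype=float) / np.asarray(total_iters, dtype=float)

# Hypothetical per-user data (NOT the paper's measurements):
# shoes converge early, faces keep improving until the end.
shoe_ratios = best_image_ratios([3, 9, 5, 10, 4], [10, 12, 10, 10, 11])
face_ratios = best_image_ratios([9, 10, 11, 12, 10], [10, 11, 12, 13, 11])

print(round(shoe_ratios.mean(), 2), round(face_ratios.mean(), 2))
```

A mean ratio near 1 indicates the best image tends to appear in the final generations (continued improvement), while scattered, smaller ratios indicate early convergence.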
strategies people deployed try reach target image three primary strategies follows collect distinct traits select best likeness hierarchical trait selection collect distinct traits far popular strategy half users subscribing strategy strategy involves breaking target distinct parts identifying parts population provided face include hairstyle gender face profile etc user strategy selects images best representation traits traits start merge single images individual traits ignored followed desired traits combined focus moves finding best variation image select best likeness simpler strategy user looks image images closest final result user used strategy strategy closest using loss function evaluation technique main difference users reported focusing overall look image instead details loss functions bad quarter subjects reported using strategy half implemented selecting one image per generation relying mutation slider final strategy hierarchical trait selection variation first strategy instead maintaining separate threads different traits user focuses first desired trait see cultivate diversity find image another desired trait point two desired traits ready repeat process adding third least common strategy two people using user one users used strategy expect distinct strategies emerge experiment compares effectiveness different strategies also measure practice affects outcome strategies could also result trying recreate faces seem like fundamental methods designing complex systems series selections deploying strategies users found specific things wanted interface two popular requests form elitism ability random samples elitism would allow people keep best images feel comfortable explore risky strategies generating new batch random samples would allow users quickly search new features without affecting features already found emergent strategies requests show lots ways optimize interface explored also demonstrates even one simplest interfaces certain depth designing 
image selection discussion work allows interactive evolutionary computation brought many new domains existing techniques either general difficult corral towards specific problem domain require hand build model domain problem domain building domain model using data many new domains iec applied obviously still many things modeled well machine learning techniques lot active research field applications keep growing aside ability learns domains directly data deepiec allows user control direction evolutionary process despite complex mapping latent variables output gan trained smooth output means two similar latent codes likely produce similar outputs also midpoint two latent codes far apart likely map images share qualities original images way variation combinations phenotype carry genotype allows evolutionary process transparent turn makes possible direct process testing implementation deepie focused ability user guide population toward predefined target ability great communicating idea design otherwise might ability create allows someone without ability draw better express medium pictures look work tool ability used creative exploration deepie shares key strength iec ability help user explore creative space user simply select interesting suggestion see takes fact volunteers system tempted times instead pursuing target coupling ability help user explore space deepie ability focus domain get tool help designer come new ideas area think deepie could impact positive exploration downside applications communication example someone try use help victims crime create picture perpetrator want system suggestions influencing user would case options come screen would likely bias user memory criminal even generator could produce every face perfect fidelity extensive tests would done make sure process alter person memory clear interface affects user discovers example allowing users save best images conservative exploration giving enough diversity phenotypes feel powerless guide 
process direction interested means optimal interface might make big difference ensuring user always feels control engaged process going forward many directions different generative models evolution ideas experimented new interesting types domains explored domains finally way user interacts discussed many directions conclusion paper presented novel approach interactive evolution users able interactively evolve latent code gan certain class images shoes faces chairs approach tries strike balance system like picbreeder allows evolution arbitrary images hard guide towards specific target systems like gans images normally sampled randomly without user input presented initial results show users able interactively recreate target images deepie approach shown difficult traditional iec approaches future interesting extend approach domains video games benefit controllable content generation references aubry maturana efros russell sivic seeing chairs exemplar partbased alignment using large dataset cad models cvpr berthelot schumm metz began boundary equilibrium generative adversarial networks arxiv preprint bongard hornby combining search user modeling evolutionary robotics proceeding annual conference genetic evolutionary computation conference gecco dawkins blind watchmaker evidence evolution reveals universe without design norton company elgammal liu elhoseiny mazzone creative adversarial networks generating art learning styles deviating style norms arxiv preprint goodfellow nips tutorial generative adversarial networks arxiv preprint goodfellow mirza ozair courville bengio generative adversarial nets advances neural information processing systems gulrajani ahmed arjovsky dumoulin courville improved training wasserstein gans arxiv preprint hastings guha stanley automatic content generation galactic arms race video game ieee transactions computational intelligence games dec hoover szerlip stanley interactively evolving harmonies functional scaffolding proceedings annual 
conference genetic evolutionary computation acm kamalian zhang takagi agogino reduced human fatigue interactive evolutionary computation micromachine design machine learning cybernetics proceedings international conference vol vol kim zhang rush lecun adversarially regularized autoencoders generating discrete structures arxiv preprint liu luo wang tang deep learning face attributes wild proceedings international conference computer vision iccv december pallez collard baccino dumercy evolutionary algorithm minimize user fatigue iec applied interactive problem proceedings gecco conference companion genetic evolutionary computation radford metz chintala unsupervised representation learning deep convolutional generative adversarial networks arxiv preprint risi lehman ambrosio hall stanley petalz procedural content generation casual gamer ieee transactions computational intelligence games sept secretan beato ambrosio rodriguez campbell stanley picbreeder evolving pictures collaboratively online proceedings sigchi conference human factors computing systems acm sims artificial evolution computer graphics vol acm takagi interactive evolutionary computation fusion capabilities optimization human evaluation proceedings ieee todd latham evolutionary art computers academic press woolley stanley deleterious effects priori objectives evolution representation proceedings annual conference genetic evolutionary computation acm grauman visual comparisons local learning computer vision pattern recognition cvpr june zhang taarnby liapis risi drawcompileevolve sparking interactive evolutionary art human creations international conference evolutionary biologically inspired music art springer
The Congruence Subgroup Problem for Low Rank Free and Free Metabelian Groups

David El-Chai Ben-Ezra, Alexander Lubotzky

Dedicated to Efim Zelmanov, friend and leader

Abstract. The congruence subgroup problem for a finitely generated group Γ asks whether the map Aut(Γ)^ → Aut(Γ̂) is injective, or more generally, what is its kernel C(Γ)? Here X̂ denotes the profinite completion of X. In this paper we first give two new short proofs of two known results (for Γ = F₂ and Φ₂) and a new result for Γ = Φ₃:

(1) C(F₂) = {e} when F₂ is the free group on two generators.
(2) C(Φ₂) = F̂_ω is the free profinite group on a countable number of generators when Φ₂ is the free metabelian group on two generators.
(3) C(Φ₃) contains F̂_ω when Φ₃ is the free metabelian group on three generators.

These results should be contrasted with an upcoming result of the first author showing that C(Φₙ) is abelian for n ≥ 4.

Keywords and phrases: congruence subgroup problem, profinite groups, automorphism groups, free groups, free metabelian groups.

Introduction

The classical congruence subgroup problem (CSP) asks, for, say, G = SLₙ(Z) or G = GLₙ(Z), whether every finite index subgroup of G contains a principal congruence subgroup, i.e. a subgroup of the form G(m) = ker(G → GLₙ(Z/mZ)) for some 0 ≠ m ∈ Z. Equivalently, it asks whether the natural map Ĝ → GLₙ(Ẑ) is injective, where Ĝ and Ẑ denote the profinite completions of the group G and the ring Z, respectively. More generally, the CSP asks what is the kernel of this map. It is a classical 19th century result that for n = 2 the answer is negative; moreover, the kernel in this classical case is F̂_ω, the free profinite group on a countable number of generators. On the other hand, for n ≥ 3 the map is injective and the kernel is therefore trivial.

The CSP can be generalized as follows. Let Γ be a group and let M ≤ Γ be a finite index characteristic subgroup. Denote Γ(M) = ker(Aut(Γ) → Aut(Γ/M)). Such a finite index normal subgroup of Aut(Γ) is called a principal congruence subgroup, and a finite index subgroup of Aut(Γ) which contains Γ(M) for some M is called a congruence subgroup. The CSP for Γ asks whether every finite index subgroup of Aut(Γ) is a congruence subgroup. When Γ is finitely generated, the CSP is equivalent to the question [BER]: is the map Aut(Γ)^ → Aut(Γ̂) injective? More generally, it asks what is the kernel C(Γ) of this map. Since Aut(Zⁿ) = GLₙ(Z), the classical congruence subgroup results mentioned above can therefore be reformulated as results on C(Γ) for Γ = Zⁿ, the free abelian group on n generators.

Few such results are known for other groups. A surprising result was proved by Asada by methods of algebraic geometry:

Theorem. The free group on two generators has the congruence subgroup property; namely, the map Aut(F₂)^ → Aut(F̂₂) is injective.

A purely group theoretic proof of this theorem was given in [BER]. Our first goal in this paper is to give an easier and more direct proof of this theorem. We also give a better quantitative estimate: we give an explicitly constructed congruence subgroup of Aut(F₂) contained in a given finite index subgroup of Aut(F₂), and our estimates of its index as a function of the index of the given subgroup are substantially better than the ones of [BER] (see the theorems below). We then turn to the free
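The generalized definitions discussed above can be written out explicitly; the notation below is standard, though the particular symbols are my choice.

```latex
Let $\Gamma$ be a group. For a finite index characteristic subgroup
$M \leq \Gamma$, the principal congruence subgroup associated to $M$ is
\[
  \Gamma(M) \;=\; \ker\bigl(\operatorname{Aut}(\Gamma) \to
     \operatorname{Aut}(\Gamma/M)\bigr),
\]
and a finite index subgroup of $\operatorname{Aut}(\Gamma)$ containing some
$\Gamma(M)$ is called a congruence subgroup. For finitely generated $\Gamma$,
the CSP is equivalent to asking for the kernel
\[
  C(\Gamma) \;=\; \ker\bigl(\widehat{\operatorname{Aut}(\Gamma)} \to
     \operatorname{Aut}(\hat{\Gamma})\bigr),
\]
where $\hat{X}$ denotes the profinite completion. The classical results then
read: for $\Gamma = \mathbb{Z}^n$ (so $\operatorname{Aut}(\Gamma) =
\operatorname{GL}_n(\mathbb{Z})$), $C(\mathbb{Z}^2) = \hat{F}_\omega$ while
$C(\mathbb{Z}^n) = \{e\}$ for $n \geq 3$.
```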
metabelian group two generators initial treatment similar quite surprisingly named author showed negative answer also give shorter proof result deducing theorem ahead prove theorem contains copy particular congruence subgroup property strongly fails also surprising especially compared upcoming paper author showing abelian dichotomy abelian case metabelian case main ingredient proof theorem showing aut large index subgroup mapped onto free group use method developed grunewald second author produce arithmetic quotients aut particular shown aut large starting point prove theorem observation proof shows also aut large proof theorem largeness aut also playing crucial role word warning needed largeness aut deduce negative answer csp example aut large answer csp time mentioned aut large know whether congruence subgroup property prove theorem use largeness aut combined fact every simple group involved aut already involved commutative ring show paper organized follows give short proof theorem theorem section devoted proof theorem close remarks open problems free nilpotent solvable groups acknowledgements author wishes deepest thanks rudin foundation trustees generous support period research paper part phd thesis hebrew university jerusalem second author wishes acknowledge support isf nsf erc max rossler walter haefner foundation eth foundation csp start let quote general propositions bring throughout paper proposition ber lemma let exact sequence groups assume finitely generated center profinite completion trivial sequence profinite completions also exact proposition ber corollaries let free group set center profinite completion trivial centralizer closure cyclic group generated start following lemma whose easy proof left reader lemma let aut congruence subgroup ker aut ker aut particular map aut injective map aut injective denote free group well known theorem nielsen mks kernel natural surjective map aut aut aut inn inner group also well known automorphism group free two 
generators dex denote contains preimage map aut index aut contains principal congruence subgroup ker aut aut aut injective lemma enough prove aut description deduce exact sequence inn free sequence splits map thus inn propositions exact sequence inn yields exact sequence inn aut gives inn aut thus need show following map injective inn aut prove three parts part map inn aut injective obvious inn mapped cally inn second part show map aut injective last part show intersection images inn aut trivial inn remains prove next two lemmas lemma lemma lemma map aut injective proving lemma recall classical result schreier theorem mks let free group set subgroup index let right schreier transversal system representatives right cosets containing identity initial segment element also freen group elements set free generating set every denote unique element satisfying proof lemma ker characteristic subgroup index part theorem isomorphic ker therefore natural also homomorphism aut aut aut induces composition aut aut thus enough show composition injective map aut let right schreier transversal applying second part theorem get following set free generators yxy xyxy hence automorphisms act following way yxy yxy xyxy xyxy yxy yxy xyxy yes let map following way easy see ker normal subgroup generated normal subgroup invariant action automorphisms since induces homomorphism therefore aut homomorphism aut thus enough show last map injective map act following way namely act via inner automorphisms hence mapped isomorphically inn yielding map aut injective aut injective well required lemma inn aut map defined proof first observe thus second part proposition inn zinn inn hinn hinn ker proof lemma image hinn inn trivial thus image inn inn trivial isomorphic inn saw mapped isomorphically inn inn trivial proof theorem ber authors give explicit construction congruence subgroup contained given index subgroup aut prove following theorem theorem ber theorem let finite index normal subgroup aut inn 
let pick two distinct odd primes set exists explicitly constructed normal subgroup index dividing general normal subgroup define acts trivially end section much simpler explicit construction congruence subgroup better bound index let recall discrete version proposition ber proposition ber propositions let free group set let finite quotient pick prime dividing order set image every normal abelian subgroup natural projection trivial image centralizer natural projection theorem let finite index normal subgroup aut inn inn let every prime exists explicitly constructed normal group index dividing proof recall map proof lemma let system representatives right cosets denote also subgroup index normal subgroup index dividing index dividing index normal subgroup index dividing thus schreier formula index divides index dividing remains show let ker aut inn therefore write inn assumption acts trivially thus acts inn deduce thus proposition hence acts group inn therefore acts inn acts ker inn ker acts inn also acts inn moreover computations made proof lemma acts inn thus exists inn inn acts trivially part proposition abelian normal subgroup mapped trivially also thus inn required csp section prove theorem show congruence kernel free metabelian group two generators free group countable number generators start let observe group one also ask parallel congruence subgroup problem one ask whether every index subgroup contains principal congruence subgroup form ker index characteristic subgroup generb ated equivalent question whether congruence map injective moreover easy see lemma parallel version namely congruence subgroup ker ker start next proposition slightly general lemma ber nevertheless proven arguments proposition ber lemma let finitely generated residually finite group trivial center considering congruence map ker aut aut ker well known residually group theorem also proven trivial proposition proposition ker aut aut ker addition old result bachmuth kernel surjective map ker 
aut aut aut inn free group congruence subgroup contains ker ker appropriate version lemma proposition obtain ker ker ker ker free group also state aut aut ker automorphisms previous section preimages map aut respectively need show lemma ker proof free group congruence subgroup group aut ker ker ker aut aut thus denote ker aut using equation consider action ker conjugation abelian actually obtain action generated element since presentation moreover observed previously therefore acts trivially also let make following observation two automorphisms group act trivially abelian commute indeed thus conclusion previous discussion abelian thus also abelian finally known isomorphic moreover proposition corollary every normal closed subgroup abelian also isomorphic thus well required remark proof theorem much shorter given latter gives information show abelian one deduce infact see fore csp section prove theorem claims contains copy let start showing aut large proposition group aut large finite index subgroup mapped onto free group proof proof follow method developed produce arithmetic quotients aut denote free group generators cyclic group order map kernel ker using right transversal deduce theorem freely generated thus generated free abelian group images action conjugation induces sending xyx xyx xyx action matrix two eigenvalues eigenspaces recall abelian metabelian thus surjective homomorphism denote identify acts matrix denote also aut clear index aut natural map aut induces map aut claim commutes first observe exists plays role image map let remember action induced action conjugation hence therefore commutes follows eigenspaces invriant action inparticular deduce invariant action thus obtain homomorphism aut consider following automorphisms aut play role images act following way therefore map thus image contains free index finally denote preimage index subgroup aut mapped onto free group required let continue following definition say group involved group isomorphic 
quotient group subgroup see group involved group involved quotient showed aut index subgroup mapped onto thus free map splits thus hence map aut contains copy thus group involved aut hand claim proposition let finite simple group involved aut prime involved special linear group field order proof let free group natural injective homomorphism matrix group map free basis right called magnus embedding usually properties studied fox free calculus need explicitly one prove induction word length magnus embedding identity shows polynomials determine word uniquely thus injective map homomorphism end check using identity map mean acts every entry separately denote fnm natural maps znm induce map znm znm shown proposition ker hence moreover proven proposition following equality lim observe every ker characteristic every ker characteristic thus aut aut lim lim aut observe identity also valid entries elements thus every element determined left lower coordinate therefore every automorphism lifted endomorphism injective map homomorphism aut znm identity action znm natural projection znm denote ker aut aut observe acts trivially znm map gives homomorphism also injective mentioned gln znm simple group involved aut must involved aut thus must involved either aut must involved commutative ring every commutative ring artinian decomposed local rings thus must involved local commutative ring denote unique maximal ideal local noetherian ring well known note commutative ring ker ker denotes identity element ker indeed mod mod mod observation deduce every kernel map abelian must involved prime finally using fact abelian obtain involved required corollary exists finite simple group involved aut proof proposition enough show simple group involved prime theorem jordan exists function every char subgroup gln contains normal abelian subgroup index corollary theorem schur proved holds function subgroup gln char provided chapter clearly holds group involved group claim large enough alt involved
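The Magnus embedding referred to in the passage above can be written explicitly; the matrix formulas are garbled in the text, so the following is a reconstruction under the standard convention. For a free group $F_n$ with basis $x_1,\dots,x_n$, let $R=\mathbb{Z}[t_1^{\pm1},\dots,t_n^{\pm1}]$ and let $M$ be the free $R$-module on symbols $s_1,\dots,s_n$; the Magnus embedding is the homomorphism

```latex
\varphi : F_n \longrightarrow
\begin{pmatrix} R^{\times} & M \\ 0 & 1 \end{pmatrix},
\qquad
\varphi(x_i) \;=\; \begin{pmatrix} t_i & s_i \\ 0 & 1 \end{pmatrix},
```

whose kernel is the second derived subgroup $F_n''$, so that $\varphi$ embeds the free metabelian group $F_n/F_n''$ into this matrix group; the entries of $\varphi(w)$ are governed by the Fox free derivatives of the word $w$, matching the "fox free calculus" remark above.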
indeed two primes larger large subgroup alt since alt contains order every subgroup index equal also alt involved least one contradiction corollary congruence kernel contains copy proof immediate conclusion corollary aut contain copy thus intersection copy aut trivial thus contains normal closed subgroup theorem contains copy required remarks open problems end paper several remarks open problems denote free solvable group derived length generators combining results bfm theorem klm theorem ker aut aut inn every arguments ker mapped onto obtain sequence natural question whether inequalities strict equivalent reformulation question following cosets kernels ker characteristic index subgroups provide basis topology called congruence topology respect weaker equal topology stronger equal classical congruence topology latter equal question equivalent question whether topologies strictly weaker whether topology given strictly weaker example theorem states equivalent statement congruence topology induces equal topology considering theorem deduce proof gave one decide whether equivalently decide whether shown quite surprisingly theorem equivalently proof suggested conjecture every explicit construction congruence subgroup gave gives counter example proposition every equivalently every proof denote theorem reiner every congruence subgroup classical manner hand applying explicit construction given theorem obtain index normal subgroup solvability length ker shows congruence subgroup respect congruence topology induced equivalently required remark one wants characteristic need replace procedure change solvability length proposition suggests following conjecture conjecture every equivalently particular every remark even know decide whether know congruence subgroup property holds note proofs theorems claiming satisfy csp based two facts aut large hence every group involved aut every group involved aut free solvable group generators solvability length upart valid every proof every 
abelian every part valid cases hand part still true every part fact proposition let free solvable group generators solvability length every finite group involved aut proof arguments deduced gaschutz lemma every surjective homomorphism homomorphism aut aut ker ker aut surjective thus proving proposition show quotient involved aut cayley theorem subgroup sym later subgroup sln every prime thus next lemma due robert guralnick proof proposition lemma every exists prime finite group generated two elements solvability length three sln involved aut proof fix prime using dirichlet theorem pick prime divides consider general group order addition contains unit roots order one consider diagonal matrix embed sending element diagonal matrix giving rise subgroup element permutation matrix normalizes sending dba module dimension every also direct sums one dimensional acts transitively deduce irreducible module denote using obvious action claim generated two elements description clear generated two element one denote one let standard basis denote note every follows every power sends also thus write sends appear entry except one observe diagonals considered column vectors form basis matrix vandermonde matrix therefore invertible thus linear combination observe acts conjugation via action thus obtain action via action every entry except one shows irreducible copy inside group generated similar way copies generated generated two elements acts obvious way thus normal involved aut let remark know answer congruence subgroup problem free solvable groups two generators solvability rank unless situation free nilpotent groups two generators easier proposition every free nilpotent group two generators congruence kernel contains copy free profinite group countable number generators proof known group kernel map aut aut thus free nilpotent group arbitrary class arguments brought exists group involved aut hand free nilpotent group two generators aut large mapped onto thus subgroup aut hence
contains copy last remark csp subgroups automorphism groups considering classical congruence subgroup problem one take subgroup gln commutative ring ask whether every index subgroup contains subgroup form ker gln index ideal direction generalization classical csp studied intensively second half century rag rap one ask parallel generalization automorphism groups outer automorphism groups let aut resp every index subgroup contain principal congruence subgroup form ker aut resp ker characteristic index subgroup general kernel map aut strictly contains inn let fundamental group surface genus punctures injective map pure mapping class group chapter thus one ask csp subgroup considering problem known theorem every csp cases proved ddh cases proved cases proved shown every free group generators thus cases give answer various subgroups outer automorphism group generated free groups though csp full still unsettled references andreadakis automorphisms free groups free nilpotent groups proc london math soc asada faithfulness monodromy representations associated certain families algebraic curves pure appl algebra bachmuth automorphisms free metabelian groups trans amer math soc congruence subgroup problem free metabelian group two generators groups geom dyn congruence subgroup problem free metabelian group generators preparation birman braids links mapping class groups princeton university press princeton university tokyo press tokyo boggi congruence subgroup property hyperelliptic modular group open surface case hiroshima math ber bux ershov rapinchuk congruence subgroup property aut proof asada theorem groups geom dyn bfm bachmuth formanek mochizuki certain groups algebra bryant gupta automorphism groups free nilpotent groups arch math basel ddh diaz donagi harbater every curve hurwitz space duke math farb margalit primer mapping class groups princeton mathematical series princeton university press princeton grunewald lubotzky linear representations automorphism group free
group geom funct anal klm kropholler linnell moody applications new theorem soluble group rings proc amer math soc lubotzky free quotients congruence kernel algebra lubotzky combinatorial group theory pure appl algebra lubotzky van den dries subgroups free groups large israel math magnus theorem marshall hall ann math mcreynolds congruence subgroup problem pure braid groups thurston proof new york math congruence kernel group russian dokl akad nauk sssr mks magnus karrass solitar combinatorial group theory presentations groups terms generators relations interscience publishers new rag raghunathan congruence subgroup problem proc indian acad sci math sci rap rapinchuk congruence subgroup problem algebra groups education new york reiner normal subgroups unimodular group illinois math remeslennikov sokolov properties magnus embedding algebra logika russian english translation algebra logic wehrfritz linear groups account group-theoretic properties groups matrices ergebnisse der mathematik und ihrer grenzgebiete band new institute mathematics hebrew university jerusalem israel
| 4 |
may chapter bounded distributed flocking control nonholonomic mobile robots thang hung vahid numerous studies problem flocking control multiagent systems whose simplified models presented terms elements meanwhile full dynamic models pose challenging problems addressing flocking control problem mobile robots due nonholonomic dynamic properties taking practical constraints consideration propose novel approach distributed flocking control nonholonomic mobile robots bounded feedback flocking control objectives consist velocity consensus collision avoidance cohesion maintenance among mobile robots flocking control protocol based information neighbor mobile robots constructed theoretical analysis conducted help function graph theory simulation results shown demonstrate efficacy proposed distributed flocking control scheme introduction collective behavior organisms constitutes flocking coherent motion flock inspires various research flocking control multiagent systems typical objective achieve desired collective motion produced constructive flocking control procedure numerous models described simplest models models actual physical models design protocols systematically proposed multiagent systems several control strategies also addressed noisy environments agent position affected noise advanced robotics automation ara lab department computer science engineering university nevada reno usa thangnthn advanced robotics automation ara lab department computer science engineering university nevada reno usa hla school electrical computer engineering georgia institute technology atlantic drive atlanta usa faculty electrical electronics engineering ton duc thang university chi minh city vietnam hanthanhtrung swarm intelligence concepts applications models problem flocking control multiple agents addressed typical results reported wide range engineering applications extensive studies flocking control mobile robots done various scenarios chapter study problem distributed flocking 
control mobile robots bounded feedback takes consideration nonholonomic nature mobile robots well implementation issue posed physical limit motor speed flocking control problem employs full dynamic model mobile robot derived similar due nonholonomic property dynamics mobile robots proposed design framework constructed achieve velocity consensus modular words consensuses linear speed orientation angles obtained separately chapter interested agents nonholonomic dynamics boundedness constraints specifically coordination function proposed ensure induced attractive repulsive forces bounded hence incorporated bounded velocity control using results barbalat lemma graph theory theoretical analysis conducted shows maximal value coordination function determines basin attraction flocking convergence chapter graph theory employed case nearest neighbor communication employ velocity control law reported decentralised sense helps avoid collision maintain linear speed consensus addition orientation consensus achieved using modified approach inspired one input constraint taken account organization chapter follows summarises research work literature related topic chapter section control problem flocking nonholonomic mobile robots preliminaries introduced section describes main results modular design framework proposed bounded velocity control bounded orientation control theoretical analyses introduced section description obstacle avoidance scheme presented section shows simulation results section concludes chapter conclusions notations sets real numbers nonnegative real numbers respectively del operator two vectors scalar product atn absolute value scalars euclidean norm vectors related work many applications mission carried single complicated robotic system equivalently completed coordination mobile robotic system much simpler configurations whose advantages lie scalability flexible deployment cheaper costs reliability etc therefore sophisticated tasks bounded distributed flocking 
control nonholonomic mobile robots fulfilled using group small mobile robots lower cost higher efficiency complex unit see references therein flocking control mobile robots widely addressed different control schemes see references therein recently new measure-theoretic approach systematically provides framework obtaining flocking protocols mobile robots reported common assumption many papers availability information agents communication numerous control protocols mobile robots constructed based assumption centralized communication control architecture yields inflexibility large computation costs controller agent especially number agents large meanwhile distributed control protocol offer ease implementation less computational burden element system needs information neighbor agents direction range decentralized control schemes mobile robots proposed wide range engineering applications cohesion maintenance collision avoidance cmca properties mobile robotic system importance reported attractive repulsive forces included control cmca mobile robots possible agents desired attractive repulsive forces cmca mobile robots achieved using new rearrangement strategy graph theory employed generate control protocols maintain cmca multiagent systems double integrator models distributed flocking control approach proposed constraints control inputs imposed work considers bounded feedback flocking control problem nonholonomic mobile robots without flocking desired heading angle problem interest chapter address bounded control nonholonomic dynamic mobile vehicles also achieves cmca obstacle avoidance also consider flocking desired heading angle reveals collective flocking behaviour problem formulation similarly investigate collective system identical autonomous mobile robots whose respective equations motion respectively position heading angle robot inertial frame oxy linear speed unit vector cos sin angular speed control inputs following vein
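The equations of motion described at the end of the passage above (position and heading angle in the inertial frame Oxy, linear speed along the unit vector (cos θ, sin θ), angular speed, two control inputs) are lost in extraction; the following reconstruction is the standard controlled-unicycle model consistent with that description, stated here as an assumption rather than the chapter's exact equations:

```latex
\dot q_i \;=\; v_i \begin{pmatrix}\cos\theta_i\\ \sin\theta_i\end{pmatrix},
\qquad
\dot v_i \;=\; u_i,
\qquad
\dot\theta_i \;=\; w_i,
\qquad i = 1,\dots,N,
```

where $q_i \in \mathbb{R}^2$ is the position of robot $i$, $\theta_i$ its heading angle, $v_i$ its linear speed, and $(u_i, w_i)$ the two bounded control inputs (translational force and steering).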
define flocking control problem construct control inputs bounded functions collective state distributed fashion satisfy following multiple goals velocity consensus lim collision avoidance kqi cohesion maintenance similarly following definition definition control system said bounded finite constant achieve goals consider coordination function satisfies following properties constant continuously differentiable lim lim link agents flock aim maintain without loss generality assume hence well defined rii interested function dead zone since even distribution agents may achievable common coordination function accordingly use zone free alignment function satisfying requirements shown figure bounded control shall use linear saturation functions continuous nondecreasing functions satisfy given positive constants iii similarly works distributed multiagent systems graph theory utilised address problem digraph associated called set denoted node set set defined edge set addition denotes neighbor set node bounded distributed flocking control nonholonomic mobile robots figure coordination function extracted description edge presented follows given defined kqi kqi kqi kqi following results employed main results lemma let function satisfying holds true proof since undirected graph swarm intelligence concepts applications hence implies remark lemma plays important role theoretical analysis main results lemma similar one however considers communication multiagent system problem chapter focused distributed fashion requires employment neighbour set robot lemma linear saturation functions satisfy proof without loss generality suppose since nondecreasing functions implies furthermore nondecreasing property imply multiplying obtain main results constructive strategy design achieve consensus achieve consensus design derived construction built based approach note kqi symmetric function result write understanding control protocols constructed based lyapunov theory specifically positive 
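The linear saturation functions introduced in the passage above (continuous, nondecreasing, equal to the identity near zero, and globally bounded) can be illustrated with a minimal sketch; the constants `a`, `b` and the clamping form below are illustrative assumptions, not necessarily the chapter's exact choice:

```python
import math

def sat(s, a=1.0, b=1.5):
    """Linear saturation function: sat(s) = s for |s| <= a,
    nondecreasing everywhere, and |sat(s)| <= b for all s.
    Requires 0 < a <= b so the pieces join continuously at |s| = a."""
    if abs(s) <= a:
        return s
    # outside the linear zone, clamp the magnitude at b, keeping the sign
    return math.copysign(min(abs(s), b), s)
```

Bounded nonlinearities of this type keep each robot's control input within actuator limits while preserving the sign of the consensus error, which is what the boundedness arguments in the chapter rely on.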
definite function presented time derivative negative definite function regarding distribution control problem graph theory employed show connectivity preservation multiagent system apply lasalle invariance principle conclude desired consensuses bounded distributed flocking control nonholonomic mobile robots similarly initial state collective system agents chosen graph connected parameters graph chosen follows speed consensus connectivity preservation derivation subsection essentially similar control design linear speed hence presented completeness consider energy function system assume designed let number links initial graph simplest connected graph agents tree whose number links hence let note symmetric function compute derivative respect control law speed consensus protocol chosen linear saturation function introduced section substituting obtain swarm intelligence concepts applications following speed consensus theorem theorem suppose collective system subject protocol initiated following properties hold iii connected exists collision avoidance guaranteed kqi lim proof assume switches time hence words prove using control law according lemma odd function thus deduces definition hence continuity shows kqi implies existing links deleted time collision avoidance achieved result new links must added current graph switching time assume new links added network time since number current links switching complete graph edges hence possesses due thus bounded distributed flocking control nonholonomic mobile robots induction therefore shows edges lost time result size set links forms increasing sequence bounded number links complete graph thus exists finite integer hence next show linear velocities agents converge value using fact properties deduce kqi shows collision takes place among agents since barbalat lemma graph connected lim remark proof theorem follows similar approaches graph theory employed means proving connectivity mobile networks potential chapter similar one 
sense bounded contrast potential function used goes infinity singularities note mobile robots work nonholonomic addressed double integrator systems remark first sum consists gradients unit vector bounded definition second sum comprised linear saturation function defined section hence whole control law agent bounded satisfies objective boundedness control input theorem shows design achieves speed consensus goals next subsection design orientation consensus completing goal orientation consensus motivated orientation consensus design method presented shall develop bounded control approach employs saturation function section swarm intelligence concepts applications define orientation trajectory error agent desired orientation flock thus angle difference two agents similarly following lemma employed convergence analysis lemma suppose flock possesses graph trajectory error signals group following property proof proof similar one employed lemma following orientation consensus theorem theorem assume desired orientation first second derivation bounded collective system subject following protocol number neighbors robot positive parameter mobile robots eventually reach consensus heading angles sense lim proof consider following lyapunov function candidate bounded distributed flocking control nonholonomic mobile robots according theorem exists derivative respect given using lemma obtain since nondecreasing function defined section since bounded control law implies bounded barbalat lemma also since bounded therefore control law implies proves theorem remark boundedness control law guaranteed properties linear saturation function fact demonstrates proposed control scheme meets requirement physical limits control inputs remark noted control law orientation consensus similar one boundedness control input taken account scheme chapter also shares objective one offers simple form implementation combining theorems following bounded flocking theorem theorem suppose collective system 
subject bounded protocols suppose initial configuration collective system connected multiple flocking goals velocity consensus cohesion maintenance collision avoidance achieved proof proof straightforward results theorems swarm intelligence concepts applications avoidance obstacles problem obstacle avoidance extensively studied literature section employ idea derive control algorithm agents able pass obstacles shown convex obstacle extrapolated round shape rectangle convex obstacle presented circle used work let coordinate robot jobs xobs yobs projection point robot onto obstacle nobs nobs number obstacles centre obstacle jobs kqr kqr radius obstacle projection point following velocity jobs sin kqr jobs velocity agent angle heading robot straight line connects robot centre obstacle projection point moves direction jobs jobs otherwise fact projection point possesses position velocity orientation enables agent descriptions robot obstacle demonstrated figure following orientation consensus theorem theorem following control protocol guarantees robot avoids obstacles arbitrary boundary shapes nobs nobs nobs set obstacles nobs number obstacles nobs jobs proof proof derived using approach remark noted speed control law enjoys boundedness due saturation function different one since saturation function bounded reveals heading control law also bounded bounded distributed flocking control nonholonomic mobile robots obstacle jobs robot figure illustration robot convex obstacle simulation conducted simulation system mobile robots model bump function used generate smooth coordination function control invokes gradient forces designed coordination function form compact support function given exp exp otherwise design parameters parameters coordinate function parameter control law initial positions mobile robots randomly distributed three circles coordinates sin cos swarm intelligence concepts applications otherwise initial values randomly chosen initial value desired orientation 
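The "bump function" used above to generate a smooth, compactly supported coordination function is garbled in the text ("exp … exp … otherwise"); a standard exponential bump of the kind commonly used for this purpose — an illustrative assumption, not necessarily the chapter's exact formula — is:

```python
import math

def bump(z):
    """Smooth compactly supported bump: exp(z^2 / (z^2 - 1)) on (-1, 1),
    and 0 outside. Infinitely differentiable everywhere, with bump(0) = 1."""
    if abs(z) < 1.0:
        return math.exp(z * z / (z * z - 1.0))
    return 0.0
```

Composing such a bump with normalized pairwise distances yields a coordination (potential) function that is smooth and vanishes outside the interaction range, so the gradient forces it induces stay bounded — consistent with the bounded-feedback design of the chapter.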
flock obtained simulation results shown figures shown figures heading angles converge angular speeds agents converge linear speeds depicted figure convergence agents takes place figure angular speeds converge seconds hence figure figure demonstrate consensuses orientation linear speed mobile robots obtained speed control steering control signals shown bounded figure figure minimum distance among agents described figure shows collision avoidance guaranteed flocking behavior shown figure collision occurred time figure orientation consensus next consider simulation multiagent system obstacle obstacle circle whose coordinate radius configuration multiagent system case desired group heading chosen agent chosen leader group figure linear speed consensus figure angular speed figure translational force figure steering control figure minimum distance among agents collision avoidance mechanism leader inactive speed control law designed drive group constant speed speed control leader linear speed first seconds evolution mobile robots encounter obstacle control laws enable avoid potential collisions obstacle neighboring agents orientations robots figure converge seconds similarly figure linear speeds agents converge seconds figure angular speeds converge faster seconds figures demonstrate control inputs bounded finally figure shows collision takes place evolution robots evolution multiagent system shown figure robots cooperate form flocking avoid obstacle conclusions chapter presented bounded decentralized control protocol flocking problem mobile robots systematic fashion control laws require information neighbor agents proposed scheme modular designing speed control steering control separately theoretical numerical results shown using proposed method collective system mobile
intelligence concepts applications figure distributed flocking mobile robots bounded distributed flocking control nonholonomic mobile robots time figure orientation consensus obstacle avoidance case time figure linear speed consensus obstacle avoidance case swarm intelligence concepts applications time figure angular speed obstacle avoidance case time figure speed control obstacle avoidance case bounded distributed flocking control nonholonomic mobile robots time figure steering control obstacle avoidance case dmin time figure minimum distance among agents obstacle avoidance case swarm intelligence concepts applications figure distributed flocking mobile robots obstacle avoidance case robots achieves multiple objectives flocking control velocity consensus cohesion maintenance collision avoidance future work would consider shape size mobile robot obstacle avoidance similar context studied noisy uncertain environments affect performance proposed scheme robustness analysis improved methods proposed address issue acknowledgement work partially supported national science foundation grant national aeronautics space administration nasa grant issued nevada nasa research infrastructure development seed grant university nevada reno references references toner flocks herds schools qualitative theory flocking phys rev sheng distributed sensor fusion scalar field mapping using mobile sensor networks ieee trans cybernetics april espinoza quesada adaptive consensus algorithms operation systems affected switching network events international journal robust nonlinear control available http sheng flocking control multiple agents noisy environments ieee international conference robotics automation sheng motion control cluttered noisy environments journal communications dang horn distributed formation control autonomous robots following desired shapes noisy environment ieee international conference multisensor fusion integration intelligent systems mfi flocking dynamic systems 
algorithms theory ieee trans autom control mar tanner jadbabaie pappas flocking fixed switching networks ieee transactions automatic control cucker smale emergent behavior flocks ieee trans autom control may dong huang flocking connectivity preservation multiple double integrator systems subject external disturbances distributed control law automatica ghapani mei ren fully distributed flocking moving leader lagrange networks parametric uncertainties automatica moshtagh jadbabaie distributed geodesic control laws flocking nonholonomic agents ieee transactions automatic control han flocking autonomous vehicles systematic design ieee trans autom control aug sheng chen cooperative active sensing mobile sensor networks scalar field mapping ieee trans systems man cybernetics systems jan amar mohamed stabilized feedback control unicycle mobile robots int adv robot syst swarm intelligence concepts applications han dinh flocking mobile robots bounded feedback ieee international conference automation science engineering case nguyen han distributed flocking control mobile robots bounded feedback annual allerton conference communication control computing allerton jadbabaie lin morse coordination groups mobile autonomous agents using nearest neighbor rules ieee trans autom control jun liang lee avoidance multiple obstacles mobile robot nonholonomic constraints asme proceedings dynamic systems control riley hobson bence mathematical methods physics engineering cambridge cambridge university press tanner jadbabaie pappas flocking teams nonholonomic agents kumar leonard morse editors cooperative control vol lecture notes control information sciences springer liang lee decentralized formation control obstacle avoidance multiple robots nonholonomic constraints american control conference nguyen formation control multiple rectangular agents limited communication ranges international symposium advances visual computing springer international publishing nguyen jafari formation control 
KOSZULITY OF THE HOMOLOGY OF MODULI SPACES OF STABLE CURVES OF GENUS ZERO

Natalia Iyudu

Abstract. We prove the Koszulity of the homology of moduli spaces of stable curves of genus zero, using the presentation due to Keel and the Priddy criterion of Koszulity. We establish that the Koszul dual is a potential algebra, find an expression for the potential and, based on this, prove that it is Koszul.

Introduction. The paper due to Keel is dedicated to the intersection theory of the moduli space of stable curves of genus zero. There it is shown that the canonical map from the Chow ring to the homology is an isomorphism, and a presentation of this ring by generators and relations is derived. The variables of the presentation are parameterized by subsets consisting of at least two elements, variables corresponding to complementary sets being identified. The relations involve four distinct elements, unless one of the exceptional cases listed in the presentation holds.

We use this presentation to examine the Koszulity of the ring, and suggest the following choice of linearly independent variables among the generators: we take those sitting on the sides of the pentagon, and rewrite the other variables and the relations via them; the resulting presentation evokes associations with cluster algebras. Using the obtained relations we compute a presentation of the Koszul dual; this noncommutative algebra turns out to be presented by one relation. We find a change of variables after which the relation has no square term; thus the algebra is presented by a quadratic Gröbner basis and hence, due to the Priddy criterion, we can conclude that it is Koszul, and hence so is the original algebra. We then consider another change of generators, to a smaller set of variables which span the space modulo the linear relations, and are able to show that the dual algebra is a potential algebra; we find an expression for the potential. We construct the potential complex, which coincides with the Koszul complex, and show its exactness by checking that it is exact in one of the places: this numerical Koszulity implies Koszulity according to a lemma below. Our methods can be compared with those which appeared in the vast literature on the Koszulity of algebras given by relations originating from geometric objects: Koszulity was proved, to name a few, in a series of papers dedicated to algebras associated with graphs and cell complexes, in papers on objects associated with a Coxeter group action, etc.

Koszulity. We first consider the presentation due to Keel, with generators corresponding to subsets, and with linear, monomial and quadratic relations. We suggest choosing as a linear basis the following set of variables: via this basis we are going to express all the elements and rewrite the quadratic relations. This will allow us later to pass to the Koszul dual algebra, which can be shown to be Koszul using this presentation. We obtain
a suitable change of variables and quadratic Gröbner bases which, due to the Priddy PBW criterion, imply Koszulity.

Step 1: expression of the generators via the chosen basis. The linear relations link the variables of any quadruple of pairwise distinct elements. In particular, for the pentagon case, the original generators can be presented geometrically by the sides and the diagonals of the pentagon, analogously to the presentation of cluster variables, where sides and diagonals correspond to the two kinds of variables; it is appropriate to call them side-type and diagonal-type generators, respectively. Expressing each diagonal-type variable via the side-type variables with nearby indexes, we can express all the generators via the new basis.

Step 2: calculation of the quadratic relations in terms of the new variables. The shape of the initial quadratic relations means that, for side-type variables, a relation contains a side together with a diagonal; on the pentagon picture, this can be interpreted as the fact that the side does intersect the diagonal. In the new variables, in the case when a relation contains two diagonal-type variables, it also means, since the corresponding products obviously vanish, that the relations formed from the initial system are equivalent to a system indexed by pairs. The Hilbert series then determines the codimension of the second graded component of the ideal. Using this system of relations and the additional information on the dimension of the orthogonal space, one can write down a generating relation of the Koszul dual algebra. This generating system does not yet form a Gröbner basis of the ideal; we make a change of variables which makes the relation free of the square of one of the variables. This makes the presentation of the algebra combinatorially free: the new relations form a Gröbner basis, and this basis is quadratic, which, due to the Priddy PBW criterion, implies that the algebra is Koszul. We have proved:

Theorem. The algebra is Koszul.

Koszulity in the next case. Our goal in this section is the following theorem.

Theorem. The algebra is Koszul.

For the canonical generating set described in the Keel presentation, denote by a_ij the generators corresponding to two-element subsets and by s_ijk those corresponding to three-element subsets; thus a_ij = a_ji, and generators of complementary subsets coincide. It is convenient to use a notation distinguishing the triples ijr and irj. The defining relations are as follows: linear relations linking a_ij, a_km with a_ik, a_jm, and quadratic relations involving the products of a_ij with s_irj and of a_ij with a_ik, for distinct indices.

Remark. Note the automorphisms corresponding to the elements of the group of permutations of the points. Indeed, the natural action on the set of generators a_ij, s_ijk extends to an action by automorphisms. This follows from the fact that the action on the set of generators preserves the way the corresponding sets intersect, and therefore
preserves the set of defining relations.

Consider the set of variables a_ij. Our goal is to show:

Theorem. The linear span of the a_ij coincides with the linear span of all the generators modulo the linear span L of the linear relations. Furthermore, the a_ij are linearly independent modulo L. Finally, the remaining generators can be expressed via the a_ij modulo L.

First we need the following lemma.

Lemma. The space of linear relations is spanned as described; furthermore, the six vectors indexed by the triples pqr form a linear basis of it.

Proof. First note that the newly introduced variables are orthogonal to the given vectors, which gives six linearly independent elements. Next we search for all vectors orthogonal to those of the form a_pq: computing the scalar products of such a vector with the vectors indexed by prq and pqr and equating them, we see that it belongs precisely to the span of the six vectors above, which are obviously linearly independent. On the other hand, the Hilbert series is known, and therefore the dimension equals the number of generators minus the dimension of the space of linear relations; hence the six vectors form a linear basis. □

Proof of the Theorem. We shall express each a_ij as a linear combination of the chosen variables modulo the linear relations, using the basis of the Lemma. Finding a_ij modulo the relations is equivalent to an inclusion, which in turn is equivalent to orthogonality to the basis of the Lemma: the orthogonality conditions read as a system of linear equations, one for each basis vector, one for each triple satisfying ijr, and one for each triple satisfying irj. Solving this system of linear equations, we obtain precisely the required expression of a_ij modulo the linear relations. The other statements of the theorem follow via elementary dimension arguments. □

These results immediately yield the following presentation.

Theorem. The algebra is given by generators t_ij and quadratic relations involving t_ij with s_irj and t_ij with t_ik, for distinct indices.

Theorem. The dual algebra is a potential algebra, with a symmetric potential which is also invariant under the corresponding permutations. Furthermore, the potential can be written in terms of the generators in a form involving only generators with distinct indices.

Proof. Let V be the linear span of the generators and let R be the subspace of quadratic relations of the dual algebra. Since the Hilbert series gives the dimension of R, its codimension follows; it follows that there is a unique, up to a scalar multiple, element spanning the complement. The fact that R contains the commutators of the generators easily implies that this element is symmetric, in particular cyclically symmetric, i.e. a potential. The fact that it spans the complement, together with the above, yields that R is spanned by its first derivatives. Since R is invariant under the permutation action, it follows that the potential is invariant as well. Finally, since two generators with distinct index sets never feature in the same monomial, it follows that the potential must be of the form stated. □

We shall now compute the potential.

Theorem. The dual algebra is the potential algebra whose potential on the generators is given by the stated formula, with distinct indices.

Proof. For pairwise
distinct indices, the derivatives of the potential must span the relations, i.e. the potential must be orthogonal to the complement of the quadratic relations listed in the theorem above. Note that the potential is automatically orthogonal to the t_ij, to the commutators, as well as to products of generators with distinct index sets, for which orthogonality is also obvious. The orthogonality to the products t_ij s_irj is easily seen to be equivalent to a first equation on the coefficients. The orthogonality to t_ij t_ik for the triples ijr and ikr reads, taking the previous into account, a further equation; the orthogonality conditions for the pairs of triples (ijr, irj), (irj, ikr) and (irj, irk) read, taking the previous into account, further equations; the orthogonality to t_ij s_irj and to t_jk s_jrk for pairwise distinct indices read, taking the previous into account, two more. The equations obtained so far form a closed system of linear equations, whose space of solutions consists of the coefficient vectors satisfying them. The orthogonality to t_ij t_ik, to t_ij t_jk and, finally, to t_jk t_jm for pairwise distinct indices reads, taking the previous into account, three last equations; note that we have now covered all the options. The system of the last three equations has a one-dimensional space of solutions. It remains to plug in a number chosen to kill the denominators and use the equations to obtain the required formula. □

Consider the potential complex; it is easily seen to coincide with the Koszul complex. In order to prove Koszulity (equivalent to the Koszulity of the dual), it is enough to show that the potential complex is exact. We do this by calculating the Hilbert series and applying the lemma below; the main part of the calculation is to establish numerical Koszulity.

Theorem. The Hilbert series of the dual algebra is as stated.

We first find the minimal series on the variety of potential algebras with the given number of generators and a potential of the given degree. We use the proposition that the potential complex is exact in general position whenever there exists one exact potential complex; this can be shown in the same way as the corresponding theorem. To construct an exact potential complex, consider the following example.

Example. Consider the potential algebra whose relations follow from the potential as listed. It is easy to see that the only ambiguity resolves, so the relations form a quadratic Gröbner basis; thus the algebra is PBW and, due to the Priddy criterion, Koszul, so it has an exact potential complex. According to the proposition, the same then holds in general position. Since the generic series is minimal, by standard arguments (see the example), and the exactness of the potential complex defines the series uniquely via a recurrence relation, we know that the minimal series is equal to the one of the example.

For the proposition we need to show that the equality in the opposite direction holds. Consider the algebra over a field of positive characteristic, with the coefficients of the relations considered as elements there; the pattern
of the relations becomes much simpler: namely, the highest words are made up of exactly the monomials of the example. It is possible to check that the remaining words form a basis, hence the Hilbert series is equal to the one above. Standard arguments show that, in passing back to characteristic zero, the series can only become bigger; hence the proposition follows. Combining the Hilbert series computations, we prove the theorem: we combine the lemma above and note that the potential complex is exact in one of the terms.

Acknowledgements. I am grateful to IHES and MPIM for hospitality, support, and an excellent research atmosphere. I would like to thank the colleague who asked the question, for constant encouragement of this work. This work was funded by an ERC grant and partially supported by a project grant.

References
[1] Type theorems for potential algebras, preprint, IHES.
[2] S. Keel, Intersection theory of moduli space of stable n-pointed curves of genus zero, Trans. Amer. Math. Soc.
[3] Yu. Manin, Some remarks on Koszul algebras and quantum groups, Ann. Inst. Fourier (Grenoble).
[4] A. Polishchuk, L. Positselski, Quadratic algebras, University Lecture Series, Amer. Math. Soc., Providence.
[5] S. Priddy, Koszul resolutions, Trans. Amer. Math. Soc.
[6] R. Laugwitz, V. Retakh, Algebras of quasi-Plücker coordinates are Koszul.
[7] V. Retakh, S. Serconek, R. L. Wilson, Koszulity of splitting algebras associated with cell complexes.
[8] V. Retakh, S. Serconek, R. L. Wilson, On a class of Koszul algebras associated to directed graphs.
[9] GRAAL, a system for computations in noncommutative graded algebras, Ulyanovsk branch of Moscow State University, supervised by Natalia Iyudu.

School of Mathematics, University of Edinburgh, James Clerk Maxwell Building, King's Buildings, Peter Guthrie Tait Road, Edinburgh, Scotland.
| 0 |
WEIGHTED LOCALLY BOUNDED LIST COLORINGS IN SPLIT GRAPHS, COGRAPHS, AND PARTIAL k-TREES

Abstract. For a fixed number of colors, we show that, in split graphs, cographs, and graphs of bounded treewidth, one can determine in polynomial time whether there exists a proper coloring of the vertices of the graph such that the total weight of the vertices with each color, in each part of a fixed partition of the vertices, equals a given value. We also show that this result is tight, in some sense, by providing hardness results for the cases where any one of the assumptions does not hold. The list coloring variant is also studied, as well as further special cases concerning cographs and split graphs.

Keywords: locally bounded colorings, dynamic programming, NP-completeness, maximum flows, split graphs, cographs.

1. Introduction

A proper coloring of a given graph is an assignment of colors (integers) to its vertices such that no two vertices linked by an edge take the same color; the set of vertices taking a given color is called a color class. A coloring problem with non-global constraints on the sizes of the color classes has been studied previously. More precisely, the following problem was considered:

LOCALLYBOUNDEDCOLORING
Instance: a graph G = (V, E), a partition V_1, ..., V_p of the vertex set V, a list of integral bounds n_11, ..., n_pk.
Question: decide whether there exists a proper coloring of G using k colors such that the number of vertices of V_i with color j is n_ij.

In a previous paper, it was shown that this problem can be solved in polynomial time in trees, and a link with a scheduling problem, consisting of processing a set of unit tasks on a set of processors with various unavailability constraints, was also presented. Moreover, it has been shown that the following problem, in which each vertex must take a color from a list of possible colors and there are global constraints on the sizes of the color classes, is tractable in several classes of graphs:

BOUNDEDLISTCOLORING
Instance: a graph G = (V, E), a list of integral bounds n_1, ..., n_k, and, for each vertex, a list of possible colors.
Question: decide whether there exists a proper coloring such that the number of vertices with color j equals the bound n_j, and the color of each vertex belongs to its list.

More precisely, the authors described efficient algorithms to solve this problem in graphs of bounded treewidth and in cographs, defined as the graphs with no induced path on four vertices. (The notion of treewidth will be recalled formally when needed; it can be viewed as a measure of how tree-like a graph is: it equals 1 for forests with at least one edge, and graphs of bounded treewidth are also called partial k-trees.) In this paper, we generalize these results: we show how to solve efficiently the problem with non-global weighted constraints on the sizes of the color classes in graphs of bounded treewidth and also in cographs. More precisely, the problem we shall study is formally defined as follows.

WEIGHTEDLOCALLYBOUNDEDLISTCOLORING
Instance: a graph G = (V, E), a weight function w on V, a partition V_1, ..., V_p of the vertex set V, a list of integral bounds w_11, ..., w_pk, and, for each vertex, a list of possible colors.
Question: decide whether there exists a proper coloring such that the total weight of the vertices of V_i with color j is w_ij, and the color of each vertex belongs to its list.

In the scheduling application mentioned above, the weight of a vertex would correspond to the amount of resource needed to process the associated task. Moreover, apart from this problem, other constrained coloring problems related to it have been studied in the literature (see for instance the associated review and known results); we only need to recall the most useful ones. The equitable coloring problem consists of finding a proper coloring in which the sizes of any two color classes differ by at most one. The bounded coloring problem consists of finding a proper coloring in which the size of each color class cannot exceed a given bound, common to all the color classes; moreover, the variant without this last requirement has also been studied, and is defined as the capacitated coloring problem. Hence, the bounded coloring problem is the special case of the capacitated coloring problem where all the color bounds (the bounds on the sizes of the color classes) are equal. It can be observed that the equitable coloring problem is in fact a special case of the capacitated coloring problem in which every color bound is actually reached, the bounds being set according to the number of vertices modulo the number of colors. It can also be observed that any instance of the bounded coloring problem is equivalent to an instance of the equitable coloring problem with the same number of colors, obtained by adding sufficiently many isolated vertices; this operation destroys neither the property of having bounded treewidth nor the one of being a cograph or a split graph (a split graph being a graph whose vertex set can be partitioned into a clique and an independent set). In fact, all these problems, including the precolored ones, are special cases of BOUNDEDLISTCOLORING, and the same is true for WEIGHTEDLOCALLYBOUNDEDLISTCOLORING in the graphs we shall consider: take an instance of BOUNDEDLISTCOLORING, keep the color bounds, set p = 1, give weight 1 to each vertex, and add isolated vertices of weight 1 that can take any color, until the total number of vertices reaches the sum of the bounds. Numerous applications of these problems and of the capacitated coloring problem are described in the literature. Many papers also consider the precolored variant of the equitable or capacitated coloring problem, in which some vertices are already colored; observe that this can also be achieved by considering an instance of the list coloring problem and defining the list of possible colors of each already colored vertex as the singleton corresponding to its given color. Similarly, one can transform a list coloring instance into a precolored one, by adding pendant vertices adjacent to each vertex.
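For tiny instances, the definition of WEIGHTEDLOCALLYBOUNDEDLISTCOLORING can be checked directly by exhaustive search. The following sketch is not from the paper: the data representation (edge pairs, per-vertex color lists, a target dictionary indexed by (part, color)) is an assumption made only for illustration.

```python
from itertools import product

def wlblc(n, edges, weight, parts, lists, targets):
    """Exhaustive check for WEIGHTED-LOCALLY-BOUNDED-LIST-COLORING.

    n       -- vertices are 0 .. n-1
    edges   -- list of (u, v) pairs of the graph
    weight  -- weight[v] is the integral weight of vertex v
    parts   -- parts[v] is the index of the set V_i containing v
    lists   -- lists[v] is the list of colors allowed at v
    targets -- targets[(i, c)] is the required total weight of color c in V_i
    Returns a feasible coloring as a tuple, or None.
    """
    for col in product(*lists):
        # properness: adjacent vertices must receive distinct colors
        if any(col[u] == col[v] for u, v in edges):
            continue
        sums = dict.fromkeys(targets, 0)
        feasible = True
        for v in range(n):
            key = (parts[v], col[v])
            if key not in sums:      # weight placed on a pair with no budget
                feasible = False
                break
            sums[key] += weight[v]
        if feasible and sums == targets:
            return col
    return None
```

For instance, on the path on three vertices with weights (1, 2, 1), p = 1 and two colors, only the alternating colorings are proper, so the targets (2, 2) are feasible while (3, 1) are not.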
These pendant vertices are precolored with the colors that do not belong to the list of the vertex. This operation does not destroy the property of having bounded treewidth, but it can destroy the one of being a cograph or a split graph (in the latter case, depending on whether the vertex lies in the clique or in the independent set). Finally, when considering instances of WEIGHTEDLOCALLYBOUNDEDLISTCOLORING in which p is not fixed, precoloring a vertex can be achieved by putting it in a set of the partition that contains only this vertex, and by defining the w_ij's appropriately.

The present paper is organized as follows. We begin by providing, in the next section, tight hardness results for WEIGHTEDLOCALLYBOUNDEDLISTCOLORING. We then describe a dynamic programming algorithm for the problem, which runs in polynomial time whenever the cases covered by the hardness section are excluded, i.e., when the treewidth of the graph and the number of colors are fixed and the vertex weights are polynomially bounded. In the following section, we prove that the problem is also tractable in cographs under similar assumptions and, under weaker assumptions, can be solved in polynomial time in several subclasses of cographs. A further section is devoted to split graphs: we show that, in these graphs, the problem can be solved in pseudo-polynomial time even when p is not fixed, provided k is, and we provide additional tractable special cases. Finally, in the last section of the paper, we extend to edge colorings the results concerning the vertex colorings of the graphs studied in the previous sections.

2. NP-completeness proofs

In this section, we prove that WEIGHTEDLOCALLYBOUNDEDLISTCOLORING (which clearly belongs to NP) is NP-complete, even in quite restricted special cases. Before detailing these special cases, note that WEIGHTEDLOCALLYBOUNDEDLISTCOLORING is NP-complete in general graphs when the number of colors is unbounded, even with p = 1, since deciding whether a graph can be properly colored using three colors is NP-complete. Also note that the latter result does not hold in the case of two colors, where the coloring problem itself is trivial: the input graph must be bipartite, as otherwise the answer is no. Actually,
weightedlocallyboundedlistcoloring general even two colors vertex partition contains one set consider following weakly problem partition instance positive integers question decide whether exists given instance partition define following instance weightedlocallyboundedlistcoloring graph consists isolated vertices define also define vertex take color solution admits solution total weight vertices color vertex take color implies theorem weightedlocallyboundedlistcoloring weak sense graphs consisting isolated vertices even vertex take color note instance made connected setting adding two edges new vertices color graph obtain tree fact chain also note connected solving weightedlocallyboundedlistcoloring trivial task since connected bipartite graph two proper vertex colorings using two colors bodlaender fomin managed show equitable coloring problem solved polynomial time graphs bounded even number colors fixed next subsections rule possibility proving even vertex weights polynomially bounded weightedlocallyboundedlistcoloring remains graphs bounded either fixed number colors fixed first look case number colors fixed consider following strongly problem instance set positive integers integer question decide whether partitioned disjoint sets three elements since problem strongly assume polynomially bounded construct instance weightedlocallyboundedlistcoloring graph consisting isolated vertices defining easy see equivalence solutions two instances given vertex take color hence theorem weightedlocallyboundedlistcoloring strong sense graphs consisting isolated vertices even vertex weights polynomially bounded vertex take color case theorem make instance connected instance obtaining chain star defining adding new vertices associated edges take color note however reduction need non uniform vertex weights actually unavoidable following fact problem becomes tractable graphs edges hence also stars simple reduction sake completeness give proof fact proposition vertex weights 
weightedlocallyboundedlistcoloring solvable graphs consisting isolated vertices proof given instance graph construct bipartite graph follows vertex vertex whc vertices ujh edge ujh since implies whc number vertices side bipartition exists feasible coloring initial weightedlocallyboundedlistcoloring instance admits perfect matching bipartite perfect matching computed maximum flow indeed define following equivalence vertex color edge ujh belongs perfect matching one hand vertex incident one edge matching hence take one color construction color hand vertex ujh incident one edge matching hence exactly whc vertices color nevertheless prove strong weightedlocallyboundedlistcoloring graphs bounded arbitrary using complex reduction actually reduction already given theorem precolored variant equitable coloring problem reduction instance set trees height together additional isolated vertices leaves fact precolored vertices used restrain set possible colors vertices hence consider ignore remove leaves simply define suitable lists possible colors vertices yields instances consisting trees height leaves open case connected component tree height unlike trees height also cographs describe alternative reduction needs colors makes use stars trees height prove useful sections assume given instance construct instance weightedlocallyboundedlistcoloring follows define vertex graph contains vertices consists vertexdisjoint stars isolated vertices denote star sij central vertex vij vij take colors sij leaves take colors moreover vertex take colors also define following holds lemma solution solution proof shall prove following equivalence holds vij takes color assume given solution instance vij takes color leaves sij take color vij takes color leaves sij take color moreover takes color implies vertices color assuming vertices color conversely assume given solution weightedlocallyboundedlistcoloring instance let vij takes color observe exactly one vij takes color fact possible colors taken 
v_ij taking its first color means that the leaves of S_ij must take their first colors, while v_ij taking its second color means that the leaves of S_ij take the other ones. Since the target weights are tight, there are, for each j, exactly three i's such that v_ij has the first color: two or fewer would imply too few vertices of some color, a contradiction, while at least four would imply too many vertices of some color, a contradiction as well. The weights then summing to the right values concludes the proof. □

Hence, we have proved:

Theorem. WEIGHTEDLOCALLYBOUNDEDLISTCOLORING is NP-complete in the strong sense in star forests, even when the vertex weights are polynomially bounded.

Corollary. WEIGHTEDLOCALLYBOUNDEDLISTCOLORING is strongly NP-complete in cographs, even when the vertex weights are polynomially bounded.

We can make the instance connected, by adding a new color as well as one or more vertices that must take this new color, and obtain a tree of height 2 (a caterpillar). Recall that, as a direct consequence of the proposition above, WEIGHTEDLOCALLYBOUNDEDLISTCOLORING with unit weights is tractable in stars (trees of height 1). Moreover, one of the features of this reduction is that the v_ij's must take different colors, which will be useful in a later section; however, one can use a simpler reduction when p is not fixed: remove the sets and give each vertex of S_ij its own color bounds, the proof being, for similar reasons, that v_ij takes its first color if and only if i belongs to the j-th set. Note that these reductions leave open the case of chains, which are neither cographs (except very short ones) nor stars. When the vertex weights are not fixed, the reduction mentioned for the equitable case closes this gap and shows that the problem is strongly NP-complete in this case as well; however, that reduction is rather complicated and, in particular, chains play no special role in it. We therefore give a different kind of reduction to prove this result, together with an additional restriction on the lists of possible colors.

Theorem. WEIGHTEDLOCALLYBOUNDEDLISTCOLORING is NP-complete in the strong sense in linear forests, even when any two consecutive vertices of a chain share only one common possible color.

Proof. Given a 3-PARTITION instance, instead of defining stars as in the previous reduction, we define vertex-disjoint chains. We also define the lists so that (i) the first and last vertices of each chain can take two given colors, (ii) the other vertices of odd index can take two different given colors, and (iii) the vertices of even index can take yet two different given colors; hence, any two consecutive vertices share exactly one possible color. To end the reduction, we define the partition of the vertices and the target weights. Note that, in any feasible coloring, exactly one vertex of each chain (either the first one or the last one) has the first color, the other one having the second; since the bounds are equal to the right values, this implies that any feasible coloring has exactly the required weights of vertices with these two colors. In each chain, the remaining vertices then have either the third or the fourth color (either the vertices of odd index or the vertices of even index). Moreover, since the bounds are equal to the right values, this implies that, in any feasible coloring, there are exactly the required weights of vertices with each color for each set of integers. Thus, we can define the following equivalence between the two instances: an element belongs to a given set if and only if the corresponding vertices have the corresponding color, for reasons similar to the ones mentioned in the proof of the star-forest reduction. This concludes the proof. □
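The bipartite-matching argument behind the proposition above (unit weights, edgeless graphs) can be sketched concretely: create one "slot" per unit of each (part, color) budget, connect each vertex to the slots its list allows, and look for a matching saturating all vertices. The augmenting-path routine below is a standard textbook method used purely for illustration; the names and data layout are assumptions, not the paper's.

```python
def unit_weight_edgeless(n, parts, lists, counts):
    """counts[(i, c)] = required number of vertices of color c inside V_i.
    The graph has no edges and all weights are 1.
    Returns a list of colors per vertex, or None if infeasible."""
    # one slot per unit of each (part, color) budget
    slots = [key for key, c in counts.items() for _ in range(c)]
    if len(slots) != n:          # with unit weights, budgets must sum to n
        return None
    adj = [[s for s, (i, c) in enumerate(slots)
            if parts[v] == i and c in lists[v]] for v in range(n)]
    match = [-1] * len(slots)    # slot index -> matched vertex

    def augment(v, seen):
        for s in adj[v]:
            if s in seen:
                continue
            seen.add(s)
            if match[s] == -1 or augment(match[s], seen):
                match[s] = v
                return True
        return False

    for v in range(n):
        if not augment(v, set()):
            return None          # no saturating matching, hence no coloring
    col = [None] * n
    for s, v in enumerate(match):
        col[v] = slots[s][1]     # the color of the matched slot
    return col
```

Each vertex is matched to exactly one slot, so it gets one allowed color, and each (part, color) budget is filled exactly — mirroring the two directions of the proof.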
We conclude by pointing out that the previous instance can be turned into a single chain, by adding a new color and new vertices, with their associated edges, that must take this new color.

To prove NP-completeness in the case where the number of colors is fixed, we consider the following strongly NP-complete problem:

MONOTONEONEINTHREESAT
Instance: a set of boolean variables and a set of clauses, each one containing exactly three non-negated boolean variables.
Question: decide whether there exists a truth assignment of the variables such that every clause has exactly one variable equal to true.

Theorem. WEIGHTEDLOCALLYBOUNDEDLISTCOLORING is NP-complete in the strong sense in star forests for a fixed number of colors, even with unit vertex weights and when each vertex can take any color.

Proof. Given a MONOTONEONEINTHREESAT instance, we construct an instance of WEIGHTEDLOCALLYBOUNDEDLISTCOLORING as follows. The graph consists of stars, one star per variable: the i-th star has occ(i) leaves u^a_i, where occ(i) is the number of occurrences of the i-th variable in the set of clauses, and a central vertex. We then define the partition of the vertices and the target weights: for each clause, consisting of the a-th occurrence of variable i, the b-th occurrence of variable j, and the c-th occurrence of variable l, we define a set containing u^a_i, u^b_j and u^c_l, with appropriate target weights. It is easy to check the following equivalence between the initial MONOTONEONEINTHREESAT instance and the WEIGHTEDLOCALLYBOUNDEDLISTCOLORING instance: a variable is equal to true if and only if the associated central vertex has the first color. This concludes the proof. □

Corollary. WEIGHTEDLOCALLYBOUNDEDLISTCOLORING is strongly NP-complete in cographs for a fixed number of colors, even with unit vertex weights and when each vertex can take any color.

As before, we can make the previous instance connected, obtaining a tree of height 2 (a caterpillar), by defining a new color and adding new vertices, with their associated edges, that link the stars together and must take this new color; in this instance, we define a set that includes the new vertices, with new target weights, keeping the rest unchanged. This reduction will also prove useful in later sections. Thanks to the corollary, the only case left open for a fixed number of colors is that of chains; the next theorem closes this gap.

Theorem. WEIGHTEDLOCALLYBOUNDEDLISTCOLORING is NP-complete in the strong sense in linear forests for a fixed number of colors, even with unit vertex weights and when each vertex can take any color.

Proof. Given a MONOTONEONEINTHREESAT instance, we define a WEIGHTEDLOCALLYBOUNDEDLISTCOLORING instance as follows. The graph consists of chains, one per variable, together with isolated vertices: the i-th chain has vertices v^1_i, ..., v^{occ(i)}_i, in this order, and we also add, for each i, occ(i) isolated vertices u^1_i, ..., u^{occ(i)}_i. It remains to define the partition of the vertices and the target weights: for the h-th clause, consisting of the a-th occurrence of variable i, the b-th occurrence of variable j, and the c-th occurrence of variable l, we define a set containing u^a_i, u^b_j and u^c_l and, for each i, sets containing the v^a_i's and u^a_i's, with appropriate target weights, so as to obtain the same equivalence as in the
proof above: a variable is equal to true if and only if the associated chain vertices have the first color. Indeed, if v^a_i has one color, then u^a_i must have the other one, and the isolated vertices are useful to ensure that the remaining vertices can freely take their colors. The theorem follows. □

As before, we can make the previous instance connected, obtaining a single chain, by defining a new color and adding new vertices, with their associated edges, that must take this new color. Also note that the reductions used to prove the last two theorems do not need list colorings, although we did make use of them in the proofs of the earlier theorems of this section.

3. An algorithm for graphs of bounded treewidth

This section deals with WEIGHTEDLOCALLYBOUNDEDLISTCOLORING in partial k-trees, i.e., graphs of bounded treewidth. Given a graph G, a tree decomposition of G is a pair (T, X), where T is a tree and X a collection of bags (sets of vertices), one per node of T, such that each vertex belongs to at least one bag, the two endpoints of every edge belong to a common bag, and the set of nodes whose bags contain a given vertex lies on a path of T. The width of a given tree decomposition is the maximum bag size minus one, and the treewidth of a graph, denoted tw(G), is the minimum width of a tree decomposition, taken over all its tree decompositions. Note that trees (hence, in particular, chains and stars) have treewidth at most 1. Without loss of generality, we also assume that the tree decomposition is nice: T is rooted and binary, any node with two children is a join node (with the same bag as its children), and any node with one child is either a forget node or an introduce node. Given two nodes, we write that one is below the other to denote the fact that it is either equal to it or one of its descendants; with respect to a given node t, let Y_t be the subset of V induced by the vertices of the bags of the nodes below t, and let G_t be the subgraph of G induced by Y_t.

In order to design a standard dynamic programming algorithm that solves WEIGHTEDLOCALLYBOUNDEDLISTCOLORING, we use the following function: φ_t(γ, q) is true if there exists a proper list coloring of G_t in which each vertex of the bag X_t takes the color given by γ and the total weight of the vertices of V_h ∩ Y_t with color c is q_hc, and false otherwise. To describe the algorithm, we simply need to write the induction equations defining these values for each type of node, the values being computed in a bottom-up fashion, starting from the leaves. Note that, by the definition of tree decompositions, two vertices linked by an edge must belong to at least one common bag; this condition implies that a coloring which is proper on each bag is also proper on the whole graph, provided each vertex has the same color in all the bags it belongs to. Moreover, the third condition ensures that the subgraph of T induced by the bags a given vertex belongs to is connected; hence, it suffices that no vertex changes color whenever we move from one bag to an adjacent one.

For a forget node or an introduce node t, let t' denote its child; for a join node t, let t' and t'' denote its two children, and let w_hc(γ) be the total weight of the vertices of V_h ∩ X_t with color c in γ. For a forget node, φ_t(γ, q) holds if and only if φ_{t'}(γ', q) holds for some coloring γ' extending γ to the forgotten vertex; for an introduce node, φ_t(γ, q) holds if and only if φ_{t'}(γ restricted to X_{t'}, q') holds, where q' is obtained from q by removing the weight of the introduced vertex; for a join node, φ_t(γ, q) holds if and only if φ_{t'}(γ, q') and φ_{t''}(γ, q'') hold for some q', q'' with q_hc = q'_hc + q''_hc − w_hc(γ) for all h, c; for a leaf node, we simply check that the coloring of the bag is a proper list coloring and that the function provides the associated valid locally bounded weight vector. At the root, the value of the answer (true or false) to the initial instance is obtained by computing the value of φ at the root for the targets w_11, ..., w_pk, over all colorings of the root bag.

Lemma. The values computed by the algorithm are correct.

Proof. In order to show that the coloring computed at each vertex is
proper, first notice, as a preliminary remark, that it suffices to show that we obtain a proper list coloring in which the color of each vertex remains the same when moving from one bag to an adjacent one. We then show the correctness of the equations by considering each possible node type.

Assume t is a forget node. Since φ_t(γ, q) is true if and only if φ_{t'}(γ', q) is true for some extension γ' of γ, the coloring is such that each vertex of the bag must keep its color when moving from t' to t, and the forgotten vertex takes some color.

Assume t is an introduce node. φ_t(γ, q) is true if and only if φ_{t'}(γ', q') is true, where the weight of the introduced vertex, with its color, is removed from the total weight of the vertices of its set with this color; each vertex keeps its color when moving from t' to t, and the color of the introduced vertex defines a valid coloring of the subgraph induced.

Assume t is a join node. φ_t(γ, q) is true if and only if φ_{t'}(γ, q') and φ_{t''}(γ, q'') are true for some (q'_pk) and (q''_pk) such that, obviously, q_hc = q'_hc + q''_hc − w_hc(γ) for each h and c: since the weights of the vertices of the bag would otherwise be counted twice, each such vertex's weight is counted only once.

Assume t is a leaf node. φ_t(γ, q) is true if and only if the coloring of the bag is a proper list coloring and the function provides a valid locally bounded weight vector.

Finally, the root value is obtained by requiring the target weights on the subgraph induced by all the vertices, i.e., the whole graph. This concludes the proof. □

Running time. Let w_max = max_{h,c} w_hc, so that there are O((w_max + 1)^{pk}) possible weight vectors. The running time at a given node depends on its type; for each given node type (forget, introduce, join, leaf), it is polynomial in the number of possible colorings of a bag and in (w_max + 1)^{pk}. There being O(n) nodes, the algorithm computes O(n · k^{tw(G)+1} · (w_max + 1)^{pk}) values and, since computing each value takes time polynomial in these quantities, the overall running time is polynomial in n, k^{tw(G)} and (w_max)^{pk}. Together with the lemma, this implies:

Theorem. In graphs of bounded treewidth, WEIGHTEDLOCALLYBOUNDEDLISTCOLORING can be solved in pseudo-polynomial time when p and the number of colors are fixed, and in polynomial time when, in addition, the vertex weights are polynomially bounded.

Observe that these results are, in some sense, best possible: by the theorems of the previous section, dropping any of the assumptions on the number of colors, on p, or on the vertex weights leads to NP-completeness. We also generalize these results later. We close this section by mentioning that the approach can be adapted to solve an optimization version of WEIGHTEDLOCALLYBOUNDEDLISTCOLORING: more precisely, one can associate a profit function with the vertices, the profit of a vertex depending on its color, and, by slightly modifying the dynamic programming algorithm, one can compute a valid weighted locally bounded list coloring of maximum (or minimum) total profit. In this case, the value φ_t(γ, q) is no longer equal to true or false, but to the maximum total profit of a proper list coloring of G_t in which each vertex of the bag takes the color given by γ and the total weight of the vertices of each color in each set is given by q. In order to compute the values of this new function, we must make a few changes in the equations, which we provide without proofs, the arguments being quite similar to the ones used in the proof of the lemma. Note that, by convention, infeasible solutions have value −∞ when we maximize. For forget nodes and at the root, the value becomes the maximum of the previous expressions; for introduce nodes, it equals −∞ in the false cases and, otherwise, the value for the child; for join nodes, the "or" becomes a maximum and we add the values of the two children (removing the profit of the bag vertices, counted twice) at the end of the line; and, for leaf nodes, the value equals the profit of the bag coloring if the conditions hold, and −∞ otherwise.
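As a minimal illustration of the bottom-up idea of this section, here is the decision version specialized to rooted trees (treewidth 1) with p = 1, where each bag is a single vertex: the state at a vertex is its color together with the set of achievable per-color weight vectors of its subtree. This is a sketch under those simplifying assumptions, not the general algorithm; to stay pseudo-polynomial one would additionally discard vectors exceeding the targets.

```python
def tree_wlblc(children, root, weight, lists, k, targets):
    """DP on a rooted tree: for each vertex v and each allowed color c of v,
    compute the set of per-color weight vectors achievable in v's subtree
    (colors are 0..k-1, one part only)."""
    def solve(v):
        subs = [solve(u) for u in children[v]]
        states = {}                      # color of v -> set of weight vectors
        for c in lists[v]:
            vec = [0] * k
            vec[c] = weight[v]
            cur = {tuple(vec)}
            for child in subs:
                nxt = set()
                for cu, vecs in child.items():
                    if cu == c:          # tree edge (v, u): colors must differ
                        continue
                    nxt.update(tuple(a + b for a, b in zip(w1, w2))
                               for w1 in cur for w2 in vecs)
                cur = nxt
            if cur:
                states[c] = cur
        return states
    return any(tuple(targets) in vecs for vecs in solve(root).values())
```

On the weighted path used earlier (weights 1, 2, 1, two colors), every proper coloring alternates, so only the weight vector (2, 2) is reachable at the root.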
4. Locally bounded colorings of cographs

In this section, we study the tractability of WEIGHTEDLOCALLYBOUNDEDLISTCOLORING in cographs. Cographs, defined in the introduction, can be characterized in several ways. For instance, a graph is a cograph if and only if it has an associated cotree, whose leaves are the vertices of the graph and whose internal nodes are either union nodes or join nodes: the subtree having a union node as root corresponds to the disjoint union of the subgraphs associated with the children of this node, while the subtree having a join node as root corresponds to the complete union of the subgraphs associated with the children of this node (we add an edge between every pair of vertices with one vertex in each subgraph). Moreover, a cotree can easily be transformed in linear time into a binary cotree with O(n) nodes.

First, note that WEIGHTEDLOCALLYBOUNDEDLISTCOLORING is still NP-complete in cographs, even when each vertex has an arbitrary weight. Indeed, on the one hand, we prove below that the problem is NP-complete in complete bipartite graphs (which are cographs) when the number of colors is fixed; on the other hand, the bounded coloring problem was proved NP-complete in cographs by a reduction from bin packing, which also shows NP-completeness with respect to our problem in cographs (however, the number of colors in the instances used is large, as otherwise bounded coloring problems are tractable there). Since star forests with isolated vertices are cographs, the first corollary of the previous section shows that this remains true even in the case of polynomially bounded vertex weights, a case not covered by the theorem of the previous section, and the second corollary shows that, without the assumption on p, the cograph case (as the treewidth case, discussed later in this section) is also hard as soon as p is arbitrary, even with unit vertex weights and when each vertex can take any color. Finally, in the case where both p and the number of colors are fixed, the first theorem of the previous section shows that allowing arbitrary vertex weights leads to weak NP-completeness in cographs, even when each vertex can take any color. Moreover, the instances of the previous reductions can be made connected by adding a new vertex, adjacent to all the other vertices, that must take a new color; this increases the number of colors by one (in particular, the graph of the reduction that consisted of isolated vertices becomes a star).

However, when p and the number of colors are fixed, we can design an efficient dynamic programming algorithm, based on standard techniques, to solve WEIGHTEDLOCALLYBOUNDEDLISTCOLORING in cographs, using the associated binary cotrees. In order to describe the algorithm, we define the following function: φ_i(q) is true if there exists a proper list coloring of the subgraph induced by the leaves of the subtree rooted at node i such that the total weight of the vertices of V_h with color c in this induced subgraph is q_hc, and false otherwise. Each value is computed in a bottom-up fashion as follows. For a join node i, let i' and i'' denote its two children: φ^join_i(q) holds if and only if φ_{i'}(q') and φ_{i''}(q'') hold for some (q'_pk) and (q''_pk) with q_hc = q'_hc + q''_hc for all h, c, and such that, for each color c, at most one of the two subgraphs contains vertices with color c. For a union node i, let i' and i'' denote its two children: φ^union_i(q) holds if and only if φ_{i'}(q') and φ_{i''}(q'') hold for some q', q'' with q_hc = q'_hc + q''_hc for all h, c. For a leaf node i, let v be the vertex associated with the leaf: φ_i(q) holds if and only if q is the weight vector of a coloring of v by one of the colors of its list.
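The two cotree recurrences can be sketched directly: at a union node, weight vectors simply add, while at a join node they add only when the two sides use disjoint sets of colors. The tuple encoding of the cotree and the explicit tracking of used colors below are assumptions made for this illustration (with p = 1 for readability; with positive weights, comparing weight entries to zero would serve the same purpose).

```python
def cotree_states(node, weight, lists, k):
    """node is ('leaf', v), ('union', L, R) or ('join', L, R).
    Returns the set of (used colors, per-color weight vector) pairs
    achievable by proper list colorings of the corresponding cograph."""
    if node[0] == 'leaf':
        v = node[1]
        out = set()
        for c in lists[v]:
            vec = [0] * k
            vec[c] = weight[v]
            out.add((frozenset([c]), tuple(vec)))
        return out
    left = cotree_states(node[1], weight, lists, k)
    right = cotree_states(node[2], weight, lists, k)
    out = set()
    for u1, w1 in left:
        for u2, w2 in right:
            if node[0] == 'join' and u1 & u2:
                continue    # complete union: no color may appear on both sides
            out.add((u1 | u2, tuple(a + b for a, b in zip(w1, w2))))
    return out
```

For example, the star with center 0 and leaves 1, 2 is the cotree ('join', ('leaf', 0), ('union', ('leaf', 1), ('leaf', 2))); with weights (2, 1, 1) and two colors, the center and the leaves must use different colors, so (2, 2) is the only achievable weight vector.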
Lemma. The values computed by the algorithm are correct.

Proof. In order to show that the coloring of each vertex is proper on the whole graph, it is sufficient to prove that it is valid at each step of the computation of the values, i.e., at each node of the cotree. Assume i is a join node: the graph induced by the leaves of the subtree rooted at i is the complete union of two graphs, so the sum of the weights of the vertices of each color in the two graphs must equal the corresponding value; however, when taking the complete union of two subgraphs, no vertices belonging to the two subgraphs may share a color, as otherwise the coloring would not be proper. Assume i is a union node: the graph induced by the leaves of the subtree rooted at i is the disjoint union of two graphs, and thus the sum of the weights of the vertices of each color in the two graphs must equal the corresponding value. Assume i is a leaf node: the value is valid since the vertex associated with the leaf takes one color that belongs to its list. □

The algorithm runs in time polynomial in n and (w_max + 1)^{pk}, where w_max = max_{h,c} w_hc; together with the lemma, this yields the following result.

Theorem. WEIGHTEDLOCALLYBOUNDEDLISTCOLORING can be solved in pseudo-polynomial time in cographs when p and the number of colors are fixed, and in polynomial time in cographs when, in addition, the vertex weights are polynomially bounded.

Note that this result generalizes previous ones. As before, one can associate a profit function with the vertices and modify slightly the dynamic programming algorithm, by replacing the "or" by a maximum in union and join nodes and by returning, in leaf nodes, the profit when the conditions are satisfied and −∞ otherwise, in order to compute a feasible weighted locally bounded list coloring of maximum profit (note that we then return the maximum of the values φ^union_i or φ^join_i as well). The theorem can also be used to show the following proposition.

Proposition. WEIGHTEDLOCALLYBOUNDEDLISTCOLORING can be solved in pseudo-polynomial time in graphs consisting of isolated vertices when the number of colors is fixed.

Proof. Since there are no edges, any coloring is proper in this case; hence, each set of the partition can be considered independently, and solving an instance of WEIGHTEDLOCALLYBOUNDEDLISTCOLORING is equivalent to solving p independent instances in which the partition contains only one vertex set. Since the number of colors is fixed, and p is as well in each of these instances, each one can be solved in pseudo-polynomial time thanks to the theorem above. □

To close this section, we study WEIGHTEDLOCALLYBOUNDEDLISTCOLORING in particular cographs, and begin by studying complete bipartite graphs, i.e., the cographs that can be represented by binary cotrees containing union nodes and only one join node, at the root. The problem being trivial with one color, we shall assume there are at least two. Concerning the complexity of WEIGHTEDLOCALLYBOUNDEDLISTCOLORING in graphs consisting of isolated vertices and in stars, which are graphs closely related to complete bipartite graphs, our previous theorems imply that WEIGHTEDLOCALLYBOUNDEDLISTCOLORING is NP-complete in complete bipartite graphs for a fixed number of colors, even when the vertex
weights are polynomially bounded and each vertex can take any color. Moreover, it has been proved that the problem without constraints on the sizes of the color classes is strongly NP-complete in complete bipartite graphs when the number of colors is arbitrary; however, adding isolated vertices does not yield complete bipartite graphs, so this result does not directly apply to our problem. We show, via a more complex reduction, partly inspired by the one given above, that a similar result can be obtained for WEIGHTEDLOCALLYBOUNDEDLISTCOLORING in this case.

Theorem. WEIGHTEDLOCALLYBOUNDEDLISTCOLORING is strongly NP-complete in complete bipartite graphs, even with unit vertex weights.

Proof. Given a MONOTONEONEINTHREESAT instance with a set of clauses and a set of variables, we construct a WEIGHTEDLOCALLYBOUNDEDLISTCOLORING instance whose graph is a complete bipartite graph, as follows. On the left side, there are vertices associated with the clauses and, for each variable i, occ(i) vertices v_ij, where occ(i) is the number of occurrences of the variable in the set of clauses; on the right side, there are, for each variable i, occ(i) vertices w_ij. For each variable, we define a set containing the v_ij's and the w_ij's; furthermore, we define sets for the clause vertices, assuming the h-th clause consists of given occurrences; finally, we set the target values according to the occ(i)'s. We claim the following equivalence between the solutions: the w_ij's of variable i take the first color if and only if the variable takes the value true.

Let us justify this equivalence. First, assume we know a solution of the MONOTONEONEINTHREESAT instance. For the h-th clause, let the associated vertex take the color corresponding to the variable of value true in this clause; moreover, let the v_ij's and w_ij's take their colors according to the value (true or false) of variable i, meaning that the number of vertices with the first color is exactly the number of occurrences of the variables of value true. It is easily checked that this yields a feasible solution, since there are exactly the required numbers of vertices of each color and all the clauses have exactly one true literal.

Conversely, assume we are given a solution of the WEIGHTEDLOCALLYBOUNDEDLISTCOLORING instance, and consider a variable i such that some w_ij takes the first color. By construction, the vertices v_ij cannot take this color and hence take the other one; this way, the occ(i) vertices w_ij have the first color. If no vertex w_ij takes the first color, this implies in turn that, in the same way, the occ(i) vertices v_ij have the first color and the w_ij's the other one. In short, either the w_ij's take the first color, in which case the occ(i) vertices v_ij take the other color and the vertices corresponding to the clauses where variable i appears may take the color of i, or the w_ij's do not take the first color, in which case the v_ij's do, and the vertices associated with the clauses where i appears must not take the color of i. Hence, whenever the vertex associated with a clause takes the color of a variable, this means that the w_ij's of this variable take the first color, i.e., the variable is true; in
words exactly one literal value true clause positive side show weightedlocallyboundedlistcoloring solved pseudo polynomial time complete bipartite graphs fixed note graph vertices given color belong side bipartition fixed guess colors side enumerating possibilities possibilities configuration assignment colors sides enumerate consider side bipartition independently one check whether configuration feasible note side consider independently since interact left set independent instances fixed vertices isolated vertices solve one pseudo polynomial time thanks proposition yields theorem weightedlocallyboundedlistcoloring solved pseudo polynomial time complete bipartite graphs turn case complete graphs cographs represented binary cotrees containing join nodes union node notice complete graph nodes must take different colors must feasible coloring one colors used possible whc color thus remove hence know necessarily color appear exactly particular since disjoint implies color whc solve weightedlocallyboundedlistcoloring case arbitrary values arbitrary vertex weights reducing matching instance bipartite graph one vertex vertex one vertex color edge vertex color whc iii vhc due rule used define edges linking vertices compatible colors easy see solution weightedlocallyboundedlistcoloring instance perfect matching vertex linked color matching color therefore weightedlocallyboundedlistcoloring solved polynomial time case using instance efficient algorithm computing maximum flow since bipartite yields theorem weightedlocallyboundedlistcoloring solved polynomial time complete graphs one associate profit function vertices obtain feasible solution weightedlocallyboundedlistcoloring maximum profit finding perfect matching maximum weight equivalently solving linear assignment instance optimally weight edge vertex color locally bounded split graphs section study tractability weightedlocallyboundedlistcoloring split graphs recall split graph graph whose set vertices partitioned vertex 
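The matching-based polynomial algorithm for complete graphs described above can be made concrete. The compatibility rule below (vertex v may receive color h iff h is in v's list and w(v) does not exceed the bound for h) is an assumed reading of the garbled construction and may differ in detail from the original; the augmenting-path bipartite matching itself is the standard Kuhn algorithm.

```python
# Sketch of the matching-based algorithm for weighted locally bounded
# list coloring on complete graphs. The compatibility rule (vertex v may
# take color h iff h is in v's list and weights[v] <= bounds[h]) is an
# assumption; the augmenting-path matching is standard.

def complete_graph_coloring(weights, lists, bounds):
    """weights[i]: weight of vertex i; lists[i]: set of allowed colors;
    bounds: dict color -> capacity. Returns {vertex: color} or None."""
    n = len(weights)
    # vertex i is compatible with color h under the assumed rule
    adj = [[h for h in sorted(lists[i])
            if h in bounds and weights[i] <= bounds[h]]
           for i in range(n)]
    match = {}  # color -> vertex currently holding it

    def augment(i, seen):
        # try to give vertex i a color, relocating earlier vertices
        for h in adj[i]:
            if h not in seen:
                seen.add(h)
                if h not in match or augment(match[h], seen):
                    match[h] = i
                    return True
        return False

    for i in range(n):
        if not augment(i, set()):
            return None  # no saturating matching: instance infeasible
    return {v: h for h, v in match.items()}
```

Since all vertices of a complete graph must receive pairwise distinct colors, a feasible coloring corresponds exactly to a matching saturating the vertex side, which is found in polynomial time.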
set inducing clique vertex set inducing stable independent set sake simplicity denote resp number vertices resp section weightedlocallyboundedlistcoloring remains npcomplete split graphs theorems weakly vertex weights arbitrary even vertex take color strongly arbitrary even vertex weights polynomially bounded vertex take color however fixed solved pseudo polynomial time theorem weightedlocallyboundedlistcoloring solved pseudo polynomial time split graphs proof since looking proper vertex coloring vertex neighbors hence contain vertices one enumerate constant time possible colorings one must check proper color vertex coloring remove update value whc list vertex accordingly way obtain instance graph consisting isolated vertices proposition instance solved pseudo polynomial time concludes proof problem complete bipartite graphs thus cographs strongly actually reduction used one side bipartite instances replaced clique hence proof also holds split graphs one theorem despite fact former one partly inspired latter one adding initial vertex new isolated vertices take color yields instances equitable problem however straightforward adaptation proof theorem yields stronger result theorem weightedlocallyboundedlistcoloring strongly npcomplete split graphs even vertex degree vertex part inducing independent set one proof start reduction described proof theorem add clique vertices namely vij two vertices receive different colors original reduction obtain equivalent instance independent set contains leaves sij hence vertex degree one concludes proof note however problem easy split graphs every vertex degree remove one possible color update possible colors accordingly use optimal flow color vertices appropriately theorem give vertex color used unique neighbor neighbor exists moreover lists possible colors play major role reduction one may wonder happens vertex take color arbitrary vertex following result holds theorem weightedlocallyboundedlistcoloring strongly npcomplete split 
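The proposition invoked in the split-graph argument above (instances consisting of isolated vertices are solvable in pseudo-polynomial time for a fixed number of colors) can be realized by a simple memoized search over residual color capacities. This is a sketch under the assumption that each bound is an upper limit on the total weight assigned to its color and that every vertex must receive a color from its list; the state space has pseudo-polynomial size for fixed k.

```python
# Pseudo-polynomial feasibility check for isolated vertices, assuming the
# bounds are upper limits on total weight per color. The memoized state
# is the tuple of used weights per color, so the running time is
# pseudo-polynomial for a fixed number of colors.
from functools import lru_cache

def isolated_vertices_feasible(weights, lists, bounds):
    colors = sorted(bounds)

    @lru_cache(maxsize=None)
    def feasible(i, used):
        if i == len(weights):
            return True  # every vertex has been colored within the bounds
        for j, h in enumerate(colors):
            if h in lists[i] and used[j] + weights[i] <= bounds[h]:
                nxt = used[:j] + (used[j] + weights[i],) + used[j + 1:]
                if feasible(i + 1, nxt):
                    return True
        return False

    return feasible(0, (0,) * len(colors))
```

Combined with an enumeration of the (boundedly many) colorings of the clique side, this yields the pseudo-polynomial algorithm for split graphs.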
graphs even whc vertex weight take color proof take instance consists clauses size defined boolean variables variable associate two vertices jth clause associate three vertices add edges following way order obtain split graph vertices induce clique vertices vertices induce independent set vertices let jth clause xil also add edge yil finally let define wii well xil otherwise note whc equal vertex define following equivalence solutions two instances true color note vertex must take color yil takes color cil hence literal true cil cil otherwise concludes proof actually even prove stronger result get rid assumption must arbitrary allowing vertex take color theorem weightedlocallyboundedlistcoloring strongly npcomplete split graphs even color vertex weight take color proof reduce problem given three sets elements list triples one wants decide whether exists covering elements given instance define following instance weightedlocallyboundedlistcoloring vertices take color weight one vertex triple one vertex element one vertex element one vertex element moreover edge vertices except xih yjh zlh provided hth triple composed element element element particular define finally define color bounds follows easy see following equivalence holds hth triple takes color indeed color must appear exactly moreover vertex non adjacent exactly three vertices hence takes color three vertices non adjacent must take color colors must cover vertices vertices vertices takes color corollary capacitated coloring problem split graphs note contrary bounded coloring problem easy split graphs end section providing three last tractable special cases first one obtained assuming case enumerate potential colorings one must check proper color vertex coloring remove update values whc lists vertices accordingly left instance complete graph solve polynomial time thanks theorem second one obtained assuming vertex without latter assumption problem theorems case enumerate potential colorings one must check 
proper color coloring remove update values whc list accordingly left instance set isolated vertices vertex weight solve efficiently thanks proposition third one complements corollary obtained assuming vertex weight take color colors except constant number whc common constant assumption less restrictive recall proved bounded coloring problem solved polynomial time split graphs means reduction intermediate problem solved using maximum flow algorithm construction use solve third special case number colors arbitrary also solves bounded coloring problem direct reduction maximum flow problem namely prove theorem weightedlocallyboundedlistcoloring solved polynomial time split graphs vertex weight take color colors except constant number called singular colors whc constant proof let consider instance split graph let vertices vertices start guessing singular colors appear color unique vertex uses done time enumeration vertex take arbitrary different color whc singular colors may appear also set whc whc whc vertex takes color whc whc otherwise since otherwise solution always end proper coloring define list possible colors exists adjacent takes color remove vertices left instance weightedlocallyboundedlistcoloring graph consisting isolated vertices weight subgraph induced vertices color bounds whc solve instance polynomial time maximum matching problem bipartite graph hence maximum flow problem thanks proposition concludes proof note generalizes solvability case bounded coloring problem split graphs moreover without assumption vertex take color problem becomes hard even indeed take reduction theorem color add three isolated vertices take color way obtain instances color hence similarly adapting proof theorem shows without assumption problem arbitrary whc however vertex clique part list possible colors arbitrary weight proof theorem easily adapted show problem remains tractable provided vertex weight allowed take color indeed guess part give appropriate non singular colors 
remaining uncolored vertices solving another matching instance greedy way anymore way similar one used proof theorem extension edge colorings last section extend edge colorings previous results concerning vertex cographs split graphs graphs bounded edge coloring graph assignment colors integers edges way two edges sharing vertex take different colors problem consider section thus following weightedlocallyboundedlistedgecoloring instance graph weight function partition edge set list integral bounds wpk wij edge list possible colors question decide whether exists edge coloring total weight edges color wij color belongs edge proofs edge colorings prove problem using reduction partition similar one theorem isolated vertex becomes isolated edge define make instance connected obtain chain setting adding new edges color however problem trivial connected since maximum degree case hence either path even cycle two possible main goal section prove weightedlocallyboundedlistedgecoloring tractable graphs considered previous sections assumptions vertex coloring counterpart however shall first justify case vertex colorings dropping assumptions leads without looking possible special cases sake brevity previous paragraph settles case arbitrary edge weights let look cases input graph neither cograph split graph unbounded weightedlocallyboundedlistedgecoloring even edge indeed deciding whether edges graph properly colored using three colors known arbitrary need adaptation proof theorem theorem weightedlocallyboundedlistedgecoloring strongly graphs connected component either cycle length four single edge even edge edge take color proof given instance monotoneoneinthreesat construct graph precisely variable occ isolated edges denoted eji occ cycles length four occ number occurrences set clauses occ let denote aji bji cji dji four edges jth cycles assume instance aji cji bji dji moreover edge weight take two colors intuitively jth cycle associated variable edge bji used interact one 
edge cji used interact one edge dji useful obtain cograph color edge aji reflect value true false let define formally partition use associated color bounds occ set occ otherwise occ well bji occ moreover occ set dji eji occ occ note hence jth cycle length four associated variable edges aji cji must take color edges bji dji must take color way defined far aji must take color end reduction defining one set partition clause lth clause consists occurrence variable occurrence define variable occurrence variable occ implies occ yields following equivalence solutions two instances true aji color concludes proof number colors arbitrary weightedlocallyboundedlistedgecoloring strongly shown easy reduction similar one theorem isolated vertex becomes isolated edge define note graphs used reductions described consist vertexdisjoint cycles length four complete bipartite graphs two vertices side isolated edges words cographs graphs moreover easily turn instances used case number colors arbitrary split graphs see start reduction define instead choose one endpoint edges clique vertices yields split add graph edges denoted take color define assume without loss generality simply multiply two original instance order obtain equivalent instance implies color used one one edge shows equivalence two reductions hence shall assume fixed otherwise weightedlocallyboundedlistedgecoloring following subsection give efficient algorithms solve problem efficient algorithms edge colorings start considering graphs bounded assume graph given together minimum width fixed otherwise weightedlocallyboundedlistedgecoloring theorem define following function showing compute node type true edge color total weight edges color false otherwise forget node let denote child introduce node let denote child denote edges incident edges assume edges share common vertex join node let denote two children let whc total weight edges qpk qpk qpk whc qhc leaf node case check coloring function provides valid locally bounded 
root value answer true false initial instance obtained root computing following value wpk proof correctness similar one given lemma analysis running time similar one given stating theorem therefore omitted yields following result theorem graphs bounded weightedlocallyboundedlistedgecoloring solved pseudopolynomial time fixed polynomial time fixed edge weights polynomially bounded one associate profit function edges modify slightly dynamic programming algorithm order compute feasible weighted locally bounded maximum profit turn attention towards cographs assume fixed otherwise result holds theorem since looking proper edge coloring maximum degree graph bounded number colors cograph contains induced path four vertices diameter connected components two words vertex neighbors neighbors neighbors vertex connected component belongs implies connected component contains vertices hence edges therefore connected component possible edge colorings one one easily check whether defines proper listcoloring component moreover fixed colorings combined one one using efficient dynamic programming algorithm similar one outlined beginning section needs store pseudo polynomial amount information namely maxh whc finally let study case split graphs since looking proper edge coloring maximum degree graph bounded number colors isolated vertices play role case removed hence vertex clique part neighbors implies contain vertices edge incident one vertices vertices incident edges graph contains edges therefore hence weightedlocallyboundedlistedgecoloring trivial split graphs conclusion open problems conclude giving overview results concerning weightedlocallyboundedlistcoloring cographs split graphs graphs bounded three following tables summarize results one devoted one classes graphs npc means npcomplete means solvable polynomial time means arbitrary denotes input graph poly poly poly poly poly poly poly poly graphs bounded result npc weak sense theorem npc theorem yes proposition yes 
proposition yes npc theorems npc theorems yes theorem cographs result npc weak sense theorem npc theorem yes npc corollary without npc corollary yes theorem yes npc complete bipartite graphs theorem yes complete bipartite graphs theorem yes complete graphs theorem split yes yes graphs result npc weak sense theorem npc theorem npc theorem npc theorem even whc npc theorem singular colors theorem theorem following table summarizes results concerning arbitrary graphs poly yes result section npc since highlight section similar results edge colorings including specific hardness proofs one theorem provided cographs split graphs graphs bounded one may actually wonder weightedlocallyboundedlistcoloring behaves bipartite graphs mention yet instances used proofs theorems graphs relevant question would problem solvable bipartite graphs vertex weights polynomially bounded unfortunately answer indeed bodlaender jansen proved theorem equitable coloring problem strongly bipartite graphs theorem stated bounded coloring problem proof actually holds case well without modification even colors weightedlocallyboundedlistcoloring vertex weight take color one may also interested fpt algorithms weightedlocallyboundedlistcoloring polynomially bounded vertex weights unfortunately results theorems best possible problem respect graphs also cographs split graphs even see take instance unary bin packing bins respect build equivalent instance target problem items becomes vertex weight color bounds equal common bin capacity add isolated vertices weight finally main open question would like mention worth studying direct consequence results sections indeed theorems leave open case vertex weight take color words complexity capacitated coloring problem defined graphs bounded arbitrary bodlaender fomin managed prove bounded coloring problem tractable case question open long time finally settled capacitated coloring problem general probably harder even case one able prove far actually reduction 
from the capacitated coloring problem to the bounded (equitable) coloring problem: take an instance of the former problem with its color bounds, add isolated vertices until the total number of vertices reaches the required value, and then add sets of vertices inducing independent sets, in such a way that the i-th set contains the appropriate number of isolated vertices and any two vertices belonging to two different sets are linked by an edge. One obtains an instance of the latter problem by setting a common color bound, and it can be checked that all the vertices of the i-th set must take color i. In general, however, this reduction preserves neither the property of having bounded treewidth nor that of being a split graph (although it does preserve that of being a cograph). Recall also that the capacitated coloring problem is NP-complete on cographs and on split graphs when the number of colors is arbitrary (by the corollaries above).

References
- Baker and Coffman, Mutual exclusion scheduling, Theoretical Computer Science.
- Bentz and Picouleau, Locally bounded k-colorings of trees, RAIRO - Operations Research.
- Bodlaender and Fomin, Equitable colorings of bounded treewidth graphs, Theoretical Computer Science.
- Bodlaender and Jansen, Restrictions of graph partition problems, Part I, Theoretical Computer Science.
- Bonomo, Mattia and Oriolo, Bounded coloring of co-comparability graphs and the pickup and delivery tour combination problem, Theoretical Computer Science.
- Brandstädt, Le and Spinrad, Graph Classes: A Survey, SIAM Monographs on Discrete Mathematics and Applications, Society for Industrial and Applied Mathematics, Philadelphia.
- Dror, Finke, Gravier and Kubiak, On the complexity of a restricted list-coloring problem, Discrete Mathematics.
- Fiala, Golovach and Kratochvíl, Parameterized complexity of coloring problems: treewidth versus vertex cover, Theoretical Computer Science.
- Garey and Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, Freeman, New York.
- Gravier, Kobler and Kubiak, Complexity of list coloring problems with a fixed total number of colors, Discrete Applied Mathematics.
- Hansen, Hertz and Kuplinsky, Bounded vertex colorings of graphs, Discrete Mathematics.
- Holyer, The NP-completeness of edge-coloring, SIAM Journal on Computing.
- Jansen, Kratsch, Marx and Schlotter, Bin packing with fixed number of bins revisited, Journal of Computer and System Sciences.
- Jansen and Scheffler, Generalized coloring for tree-like graphs, Discrete Applied Mathematics.
- Kloks, Treewidth: Computations and Approximations, Lecture Notes in Computer Science.
Quantifying topological invariants of neuronal morphologies

Lida Kanari, Paweł Dłotko, Martina Scolamiero, Ran Levi, Julian Shillcock, Kathryn Hess, Henry Markram

Blue Brain Project, EPFL; DataShape, INRIA Saclay; Laboratory for Topology and Neuroscience, Brain Mind Institute, EPFL; Institute of Mathematics, University of Aberdeen

(Dated: March)

Nervous systems are characterized by neurons displaying a diversity of morphological shapes. Traditionally, different shapes have been qualitatively described based on visual inspection and quantitatively described based on morphometric parameters. Neither process provides a solid foundation for categorizing the various morphologies, a problem that is important in many fields. We propose a stable topological measure as a standardized descriptor for any morphology, which encodes its skeletal branching anatomy. More specifically, it is a barcode of the branching tree as determined by a spherical filtration centered at the root or neuronal soma. This Topological Morphology Descriptor (TMD) allows the discrimination of groups of random and neuronal trees at linear computational cost.

Keywords: topological analysis, neuronal structures, trees, morphologies

The analysis of complex tree structures, such as neurons, branched polymers, viscous fingering, and fractal trees, is important for understanding many physical and biological processes, yet an efficient method for quantitatively analysing the morphology of trees is difficult to find. Biological systems provide many examples of complex trees; the nervous system is one of the most complex biological systems known, and its fundamental units, neurons, are sophisticated information-processing cells possessing highly ramified arborizations. The structure and size of these neuronal trees determine the input sources and the range of target outputs of a neuron, and are thought to reflect its involvement in different computational tasks. In order to understand brain functionality, it is therefore fundamental to understand how neuronal shape determines neuronal function. As a result, much effort has been devoted to grouping neurons into distinguishable morphological classes, and this categorization process is important in many fields. Neurons come in a variety of shapes, with different branching patterns: frequency of branching, branching angles, branch lengths, the overall extent of the branches, etc. Classifying the different morphologies has traditionally focused on visually
distinguishing shapes observed microscope method inadequate subject large variations experts studying morphologies made even difficult presence enormous variety morphological types objective method discriminating neuronal morphologies could advance progress generating neurons nervous system reason experts generate digital version cells structure reconstructed model neuron studied computationally reconstructed morphology encoded set points along branch edges ing pairs points reconstruction forms mathematical tree representing neurons morphology figure illustration separation similar tree structures distinct groups using topological analysis colored pie segments show six distinct tree types three neuronal types upper half three artificial ones lower half thick blue lines show topological analysis reliably separate trees groups accurate trees neuronal morphologies dashed green lines show classification using improper set features distinguish correct groups general properties geometric trees described set points rigorously studied two extreme cases limit full complexity trees entire set points used opposite limit feature space typically small number selected morphometrics statistical features branching pattern extracted digital morphology form input standard classification algorithms topological data analysis tda shown reliably identify geometric objects built pieces spheres cylinders tori based point cloud sampled object suffers however deficiency reliable classification complex geometric tree structures standard tda methods rips complexes requires thousands sampled points expensive terms computational complexity memory requirements thus feature extraction currently feasible solution establishing quantitative approach morphologies approach efficiently used image recognition wide diversity neuronal shapes even cells identified type made difficult isolate optimal set features reliably describe neuronal shapes indeed alternative sets morphometrics result different 
classifications illustrated fig statistical features commonly overlap even across markedly different morphological types turn traditional feature extraction results significant loss information dimensionality data reduced result neither methods suitable study complex tree structures describe physical biological objects algorithm propose overcomes limitations constructing topological morphology descriptor tmd tree embedded distance inspired persistent homology defined distinguish trees alternative methods computing distances trees edit distance blastneuron distance functional distortion distance general computationally expensive see distances trees algorithm preserves overall shape tree reducing computational cost discarding local fluctuations takes input set key functional points tree branch points nodes one child leaves nodes children transforms intervals real line known persistence barcode fig interval encodes lifetime connected component underlying structure delimited points branch first detected birth connects larger subtree death information equivalently represented persistence diagram fig times points real plane advantage structure simplifies mathematical analysis data method applicable structure demonstrate generality first applying collection artificial random trees see random trees groups neuronal trees see data provenance results show tmd tree shapes highly effective reliably efficiently distinguishing random neuronal trees fig figure application topological analysis neuronal tree showing largest persistent component red persistence barcode represents component horizontal line whose endpoints mark birth death units depend choice function used ordering nodes tree case radial distance nodes soma units microns largest component shown red together birth death barcode equivalently represented points persistence diagram birth death component coordinates point respectively red diagonal line guide eye marks points whose birth death simultaneous note persistence 
the persistence barcode has an arbitrary ordering of the components along the vertical axis, which prevents the calculation of an average barcode; the corresponding persistence diagram, however, has physical coordinates on both axes, allowing an average persistence diagram to be defined.

Methods. The operation that extracts the barcode of an embedded tree is described in Algorithm 1. Let T be a rooted, and therefore oriented, tree embedded in R^3 (note that the operation is in fact generalizable to trees embedded in any metric space). Denote by N the set of nodes of T, the union of the set B of branch points and the set L of leaves; in the case of a neuron, the root R is the point representing the soma. Each node references its parent, the first node on the path towards the root, and nodes with the same parent are called siblings. Let f be a function defined on the set of nodes; for the purposes of this study, f is the radial distance to the root R (another possible choice would be the path distance). Based on f, we construct a function v, also defined on the set of nodes, whose value at a node is the largest value of f over the leaves of the subtree starting at that node. An ordering of siblings is defined based on v: of two siblings, the one with the smaller value of v is younger. The algorithm is initialized by setting the value v of each leaf equal to its value of f. Following the paths from the leaves towards the root, the younger members of a set of siblings are killed at each branch point (if siblings are of equivalent age, either one may be killed); each time a component is killed, one interval is added to the persistence barcode (see the corresponding figure). The oldest sibling survives and merges with a larger component at a subsequent branch point. This operation is applied iteratively to all nodes until the root is reached, at which point only one component, the oldest one, is still alive.

Algorithm 1 Extracting the persistence barcode of a tree T
Require: T, f
  TMD ← an empty list that will contain pairs of real numbers
  A ← L (the set of active nodes); for every leaf l, set v(l) ← f(l)
  while R ∉ A do
    choose a parent p all of whose children C are contained in A
    c_m ← a child of p with maximal v (chosen randomly among ties)
    for every c ∈ C with c ≠ c_m, add the pair (v(c), f(p)) to TMD
    remove C from A; add p to A; set v(p) ← v(c_m)
  add the pair (v(R), f(R)) to TMD
  return TMD

The resulting barcode is invariant under rotations about the root R and under rigid translations of the tree. A common topological metric used for the comparison of persistence diagrams is the bottleneck distance d_B: the infimum, over all matchings between the points of two persistence diagrams, of the maximum infinity-norm distance between matched points. We prove that the operation is stable with respect to the standard topological distances (see the sections on the bottleneck and Wasserstein stability of the barcode construction): if the tree is modified by a small amount with respect to the Hausdorff distance (see distances between metric spaces), then the persistence diagram is modified only slightly, as long as the function f is Lipschitz; note that this requirement is always satisfied by the radial distance. As a result, the method is robust with respect to small perturbations of the tree.
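The barcode-extraction procedure described here can be sketched in a few lines of Python. The children-map representation and the recursive traversal are my own implementation choices, and ties between siblings of equal age are broken deterministically by `max` rather than randomly:

```python
# Minimal executable sketch of the TMD barcode extraction. The tree is a
# children map plus a function f on the nodes (e.g. radial distance from
# the soma).

def tmd_barcode(children, f, root):
    bars = []  # (birth, death) pairs
    v = {}     # largest f-value over the leaves below each node

    def process(node):
        kids = children.get(node, [])
        if not kids:           # a leaf: a component is born here
            v[node] = f[node]
            return
        for c in kids:
            process(c)
        survivor = max(kids, key=lambda c: v[c])  # oldest sibling lives on
        for c in kids:
            if c != survivor:  # younger siblings die at the branch point
                bars.append((v[c], f[node]))
        v[node] = v[survivor]

    process(root)
    bars.append((v[root], f[root]))  # the component reaching the root
    return bars
```

For a toy tree with root at radial distance 0, a branch point at 1 carrying leaves at 3 and 2, and a direct leaf at 1.5, this yields the bars (2, 1), (1.5, 0), and (3, 0): one interval per leaf, with the longest path surviving to the root.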
Such small perturbations include changes in the positions of the nodes and the deletion of small branches (branches whose outgoing radial distance from the origin is only slightly smaller than the radial distance of the terminal point of the branch). The operation is equivalent to a filtration by concentric spheres of decreasing radii centered at R. In this case, the birth time of a component is the supremum of the radii of the spheres that contain the entire component, and its death time is the infimum of the radii of the spheres that contain the branch point at which the component merges into a longer one. The computational complexity of the operation is linear in the number of nodes; note that the bookkeeping step of the algorithm is critical here: the number of currently active children is saved in the parent node in order to avoid a quadratic process. The operation results in a set of intervals on the real line, each representing the lifetime of one component of the tree, and thus associates a persistence barcode to the tree; the barcode is invariant under rotations and translations as long as the function f is itself invariant, as is the radial distance used for the purposes of this paper.

[Figure: Application of the topological analysis to a set of artificial trees generated by a stochastic process. Three sets of trees are shown; each set differs from the others in tree depth. Each group contains several individuals (only a few are shown for clarity), each generated using the same tree parameters but a different random number seed. The TMD separates the groups accurately, using the distance metric between the barcodes of the trees described in the main text, as seen in the separation of colors in the colormap. This is supported by the clear distinction of the three groups resulting from a simple hierarchical clustering algorithm based on the dendrogram.]

However, none of the standard topological distances between persistence diagrams is appropriate for grouping neuronal trees: the bottleneck distance, as well as distances based on it, such as the persistence distortion distance (see distances between trees), cannot distinguish diagrams that differ only in their short components, which are nevertheless an important distinction between neuronal morphologies. We therefore define an alternative distance d_bar on the space of barcodes and use it to classify branching morphologies. From each barcode we generate a density profile, defined as the following histogram.

Figure caption: Topological comparison of neurons from different animal species. Each row corresponds to a species: (i) cat, (ii) dragonfly, (iii) drosophila, (iv) mouse, (v) rat. Trees of several exemplar cells of each species are shown in the first column, and representative persistence barcodes of these cells are
shown in the second column. The structural differences between the trees are clearly evident in the barcodes. In (i)-(iii), the clusters of short components are clearly distinct from the largest components, and the bars have a distribution of decreasing lengths; the barcodes of (i)-(iii) also show empty regions between dense regions of bars, indicating the existence of two clusters within the morphologies, while the other barcodes are dense overall. The persistent image of a representative barcode, superimposed on the persistence diagram, is shown in the third column. By combining the persistence diagrams of several trees, we define an average persistent image in order to study qualitative differences between sets of morphologies. The trees of the first row (cat) are tightly grouped, while in the second row (dragonfly) two clusters are visible among the dragonfly trees. Considering all rows, the extension of the elliptical peak perpendicular to the diagonal line reflects the variance, mentioned earlier, of the single-cell barcodes. Note that the trees, barcodes, and persistent images are not shown to the same scale (see the scale bars in each case).

The number of intervals that contain a given value is the number of components alive at that point. The distance between two barcodes is defined as the sum of the differences of the density profiles of the barcodes. This distance is stable with respect to the Hausdorff distance, and, being aware of short components, it succeeds in capturing the differences between distinct neuronal persistence barcodes. Finally, a persistence diagram can be converted into a persistent image, following a previously introduced method based on the discretization of a sum of Gaussian kernels centered at the points of the persistence diagram. The discretization generates a matrix of pixel values, encoding the persistence diagram as a vector called the persistent image. Machine learning tools, such as decision trees and support vector machines, can then be applied to this vectorization for the classification of persistence diagrams.

Results. We demonstrate the discriminative power of the TMD by applying it to three examples of increasing complexity. First, we apply it to artificial random trees, which provide a test case for exploring the method's performance: they are generated by a stochastic algorithm (described in the section on random trees), and their properties can therefore be precisely modified. Next, we analyze two datasets of biological relevance: neurons of different species, downloaded as digital reconstructions, from which distinct types of trees were obtained, and several morphological types of rat cortical pyramidal cells (see data provenance). The last example is particularly interesting because, although biological support is limited, the separation into distinct groups is consistent with the
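The density-profile distance d_bar and the persistent image described above can be sketched as follows. The sampling grid, the kernel width sigma, and the unweighted Gaussian sum are illustrative assumptions rather than the paper's exact discretization:

```python
# Sketches of the barcode distance and the persistent image. The density
# profile counts the bars alive at each sample point; dbar sums the
# absolute differences of two profiles; the persistent image sums
# Gaussian kernels centered at the (birth, death) points.
import math

def density_profile(bars, xs):
    return [sum(1 for b, d in bars if min(b, d) <= x <= max(b, d))
            for x in xs]

def dbar(bars1, bars2, xs):
    p1, p2 = density_profile(bars1, xs), density_profile(bars2, xs)
    return sum(abs(a - b) for a, b in zip(p1, p2))

def persistent_image(bars, grid, sigma=1.0):
    return [[sum(math.exp(-((x - b) ** 2 + (y - d) ** 2)
                          / (2 * sigma ** 2)) for b, d in bars)
             for x in grid] for y in grid]
```

Flattening the resulting pixel matrix row by row yields the vector representation to which standard classifiers can be applied.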
objective classification previously performed persistence diagrams trees example datasets classified correctly high accuracy indicating good performance algorithm across examples utpc stpc expert randomized accuracy fraction mathematical random trees defined set parameters constrain shape tree depth branch length bifurcation angle degree randomness see generation random trees defined control group set trees generated predefined parameters independent random seeds parameter varied individually generate groups trees differed control group one property trees extracted persistence barcode using algorithm comparison distances dbar tree barcode barcodes trees every groups constitutes one trial trial successful minimum barcode distance occurs tree group generated parameters tmd method accurately separates groups random trees different tree depth seen fig accuracy see random trees results distance matrix fig shows three distinct groups simple hierarchical clustering algorithm correctly assigns tree group tmds effectiveness demonstrated accuracy assigning individual trees originating group tmd distinguishes random trees generated varying parameters accuracy respectively results given random trees results next algorithm used quantify differences neuronal morphologies neurons serve distinct functional purposes exhibit unique branching patterns morphologies neuronal trees different species seen fig classified different groups purposes study used cat drosophila dragonfly mouse rat neurons qualitative differences neuronal tree types evident individual geometrical tree shapes fig well extracted barcodes fig superposition persistence diagrams set trees constructed persistent images group fig note persistence barcode arbitrary ordering trees components along prevents calculation average barcode corresponding persistence diagram associated persistent image physical coordinates axes allowing average persistence diagram defined representation useful quantifying differences tree types 
trivial identify fig areas high branching density different radial distances soma position clusters since density usually correlated connection probability identify anatomical parts trees important functionality connectivity patterns different cell types finally applied tmd case difficult distinguish utpc utpc stpc utpc stpc stpc figure comparison topological analysis apical dendrite trees extracted several types rat neuron four cell types shown utpc stpc left right morphological differences cell types subtle persistent images clearly reveal particularly presence two clusters cell types persistent images train decision tree classifier groups cells perform binary classification trees taken pair groups blue bars show performance classifier using groups order decreasing accuracy red bars obtained random assignment trees groups significantly worse expert assignment shows persistent images objectively support expert classification gies pyramidal cell morphologies fig rat appear superficially similar persistent images reveal fundamental morphological differences four neuronal types related existence shape apical tuft apical tuft pcs known play key role integration neuronal inputs synapses higher cortical layers therefore key indicator functional role cell therefore tmd successfully discriminate functionally distinct neuronal types discussion morphological diversity neurons supports complex information processing capabilities nervous system major challenge neuroscience therefore objectively classify neurons shape techniques feature extraction far unable reliably distinguish cell types without expert manual interaction reasonable computational time introduced topological morphology descriptor derived principles persistent topology retains enough topological information trees able distinguish linear computational time ability efficiently separate neurons distinct groups high accuracy provides objective support neuronal classes may lead comprehensive understanding relation 
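The figure above describes a decision-tree classifier trained on vectorized persistent images. As a dependency-free stand-in for the same pipeline, the sketch below assigns a flattened persistent image to the group with the nearest centroid; the data, labels, and function names are invented for illustration and are not the paper's actual classifier.

```python
# Nearest-centroid classification of flattened persistent images, used
# here as a simple stand-in for the decision-tree classifier mentioned
# in the text. Each training group maps a label to a list of vectors.
import math

def nearest_centroid(train, query):
    """train: dict label -> list of flattened images; query: one image."""
    def centroid(vecs):
        return [sum(col) / len(vecs) for col in zip(*vecs)]

    def dist(u, w):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, w)))

    cents = {lab: centroid(vecs) for lab, vecs in train.items()}
    return min(cents, key=lambda lab: dist(cents[lab], query))
```

In practice one would replace this with a proper decision tree or support vector machine, but the input format (one flattened persistent image per cell, one label per group) is the same.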
This technique can be applied to any rooted tree equipped with a Lipschitz function defined on its nodes; biological examples include botanic trees and the roots of plants. The method is not restricted to trees, however: it can be generalized to any subset of a metric space whose persistence barcode is extracted using a filtration by concentric spheres centered at a root, enabling us to efficiently study the shape of complex multidimensional objects. Another generalization is to use a different function, such as the path distance to the root, which can serve to reveal shape characteristics independent of the radial distance and thus not captured by the current approach.

Only static neuronal structures are presented in this paper, but it would be biologically interesting if the method could also be used to track the morphological evolution of trees. The topological study of the growth of an embedded tree could be addressed with multidimensional persistence: the spherical filtration identifying the relevant topological features of the tree could be enriched with a second filtration representing its temporal evolution. This application could be useful in agriculture for the study of growing roots and trees, and would also extend the method to neurons of the developing brain.

Acknowledgments. JCS was supported by the Blue Brain Project. This work was supported in part by the Blue Brain Project, with partial support provided by an Advanced Grant of the European Research Council (GUDHI: Geometric Understanding in Higher Dimensions) and by the SNF NCCR Synapsy.

Author contributions. The authors conceived the study, developed the topological algorithm, contributed the topological ideas used in the analysis, suggested the biological datasets and analyzed the data; JCS gave conceptual advice and wrote the main paper. All authors discussed the results and commented on the manuscript at all stages. The authors declare no competing financial interests.

Supplementary information

Demonstration of the algorithm. The figure gives an illustration of the algorithm; the root is shown in red and the nodes of the tree are labelled. At the initialization of the algorithm the leaves are inserted into the list of alive nodes, and the algorithm then iterates over the members of this list as follows. For the node that is the first element of the list, the first step of the algorithm begins by finding its parent; since the algorithm can compare the values carried by the parent's children, the interval corresponding to the younger child is added to the list of intervals representing the lifetimes of nodes. The children are removed from the list and the parent is added. For the next vertex in the list, the algorithm again begins by finding its parent.
At this stage, if the processed node has a sibling, the sibling's component is processed next; at their common parent, the child carrying the largest value is the oldest component, so an interval is added to represent the lifetime of each younger sibling, the siblings are removed from the list of alive components and the parent is added. The iteration continues in the same way up the tree: at each branch point only the oldest component survives. When the list of alive components consists only of the component containing the root, there are no further children to process, so the last step adds the interval of this final component, reaching the root, and the loop of the algorithm terminates.

Definition of distances

Distances between metric trees. In order to analyze the stability of the algorithm, we need to define a notion of distance between trees embedded in a metric space, as well as a distance between persistence diagrams. For trees we use the standard Hausdorff distance, defined as follows: given trees T1, T2 embedded in a metric space (M, d), the Hausdorff distance is given by the formula

    dH(T1, T2) = max{ sup_{x in T1} inf_{y in T2} d(x, y), sup_{y in T2} inf_{x in T1} d(x, y) }.

Distances between persistence diagrams. Recall that various representations of persistence diagrams are possible, and for each representation several notions of distance are available; we also provide references to software that computes the distances considered. There exist several metrics, summarized below, that can be applied directly to the output of our algorithm. The classical distances used in topological data analysis are the bottleneck and Wasserstein distances. Given two persistence diagrams D1 and D2, we may assume without loss of generality that both contain the points of the diagonal with infinite multiplicity. Construct a matching, i.e. a bijection m : D1 -> D2, and define the two numbers

    B(m) = sup_{p in D1} ||p - m(p)||,    Wq(m) = sum_{p in D1} ||p - m(p)||^q,

where ||.|| is the standard euclidean distance. Note that B(m) is simply the longest distance by which a point is shifted, and Wq(m) a sum of powers of the lengths of the line segments joining matched points. The bottleneck distance is the infimum of B(m) over all possible matchings, and the q-Wasserstein distance is the infimum of Wq(m) over all possible matchings. One implementation of these distances can be found in the cited software; for a faster approximation please follow the link in the paper describing that implementation.

A persistence diagram can also be represented as a persistence landscape, a piecewise-linear function. Given two persistence landscapes, one can compute their distance in a function space; an implementation is described in the cited toolbox. One can also represent persistence diagrams as persistence images. The idea, introduced in the cited work, is to place a Gaussian kernel at every point of the diagram and discretize the distribution obtained into an image; it is then straightforward to compute the distance between two persistence images using standard image techniques.
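The barcode extraction walked through above can be sketched compactly in a recursive form equivalent to the iterative one: at each branch point the child component carrying the largest function value survives and every younger component dies, producing one interval per leaf. This is a minimal illustration, not the published implementation; node names and the example tree are made up.

```python
def tmd_barcode(children, f, root):
    """Persistence barcode of a rooted tree (one bar per leaf).

    children: dict node -> list of child nodes (leaves map to []).
    f: dict node -> function value (e.g. radial distance to the root).
    Returns a list of (birth, death) intervals.
    """
    bars = []

    def process(v):
        # Returns the largest leaf value (the "oldest component") in v's subtree.
        if not children.get(v):
            return f[v]
        carried = sorted((process(c) for c in children[v]), reverse=True)
        for b in carried[1:]:          # younger components die at the branch point
            bars.append((b, f[v]))
        return carried[0]              # the oldest component survives

    top = process(root)
    bars.append((top, f[root]))        # the last component dies at the root
    return bars

# A two-leaf example: root r at distance 0, branch point b at 2, leaves at 5 and 3.
children = {"r": ["b"], "b": ["l1", "l2"], "l1": [], "l2": []}
f = {"r": 0.0, "b": 2.0, "l1": 5.0, "l2": 3.0}
print(tmd_barcode(children, f, "r"))   # [(3.0, 2.0), (5.0, 0.0)]
```

The younger leaf (value 3) dies at the branch point (value 2), while the oldest component persists all the way to the root, exactly as in the walkthrough.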
This representation is used for the classification of the morphological types of neurons in the experimental section of the paper. We are not aware of a publicly available implementation of this approach; an implementation is provided with the software of this paper.

Distances between trees. A classic metric to compare trees is the edit distance, based on the transformation of one tree into another by a sequence of operations (deletion and insertion of vertices), each with a cost; the edit distance is defined as the minimum total cost over all possible transformations. However, the edit distance is not relevant to our problem since it does not involve the geometric information of the tree structure.

Another metric known to be useful for distinguishing neuronal trees was proposed in the cited work: a set of morphological features is extracted from the trees, an initial estimation of the distance is defined as the distance between the extracted features, and an alignment algorithm is then applied to pairs of trees in order to identify local similarities. The local alignment requires the comparison of all pairs of branches, making the computation expensive. This method is designed for the efficient matching of trees with highly similar structures; the high variability within groups of rat cortical neurons does not allow similar trees to be grouped together by local alignment, since their local structures are often altered from one tree to another.

Another important class of distances between trees consists of the distances between merge trees introduced in the cited work and applied to the merge trees of sublevel sets of functions (for a function on a metric space, the sublevel set at level a is the set of points with value at most a). Differences captured by merge trees are considerably more subtle than the differences captured by the persistent homology of the function's sublevel sets: the authors provide examples of pairs of simple merge trees with the same persistence diagrams that are a nonzero distance apart. It should be clear that in our particular case the algorithm provides the persistent homology of the sublevel sets of the function; therefore, even after rescaling, the difference between the distances used in those papers and the distances used in the current paper can get arbitrarily large.

Another relevant metric is the persistence distortion distance between two graphs, expressed in terms of the shortest paths from every point of one graph to every point of the other. Given the shortest-path metric, the persistence distortion is defined as the minimal bottleneck distance between the persistent homology modules of the superlevel sets of the distance functions. The persistence diagrams obtained in the process of computing the persistence distortion distance are conceptually close to the diagrams we get from our algorithm.
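The persistence-image construction mentioned above (a Gaussian kernel at every diagram point, discretized on a grid) is simple enough to sketch directly. Grid size, bounds and kernel width below are illustrative choices, not the paper's parameters.

```python
import math

def persistence_image(diagram, grid=(8, 8), bounds=(0.0, 4.0), sigma=0.3):
    """Rasterize a persistence diagram: place a Gaussian at every (birth, death)
    point and sample the resulting density on a regular grid."""
    (lo, hi), (nx, ny) = bounds, grid
    xs = [lo + (hi - lo) * (i + 0.5) / nx for i in range(nx)]
    ys = [lo + (hi - lo) * (j + 0.5) / ny for j in range(ny)]
    img = [[0.0] * nx for _ in range(ny)]
    for (b, d) in diagram:
        for j, y in enumerate(ys):
            for i, x in enumerate(xs):
                img[j][i] += math.exp(-((x - b) ** 2 + (y - d) ** 2)
                                      / (2 * sigma ** 2))
    return img

def image_distance(img1, img2):
    """Plain L1 distance between two images of equal shape."""
    return sum(abs(a - b) for r1, r2 in zip(img1, img2) for a, b in zip(r1, r2))

d1 = [(0.5, 2.0), (1.0, 3.0)]
d2 = [(0.5, 2.1), (1.0, 3.1)]      # a slightly perturbed copy of d1
print(image_distance(persistence_image(d1), persistence_image(d2)))
```

Unlike barcodes, these images live in a fixed coordinate system, which is what makes averaging over a group of trees well defined.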
In our case we obtain a significant computational advantage by working with rooted trees, since there is always a unique path between every pair of vertices; moreover, the logical choice of initial vertices from which to compute the shortest paths is to take the roots of the trees. In the case considered here, the persistence diagram arising from computing the persistence distortion distance is the one we would get from our algorithm with the function given by the path distance to the root. The computational cost of the distortion distance is considerable in the general case, but linear in our case. However, since it is based on the bottleneck distance, it suffers from the limitations of that metric: the shortest components, which are important in neuronal morphologies, are not taken into account. Code to compute the persistence distortion distance is available with the cited software.

The functional distortion distance was first introduced as a distance to compare Reeb graphs. Since this distance generally compares graphs equipped with a function defined on their edges, we consider it here for neural tree comparison. Let T be a tree, viewed with a function f that is monotone on edges. In the following we give T a metric space structure. For two points a, b (not necessarily vertices), let p(a, b) be the unique path between them; its height is defined as

    height(p(a, b)) = max_{x in p(a,b)} f(x) - min_{x in p(a,b)} f(x).

In particular, for an edge, since f is monotone on edges, the height is the difference of the values of f at its endpoints. The distance between two points is defined as d_f(a, b) = height(p(a, b)). This height function defines a metric whenever f is not a constant function on the path between any two distinct points; otherwise it is a pseudometric. Let T1, T2 be two trees equipped with functions f1, f2. For every pair of continuous functions phi : T1 -> T2 and psi : T2 -> T1, let

    D(phi, psi) = sup | d_{f1}(a, b) - d_{f2}(a', b') |,

the supremum taken over pairs (a, a'), (b, b') with a' = phi(a) or a = psi(a'), and similarly for b. The functional distortion distance is then defined as

    d_FD(T1, T2) = inf_{phi, psi} max{ D(phi, psi)/2, sup_a |f1(a) - f2(phi(a))|, sup_{a'} |f2(a') - f1(psi(a'))| }.

In the next section we use properties of the functional distortion distance to prove that our algorithm is robust with respect to specific types of perturbation of the input data.

Proof of stability

Bottleneck and Wasserstein stability of the barcode construction. We now prove that the operation that transforms a tree into its persistence diagram is robust to small perturbations. In the space of metric trees we use the Hausdorff distance defined above; since the input consists of the leaves, the branch points and their respective connectivity, perturbing the tree perturbs these points. In the space of persistence diagrams we consider the bottleneck and Wasserstein distances defined above. These distances bound the result of the perturbation, so the method is robust with respect to noisy data. Given a tree T and a function f defined on the set of its nodes, we construct the function v defined on the set of nodes whose value at a node is the largest value of f on the leaves of the subtree starting at that node (see Methods). In order to prove the robustness of the method we require the function f to be Lipschitz with some constant k.
For this purpose we study the radial distance to the root. The proof of stability is carried out in three steps: first we prove stability in the case where the positions of branch points are modified, then we consider the removal and addition of short branches, and at the end we unite these results to prove the main theorem.

Figure: (left) perturbation of a node position, where a node is moved by at most epsilon; (right) addition of a small branch close to a node. Denote by T' the perturbed tree and by f' the function defined on its nodes. We consider the following types of perturbation.

Perturbation of the branch points' positions. The location of a branch point is modified by at most epsilon, as illustrated in the figure, and the branching structure of the tree is preserved. In this case the value of the function at a node of the perturbed tree differs from its value at the corresponding node of the original tree by at most k*epsilon, since f is k-Lipschitz, and the connectivity is not modified by the transformation. In the persistence diagram produced by our algorithm, the result is the same number of points, and the positions of the points that depend on the modified node change by at most k*epsilon. Assuming the correspondence between nodes is known, the points of the modified persistence diagram are matched with their corresponding points; since the distance between corresponding points is bounded by a constant multiple of epsilon, in this case the bottleneck distance is bounded by a constant multiple of epsilon and the Wasserstein distance by that bound times the number of points influenced by the modification. Generalizing this concept of perturbation to several nodes of the tree, the maximum difference between the persistence diagrams occurs when the nodes of a component are moved in opposite directions; in that case the distance between two corresponding points doubles, so the bottleneck distance is bounded by twice the previous bound and the Wasserstein distance by that bound times the number of leaves, which equals the number of points. Note that the function f depends on the location of the root: for example, in our case, where f is based on the radial distance, we need to double the Lipschitz constant in the analysis, because when a node of the tree is moved the value of the function at the root may change as well; since the position of the root node is perturbed by at most epsilon, the distance from the root to a node, and hence the value of the function at that node, changes by at most an additional k*epsilon.

Small branches are added to the tree. In this case branches of length at most epsilon are added to the tree, while the locations of the nodes of the tree remain fixed. In the simplest case a single branch is added to the component that contains a given node, as presented in the figure. The new branch results in one new point in the corresponding persistence diagram. We create a matching between the points: assuming the correspondence between nodes is known, the points of the modified diagram are matched with their corresponding points, and the new point, which represents the termination of either component depending on whether the value of the function at the new node is smaller or larger, is in all cases matched with the diagonal, at distance less than a constant multiple of epsilon. Therefore the bottleneck and Wasserstein
distances are bounded by a constant multiple of epsilon. Generalizing this concept to the addition of several small branches to the tree, in this case several new points are added to the persistence diagram and the rest of the points are matched with the points corresponding to their branches; using the previous argument, the bottleneck distance remains bounded by a constant multiple of epsilon, and the Wasserstein distance by that bound times the number of new points, as long as no new long branches are created.

Small branches are removed from the tree. Another perturbation is to remove small branches, of length at most epsilon, from the tree. This case is the inverse of the previous one, so the result and the stability in this case are implied by the stability in the previous case.

Generalization of the previous results. Finally, we discuss the general case: perturbation of the branch points' positions together with the addition and deletion of small branches. From the previous cases, using the triangle inequality, it is easy to show that the bottleneck distance is bounded by a constant multiple of epsilon, and the Wasserstein distance by a bound that also involves the number of branches of length at most epsilon that are added or removed and the number of branch points.

Note that the persistence intervals produced by the algorithm also give rise to a decomposition of the tree into a distinct collection of branches. As an example, consider the tree in the figure, where one point denotes the root, two points the leaves and one point the branch point, and the function depends on the radial distance to the root. In the first panel, assume one leaf is slightly farther from the root than the other; in this case the algorithm processes the nearer leaf first, the branch originating at it terminates at the branch point, and the branch originating at the farther leaf continues toward the root, giving rise to the branch decomposition illustrated in the lower right corner. However, if the leaves are slightly perturbed, as in the second panel, the resulting branch decomposition, shown in its lower right corner, is different from the previous case, even though the persistence intervals are close in the two cases. This example illustrates that while the persistence diagrams are stable, the corresponding branch decompositions are not. Since our algorithm extracts the persistence diagram of the tree, the instability of the branch decomposition is not a concern for the current paper.

Stability proof based on the functional distortion distance. In this section we use the properties of the functional distortion distance to prove that sampling errors in the input neural tree do not result in a significant difference in the associated persistence diagram, in the space of persistence diagrams equipped with the bottleneck distance.

Figure: illustration of the regions of the tree corresponding to the persistence intervals computed by the algorithm.

Recall that a neural tree is represented by a tree T with a selected vertex called the root and a function f associating to each vertex its radial distance to the root.
In our algorithm the tree structure is represented by the set of branch points, the set of leaves and their pairwise connectivity. In this section, instead, in order to use the results of the cited work, the tree T is viewed as a regular topological space: intuitively, T is glued from homeomorphic images of the interval joining two adjacent vertices; the data structure of the algorithm is in accordance with this view. Denote the persistence diagram of the sublevel-set filtration of f and the persistence diagram of the superlevel-set filtration, which are equivalent for our purposes. Let T1, T2 be trees equipped with functions f1, f2.

The theorem states that the persistence diagram obtained by our algorithm is equivalent to considering the disjoint union of the two persistence diagrams: in particular, the points on one side of the diagonal identify leaves closer to the root than their corresponding branch point, and the points on the other side of the diagonal represent leaves farther from the root than their corresponding branch point. The diagonal of a persistence diagram is considered with infinite multiplicity; this implies that any matching corresponds to a pair of matchings, one for each part, and therefore it is sufficient to consider optimal matchings of the upper-diagonal and lower-diagonal parts of the diagram separately. Given the theorem, we conclude that we can use this property to prove that the bottleneck distance between the persistence diagrams of trees is stable with respect to adding branches of length at most epsilon and perturbing the positions of branching points.

Lemma. Let T be a tree and T' a tree obtained from T by adding an edge of length at most epsilon, where the functions f, f' associate to a point of the rooted tree its euclidean distance to the root. Then the functional distortion distance between T and T' is bounded by a constant multiple of epsilon.

Proof. Let phi : T -> T' be the natural inclusion and let psi : T' -> T be defined by collapsing the added edge. By definition and the triangle inequality the distortion of this pair of maps is bounded by epsilon; furthermore, since the length of the added edge is at most epsilon, the differences between f and f' along phi and psi are also bounded by epsilon; the bound on the functional distortion distance follows. The proof immediately generalises to the case where finitely many edges of length at most epsilon are added to the tree.

Lemma. Let phi : T -> T' be a homeomorphism moving each point by euclidean distance at most epsilon, where the functions f, f' associate to a point of the rooted tree its euclidean distance to the root. Then the functional distortion distance between T and T' is bounded by a constant multiple of epsilon.

Proof. Since f is the euclidean distance to the root, the triangle inequality implies that f and f' composed with phi differ by at most epsilon. Let psi be the inverse of phi; since psi also moves points by distance at most epsilon, the triangle inequality implies the same bound for psi. By definition the bound is also true for the distortion of the pair (phi, psi), and the claim follows.

It is thus proved that adding branches of length at most epsilon and moving the points of the tree by at most epsilon results in a tree within a constant multiple of epsilon in the functional distortion distance. The cited work also shows that the bottleneck distance between the persistence diagrams of trees is stable with respect to the functional distortion distance; combining the two facts gives the guarantees mentioned above on the bottleneck distance between the persistence diagrams.

Random trees: bifurcation angle, degree of randomness, branch length, tree depth.

Generation of random trees. Figure: definition of the growth parameters of artificial random trees. The random trees used for testing are described next.
The random trees used for testing the algorithm were generated with software developed within the Blue Brain Project (BBP). A tree consists of branches, the paths between two branch points, generated based on a random walk: the position of the walk at each step is given by a weighted sum of a predefined direction and a simple random walk step, where the size of the random step defines the randomness. At one extreme each branch is a straight line; at the other it is a simple random walk (SRW). The number of steps is given by a preselected branch length; when the number of steps is reached, the tree bifurcates and two new branches are created. The angle between the initial points of the branches is defined by the bifurcation angle. A tree generated this way is binary and every leaf has the same depth, since two new branches are added at every branch point until the preselected tree depth, the number of nodes from a leaf to the root node, is reached; the total number of branches of the tree is determined by the depth (for example, the tree in the figure consists of the indicated number of branches). This set of parameters defines the global properties of the tree: random trees generated with the same set of parameters share common morphometric properties but have unique spatial structures, due to the stochastic component of the growth. This allowed us to check the effectiveness of the algorithm in identifying sets of trees generated from input parameters that differ only in the random seed.

Random trees results. We defined a control group as a set of trees generated with predefined parameters and independent random seeds, and varied each parameter individually to generate groups of trees that differed from the control group in one property. From each tree we extracted the persistence barcode using our algorithm. The comparison of the distances dbar between the barcode of one tree and the barcodes of the trees in every group constitutes one trial; a trial is successful if the minimum barcode distance occurs within the group generated with the same parameters. The overall accuracy of the TMD in separating groups of trees generated with different values of the described parameters is given in the following table.

Software availability. The software used for the extraction of persistence barcodes will be made available upon publication; license details and code are accessible at the URLs given in the original (https://...).

Data provenance. The artificial random trees used in the figures were generated with software developed at the BBP, available upon request. The biological morphologies used in several figures were provided by the LNMC, EPFL. The remaining biological morphologies were downloaded from the online repository (http://...); in particular, the cat, dragonfly, drosophila, mouse and rat neurons were provided by the sources cited therein.
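The growth procedure described above (weighted sum of a preferred direction and random-walk steps, bifurcation after a fixed branch length, until a preselected depth) can be sketched in 2D. This is a simplified illustration of the generator, not the BBP software; all parameter names and defaults are made up.

```python
import math
import random

def grow_random_tree(depth, branch_len=10, bif_angle=math.pi / 4,
                     randomness=0.2, seed=0):
    """Grow a binary random tree in 2D and return the branch-end positions.

    Each branch takes branch_len steps, each step a weighted sum of the
    preferred direction and a uniformly random direction; after the branch
    ends, two children bifurcate at +/- bif_angle/2 until depth is reached.
    """
    rng = random.Random(seed)
    nodes = []                               # (x, y) branch points and leaves

    def branch(pos, direction, d):
        x, y = pos
        for _ in range(branch_len):
            jitter = rng.uniform(-math.pi, math.pi)
            x += (1 - randomness) * math.cos(direction) + randomness * math.cos(jitter)
            y += (1 - randomness) * math.sin(direction) + randomness * math.sin(jitter)
        nodes.append((x, y))
        if d < depth:
            branch((x, y), direction + bif_angle / 2, d + 1)
            branch((x, y), direction - bif_angle / 2, d + 1)

    branch((0.0, 0.0), math.pi / 2, 1)
    return nodes

print(len(grow_random_tree(depth=4)))        # 2**4 - 1 = 15 branch ends
```

Trees grown with the same parameters but different seeds share global morphometrics while differing in spatial detail, which is exactly the property the trial procedure exploits.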
Completeness of Logical Relations for Monadic Types

David et al. (arXiv, Dec)
Institute of Informatics, Warsaw University, Warszawa, Poland
Research Center for Information Security, National Institute of Advanced Industrial Science and Technology, Tokyo, Japan
Project Everest, INRIA, France

Abstract. Software security can be ensured by specifying and verifying security properties of software using formal methods with strong theoretical bases. In particular, programs can be modeled in the framework of lambda-calculi, and interesting properties can be expressed formally by contextual equivalence (a.k.a. observational equivalence). Furthermore, imperative features, which exist in most software, can be nicely expressed in the so-called computational lambda-calculus. Contextual equivalence is difficult to prove directly, so one often uses logical relations as a tool to establish it instead. We have already defined logical relations for the computational lambda-calculus in previous work; we devote this paper to the study of their completeness w.r.t. contextual equivalence in the computational lambda-calculus.

1 Introduction

Contextual equivalence. Two programs are contextually equivalent (a.k.a. observationally equivalent) if they have the same observable behavior, i.e. an outsider cannot distinguish them. Interesting properties of programs can be expressed using the notion of contextual equivalence. For example, to prove that a program does not leak a secret, such as the secret key used by an ATM to communicate with its bank, it is sufficient to prove that changing the secret does not change the observable behavior of the program: whatever experiment a customer makes with the ATM, no information about the secret key can be guessed by observing the reactions of the ATM. Another example is the specification of functional properties by contextual equivalence: if sorted is a function that checks that a list is sorted and sort is a function that sorts a list, we want the expression sorted(sort(l)) to be contextually equivalent to the expression true. Finally, in the context of parameterized verification, contextual equivalence allows the verification of all instantiations of a parameter to be reduced to the verification of a finite number of instantiations.

(Funding notes: partially supported by the RNTL project, ACI informatique Rossignol, ACI jeunes chercheurs informatique "protocoles cryptographiques, intrusions", and ACI cryptologie; partially supported by a Polish grant and the European Community Research Training Network Games; this work was performed in part during the authors' stay at LSV and mainly done while one author was a PhD student under an MENRT grant, ACI cryptologie funding, Ecole doctorale Sciences Pratiques, Cachan.)
Logical relations. Contextual equivalence is difficult to prove directly because of the universal quantification over contexts in its definition. Logical relations are powerful tools that allow us to deduce contextual equivalence in typed lambda-calculi: with the aid of the Basic Lemma, one can easily prove that logical relations are sound w.r.t. contextual equivalence. However, completeness of logical relations is much more difficult to achieve: usually one can show completeness of logical relations only for types up to first order.

On the other hand, the computational lambda-calculus has proved useful for defining various notions of computation on top of the lambda-calculus: partial computations, exceptions, state transformers, continuations, etc. In particular, Moggi's insight is based on a categorical semantics: whereas categorical models of the standard lambda-calculus are cartesian closed categories (CCCs), the computational lambda-calculus requires CCCs with a strong monad. Logical relations for monadic types, in particular for Moggi's language, were derived in previous work from a general construction, where soundness of the logical relations is guaranteed. However, monadic types introduce new difficulties: in particular, contextual equivalence becomes subtler due to the different semantics of different monads, since equivalent programs in one monad are not necessarily equivalent in another. Accordingly, this makes completeness of logical relations difficult to achieve in the computational lambda-calculus; in particular, the usual proofs of completeness up to first order do not carry over.

Contributions. We propose in this paper a notion of contextual equivalence for the computational lambda-calculus. Logical relations for this language are defined according to the general derivation, and we explore their completeness. We prove that for the partial computation monad, the exception monad and the state transformer monad, logical relations are still complete for types up to first order. In the case of the nondeterminism monad, we need to restrict ourselves to a subset of those types; as a corollary, we prove that strong bisimulation is complete w.r.t. contextual equivalence in the monadic lambda-calculus with nondeterminism. Unlike some previous work using logical relations to study contextual equivalence in models of computational effects, we do not focus on particular forms of computation like continuations or local states: our work is based on the general framework for describing computations, namely the computational lambda-calculus.

Plan. The rest of the paper is structured as follows: the next section is devoted to preliminaries, introducing basic knowledge of logical relations over a simple version of the typed lambda-calculus; we then move to the computational lambda-calculus for the rest of the paper.
simple version typed section move computational rest model particular section sketches proof scheme completeness logical relations monadic types shows difficulty getting general proof switch case studies explore section completeness computational list common monads partial computations exceptions state transformers continuations last section consists discussion related work perspectives logical relations simply typed simply typed let simple version typed types terms ranges set base types booleans integers etc set constants set variables write result substituting term free occurrences variable term typing judgments form typing context finite mapping variables types say whenever write typing context agrees except maps typing rules standard consider set theoretical semantics semantics type given set sets set functions types map every element write environment agrees except maps write environment mapping let term derivable denotation given usual element write jtk instead irrelevant closed term given value say definable exists closed term derivable jtk let obs subset base types called observation types booleans integers etc context term derivable observation type spell standard notion contextual equivalence denotational setting two elements contextually equivalent written context obs derivable jck jck say two closed terms type contextually equivalent whenever without making confusion shall use notation denote contextual equivalence terms also define relation every pair values definable logical relations essentially binary logical relation family type relations one type related functions map related arguments related results formally family type relations every constraint relations base types relations base types fixed condition forces type uniquely determined induction types might complex types products variations general relations complex types also uniquely determined relations type components instance pairs related elements pairwise related unary logical relation 
is also called a logical predicate. The Basic Lemma comes along with logical relations since Plotkin's work: it states that if Gamma |- t : tau is derivable, the environments rho1 and rho2 are related componentwise and every constant is related to itself, then [[t]]rho1 R_tau [[t]]rho2. The Basic Lemma is crucial for proving various properties using logical relations; in our case it is used for establishing that logical relatedness implies contextual equivalence: for every context C of observation type o, the Basic Lemma gives [[C]][x := a1] R_o [[C]][x := a2] for every pair of related values a1 R_tau a2, hence the equality [[C]][x := a1] = [[C]][x := a2] follows as long as R_o is contained in the equality at every observation type o. Briefly: for every logical relation whose relation at each observation type is contained in equality, logically related values are necessarily contextually equivalent, at every type.

Completeness states the inverse: a logical relation is complete at type tau if every pair of contextually equivalent values at tau is related. Completeness of logical relations is hard to achieve, even in a simple calculus like lambda->; usually we are only able to prove completeness for types up to first order. The order of a type is defined inductively: ord(b) = 0 for a base type b, and ord(tau -> tau') = max(1 + ord(tau), ord(tau')). The following proposition states completeness of logical relations for types up to first order.

Proposition 1. There exists a logical relation (R_tau)_tau that is a partial equality at observation types such that a1 ~ a2 implies a1 R_tau a2, for every type tau of order at most 1.

Proof. Let (R_tau)_tau be the logical relation induced by taking R_b = ~_b at every base type b. We show that it is complete for types up to first order, by induction on types. The base case is obvious. For the induction step, take two values f1 ~ f2 at a first-order type b -> tau' and assume a1 R_b a2, i.e. a1 ~_b a2; we must show f1(a1) R_{tau'} f2(a2). Since a1 and a2 are definable, say by closed terms u1 and u2 respectively, every context C over an element f(a) of [[tau']] with o in Obs yields a context over f and a context over a; composing these observations gives [[C]][f1(a1)] = [[C]][f1(a2)] = [[C]][f2(a2)], since a1 ~ a2 and f1 ~ f2 and since u1, u2 define a1, a2 respectively. Hence f1(a1) ~ f2(a2), and by the induction hypothesis f1(a1) R_{tau'} f2(a2). As a1, a2 were arbitrary, we conclude f1 R f2.

Note that an equivalent way to state completeness of logical relations is to say that there exists a logical relation that is a partial equality at observation types and relates all contextually equivalent values, for all types up to first order.
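The induced logical relation of Proposition 1 can be made concrete on a tiny finite model: interpret the base type by {0, 1}, take equality as the relation at the base type, and lift it to function types by "related arguments go to related results". This is only an illustrative finite model, not the paper's formal development.

```python
# A finite instance of a logical relation at types b and b -> b.
BASE = [0, 1]

def related_base(a1, a2):
    """Relation at the base type: (partial) equality."""
    return a1 == a2

def related_fun(f1, f2):
    """Induced logical relation at type b -> b."""
    return all(related_base(f1(a1), f2(a2))
               for a1 in BASE for a2 in BASE if related_base(a1, a2))

neg1 = lambda x: 1 - x
neg2 = lambda x: (x + 1) % 2     # extensionally equal to neg1
ident = lambda x: x

print(related_fun(neg1, neg2))   # True: the equivalent functions are related
print(related_fun(neg1, ident))  # False
```

On this finite model, contextual equivalence at b -> b coincides with extensional equality, and the check above confirms that contextually equivalent functions are logically related while inequivalent ones are not, which is exactly the content of completeness at first order.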
3 Logical relations for the computational lambda-calculus

The computational lambda-calculus. In the rest of the paper our discussion is based on another language, Moggi's computational lambda-calculus, in which one can express various forms of side effects: exceptions, state, etc. The general framework uses an extra unary type constructor T: intuitively, the type T tau is the type of computations of type tau, and we call T tau a monadic type in the sequel. Two extra constructs, val and let, represent respectively the trivial computation and sequential computation. Their typing rules are:

    Gamma |- t : tau  implies  Gamma |- val(t) : T tau
    Gamma |- t : T tau and Gamma, x : tau |- t' : T tau'  implies  Gamma |- let x <= t in t' : T tau'.

Note that this let construct must not be confused with that of PCF: here the term t bound to x must be a computation. Moggi also builds a categorical model of the computational lambda-calculus using the notion of monads: whereas categorical models of the simply typed lambda-calculus are usually cartesian closed categories (CCCs), the computational model additionally requires a strong monad T defined on a CCC. Consequently a monadic type T tau is interpreted using the monad, and every term has a unique interpretation as a morphism of the CCC; in full generality, in the categorical setting, the denotations of the val and let constructs are defined as composites involving the unit and the strength of the monad. In particular, any interpretation of terms of the computational lambda-calculus must satisfy the following equations:

    [[let x <= val(t) in t']] = [[t'[x := t]]]
    [[let x <= t in val(x)]] = [[t]]
    [[let x2 <= (let x1 <= t1 in t2) in t3]] = [[let x1 <= t1 in (let x2 <= t2 in t3)]]  (x1 not free in t3).

We shall focus on Moggi's monads defined on the category Set of sets and functions. Figure 1 lists the definitions of the concrete monads, together with the corresponding denotations of val and let: partial computations (TA = A + {bottom}), exceptions (TA = A + E for a set E of exceptions), state transformers (TA = (A x S)^S for a set S of states), continuations (TA = R^(R^A) for a set R of results) and nondeterminism (TA = Pfin(A)). We shall write T in Comp to say that the monad T is restricted to one of these five monads.

The computational lambda-calculus is strongly normalizing for its reduction rules; apart from the standard beta/eta rules, it contains especially the following two rules for computations:

    let x <= val(t) in t'  ->  t'[x := t]
    let x2 <= (let x1 <= t1 in t2) in t3  ->  let x1 <= t1 in (let x2 <= t2 in t3).

With respect to these rules, every term can be reduced to a normal form. Considering also the rule let x <= t in val(x) -> t for monadic types, we can write every term of monadic type in the following normal form:

    let x1 <= u1 in ... let xn <= un in val(u),

where every ui is (an application of) a constant or a variable, and u is a term of non-monadic type; in fact the reduction rules identify terms up to the equations above.

Lemma 1. For every term t of monadic type there exists a term t' in the normal form above such that [[t]] = [[t']] for every valid interpretation, i.e. for all interpretations satisfying the equations above.

Proof. Since the computational lambda-calculus is strongly normalizing, consider the normal form of the term; we prove the claim by structural induction. A term of monadic type in normal form is either a variable, a constant, an application (whose head, by normality, must be a variable or a constant), a trivial computation val(t) or a sequential computation let x <= t1 in t2. In the first three cases the term is trivially in the required form. If the term is the trivial computation val(t), then by induction [[val(t)]] = [[val(t')]] for every valid interpretation, where t' is the normal form of t. If the term is the sequential computation let x <= t1 in t2, then by normality t1 must be (an application of) a variable or a constant, and by induction on t2, using the validity of the equations, the term takes the required form for every valid interpretation.
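The three equations that every valid interpretation must satisfy are exactly the monad laws, and they can be checked concretely in one of the Comp monads. The sketch below models the exception monad in Python, with computations tagged as values or exceptions; this is only an illustration of the laws, not the paper's formal semantics.

```python
# Exception monad TA = A + E: a computation is ("ok", a) or ("exn", e).
def val(a):                       # the trivial computation
    return ("ok", a)

def let_(t, body):                # let x <= t in body(x)
    tag, payload = t
    return body(payload) if tag == "ok" else t

def raise_(e):                    # a non-trivial computation: raise exception e
    return ("exn", e)

inc = lambda x: val(x + 1)
dbl = lambda x: val(2 * x)

# (1) let x <= val(v) in t'  =  t'[v/x]
assert let_(val(3), inc) == inc(3)
for t in [val(3), raise_("E")]:
    # (2) let x <= t in val(x)  =  t
    assert let_(t, val) == t
    # (3) associativity of sequential composition
    assert let_(let_(t, inc), dbl) == let_(t, lambda x: let_(inc(x), dbl))
print("monad laws hold on the samples")
```

The same three checks go through verbatim for the other Comp monads once `val` and `let_` are replaced by the corresponding unit and sequencing of Figure 1.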
valid jlet latter form contextual equivalence argued standard notion contextual equivalence fit setting computational order define contextual equivalence consider contexts type observation type type indeed contexts allowed computations type could return values particular context derivable meant observe computations type observe anything typing rule let construct allows use computations build computations never values taking account get following definition definition contextual equivalence two values contextually equivalent written observable types obs contexts derivable jck jck two closed terms type contextually equivalent use notation denote contextual equivalence terms logical relations uniform framework defining logical relations relies categorical notion subscones natural extension logical relations able deal monadic types introduced construction consists lifting ccc structure strong monad categorical model subscone reformulate construction category set subscone category whose objects binary relations sets morphism two objects pair functions preserving relations lifting ccc structure gives rise standard logical relations given section lifting strong monad give rise relations monadic types write lifting strong monad given relation two computations exists computation computes pairs standard definition logical relation simply typed extended construction guarantees basic lemma always holds provided every constant related list instantiations definition concrete monads also given figure cites relations monads defined figure partial computation exception state transformer continuation set exceptions set states every fig logical relations concrete monads restrict attention logical relations type observation type obs rto partial equality relations called observational rest paper note require partial identity assume denotation val unit operation injective rto partial equality implies partial equality well indeed let basic lemma jval rto jval say injectivity theorem 
soundness logical relations tional logical relation every type type straightforward basic lemma toward proof completeness logical relations completeness logical relations much subtler due introduction monadic types expecting find general proof following general construction defined however turns extremely difficult although might impossible certain restrictions types example difficulty arises mainly different semantics different forms computations actually ensure equivalent programs one monad necessarily equivalent another instance consider following two programs let let val let let val closed term conclude equivalent monad return set possible results matter results produces case exception monad throw different exceptions obstacle shall switch effort case studies section explore completeness logical relations list common monads precisely monads listed figure let sketch general structure proving completeness logical relations particular study still restricted types defined following grammar ranges set base types similarly proposition section investigate completeness strong sense aim finding observational logical relation type derivable type first order briefly relation defined section proof proposition logical relation type induced base type prove completeness arbitrary monad note also check logical relation type induced observational partial equality observable type consider pair rto definition lifted monad exists computation idjok hence two projections jok function consequently proves rto partial equality usual proof completeness would induction show type cases identically difficult case induction step find general way show arbitrary monad instead next section prove cases monads figure except monad monad exceptional case completeness types subset studied separately section heart difficulty showing find issue definability monadic types model write def subset definable elements eventually show relation def def shortly def def monads consider paper crucial argument 
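The obstacle can be made concrete: under the finite-powerset (nondeterminism) monad, two programs that run the same computations in a different order denote the same set of possible results, while under the exception monad they raise different exceptions. The sketch below reconstructs this with illustrative denotations; the exact terms are an assumption on our part, not copied from the paper.

```python
# Finite-powerset (nondeterminism) monad: T A = Pfin(A), encoded as frozenset.
def nd_val(a):
    return frozenset({a})

def nd_let(t, f):
    # let x <= t in f(x): union over all possible results of t.
    return frozenset().union(*[f(a) for a in t])

# Exception monad, in the ("ok", a) / ("exn", e) encoding.
def exn_val(a):
    return ("ok", a)

def exn_let(t, f):
    tag, payload = t
    return f(payload) if tag == "ok" else t

# Two programs sequencing the same computations p, q in different order:
#   P1 = let x <= p in let y <= q in val x
#   P2 = let y <= q in let x <= p in val x
def prog1(let, val, p, q):
    return let(p, lambda x: let(q, lambda y: val(x)))

def prog2(let, val, p, q):
    return let(q, lambda y: let(p, lambda x: val(x)))
```

In the powerset monad the two denotations coincide; in the exception monad, with p and q raising distinct exceptions, they differ.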
proving completeness logical relations monadic types show need different proofs different monads detailed section completeness logical relations monadic types definability model comp seen definability involved largely proof completeness logical relations types also case apparently needs concern due introduction monadic types despite find general proof hold concrete monads comp state formally let first define predicate elements induction types def every base type def def say constant type logical base type jck escn require contains logical constants note restriction comp valid predicates depend definability type typical logical constants monads follows comp partial computation constant type every denotes nontermination exception constant type every type every exception nothing raises exception state transformer constant updates type tunit every state unit base type contains dummy value updates simply changes current state jupdates continuation constant type bool every every continuation calls directly continuation behaves somehow likey goto command continuation rbool constant type every type takes two arguments returns randomly one introduces assume rest paper constants present comp note predicate elements equivalent say seen subset case monadic types def necessary subset fortunately prove monads preserves inclusions ensures predicate comp proposition monads preserve inclusions comp proof check every monad comp partial computation according monad definition every exception every element state transformer every continuation special case apparently subset since contain functions defined different domains shall consider functions coinciding smaller set equivalent say two functions defined domain coincide written every every also function every introducing constraint constants mainly proving let figure proof take arbitrary element def definition exists closed term type jtk evident def expecting show jtk def considering form since easy check constants related except 
continuations however still assume presence sake proving completeness able prove soundness note theorem theorem still hold speaking language strongly normalizing take partial computation monad example def def consider form let let unkn val shall make induction clear jtk def induction step hope closed term type would produce either definable result type substitute rest normal term result make use induction hypothesis constraint constants helps ensure substitution resulted term still proper form induction would following lemma shows every computation term jtk def particular form general form form lemma comp jtk def every closed computation term type following form let let wnkn val either variable closed term jti holds wij valid terms comp proof prove induction every monad partial computation def def clear jtk def holds must closed def jtk def otherwise assume closed term type assuming type according definition holds let another closed term val let wnk let either wij wij induction def holds furthermore jlet let wnkn val jlet let wnkn val jtk hence jtk def exception def def clearly jtk def holds def jtk def otherwise exactly case partial computation build term similarly prove jtk def induction state transformer def def every jtks jwk def hence jtk def every assume closed term type assuming type according definition holds let another closed term val let tsn wnk let tsi either wij wij induction def holds furthermore every jtks jlet let wnkn val jlet let wnkn val since jts def every jtks jts def hence jtk def def continuation def say element def every pair continuations jtk jwk def according definition continuation monad jtk wnkn jlet let wnkn val every continuations let jlet let wnkn val prove implies jtk jtk conclude jtk def every let closed term define another closed term val let tan wnk let tai either wij wij induction def jta jta def pfin def jtk jwk def every assume closed term type assuming type according definition holds let another closed term val let tan wnk let 
tai either wij wij induction def holds furthermore jtk jlet let wnkn val jlet wnkn val let jta jta def holds every jtk def lemma conclude immediately every closed computation term logical constants jtk def comp proposition def def holds model comp logical constants proof follows lemma considering terms define elements since strongly normalizing completeness logical relations comp types prove section partial computation monad exception monad state monad continuation monad write comp monad restricted one four monads proofs depend typically particular semantics every form computation common technique used frequently given two definable elements one find context distinguish programs type define two given elements context usually built based another context distinguish programs type lemma let type logical relation comp logical constants holds every type proof take two arbitrary elements prove every monad comp partial computation fact amounts following two cases one two values definable type proposition definable type either values definable type contextually equivalent context jck jck thus context let distinguish two values type symmetrically context let val true used distinguish cases exception fact amounts three cases suppose values definable type otherwise proposition must definable type similar case partial computation build context distinguishes values type context distinguishes values type consider following context let val true tbool substituted context evaluates different values namely boolean exception try context second case evaluate two different exceptions distinguished three cases state transformer exists either induction definable proposition definable either definable context jck jck state jck jck use following context let let let let let every let jck applied state return two different pairs context distinguish two values use context let val true tbool jlet val true true two functions equal since return different results applied state cases continuation 
first say two continuations every fact means two continuations every definable value def clearly coincide def suppose definable proposition hence consider context let bool every rjboolk let since context distinguishes two computations hence theorem comp constants logical particular following constants present updates state transformer monad continuation monad logical relations complete types strong sense exists observational logical relation type closed terms type first order proof take logical relation type induced base type prove induction types type particular induction step shown lemma completeness logical relations monad monad interesting case completeness logical relations monad hold types state consider following two programs type break completeness logical relations val true false bool tbool true true false bool tbool recall logical constant type every two programs contextually equivalent contexts apply functions arguments observe results matter many time apply two functions always get set possible values true false way distinguish context recall logical relation monad figure clearly denotations two programs related relation function true second program related function first however assume every base type equality test constant testb bool clearly testb holds logical relations monad complete set weak types compared types first order weak types contain monadic types functions immediately excludes two programs counterexample theorem logical relations monad complete weak types strong sense exists observational logical relation type closed terms weak type proof take logical relation induced base type prove induction types weak type cases identically standard typed monadic types suppose rtb means either value value related value assume every value definable otherwise obvious least one definable according proposition suppose value value related defined closed term type following context distinguish let testb tbool since every value contextually equivalent hence 
equal let state label base types label observation type whereas state using monad define labeled transition systems elements jstate label tstatek states jstatek labels jlabelk functions mapping states labels set states logical relation type state label tstate given rstate rlabel rstate rstate case rlabel equality logically related rstate strong bisimulation labeled transition systems sometimes explicitly specify initial state certain labeled transition system case encoding labeled transition system nondeterminism monad pair jstate state label tstate initial state transition relation defined logically related strongly bisimular rstate strong bisimulation two labeled transition systems rstate corollary soundness strong bisimulation let transition systems exists strong bisimulation contextually equivalent proof exists strong bisimulation therefore logically related theorem thus contextually equivalent order prove completeness need assume label junk sense every value jlabelk definable corollary completeness strong bisimulation let transition systems definable contextually equivalent label junk exists strong bisimulation proof let logical relation given theorem definable contextually equivalent moreover label junk rlabel equality rstate thus strong bisimulation conclusion work presented paper natural continuation authors previous work extend derive logical relations monadic types sound sense basic lemma still holds study contextual equivalence specific version computational cryptographic primitives show lax logical relations categorical generalization logical relations derived using construction complete paper explore completeness logical relations computational show complete types list common monads partial computations exceptions state transformers continuations case continuation completeness depends natural constant call show soundness pitts stark defined operationally based logical relations characterize contextual equivalence language local store work traced back 
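For finite systems, the characterization above, that logical relatedness at the type State x Label -> T(State) is exactly strong bisimulation, can be checked mechanically. A minimal sketch, with transition systems encoded as dictionaries mapping (state, label) pairs to sets of successor states; the encoding and names are ours, not the paper's:

```python
# Check that R (a set of state pairs) is a strong bisimulation between two
# finite labelled transition systems given as dicts (state, label) -> set of states.
def is_strong_bisimulation(R, lts1, lts2, labels):
    for (s1, s2) in R:
        for a in labels:
            succ1 = lts1.get((s1, a), set())
            succ2 = lts2.get((s2, a), set())
            # Every a-move of one system must be matched by the other, staying in R.
            if not all(any((t1, t2) in R for t2 in succ2) for t1 in succ1):
                return False
            if not all(any((t1, t2) in R for t1 in succ1) for t2 in succ2):
                return False
    return True
```

On a pair of one-step systems the checker accepts the evident relation and rejects it when one system loses its transition.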
early work translated special version computational modeled using dynamic name creation monad logical relations monad derived using construction also shown derived logical relations equivalent pitts stark operational logical relations types exceptional case completeness result monad logical relations complete types subset effectively show providing breaks completeness types indeed interesting case comprehensive study monad found jeffrey defines denotational model computational specialized proves model fully abstract relation notion contextual equivalence equivalence remains clarified recently lindley stark introduce syntactic computational prove strong normalization katsumata instantiates liftings set strong monads essentially different approach would interesting establish formal relationship two approaches look general proof completeness using references benton bierman paiva computational types logical perspective functional programming lasota nowak logical relations monadic types proceedings csl volume lncs pages springer lasota nowak zhang complete lax logical relations cryptographic proceedings csl volume lncs pages springer jeffrey fully abstract semantics functional language nondeterministic computation theoretical computer science katsumata semantic formulation logical predicates computational metalanguage proceedings csl volume lncs pages springer nowak unifying approach proceedings concur volume lncs pages springer lindley stark reducibility computation types proceedings tlca number lncs pages springer mitchell foundations programming languages mit press mitchell scedrov notes sconing relators proceedings csl volume lncs pages springer moggi notions computation monads information computation hearn tennent parametricity local variables acm pitts stark observable properties higher order functions dynamically create local names new proceedings mfcs number lncs pages springer pitts stark operational reasoning functions local state higher order operational 
Techniques in Semantics. Cambridge University Press.
Plotkin, G., Power, J., Sannella, D., Tennent, R. Lax logical relations. In Proceedings of ICALP, LNCS. Springer.
Plotkin, G. Lambda-definability in the full type hierarchy. In To H. B. Curry: Essays on Combinatory Logic, Lambda Calculus and Formalism. Academic Press.
Sieber, K. Full abstraction for the second order subset of an ALGOL-like language. Theoretical Computer Science.
Stark, I. Categorical models for local names. LISP and Symbolic Computation.
Sumii, E., Pierce, B. Logical relations for encryption. Journal of Computer Security.
Zhang, Y. Cryptographic logical relations. PhD dissertation, ENS Cachan, France.
robust fusion methods structured big data catherine aarona alejandro cholaquidisb ricardo fraimanb badih ghattasc apr clermont auvergne campus universitaire des france universidad facultad ciencias uruguay aix marseille cnrs centrale marseille umr marseille france abstract address one important problems big data namely combine estimators different subsamples robust fusion procedures unable deal whole sample propose general framework based classic idea divide conquer particular address detail case multivariate location scatter matrix covariance operator functional data clustering problems introduction big data arisen recent years deal problems several domains social networks biochemistry health care systems politics retail among many others new developments necessary address problems area typically classical statistical approaches perform reasonably well small data sets fail dealing huge data sets handle challenges new mathematical computational methods needed challenges posed big data cover wide range various problems recently considered huge literature see instance wang ahmed references therein address one problems namely combine using robust techniques estimators obtained different subsamples case computationally unable deal whole sample follows refer approaches robust fusion methods rfm general algorithm proposed spirit related well known idea consider case data belong finite infinite dimensional spaces functional data functional data analysis fda become central area statistics recent years gained much momentum work ramsay early since quantity quality results enjoyed marked growth addressing great diversity problems fda faces several specific challenges associated nature data recent important unavoidable references fda kokoszka ferraty vieu aneiros well recent surveys cuevas vardi zhand see instance aho well known technique dealing hugh fda setting tang also considered recently linear regression problem involving lasso problem addressed present paper focus 
general robust procedure different problems consistency robustness method studied general setting fda apply proposed algorithm statistical problems finite infinite dimensional settings namely location scatter matrix clustering impartial trimmed also new robust estimator covariance operator proposed start describing one simplest problems area toy example suppose interested median huge set iid random variables common density split sample subsamples size calculate median subsample obtain random variables take median set consider well known median medians case rfm estimator clear coincide median whole original sample close else say estimator regarding efficiency robustness particular case rfm estimator nothing median iid random variables different distribution given distribution median random variables density suppose simplicity density random variables given one hand empirical median med behaves asymptotically like normal distribution centred true median variance hand rfm median medians behaves asymptotically like normal distribution centred variance rfm explicitly calculate asymptotic relative loss efficiency rfm section generalize rfm idea study consistency robustness breakdown point efficiency section shows rfm may applied multivariate location scatter matrix estimation covariance operator estimation functional data robust clustering last section provides simulation results problems general setup rfm start introducing general framework rfm idea quite simple given sample iid random elements metric space instance statistical problem multivariate location covariance operators linear regression principal components among many others split sample subsamples equal size subsample compute robust solution statistical problem considered solution given rfm corresponds deepest point among solutions terms appropriate norm associated problem obtained subsamples order introduce notion depth use throughout paper following notation let random variable taking values banach space 
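The toy example is easy to reproduce: split the sample into k blocks, take each block's median, and return the median of the block medians. A minimal sketch; the consecutive-block split is an illustrative choice (the paper splits at random):

```python
import statistics

def median_of_medians(sample, k):
    """Split `sample` into k consecutive blocks of equal size and return the
    median of the k block medians (the RFM estimator of the toy example)."""
    n = len(sample) // k
    blocks = [sample[i * n:(i + 1) * n] for i in range(k)]
    return statistics.median(statistics.median(b) for b in blocks)
```

Both the full-sample median and the median of medians are consistent for the population median; the asymptotic comparison in this section quantifies the bounded efficiency price paid by the fused estimator.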
probability distribution let depth respect defined follows epx introduced chaudhuri formulated different way vardi zhand extended general setup chakraborty chaudhuri given sample let write empirical measure empirical version epn kxi although suggest using depth function statistical problems unsuitable instance clustering cases deepest point may replaced robust estimators show section summarize approach table general framework parameter estimation may easily applied situation robust estimators exist designed iid random elements banach space parameter estimate split sample subsamples xlm compute robust estimate subsample obtaining compute final estimate rfm rfm combining robust approach instance rfm deepest point average deepest points among table parameter estimation using rfm address consistency efficiency robustness computational time rfm proposals consistency robustness breakdown point rfm start proving given sample random element deepest point value maximizes converges almost surely value maximizes although similar results already obtained see instance chakraborty chaudhuri need necessarily empirical measure associated sample measure converging weakly probability distribution need following assumption probability measure defined separable hilbert space fulfils stands boundary set observe fulfilled random variables absolutely continuous random variable distribution theorem let sequence random elements common distribution defined separable hilbert space let probability distribution fulfilling assume weakly kep unique minimum arg max arg max order prove use following fundamental result proved billingsley still holds separable banach space see theorem example theorem billingsley suppose let class bounded measurable functions mapping suppose subclass functions sup dpn every sequence converges weakly sup lim sup sup open ball radii proof theorem consider subclass functions sup let observe lastly get since whenever every dominated convergence theorem implies entails 
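The empirical depth just defined and the deepest-point fusion step of Table 1 amount to a few lines of code: the spatial depth of x is one minus the norm of the average of the unit vectors pointing from the data to x, and the fusion step returns the candidate of maximal empirical depth among the subsample estimates. A pure-Python sketch, with names of our choosing:

```python
import math

def spatial_depth(x, points):
    """Empirical spatial depth of x among `points` (lists of floats):
    1 - || average over p of (x - p) / ||x - p|| ||, skipping p == x."""
    d = len(x)
    acc = [0.0] * d
    for p in points:
        diff = [x[j] - p[j] for j in range(d)]
        norm = math.sqrt(sum(v * v for v in diff))
        if norm > 0:
            for j in range(d):
                acc[j] += diff[j] / norm
    acc = [v / len(points) for v in acc]
    return 1.0 - math.sqrt(sum(v * v for v in acc))

def deepest_point(candidates):
    """Fusion step: the candidate of maximal depth among the candidates themselves."""
    return max(candidates, key=lambda c: spatial_depth(c, candidates))
```

On a symmetric configuration the centre attains depth one and is returned by the fusion step.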
continuous function maximum compact set attained let compact set denote arg let prove fixed case exists since compact assume considering subsequence follows indeed consider let define max finally lim contradict sup max sup therefore supy max small enough showing holds lastly consequence uniform convergence argmax argument following corollary states consistency rfm explained table sample distributed random variable distribution fulfilling corollary assume fulfils exists unique rfm recall sequence estimators qualitatively robust probability distribution exists probability distribution see hampel denotes prokhorov distance denotes probability distribution metrizes weak convergence following corollary corollary robustness rfm estimators hypotheses corollary rfm qualitatively robust remark qualitative robustness ensures good behaviour estimator neighbourhood however estimators still converge even far instance shorth defined average observations lying shortest interval containing half data property indeed consider case also case impartial trimmed estimators minimum volume ellipsoid redescendent compact support see subsection estimators subsample property rfm estimator inherit efficiency fusion section obtain asymptotic variance rfm method special case recall defined see section huber ronchetti implicit functional equation stands true underlying common distribution observations instance maximum likelihood estimator obtained log estimator given empirical version based sample well known asymptotically normal mean variance given integral square influence curve influence curve location problem get asymptotic efficiency defined eff asymptotic variance maximum likelihood estimator asymptotic variance built sample tnm mestimators calculated easily strong consistency model see huber entails rfm built consistent see corollary whenever empirical version implicit functional equation unique solution choice impact robustness estimator computation time indeed computation time 
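As a concrete instance of the M-estimators discussed above, a textbook Huber location estimate can be computed by iteratively reweighted averaging. This is a generic sketch under standard choices (MAD scale, tuning constant c = 1.345), not the specific estimator used in the paper's simulations:

```python
def huber_location(xs, c=1.345, tol=1e-10, max_iter=200):
    """Location M-estimate solving sum psi((x - t)/s) = 0 with Huber's psi,
    via iteratively reweighted averaging; s is a fixed preliminary MAD scale.
    c = 1.345 is the usual 95%-efficiency tuning at the Gaussian model."""
    xs = sorted(xs)
    n = len(xs)
    med = xs[n // 2] if n % 2 else 0.5 * (xs[n // 2 - 1] + xs[n // 2])
    s = sorted(abs(x - med) for x in xs)[n // 2] / 0.6745 or 1.0
    t = med
    for _ in range(max_iter):
        # Huber weights: 1 inside [-c, c], c/|r| outside.
        ws = [1.0 if abs((x - t) / s) <= c else c / abs((x - t) / s) for x in xs]
        t_new = sum(w * x for w, x in zip(ws, xs)) / sum(ws)
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t
```

On clean symmetric data the estimate agrees with the mean; a gross outlier is heavily downweighted rather than shifting the estimate, which is the behaviour the fused estimator inherits subsample by subsample.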
computation time fusion step optimal choice breakdown point rfm following donoho consider breakdown point introduced donoho intuitively breakdown point corresponds maximum percentage outliers located worst possible positions sample estimate breaks sense arbitrarily large close boundary parameter space definition let unknown parameter lying metric space estimate based let set size elements common card card breakdown point max bounded also bounded away boundary analyse breakdown point rfm consider case breakdown point robust estimators high breakdown point estimators observation sample let outlier otherwise assume variables iid following bernoulli distribution parameter let number outliers subsample rfm estimator breakdown cases greater recall take glance behaviour breakdown point performed replications generated binomial random variables parameter split samples size randomly subsamples next calculated number subsamples contained outliers table report average number times replications number greater different values best result obtained table average replications estimator breakdowns different values fixed proportion outliers applications rfm section show rfm may used tackle three classic statistical problems large samples estimating multivariate location scatter matrix estimating covariance operator functional data clustering problem show apply approach given table solutions many problems may derived cases principal components example functional data robust fusion location scatter matrix finite dimensional spaces given iid random sample consider location scatter matrix estimation problem perform rfm need make explicit estimators used subsamples depth function fusion stage location parameters propose use simple robust estimates denoted see instance maronna martin yohai depth function propose use empirical version replacing empirical distribution euclidean distance equivalently scatter matrix use depth function robust estimators scatter matrix norm denotes empirical 
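The binomial breakdown analysis above is straightforward to simulate: draw the number of outliers in each subsample of size n as Binomial(n, eps) and count the replications in which more than half of the k subsamples, and hence the fused estimator, break down. A sketch with illustrative parameters:

```python
import random

def breakdown_frequency(n, k, eps, n_rep=1000, seed=0):
    """Fraction of replications in which more than half of the k subsamples
    of size n contain more than n/2 outliers (outliers ~ Bernoulli(eps))."""
    rng = random.Random(seed)
    broken_runs = 0
    for _ in range(n_rep):
        bad_subsamples = sum(
            1 for _ in range(k)
            if sum(rng.random() < eps for _ in range(n)) > n / 2
        )
        if bad_subsamples > k / 2:
            broken_runs += 1
    return broken_runs / n_rep
```

For contamination levels well below one half the frequency is essentially zero, and it jumps to one once eps exceeds one half, matching the pattern reported in the table.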
distribution simulation study presented section robust fusion covariance operator estimation covariance operator stochastic process important topic fda helps understand fluctuations random element well derive principal functional components spectrum several robust estimators proposed see instance chakraborty chaudhuri references therein order perform rfm introduce new robust estimator use subsamples implemented using parallel computing based notion impartial trimming space covariance operators defined introduced gordaliza shown successful tool robust estimation next rfm estimator defined deepest point among estimators impartial trimmed means corresponding subsample better understand construction new estimator first recall general framework used estimation covariance operators general framework estimation covariance operators let finite interval iid random eler ments taking values assume dsdt covariance function given well defined notational simplicity assume conditions covariance operator given diagonalizable eigenvalues moreover belongs space linear operators norm inner product given ihs respectively orthonormal basis particular given iid sample define operators hxi ixi let kxi standard estimator average operators consistent estimator law large numbers space replace average trimmed version space new robust estimator covariance operator proposal consider impartial trimmed estimator resistant estimator notion impartial trimming introduced gordaliza functional data setting considered fraiman one obtain asymptotic theory setting construction estimator needs explicit expression distances kwi derive using following lemma lemma kwi kxi proof let write hwi ihs hwi ihs ihs ihs hwi hxi ixi hxi ihxk follows identity hxi ihxk hxi given sample assumed mean zero notational simplicity provide simple algorithm calculate approximate impartial trimmed mean estimator covariance operator strongly consistent step calculate kwi khs using lemma step let consider set indices 
corresponding nearest neighbours among order statistic vector din step let argmin step impartial trimmed mean estimator given average nearest neighbours among average operators covariance function estimated observe steps algorithm performed using parallel computing final estimator given rfm may obtained taking deepest point average deepest points among estimators obtained algorithm norm used depth function case functional analogue robust fusion cluster analysis section describe robust fusion method clustering approach based use impartial trimmed itkm see gordaliza two steps first one apply itkm given trimming level subsamples obtain sets centres second step apply itkm trimming level set suggested gordaliza section start describing briefly itkm impartial trimmed given sample trimming level number clusters itkm looks set partition space minimizes loss function kxi set trimmed data cardinality let random vector distribution number clusters trimming proportion every define min set trimming functions level defined measurable fulfilling dpx functions natural generalization indicator functions pair let consider function dpx dpx lastly define inf inf corollary gordaliza proves exists pair necessarily unique attaining value moreover absolutely continuous lebesgue measure inf let denote empirical distribution based sample theorem gordaliza proves absolutely continuous lebesgue measure exists unique pair solving moreover sequence empirical trimmed denotes hausdorff distance clear case inf induce partitions respectively clusters defining cluster min cluster min points boundary clusters assigned arbitrarily functional version itkm found fraiman hand fusion step rfm done applying itkm set centres whole algorithm summarized table split sample subsamples recall subsample apply empirical version obtain one points apply empirical version set obtain output algorithm mrfm build clusters applying table rfm algorithm clustering simulation results describe simulations done rfm three 
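Lemma 1 is what makes Steps 0-3 cheap: every Hilbert-Schmidt distance satisfies ||W_i - W_j||^2 = <x_i, x_i>^2 + <x_j, x_j>^2 - 2 <x_i, x_j>^2, a function of the Gram matrix alone, so no operator needs to be formed until the final average. A pure-Python sketch for discretized curves; the naming is illustrative:

```python
def trimmed_covariance(X, alpha):
    """Impartial-trimmed estimate of the covariance operator of the rows of X
    (assumed centred), following Steps 0-3.  By Lemma 1,
    ||W_i - W_j||_HS^2 = <x_i,x_i>^2 + <x_j,x_j>^2 - 2 <x_i,x_j>^2,
    so Steps 0-2 only use the Gram matrix of the observations."""
    n, d = len(X), len(X[0])
    dot = [[sum(a * b for a, b in zip(X[i], X[j])) for j in range(n)]
           for i in range(n)]
    hs2 = [[dot[i][i] ** 2 + dot[j][j] ** 2 - 2 * dot[i][j] ** 2
            for j in range(n)] for i in range(n)]
    m = max(1, int(round(n * (1 - alpha))))          # number of points kept
    # Step 1: radius of the m-nearest-neighbour ball around each W_i.
    radii = [sorted(row)[m - 1] for row in hs2]
    # Step 2: the centre whose ball is smallest.
    i0 = min(range(n), key=lambda i: radii[i])
    keep = sorted(range(n), key=lambda j: hs2[i0][j])[:m]
    # Step 3: average the rank-one operators x_j x_j^T over the kept points.
    return [[sum(X[j][a] * X[j][b] for j in keep) / m for b in range(d)]
            for a in range(d)]
```

A gross outlying curve inflates its distances to all other rank-one operators and is discarded by the trimming, leaving the average over the inliers.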
applications described previous sections design simulation specific application describe separately simulations done using intel core cpu ram bit processor software package running ubuntu location scatter matrix finite dimensional spaces use simulations analyse location parameters scatter matrix robust estimator applied function covmest rrcov parameters given default draw samples centred gaussian distribution covariance matrix elements equal outliers use cauchy distribution independent coordinates centred test two contamination levels vary sample size within set number subsamples replicate simulation case times report average estimators obtained rfm values maximize depth functions given eqs location scatter matrix respectively case maximization done set estimates obtained subsamples mean squared error averaged replicates location problem given table estimators considered following average whole sample mle average robust location estimators avrob average deepest robust estimators deepest robust estimator rfm table location estimators mle avrob rfm mle avrob rfm see estimator obtained rfm behaves well depending structure outliers mean robust estimates may behave well even one subsamples contains high proportion outliers causing robust estimator break average robust estimators break hand deepest always behaves well performances estimators decrease general estimation errors covariance given table table compare mle estimator mle robust estimator based whole sample rob average robust scatter matrix estimators avrob average deepest robust estimators deepest robust estimator rfm also report average time seconds necessary global estimator whole sample estimator obtained fusion including computing estimators subsamples aggregating fusion since second step algorithm see point table parallelized practice computational time divided almost results rfm good covariance matrix well covariance operator generate data used simplified version simulation model used kraus panaretos sin 
cos random standard gaussian independent observations central observations generated using whereas table covariance estimators using mle robust estimates entire sample aggregating average trimmed average fusion subsamples estimators mle rob avrob rfm table covariance estimators using mle robust estimates entire sample aggregating average trimmed average fusion subsamples estimators mle rob avrob rfm outliers took sin used equally spaced grid points covariance operator process given cov sin cos computed comparisons varied sample size within set number subsamples proportion outliers fixed replicated simulation case times report average performance replicates report also average time seconds necessary global estimate whole sample estimate obtained fusion including computing figure simulated functions outliers estimates subsamples aggregating fusion compare classical estimator mle global robust estimate rob average robust estimates subsamples avrob robust fusion estimate rfm results shown tables two proportions outliers respectively table covariance operator estimator using classical robust estimators entire sample aggregating average fusion subsamples estimators mle rob avrob rfm table covariance operator estimator using classical robust estimators entire sample aggregating average fusion subsamples estimators mle cvrob avrob rfm proportion outliers moderate average robust estimators still behaves well better rfm increase proportion outliers rfm clearly outperforms estimators clustering performed simulation study large sample sizes using model three clusters outliers introduced gordaliza data generated using bivariate gaussian distributions following parameters clusters outliers respectively two dimensional identity matrix outliers generated sizes clusters fixed following values gordaliza outliers lying level confidence ellipsoids clusters replaced others belonging area outliers represent almost whole sample used base simulation varied whole sample size multiplying 
factor fac taking several values, from the smallest sample to the largest. The figure illustrates the setting: the left panel shows the true clusters, the middle panel the results obtained by ITKM on the whole sample, and the right panel the output obtained by RFM using the subsamples (fusing the trimmed k-means outputs), with the outliers shown as blue points. We varied the number of subsamples within a set of values, with the restriction imposed by fac; lastly, applying trimmed k-means to the subsamples, we tested three values of the trimming level, whereas for the fusion it was kept fixed. The partitions obtained by each approach are compared with the true clusters using the matching error, defined as the minimum over the set of permutations of the cluster labels of the proportion of observations whose true cluster differs from the (permuted) cluster assigned by the algorithm. The results of the simulation are given in a table comparing the RFM method with ITKM calculated on the whole sample: the columns give the matching errors of ITKM applied to the whole sample and of RFM respectively. We also report the average time in seconds necessary for the global estimator on the whole sample and for the estimator obtained by fusion, including computing the estimators on the subsamples and aggregating them, and finally the time using parallel computing. As expected, the RFM matching errors are often higher than those of ITKM applied to the whole sample, but the loss of performance is small in general; it increases for the smallest subsample sizes. For large samples RFM attains almost the same performance for all values considered; on the other hand, increasing the number of subsamples reduces the computation time of RFM considerably.

A real data example. The example chosen is the MNIST handwritten digits dataset (see the MNIST webpage). We compare the performance of the RFM clustering algorithm with the clustering procedure applied without splitting the sample (impartial trimmed k-means). The digits are centred images of a fixed number of pixels, and the data consist of a training sample and a test sample, as explained on the aforementioned webpage. This classic dataset of handwritten images has served as a basis for benchmarking classification algorithms; here, however, we are interested in clustering, and we use the sample for searching for groups, a difficult task: if the labels were chosen at random, the probability of getting at least half of the data well identified would be extremely close to zero. We cluster the data using both methods with the design of the previous simulations: on the one hand we cluster the whole sample using the impartial trimmed k-means algorithm, and on the other hand we use the RFM clustering method. As reported in the tables, the labels are used to calculate
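The matching error used above — the minimum over label permutations of the fraction of misassigned observations — can be computed exhaustively when the number of clusters is small, as in the three-cluster simulation. A minimal sketch:

```python
from itertools import permutations

def matching_error(true_labels, assigned_labels, n_clusters):
    """min over permutations tau of the fraction of observations i with
    true_labels[i] != tau(assigned_labels[i])."""
    n = len(true_labels)
    best = n
    for perm in permutations(range(n_clusters)):
        mismatches = sum(t != perm[a]
                         for t, a in zip(true_labels, assigned_labels))
        best = min(best, mismatches)
    return best / n

true_c = [0, 0, 1, 1, 2, 2]
found  = [1, 1, 0, 0, 2, 2]   # clusters fully recovered, labels swapped
print(matching_error(true_c, found, 3))  # -> 0.0
```

The exhaustive search over permutations is fine for three clusters; for many clusters one would instead solve the label-assignment problem with the Hungarian algorithm.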
the misclassification error rates. The results, given in two tables (left and right), show that the clustering problem is difficult: the relative efficiencies of RFM with respect to the whole-sample clustering procedures, and the computational times, fall drastically. One table reports RFM clustering for different values of the trimming parameter, the other robust clustering on the whole sample (left and right respectively).

Concluding remarks. We have addressed some fundamental statistical problems in the context of big data, namely large samples in the presence of outliers: location and covariance estimation, covariance operator estimation, and clustering. We proposed a general robust approach, called the robust fusion method (RFM), and have shown that it may be applied to all of these problems; the simulations gave good results, mainly for the last two. For other statistical challenges and problems, the approach may be adapted to the task as soon as a robust and efficient estimate is available for the corresponding problem. We addressed one of the important problems of big data, namely when the size is so large that one needs to split the data into pieces; in this setup we think robustness is mandatory. The procedure provided, the robust fusion method, is general and can be applied to different statistical problems, in high-dimensional as well as functional data, where robust methods must remain reasonably simple in order to work with large samples. In particular, we provided a new robust method, RFM, to estimate the covariance operator in the functional data setting. The particular cases considered are the multivariate location problem and scatter matrix, the covariance operator, and clustering; the methods are illustrated on simulated examples showing the behaviour of RFM on these problems for different large sample sizes.

Acknowledgement. We thank the associate editor and two anonymous referees for their constructive comments and criticisms. The last author's work was partially supported by an ECOS project.

References
- Aho, Hopcroft and Ullman, The Design and Analysis of Computer Algorithms.
- Ahmed, Ejaz (ed.), Big and Complex Data Analysis: Methodologies and Applications, Berlin.
- Aneiros, Bongiorno, Cao and Vieu (eds.), Functional Statistics and Related Fields, Berlin.
- Billingsley, Uniformity in weak convergence, Z. Wahrsch. und Verw. Gebiete.
- Chakraborty and Chaudhuri, The spatial distribution in infinite dimensional spaces and related quantiles and depths, Annals of Statistics.
- Chaudhuri, On a geometric notion of
quantiles for multivariate data, Journal of the American Statistical Association.
- Fraiman, Impartial means for functional data, in Liu, Serfling and Souvaine (eds.), Data Depth: Robust Multivariate Statistical Analysis, Computational Geometry and Applications, DIMACS Series, American Mathematical Society.
- Cuesta-Albertos, Gordaliza and Matrán, Trimmed k-means: an attempt to robustify quantizers, Annals of Statistics.
- Cuevas, A partial overview of the theory of statistics with functional data, Journal of Statistical Planning and Inference.
- Donoho, Breakdown properties of multivariate location estimators, qualifying paper, Dept. of Statistics, Harvard University.
- Ferraty and Vieu, Nonparametric Functional Data Analysis, Springer-Verlag, Berlin.
- Goia and Vieu, Special issue on statistical models and methods for high and infinite dimensional spaces, Journal of Multivariate Analysis.
- Gordaliza, Best approximations to random variables based on trimming procedures, J. Approx. Theory.
- Horváth and Kokoszka, Inference for Functional Data with Applications, Berlin.
- Huber and Ronchetti, Robust Statistics, Wiley, Hoboken.
- Hampel, A general qualitative definition of robustness, Annals of Mathematical Statistics.
- Huber, The behavior of maximum likelihood estimates under nonstandard conditions, in Proc. Fifth Berkeley Symp. Math. Statist. Prob., Univ. of California Press, Berkeley.
- Kraus and Panaretos, Dispersion operators and resistant second-order functional data analysis, Biometrika.
- Maronna, Martin and Yohai, Robust Statistics: Theory and Methods, Wiley, Hoboken.
- Tang, Zhou and Song, A method for regularised generalised linear models for big data, preprint.
- Vardi and Zhang, The multivariate L1-median and associated data depth, Proc. Nat. Acad. Sci. USA.
- Wang, Chen, Schifano and Yan, Statistical methods and computing for big data, Statistics and Its Interface.
- Let us own data science, IMS Bulletin Online.
| 10 |
Locally Recoverable Codes over Small Fields (arXiv preprint, Sep)

Pengfei Huang and Paul Siegel, Department of Electrical and Computer Engineering, University of California San Diego, La Jolla; Eitan Yaakobi, Computer Science Department, Technion — Israel Institute of Technology, Haifa, Israel.

Abstract — Erasure codes play an important role in storage systems to prevent data loss. In this work, we study a class of erasure codes called multi-erasure locally recoverable codes (ME-LRCs) for storage arrays. Compared to previous related works, we focus on the construction of ME-LRCs over small fields. We first develop upper and lower bounds on the minimum distance. Our main contribution is a general construction based on generalized tensor product codes; we study its properties and give a decoding algorithm tailored for erasure recovery. The correctable erasure patterns are identified, and we prove that our construction yields optimal ME-LRCs for a wide range of code parameters, presenting explicit codes over small fields. Finally, we show that generalized integrated interleaving (GII) codes can be treated as a subclass of generalized tensor product codes, thus defining the exact relation between these codes.

I. Introduction. Recently, erasure codes with both local and global erasure-correcting properties have received considerable attention, thanks to their promising application to storage systems. The idea behind them is that when only a few erasures occur, these erasures can be corrected fast using local parities; when the number of erasures exceeds the local capability, the global parities are invoked. In this paper, we consider this kind of erasure codes with local and global capabilities for storage arrays: each row of the array contains local parities, and additional global parities are distributed over the array. This array structure is suitable for many storage applications. For example, consider a redundant array of independent disks (RAID) type architecture built over solid-state drives (SSDs). In this scenario, the storage array represents a set of SSDs, each of which contains several flash memory chips. Within each SSD, an erasure code is applied to the chips for local protection; in addition, erasure coding is also done across the SSDs for global protection of the chips. More specifically, we give a formal definition of this class of erasure codes as follows.

Definition. Consider a code C over a finite field consisting of m × n arrays such that: (1) each row of each array belongs to a linear local code with length n and prescribed minimum distance; (2) reading the symbols of C row-wise, C is a linear code with length mn, dimension k, and minimum distance d. We say C is a multi-erasure locally recoverable code (ME-LRC). Thus, an ME-LRC can locally correct a limited number of erasures in each row, and it is guaranteed to
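The local part of the definition can be illustrated with the simplest possible binary instance: each row carries a single even-parity bit, i.e., the local code is the [n, n − 1, 2] single-parity-check code, which recovers one erasure per row without touching the rest of the array. This toy sketch is an illustration of local recovery only, not the paper's full construction (which adds global parities on top).

```python
import numpy as np

def encode_rows(data_rows):
    """Append an even-parity bit to each row: the simplest binary local
    code, [n, n-1, 2], used here only to illustrate local recovery."""
    data_rows = np.asarray(data_rows) % 2
    parity = data_rows.sum(axis=1) % 2
    return np.hstack([data_rows, parity[:, None]])

def repair_row(row):
    """Locally recover a single erasure (marked -1) from the row parity."""
    row = row.copy()
    erased = np.where(row == -1)[0]
    assert len(erased) == 1, "one erasure per row is locally correctable"
    row[erased[0]] = row[row != -1].sum() % 2  # restore even parity
    return row

array = encode_rows([[1, 0, 1], [0, 1, 1]])  # 2 x 4 array; rows are [4,3,2] codes
damaged = array.copy()
damaged[0, 2] = -1                           # erase one symbol in row 0
print(repair_row(damaged[0]))                # -> the original row [1 0 1 0]
```

Two or more erasures in a row exceed the local capability and, in the full scheme, would be handled by the global parities.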
correct a total of d − 1 erasures anywhere in the array. Our work is motivated by the recent work of Blaum and Hetzler. There, the authors studied ME-LRCs whose rows are maximum distance separable (MDS) codes, and gave code constructions requiring a field size of at least a maximum involving the row length, by means of generalized integrated interleaving (GII) codes. Our definition generalizes theirs by not requiring each row to be an MDS code. There exist related works: ME-LRCs as in our definition can be seen as LRCs with disjoint repair sets. A code is called an LRC if, for every coordinate, there exists a punctured code (a repair set) with support containing that coordinate, of bounded length, and whose minimum distance is at least a prescribed value. Although existing constructions of LRCs with disjoint repair sets can generate ME-LRCs as in our definition, those that use MDS codes as local codes require a field size at least as large as the code length; a recent work gives explicit constructions of LRCs with disjoint repair sets over small fields from algebraic curves, whose repair sets have bounded size. Partial MDS (PMDS) codes are also related, but have a different definition: in general, a PMDS code needs to satisfy stricter requirements. An m × n array code is called a PMDS code if each row is an MDS code and, whenever a prescribed number of locations in each row are punctured, the resulting code is also MDS with the corresponding minimum distance. A construction of PMDS codes over a large field is known, and recently a family of PMDS codes with a smaller (but still max-type) field size was constructed. To the best of our knowledge, however, constructions of optimal codes over small fields — field size less than the length of the local code, or even the binary field — have not been fully explored and solved. The goal of this paper is to study ME-LRCs over small fields. We propose a general construction based on generalized tensor product codes, which were first utilized to construct binary LRCs.

Contributions. In this paper, we extend the previous construction to the scenario of ME-LRCs over any field F_q; as a result, the earlier construction can be seen as a special case of the construction proposed in this paper. In contrast to the construction of Blaum and Hetzler, our construction does not require a max-type lower bound on the field size and can even generate binary ME-LRCs. We derive upper and lower bounds on the minimum distance and show that our construction can produce ME-LRCs that are optimal with respect to the new upper bound. We present an erasure decoding algorithm and its corresponding correctable erasure patterns, which include any pattern of up to d − 1 erasures. We show that ME-LRCs based on our construction are optimal with respect to certain correctable erasure patterns. So far, the exact relation between GII codes and generalized tensor product codes has not been fully investigated; we prove that GII codes are a subclass of generalized tensor product codes. As a result, the parameters of a
GII code can be obtained directly using known properties of generalized tensor product codes.

The remainder of the paper is organized as follows. In Section II, we study field-size-dependent upper and lower bounds. In Section III, we propose a general construction based on generalized tensor product codes; the properties of the constructed codes are studied and an erasure decoding algorithm is presented. In Section IV, we study the optimal code construction and give several explicit optimal ME-LRCs over different fields. In Section V, we prove that GII codes are a subclass of generalized tensor product codes. Section VI concludes the paper. Throughout the paper, we use the following notation: [n] denotes the set {1, ..., n}; for a vector x and a set S, x_S denotes the restriction of x to the coordinates in S; w(x) represents the Hamming weight of x; the transpose of a matrix H is written H^T; |S| represents the cardinality of a set S. A linear code over F_q of length n, dimension k, and minimum distance d is denoted by [n, k, d]_q; for simplicity, the minimum distance of a code with only one codeword is defined to be infinite.

II. Upper and Lower Bounds. In this section, we derive field-size-dependent upper and lower bounds on the minimum distance of ME-LRCs. The upper bound obtained will be used to prove the optimality of our construction in the following sections. We give an upper bound on the minimum distance by extending the shortening bound for LRCs. Let d_opt denote the largest possible minimum distance of a linear code of given length and dimension over F_q, and let k_opt denote the largest possible dimension of a linear code of given length and minimum distance over F_q.

Lemma (shortening bound). The minimum distance d of an ME-LRC satisfies an upper bound given by a minimum of d_opt values of suitably shortened codes, and the dimension k satisfies a corresponding bound in terms of k_opt. (Proof: see Appendix.)

An asymptotic lower bound for ME-LRCs with local MDS codes is known; by simply adapting the Gilbert–Varshamov bound, the following lower bound holds at finite length, without specifying the local codes.

Lemma (existence). There exists an ME-LRC with the stated parameters whenever a Gilbert–Varshamov-type inequality is satisfied. (Proof: see Appendix.)

III. Generalized Tensor Product Codes: Construction and Decoding. Tensor product codes were first proposed by Wolf as a family of binary codes defined by a parity-check matrix that is the tensor product of the parity-check matrices of two constituent codes; they were later generalized. In this section, we first introduce generalized tensor product codes over F_q and give a general construction of ME-LRCs from them; the minimum distance of the constructed codes is determined, a decoding algorithm tailored for erasure correction is proposed, and the corresponding correctable erasure patterns are studied.

Generalized tensor product codes. We start by presenting the tensor product operation of two matrices H' and H''. Let H' be a v × n matrix over F_q, the parity-check matrix of an [n, n − v] code; such a matrix can be considered as a 1 × n row vector whose entries are columns of length v, i.e., elements of F_{q^v}. Let H'' be a parity-check matrix of a code of length l over F_{q^v}. The tensor product of the matrices H'' and H' is defined as H_tp = H'' ⊗ H', where the products of elements are calculated according to the rules of multiplication for elements of F_{q^v}. The matrix H_tp can be considered as a matrix over F_q, thus defining a tensor product code. We provide an example to illustrate these operations.

Example. Let H' be the parity-check matrix of a short binary code and let α be a primitive element of F_4. Let H'' be a parity-check matrix of a Hamming code over F_4. Representing the elements of F_4 as binary column vectors under a vector mapping, the tensor product H_tp = H'' ⊗ H' expands to a binary matrix, and H_tp defines a binary code.

Construction based on generalized tensor product codes. Define matrices as follows: for each level i, let H_i' be a v_i × n matrix over F_q, and let H_i'' be a parity-check matrix over F_{q^{v_i}} of a code of the appropriate length. The generalized tensor product code is the linear code whose parity-check matrix has the level structure in which the i-th level matrix H_i'' ⊗ H_i' is obtained by the tensor product operations, partly over the extension fields. We denote this code by C_gtp; its length and dimension follow from the levels. Adapting a known theorem to the field F_q, we directly obtain the following theorem on the minimum distance of C_gtp.

Theorem (minimum distance). The minimum distance d of the generalized tensor product code C_gtp satisfies a lower bound given by a minimum over the levels of products of the constituent minimum distances. (Proof: see Appendix.)

Construction A. We present a general construction of ME-LRCs based on generalized tensor product codes. In Construction A, first choose matrices H_i' and H_i'' over F_q and F_{q^{v_i}} respectively that satisfy the following two properties: (1) the stacked matrix of the H_i' forms an identity-type (full-rank) n × n matrix; (2) the H_i'' are parity-check matrices of codes over the extension fields with prescribed minimum distances. Then generate the parity-check matrix H according to these matrices; the constructed code corresponding to this matrix is referred to as C.

Theorem (parameters). The code C from Construction A is an ME-LRC whose parameters are determined by the constituent codes. Proof sketch: according to the construction, the code parameters are easily determined. The local property follows because the first level contains the local parity checks (using the identity-type structure of the stacked H_i'). For the distance, if fewer columns than the minimum distance of the constituent code were linearly dependent in the level matrix, the corresponding columns at the prescribed set of positions would be linearly dependent in the constituent parity-check matrix, a contradiction.

Erasure decoding and correctable erasure patterns. We present a decoding algorithm for Construction A tailored for erasure correction (a decoding algorithm for error correction of generalized tensor product codes can be found in the literature). Let the symbol "?" represent an erasure and let F denote a decoding failure. The erasure decoder consists of two kinds of component decoders, described as follows. The first decoder, for the coset code with parity-check matrix formed by the first levels, uses the following
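The tensor-product operation itself — reading a binary parity-check matrix column-wise as a vector over the extension field, multiplying by the entries of H'', and expanding back to bits — can be sketched concretely for F_4. The particular H' and H'' below are illustrative choices, not the paper's example verbatim; F_4 is hand-coded as {0, 1, a, a+1} with a² = a + 1.

```python
import numpy as np

# GF(4) = {0, 1, a, a+1} encoded as 0..3 with a -> 2, a+1 -> 3  (a^2 = a + 1)
MUL = [[0, 0, 0, 0], [0, 1, 2, 3], [0, 2, 3, 1], [0, 3, 1, 2]]

def to_gf4(col):           # 2-bit binary column (c0, c1) -> c0 + c1*a
    return col[0] + 2 * col[1]

def to_bits(e):            # GF(4) element -> 2-bit binary column
    return [e & 1, e >> 1]

def tensor_product(h2, h1):
    """h1: 2 x n binary matrix, read as a length-n vector over GF(4);
    h2: r x l matrix over GF(4).  Returns the binary expansion of
    H_tp = h2 (x) h1, a (2r) x (n*l) matrix over GF(2)."""
    gf_row = [to_gf4(h1[:, j]) for j in range(h1.shape[1])]
    rows = []
    for h2_row in h2:
        gf_entries = [MUL[e][g] for e in h2_row for g in gf_row]
        rows.append(np.array([to_bits(e) for e in gf_entries]).T)  # 2 x (n*l)
    return np.vstack(rows)

h1 = np.array([[1, 0, 1], [0, 1, 1]])   # 2 x 3 binary parity checks
h2 = np.array([[1, 2, 3]])              # 1 x 3 over GF(4): [1, a, a^2]
H_tp = tensor_product(h2, h1)
print(H_tp.shape)                        # binary expansion: (2, 9)
```

Multiplying the GF(4) view of H' by each entry of H'' and re-expanding to bits is exactly the "vector mapping" step of the example in the text.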
decoding rule: for an input vector with erasures and a given syndrome vector, if there is exactly one codeword of the coset (determined by the syndrome) that agrees with the unerased entries, output it; otherwise output F. Therefore, if the input vector is a codeword of the coset code with fewer erasures than its minimum distance and the syndrome vector is correct, the decoder returns the correct codeword. The second decoder, for the code over F_{q^{v_i}} with parity-check matrix H_i'', treats each length-v_i block as a symbol of the extension field (a block containing an erased entry is considered an erased symbol) and uses the analogous rule: if exactly one codeword agrees with the unerased symbols, output it, otherwise output F; hence an input codeword with fewer symbol erasures than the minimum distance is successfully decoded.

To describe the correctable erasure patterns, we use the following notation: let ρ_j denote the number of erasures in the j-th row of the received word, with the rows ordered by their erasure counts.

Theorem (correctable patterns). The decoder can correct any received word that satisfies the following condition: at each level, the number of rows whose erasure count reaches the local capability of that level is less than the minimum distance of the corresponding level code. (Proof: see Appendix.)

The following corollary follows from this theorem.

Corollary. The decoder can correct any received word with fewer than d erasures. (Proof: see Appendix.)

The erasure decoder for the code C is summarized in Algorithm 1; the input is a received word, each row of which is an erased version of a row of a codeword.

Algorithm 1 (decoding procedure).
Input: received word y with erasures. Output: a codeword, or a decoding failure F.
For each level i = 1, 2, ...: compute the available syndromes at level i; decode them with the second component decoder to recover the level-i syndromes of every row; attempt to decode each still-unrecovered row with the first component decoder using the syndromes recovered so far; update the set of unrecovered rows. If, after the last level, all rows are recovered, output the codeword; otherwise return F.
In Algorithm 1 we use the following rules for operations involving the symbol "?": addition with "?" gives "?", and multiplication of "?" by an element gives "?" (multiplication by zero gives zero).

IV. Optimal Construction and Explicit ME-LRCs over Small Fields. In this section, we study the optimality of our construction and present several explicit optimal ME-LRCs over different fields.

Optimal construction. We show how to construct optimal ME-LRCs with respect to the shortening bound by adding constraints to Construction A. To this end, we specify the choice of matrices in Construction A; this specification is referred to as Design A: the stacked matrices of the H_i' are parity-check matrices of a chain of nested codes whose minimum distances satisfy the ordering dictated by d_opt of the shortened codes (the last level completed by a vector of the appropriate length), and each H_i'' is a parity-check matrix over F_{q^{v_i}} of a parity-type code with the prescribed minimum distance.

Theorem (optimality). The code C obtained from Construction A with Design A is optimal with respect to the shortening bound. Proof: by the parameter theorem for Construction A, the code parameters are determined; substituting them, together with the k_opt choice of Design A, into the bound gives a minimum that equals the d_opt of the corresponding shortened code, which proves that C achieves the bound.

Explicit constructions. Construction A is flexible and can generate many different ME-LRCs over different fields. In the following, we present several examples.
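The component-decoder rule — "output the unique codeword consistent with the unerased entries, else fail" — can be sketched for a single binary code by exhaustively filling the erased positions and keeping the fills that satisfy all parity checks. This is a brute-force illustration of the rule, not the paper's multi-level Algorithm 1; the [7, 4, 3] Hamming code is used, so any d − 1 = 2 erasures are uniquely recoverable.

```python
import numpy as np
from itertools import product

H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])   # [7, 4, 3] Hamming parity checks

def erasure_decode(y):
    """Fill erasures (-1) so that H y^T = 0 over GF(2); return the unique
    consistent codeword, or None on ambiguity/failure."""
    erased = [i for i, v in enumerate(y) if v == -1]
    solutions = []
    for fill in product([0, 1], repeat=len(erased)):
        cand = list(y)
        for i, b in zip(erased, fill):
            cand[i] = b
        if not (H @ cand % 2).any():      # all parity checks satisfied
            solutions.append(cand)
    return solutions[0] if len(solutions) == 1 else None

# [1,1,1,0,0,0,0] satisfies all three checks; erase positions 0 and 4:
print(erasure_decode([-1, 1, 1, 0, -1, 0, 0]))  # -> [1, 1, 1, 0, 0, 0, 0]
```

In the actual algorithm this uniqueness is obtained by syndrome computations level by level rather than by enumeration, but the decoding rule being applied per row is the same.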
ME-LRCs from local extended BCH codes. From the structure of BCH codes, there exists a chain of nested binary extended BCH codes. Let the stacked matrices be parity-check matrices of these nested codes; in Construction A we use these matrices and also choose a vector of the appropriate length for the last level. By the parameter theorem, the corresponding ME-LRC has the stated parameters; since the code satisfies the requirements of Design A, by the optimality theorem it is optimal with respect to the shortening bound.

ME-LRCs from local algebraic geometry codes. We use a class of algebraic geometry codes called Hermitian codes. For Hermitian codes there exists a chain of nested codes; let the stacked matrices be their parity-check matrices respectively, and choose a vector of the appropriate length. Using these matrices in Construction A with several choices of levels, we obtain (by the parameter theorem) three families of ME-LRCs with the corresponding parameters, all optimal with respect to the shortening bound.

ME-LRCs with local codes and an MDS second level. Let α be a primitive element and choose the matrix of a local code accordingly; also require the stated constraint, and let H'' be the parity-check matrix over F_{q^{v}} of an MDS code, which exists whenever the length does not exceed the field-size threshold (note that an MDS code of minimum distance two can have arbitrarily long code length). Example: using the chosen matrices in Construction A, the corresponding ME-LRC has the stated parameters, where the required field size is much smaller than the max-type requirement; for the alternative parameters the field size needs to satisfy only a mild condition. Since this code satisfies the requirements of Design A, by the optimality theorem it is optimal with respect to the shortening bound.

The following theorem shows that the ME-LRC constructed in this example is also optimal in the sense of possessing the largest possible dimension among all codes with the same erasure-correcting capability.

Theorem (dimension optimality). Let C be a code of length mn and dimension k whose codewords consist of rows of length n, and assume C corrects all erasure patterns satisfying the stated condition. Then its dimension must satisfy the stated upper bound.

Proof. The proof is by contradiction. Let a codeword correspond to an m × n array, indexing the coordinates of the array row by row. Define the sets of coordinates row-wise as in the statement, and consider the code obtained by the corresponding calculation on these coordinates. Assume more codewords exist than the bound allows; then at least two distinct codewords agree on the coordinates outside a certain set. Erase the values on the coordinates of that set: the resulting erasure pattern satisfies the stated condition, yet since the two codewords are distinct, the erasure pattern is uncorrectable. Thus the assumption is violated.

Remark. The construction of Blaum and Hetzler, based on GII codes, can generate the ME-LRCs constructed in this example (since the local code there is an MDS code); their construction can also be used to produce a code with the parameters of our ME-LRC. However, their construction requires a field size satisfying a max-type condition,
which is in general larger than what our construction requires.

V. Relation to Generalized Integrated Interleaving Codes. Integrated interleaving (II) codes were first introduced as a scheme for data storage applications and were extended recently to generalized integrated interleaving (GII) codes for data protection. The main difference between GII codes and generalized tensor product codes is that a generalized tensor product code is defined by operations over the base field as well as over extension fields, whereas, as shown below, a GII code is defined over a single field. As a result, generalized tensor product codes are more flexible than GII codes; in particular, generalized tensor product codes can be used to construct ME-LRCs over small fields, e.g., the binary field. The goal of this section is to study the exact relation between generalized tensor product codes and GII codes: we show that GII codes are in fact a subclass of generalized tensor product codes. The idea is to reformulate the parity-check matrix of a GII code into the form of a parity-check matrix of a generalized tensor product code. Establishing this relation allows the code properties of GII codes to be obtained directly from known results on generalized tensor product codes.

We first consider the two-level case, i.e., integrated interleaving codes, to illustrate the idea, following the definition of II codes. Let C_0 ⊇ C_1 be linear codes with parity-check matrices H_0 and H_1, and let α be a primitive element. According to the definition, the parity-check matrix of the II code can be written using the Kronecker product ⊗: one block is I ⊗ H_0, where I is an identity matrix, and the other block combines rows of powers of α with H_1, so the parity-check matrix of the II code has the form of a stack of these two blocks.

Remark: this matrix is obtained by operations over a single field; in contrast, the parity-check matrix of a generalized tensor product code is obtained by operations over the base field and over extension fields. Remark: in general the codes C_i can be chosen over any field; they can even be chosen to be binary codes.

To see the relation between II codes and generalized tensor product codes, we reformulate the parity-check matrix by splitting its rows: each block row, viewed over the extension field, plays the role of a second-level matrix H'', while the identity matrix and the rows of α-powers provide the level structure. Referring to this reformulated matrix, the parity-check matrix of the II code is interpreted as the parity-check matrix of a generalized tensor product code; thus we conclude that an II code is a generalized tensor product code. Using the properties of generalized tensor product codes, we directly obtain the following result, which was previously proved in an alternative way.

Lemma (II parameters). The II code is a linear code whose length, dimension, and minimum distance (a minimum of products of the constituent distances) are determined by the constituent codes.

Proof. Consider the reformulated parity-check matrix and the code it defines; the redundancy and dimension are clear from the properties of generalized tensor product codes, and using
the minimum-distance theorem of Section III, the minimum distance satisfies the stated minimum expression.

Generalized integrated interleaving codes. A similar idea to that of the previous subsection applies; we continue with the proof for GII codes, following their definition for consistency. Let C_0 ⊇ C_1 ⊇ ... be nested codes with parity-check matrices H_i, and let α be a primitive element. We first define the matrices to be used: identity matrices, matrices of α-powers, and the parity-check matrices H_i, from which the parity-check matrix of the GII code C_GII is built according to the definition, with the nested minimum distances satisfying the required ordering. To make the connection between GII codes and generalized tensor product codes, we reformulate this parity-check matrix as follows: each block row at level i, viewed as a matrix over the extension field, plays the role of the level matrix of a generalized tensor product code. Referring to the reformulated matrix, the parity-check matrix of C_GII is seen to be the parity-check matrix of a generalized tensor product code. As a result, we directly obtain the following lemma, which was also proved in a different way in earlier work.

Lemma (GII parameters). The GII code C_GII is a linear code whose length and dimension follow from the construction and whose minimum distance satisfies a lower bound given by a minimum over the levels of products of the constituent distances.

Proof. Consider the parity-check matrices H_i and the codes D_i they define; from the properties of generalized tensor product codes it is easy to obtain the dimension, and by the minimum-distance theorem of Section III the minimum distance satisfies the stated minimum expression.

Remark. In prior works, one can find generalized tensor product codes referred to as generalized error-location (GEL) codes. Recently, the similarity between GII codes and GEL codes was observed; however, their exact relation was not studied. A new generalized integrated interleaving scheme over binary BCH codes was also proposed; those codes can likewise be seen as a special case of generalized tensor product codes.

VI. Conclusion. In this work, we presented a general construction of ME-LRCs over small fields. This construction yields optimal ME-LRCs with respect to an upper bound on the minimum distance for a wide range of code parameters. An erasure decoder was proposed and the corresponding correctable erasure patterns were identified; based on these patterns, a family of our codes was shown to be optimal among all codes with the same erasure-correcting capability. Finally, generalized integrated interleaving codes were proved to be a subclass of generalized tensor product codes, thus giving the exact relation between these two classes of codes.

Acknowledgment. This work was supported in part by NSF grants, a BSF grant, and Western Digital Corporation.

References
- Barg, Tamo and Vladut, Locally recoverable codes on
algebraic curves, arXiv preprint.
- Blaum, Hafner and Hetzler, Partial-MDS codes and their application to RAID type architectures, IEEE Trans. Inf. Theory.
- Blaum and Hetzler, Integrated interleaved codes as locally recoverable codes: properties and performance, International Journal of Information and Coding Theory.
- Bossert, Maucher and Zyablov, Results on generalized concatenation of block codes, in Proc. International Symposium on Applied Algebra, Algebraic Algorithms and Error-Correcting Codes, Springer.
- Cadambe and Mazumdar, An upper bound on the size of locally recoverable codes, in Proc. IEEE NetCod.
- Calis and Koyluoglu, A general construction for PMDS codes, IEEE Communications Letters.
- Gabrys, Yaakobi, Blaum and Siegel, Constructions of partial MDS codes over small fields, in Proc. IEEE ISIT.
- Gibson, Redundant Disk Arrays: Reliable, Parallel Secondary Storage, MIT Press.
- Gopalan, Huang, Simitci and Yekhanin, On the locality of codeword symbols, IEEE Trans. Inf. Theory.
- Goparaju and Calderbank, Binary cyclic codes that are locally repairable, in Proc. IEEE ISIT, June.
- Hassner, Patel, Koetter and Trager, Integrated interleaving — a novel ECC architecture, IEEE Trans.
- Huang, Yaakobi, Uchikawa and Siegel, Binary linear locally repairable codes, IEEE Trans. Inf. Theory, Nov.
- Huang, Yaakobi, Uchikawa and Siegel, Cyclic linear binary locally repairable codes, in Proc. IEEE ITW.
- Huang, Yaakobi, Uchikawa and Siegel, Linear locally repairable codes with availability, in Proc. IEEE ISIT.
- Imai and Fujiya, Generalized tensor product codes, IEEE Trans. Inf. Theory, Mar.
- Maucher, Zyablov and Bossert, On the equivalence of generalized concatenated codes and generalized error location codes, IEEE Trans. Inf. Theory.
- Oggier and Datta, Homomorphic codes for distributed storage systems, in Proc. IEEE INFOCOM.
- Papailiopoulos and Dimakis, Locally repairable codes, in Proc. IEEE ISIT.
- Prakash, Kamath, Lalitha and Kumar, Optimal linear codes with a local-error-correction property, in Proc. IEEE ISIT, July.
- Roth, Introduction to Coding Theory, Cambridge University Press.
- Tamo and Barg, A family of optimal locally recoverable codes, IEEE Trans. Inf. Theory, Aug.
- Tang and Koetter, A novel method for combining algebraic decoding and iterative processing, in Proc. IEEE ISIT.
- Wolf, On codes derivable from the tensor product of check matrices, IEEE Trans. Inf. Theory, Apr.
- Generalized integrated interleaved codes, IEEE Trans. Inf. Theory, Feb.
- Yang and Kumar, On the true minimum distance of Hermitian codes, in Coding Theory and Algebraic Geometry, Springer.

Appendix: Proof of the shortening-bound lemma. Proof. The degenerate case is trivial. Let S represent the set of coordinates of the first rows of the array, and consider the code shortened on S, whose dimension satisfies the stated inequality. Since the original code is linear, the shortened code is a linear code as well; moreover, its minimum distance is at least d. Thus we get the upper bound on the minimum distance by the corresponding proposition. Similarly, we also get the upper bound on the dimension in terms of k_opt. Therefore we conclude both bounds. For the second claim, the stated condition would mean a codeword of the code defined by the parity-check matrix has weight below the minimum distance, which contradicts the assumption.

Appendix: Proof of the existence lemma. Proof. We construct an ME-LRC in two steps, using the Gilbert–Varshamov bound twice. First, there exists an array code of the required size for an integer satisfying the first inequality; second, there exists a code of minimum distance at least the target whose redundancy satisfies a logarithmic bound. Encode each row using the local code; then add the redundancy symbols of the second code. The resulting code is an ME-LRC; substituting the parameters produces the claimed inequality.

Appendix: Proof of the minimum-distance theorem. Proof. For a codeword of C_gtp, denote the corresponding vectors level by level; we prove the bound by contradiction and induction. Assume there exists a nonzero codeword of weight smaller than the claimed minimum. We first state a proposition used in the proof: viewing blocks of the codeword as elements of the extension field F_{q^v}, the components of the corresponding syndrome vector are constrained by the constituent codes. At the second level, the proposition contradicts the weight assumption; by induction, at each subsequent level, and finally at the last level, the same contradiction arises. Thus the assumption is violated.

Appendix: Proof of the correctable-patterns theorem. Proof. The proof follows the decoding procedure. For a received word, each row corresponds to a codeword of the first-level local code; since the correct syndrome vector can be obtained, all rows with fewer erasures than the first-level capability are corrected. At the second level, every remaining uncorrected row has at least the corresponding number of erasures, and the total number of uncorrected rows is less than what the condition requires; thus the correct syndrome vector can be obtained, and as a result all rows with fewer erasures than the second-level capability are corrected. Similarly, by induction, when the decoder runs to each further level, every remaining uncorrected row has at least the corresponding number of erasures and the total number of uncorrected rows is less than the condition requires; therefore the correct syndrome vectors can be obtained. On the other hand, every remaining uncorrected row has few enough erasures that, since the condition also holds at the last level, all uncorrected rows are corrected in the final step from the correct syndromes.
Appendix: Proof of the corollary. Proof. We need to show that any received word with at most d − 1 erasures satisfies the condition of the correctable-patterns theorem. We prove this by contradiction: if the condition were not satisfied, there would be at least a certain number of rows, each containing at least a certain number of erasures, so by the parameter requirement of Construction A the total number of erasures would be at least d. This contradicts the assumption that the received word has at most d − 1 erasures.
| 7 |
A New Family of MRD Codes with Right and Middle Nuclei F_{q^n} (arXiv preprint, Sep)

Rocco Trombetti and Yue Zhou

Abstract. In this paper, we present a new family of maximum rank distance (MRD, for short) codes of a given minimum distance. In particular, we show that in the spread-set case the corresponding semifield is exactly a known semifield, and that the middle and right nuclei of these MRD codes equal F_{q^n}. We also prove that the MRD codes of smaller minimum distance in this family are inequivalent to the known ones; the equivalence between any two members of the new family is also determined.

1. Introduction. Let K denote a field and consider the set K^{m×n} of m × n matrices over K, which forms a vector space over K. On K^{m×n} define d(A, B) = rk(A − B); the map d, often called the rank metric or rank distance, turns any subset of K^{m×n} into a rank-metric code with respect to the rank metric. If the code C contains at least two elements, its minimum distance is given by d(C) = min{d(A, B) : A, B ∈ C, A ≠ B}. If C is a subspace, we say the code is linear, and its dimension dim_K(C) is defined as the dimension of the subspace. Let F_q denote the finite field with q elements. For a code C in F_q^{m×n} of minimum distance d, the Singleton-like bound for the rank metric (see the literature) reads |C| ≤ q^{max{m,n}(min{m,n} − d + 1)}; when equality holds, we call C a maximum rank distance (MRD, for short) code. More properties of MRD codes can be found in the literature. MRD codes have been studied for decades and have seen much interest in recent years due to a wide range of applications, including storage systems, cryptosystems, space-time codes, and random linear network coding.

(Affiliations: Dipartimento di Matematica e Applicazioni "Caccioppoli", Università degli Studi di Napoli Federico II, Napoli, Italy; College of Science, National University of Defense Technology, Changsha, China. Corresponding author: Trombetti. Date: September.)

In finite geometry, several interesting structures, including quasifields, semifields, and splitting dimensional dual hyperovals, can be equivalently described as special types of rank-metric codes (see the references therein). In particular, a finite quasifield corresponds to an MRD code of minimum distance n, and a finite semifield corresponds to an MRD code that is also an additive subgroup (see the literature for the precise relationship). As many essentially different families of finite quasifields and semifields are known, they yield many inequivalent MRD codes of minimum distance n. In contrast, it appears much more difficult to obtain inequivalent MRD codes of minimum distance strictly less than n. On the relationship between MRD codes and geometric objects such as linear sets and Segre varieties, we refer to the literature. Besides quasifields, the known constructions of MRD codes include the following: the first construction of MRD codes was given by Delsarte; this construction was later rediscovered by Gabidulin, and generalized by Kshevetskiy and Gabidulin.
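The Singleton-like bound and the MRD property can be checked by brute force on a toy example. The sketch below (an illustration, not a construction from this paper) takes the code {x ↦ a·x : a ∈ F_4} of F_2-linear maps on F_4, written as 2 × 2 binary matrices: every nonzero member is invertible, so d = 2, and |C| = 4 = 2^{2·(2 − 2 + 1)} meets the bound.

```python
# GF(4) = {0, 1, a, a+1} encoded as 0..3 with a -> 2, a+1 -> 3  (a^2 = a + 1)
MUL = [[0, 0, 0, 0], [0, 1, 2, 3], [0, 2, 3, 1], [0, 3, 1, 2]]

def gf2_rank(m):
    """Rank of a binary matrix by Gaussian elimination over GF(2)."""
    m = [row[:] for row in m]
    rank = 0
    for c in range(len(m[0])):
        pivot = next((r for r in range(rank, len(m)) if m[r][c]), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        for r in range(len(m)):
            if r != rank and m[r][c]:
                m[r] = [x ^ y for x, y in zip(m[r], m[rank])]
        rank += 1
    return rank

def matrix_of(a):
    """2x2 binary matrix of x -> a*x on GF(4) w.r.t. the basis {1, a}."""
    cols = [MUL[a][1], MUL[a][2]]                 # images of 1 and of a
    return [[c & 1 for c in cols], [c >> 1 for c in cols]]

code = [matrix_of(a) for a in range(4)]           # {x -> a*x : a in GF(4)}
min_rank = min(gf2_rank(m) for m in code if any(any(r) for r in m))
print(len(code), min_rank)  # 4 codewords, minimum distance d = 2
# Singleton-like bound: |C| <= q^{max(m,n)(min(m,n)-d+1)} = 2^{2*1} = 4: MRD
```

Since the code is additive, differences of codewords are again codewords, so the minimum distance equals the minimum rank of a nonzero member.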
Today, this family is usually called the generalized Gabidulin codes, and its members are sometimes also simply called Gabidulin codes (see Section 2 for the precise definition); it is easy to show that a Gabidulin code is always linear over F_{q^n}. Recently, another family was found by Sheekey; we often call its members generalized twisted Gabidulin codes, and this family was further generalized to additive MRD codes. The constructions just given provide MRD codes of any minimum distance. For MRD codes of minimum distance strictly less than n there are further constructions: the first nonlinear family was constructed by Cossidente, Marino and Pavese, and later generalized by Durante and Siciliano. Besides these families, there are constructions associated with maximum scattered linear sets presented recently; for results concerning maximum scattered linear sets associated with MRD codes, see the literature. Among the many different approaches to MRD codes, one can construct them in a canonical way by puncturing or projecting MRD codes over F_{q^m}; a new criterion for punctured Gabidulin codes was presented, and for small parameters several constructions of MRD codes equivalent to those so obtained were presented. A generic construction of MRD codes using algebraic-geometry approaches, under the condition that the field is large enough, was given; in a comparable approach, MRD codes are derived from linear sets; a nonlinear construction was presented recently; and Schmidt and the second author showed that even among codes with the same parameters as Gabidulin codes there is a huge subset of inequivalent MRD codes. In this paper, we present a new family of MRD codes, which exist for a given minimum distance. In particular, we show that the corresponding semifield is exactly a semifield found by an investigation of middle and right nuclei, and we prove that the MRD codes of the new family are inequivalent to the known constructions. The rest of the paper is organized as follows: in Section 2, we introduce semifields, describe rank-metric codes via linearized polynomials, and introduce the equivalence of codes as well as dual codes and adjoint codes. In Section 3, we present the new family of MRD codes and determine their middle and right nuclei; based on these results, we show that the new family is inequivalent to the known MRD codes, except in one special case which is later excluded in Section 4 by another result. Section 4 also gives a complete answer to the equivalence problem between different members of the new family.

2. Preliminaries. Roughly speaking, a semifield is an algebraic structure satisfying all the axioms of a skewfield except possibly the associativity of multiplication; a presemifield need not have a multiplicative identity, and a finite field is a trivial example of a semifield. Furthermore, the multiplication need not be commutative. The first family of proper semifields was constructed by Dickson a century ago. Knuth showed that the additive group of a finite semifield is an elementary abelian group; the additive order of the nonzero elements is called the characteristic, hence a finite semifield can be represented over a prime field. Taking the additive group to be that of a finite field, the multiplication x ∘ y can be written via coefficients a_ij as a bilinear map. We refer to a recent comprehensive survey on finite semifields. Geometrically speaking, there is a correspondence, via coordinatisation, between (pre)semifields and projective planes of a certain type (see the literature). An important equivalence relation defined on presemifields is isotopism: given two presemifields (F_p^n, ∘) and (F_p^n, ∗), if there exist three bijective linear mappings of F_p^n relating the two multiplications, the triple is called an isotopism and the presemifields are called isotopic. Albert showed that two presemifields coordinatize isomorphic planes if and only if they are isotopic; every presemifield can be normalized into a semifield by an appropriate isotopism (see the literature). Given a semifield S with multiplication ∘, one defines its left, middle, and right nucleus; it is not difficult to prove that a semifield can be viewed as a left vector space over its left nucleus, which in the finite case is a finite field. In particular, assume S has finite size; then every map y ↦ x ∘ y defines a matrix over the left nucleus, and these matrices together form a rank-metric code — actually an MRD code, because the difference of two distinct members equals such a map for a nonzero element and is therefore always nonsingular. An MRD code arising this way is usually called a semifield spread set.

Next, let us turn to rank-metric codes described by linearized polynomials. When working with codes in F_q^{n×n}, it is convenient in this paper to describe a code using the language of linearized polynomials (q-polynomials) over F_{q^n}; this set of polynomials is in bijection with the set of F_q-linear maps of F_{q^n}. For results on linearized polynomials, we refer to the literature. As mentioned in the introduction, a member of the family of MRD codes called generalized Gabidulin codes can be described as the following set of linearized polynomials over F_{q^n}: the polynomials whose exponents are the powers q^{si} for i up to k − 1, with s relatively prime to n. It is obvious that such a nonzero polynomial has a bounded number of roots, which gives the minimum distance; hence the size of the set meets the Singleton-like bound. Let N denote the norm map to the base field; the following result follows from the literature.

Lemma. Let s and n be two relatively prime positive integers, and suppose a linearized polynomial over F_{q^n} with exponents among the powers q^{si} has the maximal number of roots; then its extreme nonzero coefficients satisfy a fixed norm relation.

Sheekey applied this lemma and found a new family of MRD codes, obtained from the Gabidulin shape by adding a twisted term whose coefficient is tied to the first coefficient and satisfies a norm condition. Such an MRD code is usually called a generalized twisted Gabidulin code. It is clear that, allowing the twisting parameter to vanish, these codes can be viewed as containing the Gabidulin codes, and the Gabidulin codes can be viewed as a
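The root-counting argument behind the lemma rests on the fact that the roots of a linearized polynomial form an F_q-subspace, so their number is a power of q bounded by q raised to the q-degree. A minimal check over GF(2^4), using the primitive polynomial x^4 + x + 1 (an assumed representation, any irreducible quartic would do):

```python
def gf16_mul(a, b):
    """Multiply in GF(2^4) = GF(2)[x] / (x^4 + x + 1)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x10:          # reduce by x^4 = x + 1
            a ^= 0x13
    return r

# f(x) = x^2 + x = x^q + x with q = 2: a q-polynomial of q-degree 1
f = lambda x: gf16_mul(x, x) ^ x
roots = [x for x in range(16) if f(x) == 0]
print(roots)   # [0, 1]: an F_2-subspace, and |roots| <= q^1 as the bound says
```

Here f has q-degree 1, so it can have at most q^1 = 2 roots; the root set {0, 1} is exactly the subfield F_2, illustrating the subspace structure the lemma exploits.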
identity called presemifield presemifield necessarily abelian first family semifields constructed dickson century ago knuth showed additive group finite semifield elementary abelian group additive order nonzero elements called characteristic hence finite semifield represented power prime additive group finite field written aij forms map refer recent comprehensive survey finite semifields geometrically speaking correspondence via coordinatisation pre semifields projective planes type see important equivalence relation defined pre semifields isotopism given two pre semifields fnp fnp exist three bijective linear mappings fnp fnp fnp called isotopic triple called isotopism albert showed two pre semifields coordinatize isomorphic planes isotopic every presemifield normalized semifield appropriate isotopism see given semifield multiplication define left middle right nucleus difficult prove semifield viewed left vector space left nucleus particular finite show actually finite field let assume size every map defines matrix left nucleus furthermore matrices together form rank metric code actually mrd code difference two distinct members equals always nonsingular mrd code usually called semifield spread set associated see next let turn codes working codes rather paper convenient describe code using language linearized trombetti zhou polynomials fqn polynomials set fact bijection results linearized polynomials refer mentioned introduction part family mrd codes called generalized gabidulin codes described following subset linearized polynomials fqn relatively prime obvious polynomials polynomial roots means minimum distance hence size meets singleton bound fqm let nqm denote norm fqm nqm following result follows theorem lemma let two relatively prime positive integers suppose fqm linearized polynomial roots nqsm nqsm sheekey applied lemma found new family mrd codes fqm satisfies nqsm nqm mrd code usually called generalized twisted gabidulin code clear allow equal viewed 
subfamily twisted gabidulin codes replacing field automorphism coefficient last term elements automorphism aut fqn aut fqn otal generalized family additive one several slightly different definitions equivalence codes paper use following notion equivalence definition two codes equivalent exist glm gln aut equivalent means transposition say isometrically equivalent equivalence map code also called automorphism additive equivalent difficult show choose particular semifield spread sets equivalent associated semifields isotopic theorem back descriptions linearized polynomials given two codes consist linearized polynomials equivalent exist permuting fqn aut stands composition maps new family mrd codes general difficult job tell whether two given codes equivalent several invariants may help distinguish given code middle nucleus defined right nucleus defined two concepts introduced viewed natural generalization middle right nucleus semifields called left idealizer right idealizer respectively general also define left nucleus however mrd codes containing singular matrices always means useful invariant see code given set linearized polynomials middle nucleus right nucleus also written sets linearized polynomials precisely middle nucleus defined rather always consider row vector multiplying matrix member code means similarly right nucleus played important role proving lower bound numbers inequivalent gabidulin codes middle right nuclei generalized twisted gabidulin codes together complete answer equivalence members family found define symmetric bilinear form set transpose delsarte dual code code one important result proved delsarte delsarte dual code linear mrd code still mrd considering mrd codes using linearized polynomials delsarte dual also interpreted following way define bilinear form trqn set fqn delsarte dual code also difficult show directly two linear codes equivalent duals equivalent let mrd codes obvious also mrd codes ranks trombetti zhou also interpret 
transposes matrices operation adjoint given aiq codes consisting adjoint code fact adjoint equivalent transpose matrix derived result found regarding adjoint delsarte dual operation proved proposition class mrd codes rest paper write instead short applying lemma get another family mrd codes theorem let two integers satisfying gcd satisfying define qis qks fqn mrd code proof clear need show polynomial roots means minimum distance hence mrd code way contradiction let assume roots implies nonzero lemma hence fqn nqn square however contradicts assumption let first look delsarte dual code straightforward compute qks qis fqn applying module get proposition delsarte dual code equivalent also readily verified following result adjoint code new family mrd codes proposition adjoint code equivalent theorem clear defines semifield multiplication fqn otherwise must square every write fqn assume certain fqn expanding bdq bcq bdq view vectors define semifield multiplication bdq bcq bdq fqn comparing theorem see exactly multiplication semifield also multiplication knuth semifield type lemma fqn right middle nucleus necessary sufficient condition derived proposition let multiplication defined denote associated semifield fqn fqn fqn next let investigate middle right nucleus mrd codes defined theorem important invariants respect equivalence codes use later show contains mrd codes equivalent known ones theorem let integer satisfying right nucleus fqn middle nucleus fqn proof isotopic semifield result derived proposition duality get result rest part assume assume element see fact nonzero obvious statement directly verified checking next consider also expansion coefficient coefficient since trombetti zhou take value fqn checking elements see must thus derive fqn proposition get equals fqn corollary mrd code equivalent generalized gabidulin code also equivalent generalized twisted gabidulin code proof generalized gabidulin code derived multiplication finite field never isotopic 
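The adjoint operation mentioned at the start of this passage has an explicit formula in the linearized-polynomial model (again a reconstruction of the standard definition):

```latex
\widehat{f}(x) \;=\; \sum_{i=0}^{n-1} a_i^{\,q^{\,n-i}}\, x^{q^{\,n-i}}
\qquad\text{for}\qquad
f(x) = \sum_{i=0}^{n-1} a_i x^{q^{i}}.
```

Under the identification of linearized polynomials with matrices over $\mathbb{F}_q$, taking adjoints corresponds to transposing the associated matrix, which is why the adjoint of an MRD code is again an MRD code with the same parameters.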
hugheskleinfeld semifield hence corresponding mrd codes equivalent moreover associates generalized twisted field presemifield mod isotopic semifield whose middle nucleus size gcd right nucleus size gcd otherwise mod presemifield isotopic finite field see thus equivalent simply consider equivalences delsarte dual codes already determined rest investigate equivalence problem according corollary middle right nucleus generalized gabidulin codes always hence different middle nucleus theorem means equivalent according corollary size gcd size gcd hence equivalent gcd gcd means middle nucleus size also equivalent mrd codes associated maximum scattered linear sets constructed equivalence section shown members mrd codes new respect equivalence codes last part section completely solve last open case whether equivalent first investigate equivalence different members family want determine isometric equivalence answer follows directly result equivalence map adjoint using knowledge middle right nucleus prove following results new family mrd codes lemma let satisfying gcd gcd let satisfying let equivalence map monomials proof proposition equivalent delsarte dual code two mrd codes equivalent delsarte duals equivalent prove statement according definition equivalence every must normalizer theorem right nucleus fqn follows certain result verified directly follows assume fqn always exists fqn implies means two coefficients nonzero certain argument also show let look image equivalence map calculation dciq cqi dhq cqi chq cqi cqi one coefficients must together condition permutation polynomials show means monomials coefficients nonzero exact one equals taken least two different values hence choose value case see must monomials following lemma proved using exactly argument omit proof lemma let satisfying gcd gcd let satisfying let trombetti zhou equivalence map monomials determine equivalence theorem let satisfying gcd gcd let satisfying let integer satisfying mrd code equivalent one 
following collections conditions satisfied mod exist aut mod exist aut proof proof lemma handle cases assume equivalence map lemma assume arbitrary dcqi follows gcd gcd assume means irt straightforward see either mod mod mod mod fqn applying onto obtain belongs fqn let denote automorphism fqn defined let see must solution mod mod fqn apply onto get fqn let belongs denote automorphism fqn defined let see must solution therefore proved necessary condition statement sufficiency routine verification cases covered theorem defines semifield whose autotopism group completely determined appears using new family mrd codes approach equivalence also determined hence rest section skip case moreover mrd code delsarte dual code semifield proposition also skip case equivalence problem case completely converted equivalence problem hugheskleinfeld semifields next investigate last case gcd gcd mod fact modulo theorem let satisfying gcd gcd let satisfying mrd code equivalent one following collections conditions satisfied mod exists aut mod exist aut chq dhq chq mod exists aut mod exist aut chq chq dhq proof proof still write instead equations even though assumed always mapped another monomial calculation theorem shows necessary sufficient conditions rest proof always assume mapped binomial taking see taken exact two possible value mod mod let consider case mod first assume mod derive coefficient belongs belongs fqn coefficient means dhq chq trombetti zhou hold every view polynomial comparing coefficients obtain similarly derive chq furthermore fqn derive conditions coefficient must zero plugging get dhq analogously checking coefficient fqn obtain chq mod proof similar checking coefficients obtain chq checking coefficient fqn get chq furthermore coefficient every fqn must dhq hence simply obtained switching respectively finish proof necessity part careful check previous calculations see satisfy simultaneously map indeed equivalence map therefore condition also sufficient case mod 
proof mod also get equations however switched omit details calculations remark possible conditions hold instance let root offer equivalence map remark theorem theorem automorphism group mrd codes also determined new family mrd codes recall corollary equivalence unique open case finally solve problem using approach appeared proofs theorems theorem let satisfying gcd gcd let satisfying equivalent proof corollary face case assume defines equivalence map going show map never exist without loss generality assume otherwise consider equivalence map separate proof two parts depending value proof quite similar theorem lemma assume arbitrary dcqi belong follows gcd gcd assume whence irt straightforward see mod mod matter applying see one coefficients zero one function contradicts assumption every fqn clear congruent modulo fact sufficient consider case equivalent middle right nuclei fqn assume first goal show must monomials assume way contradiction nonzero plugging get dhq chq belong proof theorem see take two possible value mod mod mod derive dhq chq trombetti zhou every means chq assumed nonzero two equations implies hence implies contradicting assumption mod proof analogous omit therefore proved case proved part routine expand check belong hence equivalence map acknowledgment work supported research project miur italian office university research strutture geometriche combinatoria loro applicazioni yue zhou supported national natural science foundation china references albert finite division algebras finite planes proc sympos appl vol pages american mathematical society providence albert generalized twisted fields pacific journal mathematics bartoli zhou exceptional scattered polynomials math bierbrauer projective polynomials projection construction family semifields designs codes cryptography apr biliotti jha johnson collineation groups generalized twisted field planes geometriae dedicata cossidente marino pavese maximum rank distance codes designs codes cryptography 
june marino polverino classes equivalence linear sets math july marino polverino zanella new family math marino polverino zullo maximum scattered linear sets math marino zullo new maximum scattered linear sets projective line math zanella equivalence linear sets designs codes cryptography zanella maximum scattered sets math may cruz kiermaier wassermann willems algebraic structures mrd codes advances mathematics communications delsarte bilinear forms finite field applications coding theory journal combinatorial theory series dembowski finite geometries springer dempwolff edel dimensional dual hyperovals apn functions translation groups journal algebraic combinatorics june dempwolff kantor orthogonal dual hyperovals symplectic spreads orthogonal spreads journal algebraic combinatorics may dickson commutative linear algebras division always uniquely possible transactions american mathematical society new family mrd codes donati durante generalization normal rational curve associated mrd codes designs codes cryptography jul durante siciliano maximum rank distance codes cyclic model field reduction finite geometries electronic journal combinatorics gabidulin theory codes maximum rank distance problems information transmission gabidulin cryptosystems based linear codes large alphabets efficiency weakness codes cyphers pages formara limited gadouleau yan properties codes rank metric ieee global telecommunications conference pages gow quinlan galois theory linear algebra linear algebra applications apr marshall new criteria mrd gabidulin codes rank metric code constructions math july appear advances mathematics communications hughes collineation groups planes seminuclear division algebras american journal mathematics hughes kleinfield seminuclear extensions galois fields american journal mathematics hughes piper projective planes new york graduate texts mathematics vol johnson jha biliotti handbook finite translation planes volume pure applied mathematics boca raton 
chapman boca raton kantor commutative semifields symplectic spreads journal algebra knuth finite semifields projective planes journal algebra koetter kschischang coding errors erasure random network coding ieee transactions information theory kshevetskiy gabidulin new construction rank codes international symposium information theory isit proceedings pages lavrauw polverino finite semifields storme beule editors current research topics galois geometry chapter pages nova academic publishers lidl niederreiter finite fields volume encyclopedia mathematics applications cambridge university press cambridge second edition liebhold nebe automorphism groups codes archiv der mathematik lunardon linear sets journal combinatorial theory series july lunardon trombetti zhou generalized twisted gabidulin codes math july lunardon trombetti zhou kernels nuclei rank metric codes journal algebraic combinatorics sep lusina gabidulin bossert maximum rank distance codes codes ieee transactions information theory oct morrison equivalence matrix codes automorphism groups gabidulin codes ieee transactions information theory neri randrianarisoa rosenthal genericity maximum rank distance gabidulin codes designs codes cryptography pages online first otal additive rank metric codes ieee transactions information theory jan ravagnani codes duality theory designs codes cryptography pages apr trombetti zhou roth array codes application crisscross error correction ieee transactions information theory mar schmidt zhou number inequivalent mrd codes math sheekey new family linear maximum rank distance codes advances mathematics communications taniguchi yoshiara unified description four simply connected dimensional dual hyperovals european journal combinatorics
Actions on Euclidean retracts. Dec. A. Bartels.

Abstract. This note surveys axiomatic results for the Farrell–Jones Conjecture in terms of actions on Euclidean retracts, and applications of these to GL_n(Z), relatively hyperbolic groups and mapping class groups.

Date: December. Mathematics Subject Classification. Key words and phrases: Farrell–Jones Conjecture, K-theory of group rings.

1. Introduction. Motivated by surgery theory, Hsiang made a number of influential conjectures about the integral group rings ZG of torsion free groups G. These conjectures often have direct implications for the classification theory of manifolds of dimension at least 5. A good example is the following. An h-cobordism is a compact manifold W whose boundary is the disjoint union of two components such that both inclusions of the boundary components into W are homotopy equivalences. The Whitehead group Wh(G) is a quotient of K_1(ZG) by the subgroup generated by the canonical units. Associated to an h-cobordism is an invariant, its Whitehead torsion, in Wh(G), where G is the fundamental group. A consequence of the s-cobordism theorem is that for dim W ≥ 6 the h-cobordism W is trivial (i.e. isomorphic to a product) iff its Whitehead torsion vanishes. Hsiang conjectured that Wh(G) = 0 for torsion free G; thus in many cases h-cobordisms are products. The Borel conjecture asserts that closed aspherical manifolds are topologically rigid: every homotopy equivalence to another closed manifold is homotopic to a homeomorphism. Often the last step in proofs of instances of this conjecture via surgery theory uses the vanishing result Wh(G) = 0 to conclude that an h-cobordism is a product, and that therefore its two boundary components are homeomorphic.

Farrell and Jones pioneered a method of using the geodesic flow on non-positively curved manifolds to study these conjectures. It created a beautiful connection to dynamics and led, among many other results, to a proof of the Borel conjecture for closed Riemannian manifolds of non-positive curvature and dimension at least 5. Moreover, they formulated (and proved many instances of) a conjecture about the structure of the algebraic K-theory of group rings, even in the presence of torsion in the group. Roughly, this conjecture states that the main building blocks for the K-theory of RG are the K-theories of RV, where V varies over the family of virtually cyclic subgroups of G. The conjecture implies a number of other conjectures, among them the ones of Hsiang mentioned above, the Borel conjecture in dimension at least 5, the Novikov conjecture on the homotopy invariance of higher signatures, and the Kaplansky conjecture about idempotents in group rings; see the literature cited there for a summary of these and other applications.

The goal of this note is twofold. The first goal is to explain a condition, formulated in terms of the existence of certain actions on Euclidean retracts, that implies the Farrell–Jones Conjecture. This condition was developed in joint work with Reich, in connection with dynamics; there the connection with dynamics was extended beyond the context of Riemannian manifolds to prove the
farrelljones conjecture hyperbolic cat second goal outline condition used joint work reich bestvina prove conjecture gln mapping class groups common difficulty families groups natural proper actions associated symmetric space respectively space cocompact cases solution depends good understanding action away cocompact subsets induction complexity groups preparation mapping class groups also discuss relatively hyperbolic groups conjecture prominent relative conjecture topological group two conjectures formally similar methods proofs different particular conditions discussed section known imply classes groups two conjectures known differ example work lattice lie groups satisfy conjecture despite lafforgues positive results many property groups conjecture still challenge wegner proved conjecture solvable groups case amenable elementary amenable groups open contrast proved conjecture groups class groups contains amenable groups hand hyperbolic groups satisfy conjectures see lafforgue conjecture mentioned conjecture comprehensive summary current status conjecture reader directed acknowledgement pleasure thank teachers coauthors students many things taught work described supported sfb formulation conjecture classifying spaces families family subgroups group collection subgroups closed conjugation subgroups examples family fin finite subgroups family vcyc virtually cyclic subgroups subgroups containing cyclic subgroup subgroup finite index family subgroups exists following property isotropy groups belong unique gmap space unique unique equivalence informally one may think space encodes group relative subgroups often interesting geometric models space particular fin information space found example easy way construct infinite join closed supergroups finite index subgroup finite index also full simplicial complex also model denote model later formulation conjecture original formulation farrelljones conjecture used homology coefficients stratified twisted use equivalent 
formulation developed actions euclidean retracts given ring construct homology theory property let family subgroups group consider projection map induces map conjecture conjecture group ring assembly map vcyc isomorphism version conjecture stated original formulation farrell jones considered integral group ring moreover farrell jones wrote regard related conjectures estimates best fit know data time however conjecture still open still fit known data today transitivity principle informally one view statement assembly map isomorphism group ring statement assembled group homology family subgroups contains subgroups one apply slogan two steps relative relative implementation following transitivity principle theorem set assume assembly map isomorphism isomorphism iff isomorphism twisted coefficients often beneficial study flexible generalizations conjecture generalization fibred isomorphism conjecture alternative conjecture coefficients additive categories one allows additive categories action group instead ring coefficients version conjecture applies particular twisted group rings generalizations conjecture better inheritance properties two inheritance property stability directed colimits groups stability taking subgroups summary inheritance properties see thm often proofs cases conjecture use inheritance properties inductions reduce special cases mean statement satisfies conjecture relative assembly map bijective additive categories however technical point safely ignored purpose note theories conjecture discussed far analog appears surgery theory applications mentioned crucial example borel conjecture closed aspherical manifold dimensions holds fundamental group satisfies conjecture however proofs conjecture parallel recently techniques conjecture extended also cover waldhausen particular conditions discuss section known imply conjecture three theories bartels actions compact spaces amenable actions exact groups definition almost invariant maps let equipped metric say 
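In symbols, the assembly map of the Farrell–Jones Conjecture stated in this passage is the map (standard formulation, reconstructed here for readability):

```latex
\alpha_{\mathrm{VCyc}} \colon
H_n^{G}\!\bigl( E_{\mathrm{VCyc}}(G); \mathbf{K}_R \bigr)
\;\longrightarrow\;
H_n^{G}\!\bigl( \mathrm{pt}; \mathbf{K}_R \bigr) \;\cong\; K_n(R[G]),
```

induced by the projection $E_{\mathrm{VCyc}}(G) \to \mathrm{pt}$; the conjecture asserts that this map is an isomorphism for all $n \in \mathbb{Z}$.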
sequence maps almost sup gfn discrete group equip space prob probability measures metric inherits subspace metric generates topology convergence prob recall following definition definition action group compact space said amenable exists sequence almost equivariant maps prob group amenable iff action one point space amenable groups admit amenable action compact hausdorff space said exact boundary amenable class exact groups contains amenable groups hyperbolic groups linear groups prominent groups known exact mapping class groups group outer automorphisms free groups assembly map split injective exact groups implies novikov conjecture exact groups analytic result novikov conjecture sense known proof avoids conjecture corresponding injectivity result assembly maps algebraic survey amenable actions exact groups see finite asymptotic dimension results assembly maps algebraic often depend finite dimensional setting space probability measures replaced finite dimensional space write full simplicial complex vertex set space viewed space probability measures finite support equip metric inherits prob definition action say action group compact space exists sequence almost equivariant maps natural action countable group compactification iff asymptotic dimension thm condition also implies exactness therefore novikov conjecture groups finite asymptotic dimension addition classifying space realized finite alternative argument novikov conjecture translated integral injectivity results assembly maps algebraic injectivity results seen far reaching generalizations groups finite decomposition complexity actions constructions transfer maps algebraic often depend actions spaces much nicer good class spaces use conjecture euclidean retracts compact spaces embedded retract brouwer fixed point theorem implies action group euclidean retract cyclic subgroup fixed point difficult check obstructs existence almost equivariant maps assuming contains actions euclidean retracts element infinite 
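The almost-equivariance condition underlying all the definitions in this passage can be written out as follows (a reconstruction of the standard formulation): a sequence of maps $f_n \colon X \to E$ into a metric $G$-space $(E,d)$ is almost equivariant if

```latex
\sup_{x \in X} \; d\bigl( f_n(g \cdot x),\; g \cdot f_n(x) \bigr)
\;\xrightarrow[\;n \to \infty\;]{}\; 0
\qquad \text{for every } g \in G.
```

An action of $G$ on a compact space $X$ is then amenable iff such a sequence exists with target $\mathrm{Prob}(G)$, and $N$-amenable if the target can be taken to be the $N$-skeleton of the simplicial complex of finitely supported probability measures on $G$.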
order let family subgroups closed taking supergroups finite index let set left cosets members let full simplicial complex equip definition action say action compact space exists sequence almost equivariant maps action say finitely remark let isotropy groups dimension model obtain cellular map also continuous particular constant sequence almost equivariant therefore one view relaxation property isotropy dimension relaxation necessary obtain compact examples reasonably small compact finitely many cells particular cell isotropy group finite index would contain subgroups finite index theorem suppose admits finitely action euclidean retract satisfies conjecture relative remark proof theorem depends methods controlled long history introduction controlled algebra given introduction proof theorem found sketch special case methods needed assume euclidean retract pointed remark forces contain subgroups finite index contractible cellular chain complex provides finite resolution trivial note degree finite sum permutation modules finite index finitely generated projective obtain finite resolution module resolution finite sum modules form finite index equipped diagonal identified obtained first restricting inducing back particular image assembly map relative family follows also image therefore assembly map surjective argument use closed supergroups finite index example let hyperbolic group rips complex compactified euclidean retract natural action compactification finitely obtain examples finitely actions euclidean retracts helpful replace vcyc larger family subgroups groups act acylindrically hyperbolic tree admit finitely actions euclidean retracts family subgroups generated virtually cyclic subgroups isotropy groups original action tree relative hyperbolic groups mapping class groups discussed section remark natural question groups admit finitely actions euclidean retracts necessary condition action finitely bartels isotropy groups action virtually cyclic therefore related 
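The surjectivity sketch in the remark above rests on the cellular chain complex of the contractible $G$-CW-complex $E$; under the stated finiteness and isotropy assumptions it yields (reconstructed from the outline given in the passage):

```latex
0 \to C_N(E) \to \cdots \to C_1(E) \to C_0(E) \to \mathbb{Z} \to 0,
\qquad
C_i(E) \;\cong\; \bigoplus_{j} \mathbb{Z}[G/F_{i,j}], \quad F_{i,j} \in \mathcal{F},
```

a finite resolution of the trivial module by finite sums of permutation modules. Since each $F_{i,j}$ has finite index, tensoring with $R$ produces a finite resolution of $R$ by finitely generated projective modules induced from members of $\mathcal{F}$, which places the relevant K-theory classes in the image of the assembly map relative to $\mathcal{F}$.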
question groups admit actions euclidean retracts isotropy groups virtually cyclic groups admitting actions aware hyperbolic groups fact even know whether group admits action euclidean retract disk isotropy groups virtually cyclic actions disks without global fixed point consequence oliver analysis actions finite groups disks hand finitely generated groups actions euclidean retracts global fixed point homotopy actions generalization theorem using homotopy actions order applicable higher actions need homotopy coherent passage strict actions homotopy actions already visible work corresponds passage asymptotic transfer used negatively curved manifolds focal transfer used curved manifolds definition homotopy coherent action group space continuous map thought action map homotopy remaining data encodes higher coherences order obtain sequences almost equivariant maps homotopy actions useful also allow homotopy action vary definition homotopy coherent actions sequence homotopy coherent actions group said exists sequence continuous maps sup theorem suppose admits sequence homotopy coherent actions euclidean retracts uniformly bounded dimension finitely amenable satisfies conjecture relative remark groups satisfying assumptions theorem said homotopy transfer reducible original formulations theorems terms almost equivariant maps terms certain open covers recall formulation used actions formulation homotopy actions cumbersome subset said actions euclidean retracts collection subsets said point contained members said order dimension cover open said compact equip diagonal finite said action iff finite exists dimension prop translation covers maps also used point view covers connection asymptotic dimension natural think kind dimension action see difference formulations used references given conditions topology formulated differently certainly euclidean retracts satisfy condition example theorem applies cat vcyc family virtually cyclic subgroups application theorem gln discussed 
section remark method interesting groups one deduce conjecture using theorems inheritance properties however clear methods account currently known third method going back work farrellhsiang combines induction results finite groups controlled axiomatization method given important part proof conjecture solvable groups combination method theorem remark trace methods novikov conjecture concerns injectivity assembly maps algebraic lower bounds algebraic group rings integral group ring groups required satisfy mild homological finiteness assumption trace methods used obtain rational injectivity results latter result particular yields interesting lower bounds whitehead groups group ring ring schatten class operators proved rational injectivity assembly map groups result aware conjecture applies groups flow spaces construction almost equivariant maps often uses dynamic flow associated situation definition flow space group metric space equipped flow isometric flow commute write dfol mean example let fundamental group riemannian manifold sphere bundle equipped geodesic flow flow space fundamental group manifolds negative curvature bartels flow space heart connection dynamics used great effect example generalizations hyperbolic groups cat hyperbolic groups mineyev symmetric join flow space alternatively possible use coarse flow space hyperbolic groups see remark groups acting cat flow space constructed consists parametrized geodesics cat technically generalized geodesics flow acts shifting parametrization almost equivariant maps often arise compositions first map almost equivariant second map contracts following lemma summarizes strategy lemma let countable group let assume exists flow space satisfying following two conditions finite subset continuous map dfol continuous dfol holds action proof let finite need construct map let respect choose next choose respect required property remark constructions maps condition lemma negatively curved situations often constructed using 
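The foliated distance notation $d_{\mathrm{fol}}$, introduced here only in passing, is presumably the usual one for flow spaces; a reconstruction under that assumption:

```latex
d_{\mathrm{fol}}(x, y) \le \varepsilon
\quad :\Longleftrightarrow \quad
\exists\, t \in [-\varepsilon, \varepsilon] \;:\;
d\bigl( \Phi_t(x),\, y \bigr) \le \varepsilon.
```

Thus two points of the flow space are foliated-close when they lie within $\varepsilon$ of one another after a time shift of at most $\varepsilon$ along the flow; the flow direction is "free" up to the scale $\varepsilon$, which is exactly what the contracting estimates in the next passage exploit.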
dynamic properties flow briefly illustrate case already considered farrell jones let fundamental group closed riemannian manifold strict negative sectional curvature let universal cover sphere infinity action extends canonical identification unit tangent vectors every unit tangent vector determines geodesic ray starting corresponding point one say points geodesic flow following property suppose unit tangent vectors pointing point dfol depends uniformly still depending statement uses strict negative curvature closed manifolds sectional curvature vector chosen carefully depending necessitates use focal transfer respectively use homotopy coherent actions contracting property geodesic flow translated construction maps fix point define sending actions euclidean retracts unit tangent vector pointing define using contracting property geodesic flow difficult check roughly satisfying dfol remark course space used remark contractible therefore euclidean retract compactification disk particular euclidean retract homotopy type free even map dimension particular action difficult combine two statements deduce action finitely best done via translation open covers discussed remark see example important point formulation condition presence uniform action cocompact version lebesgue lemma guarantees existence uniform suffices construct remark long thin covers maps condition lemma best constructed maps associated long thin covers flow space long thin covers alternative long thin cell structures employed open cover said cover said containing construction maps condition lemma amounts finding given dimension depending cocompact flow spaces covers constructed relatively great generality cocompactness used guarantee cocompact flow spaces still find covers without uniform thickness provide maps needed remark coarse flow space outline construction coarse flow space hyperbolic group let cayley graph vertex set adding gromov boundary obtain compact space assume coarse flow space consists 
triples geodesic passes within distance informally coarsely belongs geodesic coarse flow space disjoint union coarse flow lines coarse flow lines uniform constants depending versions long thin covers remark bounded dimension direction coarse flow lines also coarse version map remark define fix base point sends belongs geodesic convenient extend map belongs bartels geodesic course coarsely well defined nevertheless used pull long thin covers back finite yields covers proof last statement uses compactness argument important point locally finite acts cocompactly covers infinity conjecture gln group gln cat group proper isometric action cat symmetric space gln fix base point let closed ball radius around ball retract via radial projection along geodesics inherits homotopy coherent action action gln let family subgroups generated virtually cyclic proper parabolic subgroups gln key step proof conjecture gln language section following theorem sequence homotopy coherent actions finitely particular gln satisfies conjecture relative theorem using transitivity principle conjecture gln proven induction induction step uses inheritance properties conjecture virtually groups satisfy conjecture verification theorem follows general strategy lemma variant homotopy coherent actions additional difficulty verifying assumption action gln symmetric space cocompact action flow space cocompact either general results reviewed section still used construct cover flow space however clear resulting cover uniformly remedy second collection open subsets construction starts meaning away cocompact subsets points symmetric space viewed inner products moving towards corresponds degeneration inner products along direct summands turn used define horoballs symmetric space one forming desired cover corresponding horoball invariant parabolic subgroup gln precisely horoballs precise properties cover follows lemma exists collection open subsets order lebesgue number compact gln containing around cover 
pulled back flow space provides cover flow space roughly one left cocompact subset flow space cover constructed first argument gln generalized gln finite fields gln finite set primes using suitable generalizations covers case parabolic subgroups slightly bigger actions euclidean retracts particular induction step uses conjecture holds solvable groups using inheritance properties building results conjecture verified subgroups gln lattices virtually connected lie groups relatively hyperbolic groups use bowditch characterization relatively hyperbolic groups graph fine finitely many embedded loops given length containing given edge let collection subgroups countable group hyperbolic relative admits cocompact action fine hyperbolic graph edge stabilizers finite vertex stabilizers belong subgroups said peripheral parabolic requirement fine encodes farb bounded coset penetration property bowditch assigned compact boundary follows set union gromov boundary set vertices infinite valency topology observer topology sequence converges topology given finite set vertices including almost geodesic misses general hyperbolic graphs topology hausdorff fine hyperbolic graphs main result hyperbolic relative satisfies conjecture relative family subgroups generated vcyc needs closed index two supergroups include version conjecture result obtained application theorem key step following theorem action finitely direct consequence propositions using characterization remark existence covers outline construction covers prepare mapping class group introduce notation pick proper metric set edges possible countable action finite stabilizers vertex infinite valency let set edges incident write restriction metric define projection set edges appear initial edges geodesics finite subset depends fineness fix vertex finite valence base point define projection distance set relative hyperbolic groups related quantity often called angle terminology chosen align better case mapping class group vary finite 
set open neighborhood fixed projection distance varies bounded amount useful following attraction property projection distances geodesic passes conversely geodesic misses uniform projection distances used control failure locally finite particular provided projection distances bounded constant variation argument hyperbolic groups using coarse flow space adapted provide covers part following precise statement bartels proposition depending finite exists collection open part satisfies vertices deal large projection distances explicit construction used similar case gln let consequence attraction property sufficiently large set consists vertices belong geodesic particular linearly ordered distance fixed vertex define interior set pairs minimal vertex closest collection pairwise disjoint open proposition let finite collection open order part vertex difficulty working fixed possible control exactly varies particular whether vertex minimal change small variation consequence attraction property useful proof proposition following suppose vertices segment linear order unchanged suitable variations depending remark motivating example relatively hyperbolic groups fundamental groups complete riemannian manifolds pinched negative sectional curvature finite volume hyperbolic relative virtually finitely generated nilpotent subgroups case work sphere universal cover splitting part part thought follows fix base point instead number choose cocompact subset small part consists pairs geodesic ray contained large part complement translation cover proposition thought cover moreover vertices infinite valency correspond horoballs projection distances time geodesic rays spend horoballs note action graph definition relative hyperbolicity used cocompact proper metric space conversely example action longer cocompact proper metric space similar trade cocompact action non proper space versus action proper space possible relatively hyperbolic groups assuming parabolic subgroups finitely generated 
mapping class group let closed orientable surface genus finite set marked points assume mapping class group mod group components group orientation preserving homeomorphisms leave invariant space space marked complete hyperbolic structures finite area mapping class group acts space changing marking thurston defined actions euclidean retracts equivariant compactification space space closed disk particular euclidean retract boundary compactification pmf space projective measured foliations key step proof conjecture mod following theorem let family subgroups mod virtually fix point pmf action mod pmf finitely follows quickly action finitely well applying theorem obtain conjecture mod relative passing finite index subgroups groups central extensions products mapping class groups smaller complexity using transitivity principle inheritance properties one obtains conjecture mod induction complexity additional input case conjecture holds finitely generated free abelian groups proof theorem uses characterization remark provides suitable covers mod similar relative hyperbolic case construction covers done splitting mod two parts natural refer parts thick part thin part thick part corresponds part relative hyperbolic case space natural filtration cocompact subsets part consist marked hyperbolic structures closed geodesics length action mod cocompact fix base point given pair mod unique ray starts pointing towards technically vertical foliation quadratic differential part mod defined set pairs ray stays important tool covering thick thin part complex curves celebrated result hyperbolic klarreich studied coarse projection map identified gromov boundary curve complex particular projection map extension pmf preimage gromov boundary extension continuous map complement still coarse map space hyperbolic thick part number hyperbolic properties masur criterion implies thick part moreover restriction klarreichs projection map pmf space injective result minsky geodesics stay contracting 
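The thick part used above admits the following standard description, with the cocompactness of the action coming from the Mumford compactness criterion; the notation $\ell_X(\alpha)$ for the length of a closed geodesic $\alpha$ on $X$ is ours:

```latex
% The \epsilon-thick part of Teichmüller space T(S):
T(S)_{\ge \epsilon} \;=\;
  \bigl\{\, X \in T(S) \;:\; \ell_X(\alpha) \ge \epsilon
  \ \text{for every closed geodesic } \alpha \text{ on } X \,\bigr\},
\qquad
T(S) \;=\; \bigcup_{\epsilon > 0} T(S)_{\ge \epsilon}.
% By Mumford's compactness criterion, Mod(S) acts cocompactly on each
% T(S)_{\ge\epsilon}; these sets give the filtration of the thick part
% by cocompact subsets referred to in the text.
```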
property share geodesics hyperbolic spaces nearest point projection maps balls disjoint uniformly bounded subsets geodesics thick part project curve complex constants depending properties eventually allow construction suitable covers thick part using coarse flow space methods hyperbolic case precise statement following proposition mod finite exists mod collection mod order stays bartels action mapping class group curve complex exhibit mapping class group relative hyperbolic group sense discussed fine nevertheless important replacement projections links used relatively hyperbolic case subsurface projections case projections links curve complex curve complexes subsurfaces hand often links curve complex exactly curve complexes subsurfaces theory however much sophisticated relatively hyperbolic case projections always defined sometimes projection points boundary projection distance used subsurface projections prove mapping class group finite asymptotic dimension work subsurfaces organized finite number families two subsurfaces family always intersect interesting way effect projections subsurfaces family interact controlled way family subsurfaces organized associated simplicial complex called projection complex vertices projection complex subsurfaces perturbation projection distances thought measured along geodesics projection complex behaves similar relative hyperbolic case particular attraction property satisfied projection complex allows application variant construction proposition projection complex eventually yield following proposition let finitely many families subsurfaces finite exists mod collection subsets mod order final piece needed combine propositions proof theorem consequence rafi analysis short curves along rays curve contained subsurface remark proved topological rigidity results fundamental groups curved manifolds addition latter condition bounds curvature tensor covariant derivatives manifold torsion free discrete subgroups gln fundamental groups 
manifolds similar examples discussed section key difficulty action fundamental group manifold universal cover cocompact general strategy employed seems however different particular involve induction kind complexity groups considered intermediate step polycyclic groups argument directly reduces uses computations polycyclic groups raises following question family subgroups theorem replaced family virtually polycyclic subgroups recall cover constructed flow space needed plausible exist thinner covers work family virtually polycyclic subgroups mapping class group family theorem chosen significantly smaller isotropy groups action appear actions euclidean retracts family one might ask whether exist actions sequences homotopy coherent actions mapping class groups euclidean retracts smaller family used theorem references adams boundary amenability word hyperbolic groups application smooth dynamics simple groups topology arzhantseva bridson januszkiewicz leary minasyan infinite groups fixed point properties geom bartels squeezing higher algebraic bartels proofs conjecture topology geometric group theory volume springer proc math pages springer cham bartels coarse flow spaces relatively hyperbolic groups compos bartels bestvina conjecture mapping class groups preprint bartels farrell jones reich isomorphism conjecture algebraic topology bartels borel conjecture hyperbolic cat ann math bartels method revisited math bartels geodesic flow cat geom bartels reich equivariant covers hyperbolic groups geom bartels reich conjecture hyperbolic groups invent bartels reich group rings gln publ math inst hautes bartels reich coefficients conjecture adv baum connes geometric lie groups foliations enseign math baum connes higson classifying space proper actions group san antonio volume contemp pages amer math providence bestvina bromberg fujiwara constructing group actions applications mapping class groups publ math inst hautes bestvina guirardel horbez boundary amenability preprint 
bestvina mess boundary negatively curved groups amer math hsiang madsen cyclotomic trace algebraic spaces invent bowditch relatively hyperbolic groups internat algebra carlsson goldfarb integral novikov conjecture groups finite asymptotic dimension invent davis spaces category assembly maps isomorphism conjectures dress induction structure theorems orthogonal representations finite groups ann math enkelmann pieper ullmann winges conjecture waldhausen preprint farb relatively hyperbolic groups geom funct farrell hsiang space form problem invent farrell jones dynamics ann math bartels farrell jones isomorphism conjectures algebraic amer math farrell jones topological rigidity compact curved manifolds differential geometry riemannian geometry los angeles volume proc sympos pure pages amer math providence farrell jones rigidity aspherical manifolds glm asian fathi laudenbach thurston work surfaces volume mathematical notes princeton university press princeton translated french original djun kim dan margalit grayson reduction theory using semistability comment math groves manning dehn filling relatively hyperbolic groups israel guentner higson weinberger novikov conjecture linear groups publ math inst hautes guentner tessera notion geometric complexity application topological rigidity invent guentner willett dynamic asymptotic dimension relation dynamics topology coarse geometry math hambleton pedersen identifying assembly maps math geometry mapping class groups boundary amenability invent higson bivariant novikov conjecture geom funct higson kasparov groups act properly isometrically hilbert space invent higson roe amenable group actions novikov conjecture reine angew hsiang geometric applications algebraic proceedings international congress mathematicians vol warsaw pages pwn warsaw kammeyer conjecture arbitrary lattices virtually connected lie groups geom kasprowski groups finite decomposition complexity proc lond math soc kasprowski long thin covers cocompact flow 
spaces preprint kasprowski ullmann wegner winges conjecture virtually solvable groups preprint kida mapping class group viewpoint measure equivalence theory mem amer math klarreich boundary infinity complex curves relative space preprint knopf acylindrical actions trees conjecture preprint lafforgue bivariante pour les banach conjecture invent lafforgue conjecture coefficients pour les groupes hyperboliques noncommut survey classifying spaces families subgroups infinite groups geometric combinatorial dynamical aspects volume progr pages basel group rings proceedings international congress mathematicians volume pages hindustan book agency new delhi slides talk oberwolfach conjecture http reich conjectures ltheory handbook vol pages springer berlin reich rognes varisco algebraic group rings cyclotomic trace map adv actions euclidean retracts masur hausdorff dimension set nonergodic foliations quadratic differential duke math masur minsky geometry complex curves hyperbolicity invent masur minsky geometry complex curves hierarchical structure geom funct mineyev flows joins metric spaces geom mineyev conjecture hyperbolic groups invent minsky space reine angew mumford remark mahler compactness theorem proc amer math oliver sets group actions finite acyclic complexes comment math ozawa amenable actions applications international congress mathematicians vol pages eur math pedersen controlled algebraic survey geometry topology aarhus volume contemp pages amer math providence rafi characterization short curves geodesic geom ramras tessera finite decomposition complexity integral novikov conjecture higher algebraic reine angew reich varisco algebraic assembly maps controlled algebra trace methods preprint conjecture groups sawicki equivariant asymptotic dimension groups geom swan induced representations projective modules ann math ullmann winges conjecture algebraic spaces method preprint vogt homotopy limits colimits math wegner conjecture cat proc amer math wegner 
conjecture virtually solvable groups novikov conjecture groups finite asymptotic dimension ann math coarse conjecture spaces admit uniform embedding hilbert space invent novikov conjecture algebraic group algebra ring schatten class operators adv wwu mathematisches institut einsteinstr germany address url http
Sally's question and a conjecture of Shimoda

Shiro Goto, Liam O'Carroll and Francesc Planas-Vilanova

Abstract. In connection with a question of Sally, Shimoda asked whether a Noetherian local ring whose prime ideals different from the maximal ideal are complete intersections has Krull dimension at most two. In this paper, having reduced the conjecture to the case of dimension three, we consider the case where the ring is regular local of dimension three and explicitly describe a family of prime ideals of height two minimally generated by three elements. On weakening the hypothesis of regularity, we find we can achieve the same end if we add extra hypotheses: completeness and infiniteness of the residue field, and multiplicity of the ring at most three. In the second part of the paper we turn our attention to the category of standard graded algebras, where a geometrical approach via a double use of Bertini's theorem, together with a result of Simis, Ulrich and Vasconcelos, allows us to obtain a definitive answer in this setting. Finally, adapting work of Miller on prime Bourbaki ideals in local rings, we detail some technical results concerning the existence in standard graded algebras of homogeneous prime ideals with an excessive number of generators.

Introduction. A classic result states that for a Noetherian local ring the existence of a uniform bound on the minimal number of generators of its ideals is equivalent to the Krull dimension being at most one. Sally (see [Sal]) extended this result in the following way: let R be a Noetherian local ring; then there exists an integer N such that the minimal number of generators of an ideal I is bounded by N for every ideal I each of whose associated primes p satisfies dim R/p ≥ dim R − 1, where dim R denotes the Krull dimension. In particular, if dim R ≤ 2, there exists a bound on the minimal number of generators of the prime ideals of R. She remarked that it is an open question whether the converse is true, in other words, whether a Noetherian local ring for which there exists an integer bounding the minimal number of generators of all its prime ideals must satisfy dim R ≤ 2. This question has remained open, though much progress has been made since that time. Shimoda [Shi] asked whether a Noetherian local ring whose prime ideals different from the maximal ideal are complete intersections must have Krull dimension at most two. Observe that if the primes different from the maximal ideal are complete intersections, then in particular the cardinalities of their sets of minimal generators are bounded by the Krull dimension of the ring.

Date: September.

Though the question of Shimoda seems easier than that of Sally, in that its hypothesis is at first sight much stronger, it has proved difficult to answer. For the sake of simplicity, we call a Noetherian local ring a Shimoda ring if every prime ideal in the punctured spectrum is of the principal class, that is, if the minimal number of generators of every such prime ideal is equal to its height.
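In symbols, with $\mu$ denoting the minimal number of generators (our notation), the Shimoda condition on a Noetherian local ring $(R, \mathfrak{m})$ reads:

```latex
\mu(\mathfrak{p}) \;=\; \operatorname{ht}(\mathfrak{p})
\qquad \text{for every } \mathfrak{p} \in
\operatorname{Spec}(R) \setminus \{\mathfrak{m}\}.
% Such a prime is "of the principal class": it is generated by a
% regular sequence of length ht(p). The conjecture under discussion
% asserts that a Shimoda ring satisfies dim R <= 2.
```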
observe one reduce conjecture consideration ufd local domain krull dimension positive answer shimoda conjecture amounts showing either dim else dim exhibiting prime ideal height minimally generated elements question attractive combination simplicity seeming difficulty indeed able produce full generality intersection prime ideal height ufd local domain krull dimension however section able give partial positive answers firstly ring regular local krull dimension explicitly describe family prime ideals height minimally generated elements ideals determinantal ideals geometric case precisely defining ideals irreducible affine space monomial curves next trying weaken hypothesis regularity find need add extra hypotheses completeness infiniteness residue field multiplicity ring case dim exhibit ideal height minimally generated elements minimal prime gorenstein enable conclude shimoda ring extra hypotheses krull dimension second part paper turn attention category standard graded algebras graded definition notion shimoda ring section find geometrical approach via double use bertini theorem together result simis ulrich vasconcelos see suv allows obtain definitive result final section section sketch adaptation miller arguments mil case standard graded rings interest allows somewhat technical hypotheses produce homogeneous prime ideals requiring arbitrarily large number generators also allows present mild generalization core result section shimoda conjecture rings small multiplicity let noetherian local ring krull dimension residue class field remainder section fix following notation ring called shimoda ring every prime ideal punctured spectrum principal class minimal number generators every prime ideal equal height remark let noetherian local ring krull dimension shimoda domain particular shimoda ring need gorenstein since local domain gorenstein eis moreover completion shimoda ring necessarily shimoda ring noetherian local domain whose completion domain eis remark let 
noetherian local ring krull dimension shimoda ufd converse holds suppose shimoda also indeed take prime ideal height since height generated regular sequence say see dav remark set since prime regular sequence length purpose section prove following result stands multiplicity respect theorem let shimoda ring krull dimension suppose addition either regular complete infinite note passing weaken hypothesis regular need add extra hypotheses complete contains infinite residue field note either case local domain use properties domains without mention first show reduce dimension remark let shimoda ring krull dimension let shimoda ring krull dimension moreover regular regular complete complete infinite chosen superficial element particular theorem one suppose proof remark take given prime ideal therefore height height shimoda krull dimension clearly regular regular complete complete moreover infinite exists superficial element see propositions finally suppose theorem true suppose exists shimoda ring krull dimension successively factoring appropriate elements one would get shimoda ring krull dimension theorem case one would deduce contradiction therefore exist shimoda rings krull dimension one prove theorem case let fix notations hold rest section see remark let local ring let regular sequence take let let matrix minors change sign consider determinantal ideal generated minors ideal height two minimally generated three elements simplicity called herzog northcott ideal associated also set note proof remark height follows minimally generated elements follows proof minimal resolution shown proof theorem divided two parts first state regular case proposition let regular local ring krull dimension let regular system parameters let ideal associated gcd prime particular shimoda taking proposition one obtains following result corollary let regular local ring krull dimension let regular system parameters height two prime ideal minimally generated three elements case small multiplicity 
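The shape of these determinantal ideals can be displayed as follows; the generic $2 \times 3$ matrix below is a placeholder for the specific matrix built from the regular sequence in the text:

```latex
M \;=\; \begin{pmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{pmatrix},
\qquad
I \;=\; I_2(M) \;=\; (\Delta_1,\, -\Delta_2,\, \Delta_3),
\quad \Delta_k = \text{the } 2\times 2 \text{ minor omitting column } k.
% When grade(I) = 2, the Hilbert--Burch theorem gives the minimal free
% resolution
0 \longrightarrow R^{2} \xrightarrow{\;M^{\mathsf{T}}\;} R^{3}
\longrightarrow I \longrightarrow 0,
% so I has height two and is minimally generated by three elements,
% matching the properties asserted for the Herzog--Northcott ideal.
```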
following consider ideal associated proposition let complete gorenstein local domain krull dimension suppose addition infinite let minimal reduction minimal prime gorenstein particular shimoda let give proof theorem using propositions proof theorem using propositions let shimoda ring krull dimension theorem remark one suppose regular proposition suppose complete infinite remark since complete admits canonical module see remark ufd hence gorenstein see therefore proposition proving proposition note following reasonably elementary fact lemma let local ring krull dimension let system parameters let lengths lengths proof let since generated regular sequence natural graded isomorphism sending polynomial ring two indeterminates associated graded ring let denote initial form particular ideal let denote homogeneous ideal generated initial forms elements one see take observe consider short exact sequences lengths hence lengths thus lengths lengths lengths lengths let ideal generated initial forms since regular sequence permutable remark hence lengths lengths isomorphism one sees isomorphic free basis therefore lengths lengths remark result kiyek remark used proof lemma deals case regular sequence hand permutable says precisely base see hio respect one prove following fact uses generalization classic theorem rees theorem theorem let commutative ring let set let indeterminates homogeneous degree finite set monomials base proof result trivial rees theorem case suppose established result case replaced suppose homogeneous degree inductive hypothesis coefficients lie thus hence elements let let homogeneous degree rees theorem coefficients lie follows lies proves let monomials let degree due standard isomorphism sending let degrees occurring let denote ideal generated monomials among degree must show equivalently understanding whenever noting comaximal see lines prove induction item easily yields case monomials degree suppose holds replaced suppose first induction since case 
finally suppose hand case considered ideal generated monomials degree case required prove proposition going use formulae theory multiplicities see chapter section chapter sections completion since regular local proof proposition let maximal ideal generated regular system parameters generated regular system regular local ring maximal ideal let ideal associated parameters considered ideal associated since prove prime regarded prime therefore suppose complete notice assume complete local ring dimension generates reduction avoid repeating core aspects argument come prove proposition take associated prime hence height let onedimensional noetherian domain let integral closure quotient field since complete local ring nagata ring see hence finite particular noetherian ring since noetherian complete local domain integral closure local ring see die corollary therefore noetherian local integrally closed ring hence dvr let denote corresponding valuation abuse notation let denote image set applying equalities one gets following system equations third equation say sum first two reduce system two linearly independent equations considered whose solution see set gcd since gcd hence lmi clearly ideal set abuse notation consider regular sequence system parameters since annr lengthr lengths lemma equal lengths lengths lengthr krull dimension ideal moreover reduction since lengthr lengthr lengthr lengthd observe ideal local domain lengthd hand finite rankd rankd lengthd set degree extension residue fields lengthd lengthv finally since dvr lengthv therefore putting together equalities lengthr lengthr observe reasoning used complete local ring dimension generates reduction using regular deduce lengthr lengthr tensoring exact sequence one obtains exact sequence equality lengthr lengthr one deduces lengthr hence lemma nakayama implies turn proof proposition proof proposition first observe since infinite exist minimal reduction particular system parameters since complete noetherian local 
domain exists homomorphism power series ring three indeterminates via complete regular local ring finite extension see proof let defined indeterminate ker see proof polynomial case given set ker prime ideal set ideal associated take minimal prime suppose gorenstein reach contradiction since integral extension integral extension hence prime ideal height set formula free ranks ranks see base change follows free rank hence rank claim indeed suppose quotient field identified quotient field eis quotient field integral closure since quotient field finite hence integral extension also integral closure eis cit conductor equals maximal ideal since also ideal therefore therefore reduction maximal ideal since reduction hence integral closure ideal equals see since integral follows hence particular also ideal hence using fact gorenstein one lengthd lengthd lengthd lengthd lengthv contradiction thus local rings residue field hence since primary ideal noetherian local domain ranka regarding following ranka analogously since ranka hypothesis ranka associativity formula see letting vary minimal primes lengthrp number hence number equals unique lengthrp therefore contradiction since gorenstein gorenstein shimoda conjecture setting standard graded algebra connection shimoda property affine rings note following result proposition let affine domain krull dimension least contains prime ideal requires height generators proof since excellent domain regular locus reg subset spec hence exists element regular ring since hilbert ring prime ideal exists maximal ideal regular local ring height theorem proof exists spec dimension set since follows height yet requires least generators remark note however also standard graded algebra preceding result provide information whether resulting prime ideal homogeneous consider different analogous local case considered section influenced similarities theories local standard graded rings let standard graded algebra field graded integers sitting degree 
degree simplicity suppose infinite fact eventually suppose characteristic algebraically closed let denote irrelevant ideal take minimal number homogeneous generators see noting fix notation also suppose shimoda ring graded sense ring short meaning relevant homogeneous prime ideal generated height elements principal class note generators chosen homogeneous see finally suppose throughout krull dimension least proposition suppose ring krull dimension gorenstein domain satisfies serre condition proof taking minimal prime necessarily homogeneous condition see domain next let homogeneous prime maximal respect property relevant davis result dav remark generated regular sequence length height follows contains regular sequence length height hence see next since homomorphic image polynomial ring standard grading homogeneous morphism degree graded canonical module see eis may take homogeneous ideal gorenstein suppose hand proper ideal unmixed ideal height whose necessarily homogeneous associated primes therefore principal property let one associated primes say dvr localitzation principal ideal ptp say easily seen ptp since latter ideal associated primes agrees locally primes hence principal gorenstein follows finally show satisfies examine prime ideal height clearly need consider case first paragraph proof fle lemma see regular local ring result follows next consider effect noether normalization applied noting infinite result highlights another way present situation extent analogous section distinguish krull dimension denoted dimension projective scheme denoted thus see eis proposition let exists regular sequence homogeneous elements degree relabel free finitely generated graded module subring isomorphic polynomial ring variables standard grading natural homogeneous mapping degree proof see together discussion sta particularly note also remains invariant show reduce dimension rings proposition let homogeneous element ring lying ring proof first note element exists graded 
version nakayama lemma since field since homogeneous ideal set homogeneous generators lie previous observation pick suitable member set generators clearly standard graded ring least irrelevant ideal consider relevant homogeneous prime ideal unique relevant homogeneous prime ideal contains since fortiori hence minimal homogeneous generator height height result follows give main result theorem suppose ring base field characteristic proof suppose deduce contradiction finding homogeneous ideal height requires generating set cardinality proposition may suppose use without mention fact standard graded affine domain together properties domains catenary note hypotheses together fact char used remainder argument prove following standard graded affine domain field characteristic zero krull dimension relevant homogeneous prime ideal height step constructing prototype prime ideal seek since using proposition choose regular sequence homogeneous elements degree set let note homogeneous degree easily seen elements example note rad rad hence height way motivation note passing almost complete intersection northcott ideal vas also determinantal ideal ideal analyzed section also properties prove crucial analysis ideal focus sketch details follows setting letting denote matrix transpose det note regular sequence since element domain lie associated prime necessarily height height consequence equality rad rad follows since grade height note also given properties stated graded version theorem provides resolution resolution minimal hence also see using auslanderbuchsbaum formula hence ideal properties ideals see use develop suv along similar lines present setting consider polynomial extension note standard graded cohenmacaulay domain giving weighting set premultiplying adj one obtains relationship one easily checks result wish show prime ideal note homogeneous ideal generated quadrics ideal prototype prime ideal seek establish contradiction first show prime element note remarked regular 
element domain since next regular modulo otherwise would height would contradict fact rad rad rad height prime element follows kap hence also prime element since deduce regular sequence note also proper ideal since otherwise would follow would contradict fact actually regular modulo larger ideal hence unmixed height basic properties colon operation finite intersections primary ideals since easily seen radical contains elements since ideals height follows height hence regular modulo since unmixed height clear isomorphic domain hence domain indeed prime ideal recall ideal introduced let note det seen grade proper ideal hence theorem projective dimension grade hence height unmixed associated primes height argument established height actually showed height hence regular modulo localizing associated prime hence thus theorem setting minimal free resolution particular application formula shows ideal features seeking homogeneous prime ideal height minimally generated elements except projective variety consisting relevant homogeneous prime ideals containing lies proj proj employ double use bertini theorem project back proj step retracting back via double use bertini theorem standard graded algebra positive krull dimension note scheme proj integral integral domain use fact without comment setting scheme proj integral note apply fle satz role played images grade abuse notation continuing write respective images hence applying fle satz fact equals proj recall consists relevant homogeneous prime ideals contain make elementary observation ring element indeterminate natural induced isomorphism arising natural retraction maps identity map apply observation generic hyperplane section say proj generic hyperplane section say proj resulting finally integral subscheme proj hence fle satz set following standard notation fle section integral subscheme proj without loss generality may suppose using intersection sets note generated quadrics variety nondegenerate contained hyperplane set 
elementary observation therefore homogeneous prime ideal generated quadrics fix proceeding exists set depending set setting prime ideal grade height presentation carries give equality hence resolution carries give minimal presentation particular prime ideal seek yields desired contradiction remark note result optimal consider ring polynomial ring algebraically closed natural grading see projective nullstellensatz fact ufd ring technical results standard graded case adapt arguments used miller paper mil prove following results provide additional details necessary theorem mil corollary theorem let standard graded algebra field characteristic domain satisfies serre conditions suppose homogeneous prime ideals height principal possesses height homogeneous prime ideals requiring arbitrarily large number generators sketch proof let graded second syzygy minimal graded free resolution finitely generated graded finite projective dimension least free note syzygy theorem corollary subsequent remark rank let rank suppose graded presentation ignoring twists set giving indeterminate weight define matrix let coker hence presentation clearly rank note graded modules exact sequence graded homomorphism induced straightforward adaptation miller argument shows graded module easily seen homogeneous ideal graded isomorphism next show may suppose homogeneous ideal height suppose height obvious homogeneous analogue proof kap theorem given hypotheses homogeneous element unit unique product homogeneous elements generates homogeneous prime ideal note since domain product elements homogeneous element homogeneous hence taking finite set homogeneous generators find smallest homogeneous principal ideal homogeneous element contains clearly homogeneous ideal claim height least otherwise contained homogeneous prime height hypothesis homogeneous element would follow aba aba contradiction hence may suppose height least another straightforward adaptation miller argument shows precisely height fact 
prime finally replacing localization irrelevant maximal ideal directly apply argument mil prove corollary reference ideal zbd considered denotes irrelevant maximal ideal top note degree generators general combinations homogeneous forms degree next observe standard graded algebra irrelevant maximal ideal domain domain follows homogeneous ideal zbd considered prime ideal since requires least many generators result follows arguments proof step proof theorem easily adapted use following general situation adopt notation section theorem let standard graded algebra field characteristic domain suppose satisfies serre condition possesses homogeneous prime ideal height sketch proof light propositions straightforward adaptation original argument means need comment adapt details step viz application double use bertini theorem order finish proof replace localization irrelevant ideal say localization irrelevant ideal projective dimension since satisfies condition indeterminates depth formula therefore depth mimic argument part proof mil theorem cutting generic hyperplanes notation mil flenner bertini theorem mil analytically irreducible note depth hence another application bertini theorem allowed analytically irreducible short exact sequence quickly establish tord hence minimal free resolution afforded theorem descends via tensoring give minimal free resolution form repeating argument factoring shows full result follows remark suppose standard graded domain field every homogeneous prime ideal height respectively generated respectively elements easily follows first part proof theorem satisfies condition hence suppose characteristic satisfies hypotheses theorem follows theorem consequence theorem hand clear theorem also consequence theorem acknowledgment would like thank giral drawing paper mil attention thank roig useful discussions references bruns herzog rings cambridge studies advanced mathematics cambridge university press dav davis ideals principal class certain monoidal 
transformation pacific math die topics local algebra notre dame mathematical lectures university notre dame press notre dame ind eis eisenbud commutative algebra view toward algebraic geometry graduate texts mathematics new york fle flenner die von bertini lokale ringe math ann hio herrmann ikeda orbanz equimultiplicity blowing algebraic study appendix moonen berlin kap kaplansky commutative rings revised edition university chicago press chicago kiyek integral closure monomial ideals regular sequences rev mat iberoamericana matsumura commutative algebra second edition mathematics lecture note series publishing reading matsumura commutative ring theory translated japanese reid cambridge studies advanced mathematics cambridge university press cambridge mil miller bourbaki theorem prime ideals algebra northcott homological investigation certain residual ideal math ann carroll ideals type proc edinb math soc sal sally numbers generators ideals local rings lecture notes pure applied mathematics marcel dekker new york basel shi shimoda power stable ideals symbolic power ideals proceedings japan vietnam joint seminar commutative algebra hanoi vietnam december sta stanley hilbert functions graded algebras advances math suv simis ulrich vasconcelos jacobian dual fibrations amer math swanson huneke integral closure ideals rings modules london mathematical society lecture note series cambridge university press cambridge valabrega valla form rings regular sequences nagoya math vas vasconcelos computational methods commutative algebra algebraic geometry algorithms computation mathematics berlin shiro goto department mathematics school science technology meiji university tama kawasaki kanag japan goto liam carroll maxwell institute mathematical sciences school mathematics university edinburgh edinburgh scotland carroll francesc departament aplicada universitat catalunya diagonal etseib barcelona catalunya
| 0 |
Statistical and computational phase transitions in spiked tensor estimation

Thibault Lesieur, Léo Miolane, Marc Lelarge, Florent Krzakala, and Lenka Zdeborová

Abstract: We consider tensor factorization using a generative model and a Bayesian approach. We compute rigorously the mutual information and the minimal mean squared error (MMSE), and unveil information-theoretic phase transitions. In addition, we study the performance of approximate message passing (AMP) and show that it achieves the MMSE for a large set of parameters, so that factorization is algorithmically "easy" in a much wider region than previously believed. There exists, however, a "hard" region where AMP fails to reach the MMSE, and we conjecture that no polynomial algorithm will improve on AMP there.

Our study inscribes itself in a line of research on the tensor decomposition problem, which has many applications ranging from signal processing to machine learning. We consider a model in which the observed tensor is a noisy version of a randomly generated spike, and we analyze Bayes-optimal inference of the spike: we compute the associated mutual information and the minimum mean squared error (MMSE). We also investigate whether the MMSE is achievable by known efficient algorithms, in particular approximate message passing (AMP).

In the spiked tensor model, one observes an order-p tensor created from r symmetric unknown vectors that are to be inferred from the tensor while accounting for the noise. We denote by X the matrix that collects the r vectors. The observed tensor can thus be seen as a rank-r perturbation of a random symmetric tensor. We consider the setting where X is generated at random from a known prior distribution. The core question considered in this paper is: what is the best possible reconstruction one can hope for? In fact, we look at an even more general setting where the noise need not be additive. We denote by x_i the vector created by aggregating the i-th coordinates of the r vectors.

(This paper was presented at the IEEE International Symposium on Information Theory (ISIT) 2017, Aachen, Germany. The authors are with the Institut de Physique Théorique, CNRS and CEA Saclay, France; the Laboratoire de Physique Statistique, CNRS, Université Pierre et Marie Curie, École Normale Supérieure, PSL, Paris, France; and the Département d'informatique de l'ENS, École Normale Supérieure, CNRS, PSL Research University, INRIA, Paris, France.)

We assume that the x_i are generated independently from a known probability distribution. For simplicity of presentation we assume that all elements of the tensor are observed: in one case the coefficients are observed directly, while in the general case the observed tensor is generated through a noisy output channel P_out. The simplest situation corresponds to additive white Gaussian noise (AWGN).
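As a concrete illustration of the spiked tensor observation model, the sketch below generates a rank-one (r = 1), order-3 observation with AWGN and an i.i.d. standard Gaussian prior. The normalization sqrt(lam / n^(p-1)) is one common convention and an assumption on my part, since the exact scaling factors are not recoverable from this text; the noise tensor is also left unsymmetrized for brevity.

```python
import numpy as np

def spiked_tensor(n, lam, p=3, rng=None):
    """Generate a rank-one order-p spiked tensor
    Y = sqrt(lam / n**(p-1)) * x^{outer p} + W  (assumed normalization)."""
    rng = np.random.default_rng(rng)
    x = rng.standard_normal(n)          # spike with i.i.d. N(0,1) prior
    spike = x
    for _ in range(p - 1):              # build the order-p outer product x ⊗ ... ⊗ x
        spike = np.multiply.outer(spike, x)
    W = rng.standard_normal((n,) * p)   # i.i.d. Gaussian noise (not symmetrized here)
    return np.sqrt(lam / n ** (p - 1)) * spike + W, x
```

With this scaling the per-entry signal-to-noise ratio stays of order one as n grows, which is the regime in which the inference problem is neither trivially easy nor trivially hard.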
prior distribution output channel pout known write estimator marginalization following posterior distribution pout normalization constant depending observed tensor defined analogously instead study tensor estimation problem limit dimension rank remains constant factor ensure inference problem neither trivially hard trivially easy one deals signals kxi noise magnitude order factor used convenient rescaling ratio related work summary results recently numerous works matrix version setting particular explicit characterization mutual information optimal bayesian reconstruction error rigorously established large part results rely approximate message passing amp algorithm matrix estimation problems amp introduced also derived state evolution formula analyze performance generalizing techniques developed generalization larger rank general output channel considered following theorem proven know indeed amp achieves minimum error mmse large set parameters problem however might exist region denoted hard case polynomial algorithms improving amp known comparison much less work bayesian tensor estimation statistical physics measure considered random components gaussian called spherical glass rademacher ising glass amp tensor estimation actually equivalent equations spin glass theory context tensor pca equations studied richard montanari maximum likelihood estimation interestingly showed hard phase particularly large tensor estimation case side information component direct noisy observation estimation problem becomes easier however kind side information strong rarely available applications tight statistical limits present tensor model also studied special case rademacher ising prior generic priors upper lower bounds known rigorously summary results contribution aim bridge gap known general pout bayesian estimation matrices known tensors present following contributions amp algorithm state evolution analysis tensor estimation see sections channel universality result allows map 
generic channel pout model additive gaussian noise see section rigorous formula asymptotic mutual information mmse thus generalizing matrix results see section identification statistical computational phase transitions fact show soon effect prior taken properly account hard region shrinks considerably making tensor decomposition problem much easier hitherto believed least algorithms take prior information account reliable prior information distribution components rather realistic applications instance constraints negativity membership clusters imposed presented sections amp algorithm channel universality discuss section approximate message passing amp algorithm bayesian version problem relatively straightforward generalization done matrix estimation case present setting general amp derived belief propagation taking account every variable corresponding graphical model large number neighbors since incoming messages considered independent one use central limit theorem represent message gaussian given mean covariance crucial property called channel universality shares matrix estimation allows drastically simplify problem tensor estimation generic output channel pout justification property follows closely matrix estimation case refer reader first define fisher score tensor associated output channel pout fisher information log pout log pout understood function log acts informally speaking channel universality property states mutual information problem defined output channel pout one awgn variance amp algorithm written inference tensors depends data tensor output channel pout trough tensor effective noise amp involves auxiliary function depends explicitly prior follows define probability distribution normalization factor amp uses function fin defined expectation fin well covariance matrix fin shall denote overlap amp written iterative update procedure estimates posterior means uses auxiliary variables bit fin bit fin bit denotes hadamard product matrices corresponding 
componentwise power theoretical analysis state evolution amp evolution amp algorithm limit large systems tracked via set equations called state evolution estimation state evolution used heuristic derivation present case general rank prior output pout follows line line matrix estimation case detailed inference written terms order parameter describing overlap amp estimator iteration ground truth defined reads fin independent hadamard power matrix shall present rigorous proof tensor estimation rely instead standard arguments statistical physics performance amp algorithm understood initializing fixed point initialize infinitesimally small number accounting fact random initialization amp finite size infinitesimally correlated ground truth denote mamp fixed point state evolution resulting iterations initialization error achieved mseamp mamp zero mean covariance matrix optimal inference next goal analyze performance possibly intractable inference evaluates marginals posterior probability distribution error achieved procedure denoted minimum error mmse formally defined mmsen inf infimum taken measurable functions order compute mmse instrumental compute mutual information quantity related free energy statistical physics see section compute limit quantities one traditionally applies replica method stemming statistical physics take advantage fact bayesoptimal inference replica symmetric version method yields correct free energy replica method yields sup log mkk defined independent random variables denotes set symmetric positive matrices section prove result case replica free energy provides limit mutual information thanks theorem similar yields value mmse tensor estimation see sec denoting argmaxm get mmse lim mmsen proved rigorously case odd values see sec notice estimation problem symmetric permutations columns expected true without assumptions statistical computational evaluation derivative respect one check critical points fixed points state evolution equations allowing 
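For a rank-one spike with a zero-mean unit-variance Gaussian prior, the input function f_in is linear and the state evolution collapses to a scalar recursion in the overlap m. The sketch below iterates m_{t+1} = g/(1+g) with g = lam * m_t^(p-1); this closed form is my own simplification under one common normalization, not a formula quoted verbatim from the paper.

```python
def state_evolution(lam, p=3, m0=1e-3, iters=2000):
    """Scalar state evolution for a rank-one order-p spike with N(0,1) prior
    (sketch, normalization-dependent): m_{t+1} = g/(1+g), g = lam * m_t^(p-1)."""
    m = m0
    for _ in range(iters):
        g = lam * m ** (p - 1)
        m = g / (1.0 + g)
    return m
```

This reproduces the hard-phase phenomenology described in the text: for p >= 3 and a zero-mean prior, m = 0 is a stable fixed point for any lam, so the recursion started from an uninformative (infinitesimal) overlap stays at zero, while an informative initialization can reach a large overlap; for p = 2, in contrast, the uninformative initialization escapes as soon as lam exceeds 1.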
results read curve global maximum gives mmse possibly local maximum reached iteration uninformative initialization yields mseamp discuss interplay mmse mseamp working hypothesis paper amp yields lowest mse among known polynomial algorithms depending parameters model order tensor rank prior distribution output channel pout appears via fisher information distinguish two cases easy phase asymptotically amp bayes optimal mmse mseamp hard phase mmse mseamp given mmse mseamp denote borders hard phase exists follows information theoretic threshold limsup highest mmse mseamp algorithmic threshold liminf lowest mmse mseamp another threshold used paper critical value defined smallest one mamp estimate one noise infinite one mamp note definition must cases hard phase exist consider existing results maximum likelihood estimation suggest tensor decomposition limit considered paper means spiked model tensor decomposition algorithmically hard compared matrix case authors give good account needs scale known polynomial algorithms work estimation situation seems first sight similar indeed whenever prior zero mean get hard phase consequently huge seen follows indeed mean prior zero state evolution equations fixed point expanding state evolution around fixed point find whenever fixed point stable hence priors zero mean closer look however shows situation pessimistic indeed soon mean prior longer fixed point state evolution solve state evolution equations observe either amp performing optimally hard phase completely absent amp optimal performance give examples priors section rigorous results present section rigorous results case mentioned universality property reduces computation mutual information case additive white gaussian noise consider probability distribution admits finite second moment observation model reduces case independent define hamiltonian xip xip also write dpx dpx posterior distribution given reads dpx ehn appropriate normalizing factor free energy defined minus 
logarithm boltzmann probability divided averaged particular interest since related mutual information see log case expression simplifies use section log dpx exp expectation respect independent random variables proof reduces following theorem theorem formula free energy let probability distribution finite second moment log sup frs define inf infimum taken measurable functions observations using theorem see fact tensor mmse achieved let write posterior mean given difficult verify arguments matrix case see corollary increases noise level function thus convex function frs pointwise limit consequently frs values frs differentiable almost every values one also verify maximizer unique refer detailed proof matrix case thus obtain following theorem theorem almost every admits unique maximizer threshold maximal value lim epx asymptotic performance achieved random guess obtain thus precise location threshold sup epx let sample posterior independently everything else extension theorem derived priors bounded support tensor case gives almost every pth overlap concentrates around leads theorem odd suppose bounded support odd almost every mmsen showing implies theorem need introduce fundamental property bayesian inference nishimori identity proposition nishimori identity let couple random variables polish space let let samples given distribution independently every random variables let denote expectation respect expectation respect continuous bounded function ehf ehf proof equivalent sample couple according joint distribution sample first according marginal distribution sample conditionally conditional distribution thus equal law use proposition prove theorem proof theorem let denote expectation respect posterior distribution let sample distribution independently everything else best estimator term error posterior mean hxi hxn therefore mmsen hxi hxi ehx another sample independently everything else apply nishimori identity proposition obtain ehx ehx gives mmsen ehx deduce ehx 
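The Nishimori identity garbled in the proposition above has the following standard form, with angle brackets denoting the expectation over i.i.d. samples x^{(1)}, ..., x^{(k)} from the posterior P(.|Y) and E the expectation over the couple (X, Y):

```latex
% Nishimori identity: one posterior replica may be exchanged for the ground truth.
\mathbb{E}\,\Big\langle f\big(Y, x^{(1)}, \dots, x^{(k)}\big) \Big\rangle
  \;=\;
\mathbb{E}\,\Big\langle f\big(Y, x^{(1)}, \dots, x^{(k-1)}, X\big) \Big\rangle
```

It holds because, conditionally on Y, the ground truth X is distributed exactly as one more independent sample from the posterior, which is the argument sketched in the proof above.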
supposed odd concludes proof prove theorem matrix case proved explain adapted case prove limit one shows successively upper bound lim sup matching lower bound lim inf shown section one need prove theorem input distributions finite support assume situation adding small perturbation one key ingredient proof introduction small perturbation model takes form small amount side information kind techniques frequently used study spin glasses small perturbations forces gibbs measure verify crucial identities see context bayesian inference see small quantities side information breaks correlations signal variables posterior distribution let fix suppose access additional information ber symbol belong posterior distribution ehn appropriate normalization constant use notation thus obtained replacing coordinates revealed revealed values notation allow obtain convenient expression free energy perturbed model log log ehn next lemma shows perturbation change free energy order recall supposed support finite find constant lemma lemma follows direct adaptation proposition tensor case consequently suppose define ber independently everything else denotes expectation respect remains therefore compute limit free energy small perturbation shown perturbation forces correlations vanish asymptotically lemma lemma let write expectation respect let two independents samples independently everything else define notice random variable consequence lemma overlaps posterior distribution concentrates around lemma denotes expectation respect random variables lemma follows arguments section arguments presented section robust apply large class hamiltonians particular able apply sequel lemmas hamiltonians posterior distributions corresponding free energies guerra interpolation scheme lower bound obtained extending bound derived using interpolation already done tensors korada macris consider tensors special case rademacher lemma lim inf sup proof use interpolation let suppose observe given variables random 
variables define interpolating hamiltonian xip xip posterior distribution given reads exp appropriate normalization let log corresponding free energy notice let denote expectation respect posterior let sampled independently everything else using gaussian integration parts nishimori identity proposition one show see convexity function would would conclude like use inequality obtain proof lower bound lim inf lim inf lim inf however know almost surely bypass issue add sec small perturbation forces concentrates around value lemma without affecting interpolating free energy limit see arguments sec omit details rewriting previous calculations perturbation term concludes proof proving scheme going show arguments upper bound cavity computations approach extended tensor case lemma lim sup sup proof going compare system variables system variables define log log consequently lim sup lim sup thus going let variables variable decompose xip law one also decompose xip xip xip xip definition random variables independent everything else denote gibbs measure corresponding hamiltonian log rewrite log log sample independently everything else thus difference two terms correspond exactly terms sec show small perturbation system overlap planted configuration concentrates around value leads simplifications lim sup lim sup frs precise derivation reader invited report matrix case see sec since major difference tensor case point arguments presented commonly used study spin glasses analog cavity computations model developed sec concludes proof examples phase transitions stable unstable uninformative informative mse figure left panel comparison amp fixed point reached uninformative marked crosses informative strongly correlated ground truth marked pluses initialization fixed point equations stable fixed point blue unstable red data gaussian prior mean unit variance amp runs done system size central panel phase diagram order tensor factorization rank gaussian prior mean unit variance zone amp 
matches optimal performance mmse mse mseamp point amp zone mmse located xtri right tri xtri panel phase diagram order tensor factorization rank bernoulli prior function point located observed compare fig phase diagram presented matrix factorization case used state evolution eqs free energy compute values thresholds several examples prior distributions gaussian spherical spins rademacher ising spins bernoulli localization clustering tensor stochastic block model vector coordinate elsewhere examples values thresholds priors given table zero mean gaussian rademacher prior results indeed agree presented central right part fig present thresholds gaussian bernoulli prior function mean density respectively left part fig illustrates indeed fixed points state evolution agree fixed points amp algorithm prior gaussian rademacher bernoulli clusters log log table examples algorithmic thresholds tensor decomposition different priors factors gaussian case log converges large bernoulli case rescaling power convenience present quantities order one check describes large limit results gaussian prior section detail analysis state evolution rank gaussian prior mean variance pxgauss using one gets equation scalar inverse fisher information output channel turns soon equation exhibits multiple stable fixed points zero mean case one gets fixed point stable whatever noise therefore amp achieve performance better random guessing ref studies scaling amp algorithms succeed positive mean however amp algorithm able recover signal values alg xalg dyn xdyn xdyn xalg defined new threshold smallest state evolution unique fixed point know analytical formula figure computed numerically point curve meet located using expressions derive also compute limit get log scaling agrees large behavior derived results clustering prior interesting example prior rank tensor estimation pxclusters describes model clusters due channel universality prior also describes stochastic block model dense considered sparse 
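The claim that a nonzero prior mean removes m = 0 as a fixed point can be made concrete. For a Gaussian prior of mean mu and unit variance the denoiser is affine, which (under the same assumed normalization as the scalar recursion above) gives the update m_{t+1} = (mu^2 + g(1+mu^2))/(1+g) with g = lam * m_t^(p-1); this update is my own derivation, offered as a sketch rather than the paper's exact formula. The iteration starts from the overlap mu^2 achieved by the trivial prior-mean estimator.

```python
def se_mean(lam, mu, p=3, iters=5000):
    """State evolution sketch for a Gaussian prior N(mu, 1) (assumed normalization)."""
    m = mu ** 2  # overlap of the prior-mean estimator; nonzero whenever mu != 0
    for _ in range(iters):
        g = lam * m ** (p - 1)
        m = (mu ** 2 + g * (1 + mu ** 2)) / (1 + g)
    return m
```

For mu = 0 this reduces to the zero-mean recursion stuck at m = 0; for any mu != 0 the iteration starts at mu^2 > 0 and, at large lam, climbs close to E[X^2] = 1 + mu^2, illustrating why the hard region shrinks once the prior mean is properly exploited.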
model considered detail clustering prior mean also exhibits transition phase recovery clusters better chance possible phase analyze equations first notice stable fixed point form bir identity matrix matrix filled ones means estimate marginals carry information means perfect reconstruction state evolution becomes function defined studied taylor expansion notice always fixed point expanding first order one gets fixed point therefore becomes unstable analyzing prove fixed point rather finding fixed point iteratively equations allow draw fixed point stable unstable stable fixed point next question whether first second order phase transition answer one needs analyze whether fixed point close stable unstable compute get using therefore stable fixed point close system must first order phase transition discontinuity mseamp two clusters second order phase transition however analyzing state evolution numerically observed discontinuity later three clusters always detected discontinuities values three clusters values given table acknowledgment work supported erc european union grant agreement references michael aizenman robert sims shannon starr extended variational principle model physical review animashree anandkumar rong daniel hsu sham kakade matus telgarsky tensor decompositions learning latent variable models journal machine learning research maria chiara angelini francesco caltagirone florent krzakala lenka spectral detection sparse hypergraphs annual allerton conference communication control computing page jean barbier mohamad dia nicolas macris florent krzakala thibault lesieur lenka mutual information symmetric matrix estimation proof replica formula advances neural inf proc systems page mohsen bayati andrea montanari dynamics message passing dense graphs applications compressed sensing information theory ieee transactions andrzej cichocki danilo mandic lieven lathauwer guoxu zhou qibin zhao cesar caiafa huy anh phan tensor decompositions signal processing 
applications multiway component analysis ieee signal processing magazine crisanti sommers approach spherical spin glass model phys andrea crisanti sommers spherical interaction spin glass model statics zeitschrift physik condensed matter deshpande montanari optimal sparse pca ieee international symposium information theory pages francesco guerra broken replica symmetry bounds mean field spin glass model commun math dongning guo shlomo shamai sergio mutual information minimum error gaussian channels ieee transactions information theory satish babu korada nicolas macris exact solution gauge symmetric glass model complete graph journal statistical physics krzakala mutual information matrix estimation ieee inf theory workshop pages marc lelarge miolane fundamental limits symmetric matrix estimation lesieur krzakala mmse probabilistic matrix estimation universality respect output channel annual allerton conf control lesieur krzakala phase transitions sparse pca ieee int symp inf theory page thibault lesieur florent krzakala lenka constrained matrix estimation phase transitions approximate message passing applications journal statistical mechanics theory experiment ryosuke matsushita toshiyuki tanaka matrix reconstruction clustering via approximate message passing advances neural information processing systems page parisi virasoro theory beyond volume world scientific andrea montanari estimating random variables random sparse observations trans emerging tel dmitry panchenko model springer science business media amelia perry alexander wein afonso bandeira statistical limits spiked tensor models arxiv preprint sundeep rangan alyson fletcher iterative estimation constrained matrices noise ieee int symp inf theory pages emile richard andrea montanari statistical model tensor pca advances neural information processing systems page nicholas sidiropoulos lieven lathauwer xiao kejun huang evangelos papalexakis christos faloutsos tensor decomposition signal processing machine 
learning ieee transactions signal processing michel talagrand mean field models spin glasses volume basic examples volume springer science business media thouless anderson palmer solution solvable model spin glass phil magazine lenka florent krzakala statistical physics inference thresholds algorithms advances physics
| 10 |
Rate region boundary of the SISO Z-interference channel with improper signaling

Christian Lameiro, Member, IEEE, Ignacio Santamaría, Senior Member, IEEE, and Peter Schreier, Senior Member, IEEE

(C. Lameiro and P. Schreier are with the Signal and System Theory Group, Paderborn, Germany. I. Santamaría is with the Department of Communications Engineering, University of Cantabria, Spain.)

Abstract: This paper provides a complete characterization of the boundary of the achievable rate region, called the Pareto boundary, of the Z-interference channel, when interference is treated as noise and the users transmit complex Gaussian signals that are allowed to be improper. Considering the augmented complex formulation, we derive a necessary and sufficient condition for improper signaling to be optimal. This condition is stated as a threshold on the interference channel coefficient, which is a function of the interfered user's rate, and it allows insightful interpretations of the behavior of the achievable rates in terms of the circularity coefficient, i.e., the degree of impropriety. Furthermore, the optimal circularity coefficient is provided in closed form. Because of its simplicity, the obtained characterization permits interesting insights into when and why improper signaling outperforms proper signaling in the single-antenna Z-interference channel. We also provide a discussion of the optimal strategies and of the properties of the Pareto boundary.

Index Terms: Improper signaling, Z-interference channel, Pareto boundary.

I. INTRODUCTION

It is widely known that proper Gaussian signals are capacity-achieving in different wireless communication networks, such as broadcast channels. For this reason, the use of this signaling scheme is generally assumed in the study of multiuser wireless networks. The optimality of proper signaling stems from the maximum entropy theorem, which states that the entropy of a random variable subject to a power constraint is maximized by a proper Gaussian distribution. However, in networks where interference is a major limiting factor, proper Gaussian signaling has recently been shown to be suboptimal, and improper Gaussian signaling, also known as asymmetric complex signaling, has been proved to outperform proper signaling in different interference networks. An improper complex random variable differs from its proper counterpart in that its real and imaginary parts are correlated or have unequal variance; in other words, the random variable is correlated with its complex conjugate. Improper signals arise naturally in communications, e.g., due to gain imbalance between the branches, or due to the use of specific digital modulations such as binary phase shift keying (BPSK) and Gaussian minimum shift keying
gmsk whenever received signal improper linear operations must replaced widely linear operations linear random variable complex conjugate order fully exploit correlation signal complex conjugate design widely linear receivers extensively studied literature see references therein however transmission improper signals handle interference effectively rather new line research would like add digital modulation schemes yield cyclostationary signals mean autocovariance complementary autocovariance funtions periodic shown exploiting property along impropriety leads improved performance example shows optimal transmit signal must also cyclostationary receiver corrupted cyclostationary gaussian noise first study benefits improper signaling interference management carried interference channel work showed improvement terms dof represent maximum number streams characterize asymptotic similar dof results derived mimo interfering broadcast channel mimo however improper signaling increases achievable dof also achievable rates networks optimal rate region boundary maximally improper transmissions perfect correlation real imaginary parts either zero real imagniary part derived showing substantial improvements proper signaling additionally proposed suboptimal design improper transmit parameters outperforms proper maximally improper scheme similar suboptimal design also proposed singleoutput miso improper signaling also applied reduce symbol error rate mixed approach addition use improper gaussian signaling also shown beneficial multiuser scenarios broadcast channel linear precoding communications underlay cognitive radio networks particular case also known difference respect fact one receivers affected capacity region known strong strong interference regimes achievable operation receiver less complex techniques also studied see characterization rate region boundary successive interference cancellation receivers even techniques known whether proper signaling optimal nevertheless 
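The gain of widely linear over strictly linear processing mentioned above is easy to reproduce numerically. The sketch below estimates a maximally improper signal (a real Gaussian viewed as a complex random variable) observed in proper noise; the scenario and parameter values are illustrative assumptions of mine.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
x = rng.standard_normal(N)          # maximally improper signal: real Gaussian, kappa = 1
n = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)  # proper noise
y = x + n

# Strictly linear LMMSE estimate xhat = c*y: cannot exploit E[y^2] != 0.
c = np.mean(x * np.conj(y)) / np.mean(np.abs(y) ** 2)
mse_linear = np.mean(np.abs(x - c * y) ** 2)

# Widely linear LMMSE estimate xhat = c1*y + c2*conj(y),
# obtained here as a least-squares fit on the features (y, conj(y)).
F = np.column_stack([y, np.conj(y)])
coef, *_ = np.linalg.lstsq(F, x.astype(complex), rcond=None)
mse_wl = np.mean(np.abs(x - F @ coef) ** 2)
```

In this example the strictly linear estimator attains an MSE of about 1/2 while the widely linear one attains about 1/3, because only the latter exploits the correlation between the observation and its complex conjugate.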
convenient perform linear operations treating interference noise complexity reduced restricted linear operations improper signaling presents useful tool improve performance proper signaling improper signaling recently considered maximizing scheme derived closed form end considered model complex signals regarded real signals double dimension however despite remarkable efforts model insightful augmented complex model works signal complex conjugate example circularity coefficient measures degree impropriety quantity easily derived augmented complex formulation much difficult express counterpart zic improper signaling addressed journal version inclusion spatial dimension makes complete analytical assessment intractable authors proposed heuristic scheme optimize widely linear operation transmitter permits rates users way obtained achievable rate region larger obtained proper signaling although authors mainly focused representation also considered augmented complex formulation thus optimal transmission scheme interfered user obtained using former whereas interfering user latter work adopt augmented complex model provide complete insightful characterization optimal rate region boundary called pareto boundary users may transmit improper gaussian signals assuming interference treated noise main contributions summarized next extend results one point rate region boundary derived provide complete characterization pareto optimal boundary closed form show rate region boundary described threshold interference channel coefficient determines improper signaling optimal adopting augmented complex formulation provide point boundary notice different also conveys desired message see references therein expressions transmit powers circularity coefficients direct measure degree impropriety transmit signals permits insightful conclusions full assessment improvements improper signaling proper signaling thus analyze degree impropriety affects rate different boundary points investigate 
conditions must fulfilled improper signaling outperform proper signaling connection optimal circularity coefficients users indepth discussion characterization also provided rest paper organized follows section provides preliminaries improper random variables describes system model characterization rate region boundary derived section iii discussion results presented section along several numerical examples illustrating findings finally section concludes paper ystem model preliminaries improper complex random variables first provide definitions results improper random variables used throughout paper comprehensive treatment subject refer reader variance complex random variable defined absolute value expectation operator complementary variance complex random variable defined called proper otherwise improper furthermore valid pair variance complementary variance circularity coefficient complex random variable defined absolute value quotient complementary variance variance circularity coefficient satisfies thus measures degree impropriety proper otherwise improper call maximally improper system description consider siso symbols extensions without loss generality sake exposition standard form depicted fig denoting real channel coefficient transmitter receiver signal receivers modeled fig siso standard form model described three parameters power budgets interference channel coefficient transmitted signal noise ith user respectively additive white gaussian noise awgn variance assumed proper whereas transmitted signals complex gaussian random variables variance complementary variance thus rate achieved user function design parameters given variances received signals complementary variances received signals variances signals complementary variances assuming power budget ith user achievable rate region improper gaussian signaling union achievable rate tuples notice included constraint stated section condition must fulfilled valid pair variance complementary variance iii 
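The definitions of the variance, complementary variance, and circularity coefficient above translate directly into code. The sketch below draws zero-mean complex Gaussian samples with a prescribed circularity coefficient kappa (taking the complementary variance real and nonnegative, which loses no generality up to a phase rotation) and estimates kappa back from samples.

```python
import numpy as np

def improper_gaussian(n, kappa, rng=None):
    """Zero-mean complex Gaussian samples with unit variance and circularity
    coefficient kappa in [0, 1] (kappa = 0: proper; kappa = 1: maximally improper)."""
    rng = np.random.default_rng(rng)
    u = rng.standard_normal(n)
    v = rng.standard_normal(n)
    return np.sqrt((1 + kappa) / 2) * u + 1j * np.sqrt((1 - kappa) / 2) * v

def circularity_coefficient(x):
    """kappa = |complementary variance| / variance."""
    return abs(np.mean(x ** 2)) / np.mean(np.abs(x) ** 2)
```

One can check that E[|x|^2] = 1 and E[x^2] = kappa under this construction, so the estimated coefficient matches the prescribed one up to sampling error.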
pareto boundary rate region pareto boundary rate region described comprises pareto optimal points defined follows definition call rate pair section characterize boundary deriving optimal transmission parameters achieve point boundary first notice since user interfere user optimal transmit strategy maximizes achievable rate consequently transmit power must maximized implies second valid pair variance complementary variance consequently complementary variance expressed circularity coefficient measures degree impropriety hence equivalent considerations expressed clear maximized minimized yields min observe user transmits proper signal user must also transmit proper signal setting similarly user transmits improper signal signal transmitted user must also improper according difference phases complementary variances desired interference signals receiver phase difference interpreted looking joint distribution real imaginary parts desired signal interference receiver level contours distributions ellipses whose major axes rotated respect signal interference power concentrated along orthogonal dimensions observe following optimal choice given effect compensated receiver sake brevity omit dependence design parameters relevant thus achievable rate user independent specific value furthermore since user affected interference also impact achievable rate hence without loss generality take considerations design parameters reduced transmit power circularity coefficient user manipulations achievable rates user user function design parameters given achievable rate region defined expressed order characterize boundary region defined notice achievable rate user bounded achievable rate user corresponding pareto optimal point given one maximizing rate user cast following optimization problem maximize subject fig example transmit power power constraints user shaded area set possible transmit powers value constraint given proof please refer appendix result lemma equivalently state problem 
given thus compute every point rate region boundary varying solving problem set constraints problem defines feasibility set design parameters consists two constraints affecting design parameters independently namely power budget constraint bounds circularity coefficient additional one jointly constrains latter expresses rate constraint user specific point region boundary determined computed given constraint essentially limits transmit power user consequently rewrite convenient form express following lemma lemma let rate constraint equivalent power maximize subject min sake illustration plot fig example two constraints affecting transmit power user namely obviously increasing since interference higher degree impropriety less harmful user tolerates higher amount interference power without reducing achievable rate achieve global maximum problem clear must set min explicitly expressed dependence consequently number design parameters reduced one simplified maximize subject expressed lemma let assume exists proof please refer appendix lemma leads following key result lemma let otherwise dependency described follows increases monotonically exists otherwise furthermore increases monotonically values proof please refer appendix illustrate characterization provided lemma plot fig rate user normalized proper signaling rate examples three cases described lemma use gives notice three curves vary slightly three chosen values close thresholds lemma describes interference constraint shapes dependency rate user degree impropriety transmitted signal thus notice expressing function also function key task determine optimal circularity coefficient second user words degree impropriety transmit signal achievable rate given maximized forthcoming lines analyze maximized improper signal derive optimal value cases want determine conditions must fulfilled improper signaling outperform conventional proper signaling since two different power constraints affecting namely power budget user 
interference power created user start dropping power budget constraint analyze interference constraint affects rate user function first present following lemma fig illustration lemma set yields drop power budget constraint alternatively power budget sufficiently high improper signaling outperforms proper signaling values values always suboptimal clearly seen impact power budget constraint cases maximizes however increasing meaningful also permits increasing transmit power power budget constraint considered may exist hence increasing beyond point permit increase transmit power see fig therefore improper signaling optimal see fig hand improper signaling optimal condition means proper signaling permit transmitting maximum power thus still power left exploited improper signaling improve achievable rate ingredients derive complete characterization optimality improper signaling scenario consequently region main result presented following theorem theorem let define max otherwise improper signaling required maximize max furthermore expression holds optimal circularity coefficient otherwise minimum value proof please refer appendix concluding section introduce following definition useful describe properties pareto boundary definition call region points rate region boundary alternatively call remaining points rate region region alternatively iscussion numerical examples section provides discussion derived characterization along numerical examples illustrating remarkable features improper signaling afterwards connection related works literature presented optimal strategies optimality proper signaling pointed beginning section iii proper signaling optimal strategy one users also optimal one means point region boundary achieved users employing signaling scheme either proper improper combination maximally improper signaling users optimal one boundary point noticed one boundary point users simultaneously transmit maximally improper signal happens one point due fact rate 
constraint fulfilled corresponds case user tolerates infinite amount maximally improper interference along orthogonal direction see second equation however user may increase rate increasing operating power budget since maximum circularity coefficients fig dependency optimal circularity coefficients transmit power purpose increasing increase well continuous function setting always suboptimal however reasoning applicable since turns discontinuous function case intuition behind behavior first user achieves corresponding rate maximally improper signal interference orthogonal signal subspace second user also transmits maximally improper signal consequently must hold furthermore since condition equivalent users transmit maximally improper signals boundary point illustrate property provide two simulation examples first example consider channel coefficient power budgets figure shows optimal circularity coefficients transmit power user example holds users transmit maximally improper signals observe discontinuity maximum transmit power user explained later notice approaches maximum value goes towards remains static although might seem violate statement section equal zero happens case circularity coefficient longer meaningful since transmit power zero therefore boundary points satisfy claim also clear according optimal value circularity coefficients maximum fig dependency optimal circularity coefficients transmit power second example consider case hence expected boundary points achieved users transmitting maximally improper signals observed fig depicts dependency circularity coefficients transmit power rate boundary point user chooses transmit signal maximally improper optimal circularity coefficient user equals fig also observe discontinuity maximum transmit power circularity coefficients approximately discontinuity different one observed fig explained follows point discontinuity max furthermore point means maximally improper signaling achieves rate proper signaling proper 
signaling starts outperforming improper signaling increases beyond point case optimal circularity coefficient jumps maximum transmit power jumps relationship expression also permits drawing insightful conclusions relationship circularity coefficients users according theorem implies hence holds ratio sir greater one case noticed signal transmitted first user never chosen maximally improper point pareto boundary behavior clearly observed fig hand alternatively sir equal lower one holds whenever seen fig signal transmitted user chosen maximally improper points boundary remain words implies first user decrease circularity coefficient however equals corresponds point degree impropriety first user start decreasing since rates achievable properties pareto boundary behavior regions region refers power limitation second user comprises boundary points satisfying alternatively region improper signaling always optimal long power budget sufficiently high words optimality improper signaling determined depends goes towards increases furthermore user achieve required rate maximally improper signal case interference along unused dimension impact achievable rate therefore region user must choose circularity coefficient smaller one achieve desired rate hence tolerated interference finite values result transmit power second user eventually limited grows bounds achievable rate words optimality improper signaling eventually determined independent power budget transition interferencelimited regions interesting feature may abrupt changes achievable rate user shift one region due jump tolerable interference power observed fig explained follows region transmit power always equals power budget improper signaling optimal however region transmit power dominated function power budget exceeds value function hence using easily seen may jump maximum transmit power discontinuity maximum transmit power user function whenever lower sir prominent power jump whereas jump observed sir equal greater 
discontinuity maximum transmit power implies similar jump maximum achievable rate user making illustrated figs pareto boundary rate region depicted previously considered examples respectively also depict figures rate region boundary proper signaling enlargement rate region due time sharing figure corresponds scenario sir one therefore observe pareto boundary discontinuous explained earlier result jump maximum proper signaling proper signaling time sharing improper signaling improper signaling time sharing proper signaling proper signaling time sharing improper signaling improper signaling time sharing maximum fig pareto boundary proper improper transmissions fig pareto boundary proper improper transmissions maximum transmit power user also observed fig specifically maximum transmit power interference limited region equals power jump notice discontinuity approximately corresponds rates interval achievable belong pareto boundary defined beginning section iii similarly rate pairs achievable belong pareto boundary therefore two line segments plotted dashed lines fig scenario corresponding fig presents sir greater one transition regions presents discontinuity jump transmit power see fig notice discontinuity maximum transmit power due falling threshold cause discontinuity pareto boundary previously explained point max holds proper improper signaling achieve rate therefore changes smoothly even though transmit power circularity coefficients jump figures also illustrate dependency region different region specifically decreases slowly former due fact examples holds entire region therefore decrease due increase region however decreases towards increases stronger effect achievable rate transition two different behaviors leads sharp bend pareto boundary observed figs corresponds transition regions holds entire region improper signaling optimal region conditions imply optimal transition point therefore satisfy rate first user must decrease move region changes behavior points 
region optimal increasing keeping constant permits achieving required move region reached hence dependency change value slightly shifted right figures also show enlargement rate region due improper signaling examples interference level significant achievable rate region improper signaling substantially larger proper signaling especially scenario shown fig also worth highlighting time sharing provides substantial enlargement rate region proper improper signaling still significant gap performance strategies fig rate region achieved improper signaling convexified time sharing extreme points pareto boundary correspond proper signaling transmissions point transition interferencelimited regions users transmit maximally improper signals contrast pareto boundary points region belong convex hull fig time sharing improves performance region operation regimes considering possibly suboptimal approach always treating interference noise may distinguish following operation regimes account optimality improper signaling strictly improper regime say strictly improper regime regime treating interference noise points satisfying achieved improper signaling selective regime max say selective regime regime treating interference noise subset points satisfying achieved improper signaling strictly proper regime max say strictly proper regime regime treating interference noise points achieved proper signaling note always belongs pareto boundary achieved proper signaling operation regimes described explain optimal strategies boundary points satisfying strictly improper regime obtained making use lemma establishes sufficient condition improper signaling optimal proper signaling allow maximum power transmission therefore max holds boundary points satisfying furthermore easily checked max since increasing function condition optimality improper signaling fulfilled strictly improper regime points except example operation strictly improper regime given fig corresponding fig figures correspond 
selective regime observe improper signaling optimal interval move region region max therefore proper signaling optimal strategy whole region case benefits improper signaling limited region notice increases condition converges relationship previous work finally would like connect results related works literature previous work studied similar scenario context underlay cognitive radio considered restriction first user transmits proper signals characterization maximum achievable rate user derived terms threshold specifically improper signaling shown optimal threshold strictly higher one obtained general easily seen agreement fact let first user optimize circularity coefficient rate achieved second user increase also considered transmit strategy maximizes derived closed form based model although model usually convenient optimization insightful augmented complex model since features improper signal easily captured case degree impropriety elegantly given circularity coefficient would like stress one point rate region characterized whereas work completely characterized boundary rate region furthermore since consider augmented complex model provided closedform formulas circularity coefficients thus providing insightful description improper signaling behaves scenario nevertheless conclusions drawn fall within characterization rate region boundary looking structure maximizing transmit strategies observe improper signaling chosen condition belongs strictly improper regime solution presented seen special case furthermore according improper signaling preferred maximization signal transmitted one users chosen maximally improper selection user depends whether condition holds agreement previous discussion pointed circularity coefficient first user always equal smaller second user whenever aforementioned condition fulfilled case pareto optimal point corresponds sumrate maximization point satisfies otherwise maximized first user transmits maximally improper signal case circularity 
coefficient second user smaller except mimo improper signaling considered due inclusion spatial dimension analytical characterization improper signaling similar theorem becomes intractable notice siso impropriety completely described circularity coefficient however mimo case description impropriety means complementary covariance matrix characterized set circularity cients unitary matrix authors proposed heuristic scheme design complementary covariance matrix transmitted signal user permits easy control degree impropriety interesting aspect fact completely stick particular model representation improper signals thus use model optimize transmission scheme user design scheme second user means augmentedcomplex formulation interestingly achievable rate regions obtained shape similar figs suggests conclusions might extended general mimo onclusion analyzed benefits improper signaling assumption interference treated gaussian noise derived complete insightful characterization pareto boundary rate region corresponding transmit powers circularity coefficients characterization derived analyzing circularity coefficients affect performance different boundary points specifically shown improper signaling optimal interference coefficient exceeds given threshold depends rate achieved interfered user shown rate region substantially enlarged using improper signaling especially relative level interference high acknowledgements work lameiro schreier supported german research foundation dfg grant schr work santamaria supported ministerio economia competitividad mineco spain projects rachel carmen authors would like thank anonymous reviewers valuable comments helped improve quality paper ppendix proof emma let first consider case equivalent quadratic inequality convex one positive root given real imaginary part means holds whenever implies hence simplified given second branch equivalent quadratic inequality equivalent power constraint given one roots equation order determine one two roots 
must considered use fact decreasing means least one roots violate condition given roots notice therefore convex one negative one positive root equivalent power constraint given latter satisfies otherwise rate expression must considered concave roots either positive negative complex last two cases none satisfies equivalent power constraint must obtained expression two roots positive monotonicity fulfilled largest root hence equivalent power constraint determined smallest root cases root must considered obtained taking positive square root manipulations yields given first branch concludes proof ppendix roof emma derivative respect let first consider case plugging obtain notice rate constraint user fulfilled maximally improper signal user achieve rate transmitting power holds values holds equality therefore evaluating taking derivative respect yields using expression holds plugging obtain denominator expression shown positive follows since assuming holds hence positive consequently obtain depend seen let side holds equality since increases consequently side holds strict inequality result derivative positive whenever concludes proof ppendix roof emma lemma increases monotonically long clearly also zero point thus apply yielding proves first case second case improper signaling outperforms proper signaling lemma also know improper signaling outperforms proper signaling rate improvement strictly positive therefore holds must evaluate holds first consider boundary points satisfying since points user achieve required rate maximally improper signal consequently always fulfilled yields second branch boundary points satisfying holds therefore using corresponding branch yields yields taking account yields first branch finally lemma yields third case concludes proof ppendix roof heorem lemma improper signaling optimal however condition sufficient since take power budget constraint account firstly since increasing improve rate permits increasing transmit power well must 
equivalent greater quantity also know lemma optimal otherwise clear improper signaling optimal optimal circularity coefficient satisfy let denote circularity coefficient using therefore evaluate condition fulfilling expression let first consider case plugging expression condition manipulations yields quadratic inequality respectively given quadratic expression convex one positive root hence condition given largest root expression however applicable holds threshold using holds otherwise thus obtain using first branch alternatively taking equality yields plugging expression obtain manipulations yields since quadratic expression convex consider largest root expected one roots corresponds maximum power allowed proper signaling since already considered threshold use second root simplified root largest one inequality satisfied exceeds value yields condition combining obtain must hold improper signaling optimal given however condition valid consequently improper signaling optimal greater dominating threshold max expression satisfied lemma must increased thus optimal circularity coefficient given concludes proof eferences neeser massey proper complex random processes applications information theory ieee transactions information theory vol jul cadambe jafar wang interference alignment asymmetric complex conjecture ieee transactions information theory vol lameiro siso interference channel improper signaling proc ieee int conf budapest hungary jun wang gou jafar optimality linear interference alignment mimo interference channel constant channel coefficients online available http shin park park lee new approach interference alignment asymmetric complex signaling multiuser diversity ieee transactions wireless communications vol mar yang zhang interference alignment asymmetric complex signaling mimo channels ieee transactions communications vol jorswieck improper gaussian signaling siso interference channel ieee transactions wireless communications vol zeng yetis gunawan guan 
zhang transmit optimization improper gaussian signaling interference channels ieee transactions signal processing vol jun zeng zhang gunawan guan optimized transmission improper gaussian signaling miso interference channel ieee transactions wireless communications vol nguyen zhang sun improper signaling symbol error rate minimization interference channel ieee transactions communications vol mar lagen agustin vidal coexisting linear widely linear transceivers mimo interference channel ieee transactions signal processing vol hellings joham utschick qos feasibility mimo broadcast channels widely linear transceivers ieee signal processing letters vol kim jeong sung lee asymmetric complex signaling relay channels proceedings international conference ict convergence ictc lameiro schreier benefits improper signaling underlay cognitive radio ieee wireless communications letters vol lameiro schreier analysis maximally improper signalling schemes underlay cognitive radio proceedings ieee international conference communications london jun amin abediseid alouini outage performance cognitive radio systems improper gaussian signaling proceedings ieee international symposium information theory isit hong kong china jun kurniawan sun improper gaussian signaling scheme channel ieee transactions wireless communications vol jul lagen agustin vidal improper gaussian signaling channel proceedings ieee international conference acoustics speech signal processing icassp florence italy may superiority improper gaussian signaling wireless interference mimo scenarios ieee transactions communications vol schreier scharf statistical signal processing complexvalued data theory improper noncircular signals cambridge cambridge univ press adali schreier scharf signal processing proper way deal impropriety ieee transactions signal processing vol random vectors channels entropy divergence capacity ieee transactions information theory vol may gerstacker schober lampe receivers widely linear processing 
channels ieee transactions communications vol buzzi lops sardellitti widely linear reception strategies layered wireless communications ieee transactions signal processing vol jun chevalier pipon new insights optimal widely linear array receivers demodulation bpsk msk gmsk signals corrupted noncircular saic ieee transactions signal processing vol mar schober gerstacker lampe blind stochastic gradient algorithms widely linear mmse mai suppression ieee transactions signal processing vol mar han cho capacity cyclostationary complex gaussian noise channels ieee transactions communications vol yeo cho asymptotic properizer block processing cyclostationary random processes ieee transactions information theory vol jul yeo cho lehnert joint transmitter receiver optimization stationary data sequence journal communications networks vol yeo han cho lehnert capacity orthogonal overlay channel ieee transactions wireless communications vol kim cho lehnert asymptotically optimal noise prediction complex interference ieee transactions wireless communications vol mar costa gaussian interference channel ieee transactions information theory vol prasad bhashyam chockalingam gaussian mimo channel gaussian mimo channel ieee transactions communications vol sato capacity gaussian interference channel strong interference ieee transactions information theory vol song ryu choi characterization pareto boundary symmetric gaussian interference channel ieee transactions communications vol hellings utschick matrices signal processing ieee transactions signal processing vol apr jorswieck larsson danev complete characterization pareto boundary miso interference channel ieee transactions signal processing vol
International Journal of Computer Trends and Technology (IJCTT), Volume, Number, Apr

Application of Machine Learning Techniques in Aquaculture

Akhlaqur (Department of Electrical and Electronic Engineering, Uttara University, Bangladesh) and Sumaira (School of Engineering, Deakin University, Australia)

Abstract: In this paper we present applications of different machine learning algorithms in aquaculture. Machine learning algorithms learn models from historical data. In aquaculture, historical data are obtained from farm practices, yields, and environmental data sources. Associations between the different variables are obtained by applying machine learning algorithms to the historical data. In this paper we present applications of different machine learning algorithms for aquaculture applications.

Keywords: aquaculture, machine learning, decision support system

I. INTRODUCTION

Aquaculture refers to the farming of aquatic organisms such as fish and aquatic plants. It involves cultivating freshwater and saltwater populations under controlled conditions. The use of sensor technologies to monitor the environment in which aquaculture operations take place is a recent trend. Sensors collect data on the aquaculture environment, and farm managers use these data for decision-making purposes. The literature reports a number of activities related to decision support systems for aquaculture farm operations. A number of decision support systems have been developed that do not use machine learning methods. For the sake of completeness, we briefly discuss these methods first, and detail the machine learning based methods in the following section. Bourke et al. developed a framework in which water quality indicators as well as operational information are displayed, and the impact on survival rate and on biomass production or failure of the aquaculture species is evaluated. Wang et al. developed an early warning system for dangerous growing conditions. Padala and Zilber used expert-system-generated rules to reduce stock loss and increase the size and quality of yields. Ernst et al. focused on managing hatchery production using rules and calculations of physical, chemical, and biological processes. Silvert developed a scientific model to evaluate the environmental impact of aquaculture. Halide et al. used rules from domain experts.

II. MACHINE LEARNING METHODS FOR AQUACULTURE APPLICATIONS

In this section we discuss different applications of machine learning methods in the aquaculture domain.

A. Shellfish farm closure prediction

Consumption of contaminated shellfish can cause severe illness and even death in humans. The authors developed a sensor network based approach to predict contamination events using machine learning methods. The authors presented machine learning methods to obtain a balance between farm closure and farm opening events, presented a feature ranking algorithm to identify the influential causes of closure, and adopted time series machine learning approaches such as PCA (principal component analysis) and the ACF (auto-correlation function) to predict closure events.

B. Algae bloom prediction

Algae are organisms that grow widely throughout the world and provide food and shelter to other organisms. When the growth of algae is excessive, it can cause oxygen depletion in the water and kill fish. The authors developed machine learning based methods to predict algae blooms: they extracted a set of rules from data gathered by sensor networks to find associations between environmental variables and algae growth, designed an ensemble method to find the relevant environmental variables responsible for algae growth and to predict the growth, and developed machine learning methods to predict the propagation of algae patches along a waterway.

C. Missing values estimation

Sensor networks deployed in the field to monitor aquaculture environments often suffer from failures due to sensor or communication faults. This results in missing sensor readings that are required by machine learning based decision-making systems, and hence in a requirement to deal with missing sensor values. The authors designed a multiple classifier based method to deal with missing sensor data that avoids imputing the missing data.
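The reduced-feature ensemble idea behind the missing-values approach above can be sketched as follows. This is an illustrative sketch only, not the cited authors' implementation: the nearest-centroid base learner, the synthetic sensor data, and all names are our own assumptions.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "sensor" data: 4 sensors; class 1 shifts every reading by +2
n, d = 400, 4
y = rng.integers(0, 2, n)
X = rng.standard_normal((n, d)) + 2.0 * y[:, None]

class CentroidClassifier:
    """Nearest-centroid base learner restricted to one feature subset."""
    def __init__(self, features):
        self.features = list(features)
    def fit(self, X, y):
        self.centroids = {c: X[y == c][:, self.features].mean(axis=0)
                          for c in (0, 1)}
        return self
    def predict_one(self, x):
        dists = {c: np.linalg.norm(x[self.features] - m)
                 for c, m in self.centroids.items()}
        return min(dists, key=dists.get)

# One base learner per pair of sensors
ensemble = [CentroidClassifier(f).fit(X, y)
            for f in itertools.combinations(range(d), 2)]

def predict(x):
    # x may contain NaNs for failed sensors; instead of imputing, vote only
    # with the base learners whose sensors were all observed
    observed = ~np.isnan(x)
    votes = [clf.predict_one(x) for clf in ensemble
             if observed[clf.features].all()]
    return 1 if np.mean(votes) >= 0.5 else 0

print(predict(np.array([2.1, np.nan, 1.8, 2.4])))  # sensor 1 missing
```

At prediction time no value is ever filled in: base learners whose inputs are incomplete simply abstain, which mirrors the "predict from available readings rather than impute" strategy described above.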
different location new location based similarity two locations results show model relocation significantly reduce shortcoming generated data unavailability particular location benthic habitat mapping order get understanding aquaculture habitats seafloor researchers sometimes send autonomous underwater vehicles collect seafloor images images later visually analyzed researchers produce habitat map region authors developed image processing machine learning based method automatically produce habitat maps seafloor images results presented paper showed significant accuracy obtaining correct habitat maps sensor data quality assessment data produced sensors sometimes faulty decisions based faulty sensor reading result wrong conclusion authors presented novel ensemble classifier approach assessing quality sensor data base classifiers constructed random training data sampling process guided clustering inclusion cluster based learning shown improve accuracy quality assessment conclusions paper presented review machine learning methods aquaculture space future aim extend methods time series machine learning methods references bourke stagnitti mitchell decision support system aquaculture research management aquacultural engineering wang chen decision support system aquaculture water quality evaluation international conference computational intelligence security issn padala zilber expert systems use aquaculture rotifer microalgae culture systems proceedings workshop honolulu hawaii ernst bolte nath aquafarm simulation decision support aquaculture facility design management planning aquacultural engineering september silvert decision support systems aquaculture licensing journal applied ichthyology halide stigebrandt rehbein mckinnon developing decision support system sustainable cage aquaculture environmental modelling software june este rahman turnbull predicting shellfish farm closures class balancing methods aai advances artificial intelligence lecture notes computer science 
rahman mcculloch ensemble feature ranking shellfish farm closure cause identification proc workshop machine learning sensory data analysis conjunction australian conference doi http shahriar rahman mcculloch predicting shellfish farm closures using time series classification aquaculture decision support elsevier computer electronics agriculture shahriar este rahman detecting predicting harmful algal blooms coastal information systems proc ieee oceans rahman shahriar algae bloom prediction identification influential environmental variables machine learning approach international journal computational intelligence applications vol shahriar rahman prediction algal bloom proc ieee international conference natural computation icnc china zhang rahman impute ignore missing values prediction proc ieee international joint conference neural networks ijcnn dallas texas rahman claire este timms dealing missing sensor values predicting shellfish farm closure proceedings ieee intelligent sensors sensor networks information processing issnip melbourne rahman murshed feature weighting methods abstract features applicable motion based video indexing ieee international conference information technology coding computing itcc vol usa rahman murshed feature weighting retrieval methods dynamic texture motion features international journal computational intelligence systems vol march este rahman similarity weighted ensembles relocating models rare events proc international workshop multiple classifier systems mcs lecture notes computer science nanjing china may rahman benthic habitat mapping seabed images using ensemble color texture edge features international journal computational intelligence systems vol issue doi rahman smith timms novel machine learning approach towards quality assessment sensor data ieee sensors journal doi rahman smith timms multiple classifier system automated quality assessment marine sensor data proceedings ieee intelligent sensors sensor networks information 
processing issnip melbourne http
| 5 |
ExTASY: Scalable and Flexible Coupling of Simulations with Advanced Sampling Techniques

Vivekanandan, Iain, Ardita, Elena, Eugen, Cecilia, Charles, Shantenu
Department of Electrical and Computer Engineering, Rutgers University, Piscataway, USA; EPCC, University of Edinburgh, James Clerk Maxwell Building, Peter Guthrie Tait Road, Edinburgh; School of Pharmacy and Centre for Biomolecular Sciences, University of Nottingham, University Park, Nottingham; Department of Chemistry, Rice University, Houston, USA; Department of Physics, Rice University, Houston, USA; Center for Theoretical Biological Physics, Rice University, Houston, USA

Abstract: For many macromolecular systems, accurate sampling of the relevant regions of the potential energy surface cannot be obtained by a single, long molecular dynamics (MD) trajectory; new approaches are required to promote more efficient sampling. We present the design and implementation of the Extensible Toolkit for Advanced Sampling and analYsis (ExTASY) for building and executing advanced sampling workflows on HPC systems. ExTASY provides Python-based templated scripts that interface to an interoperable run-time system which abstracts the complexity of managing multiple simulations. ExTASY supports the use of existing, highly optimised parallel MD code, coupled to analysis tools based upon collective coordinates which do not require a priori knowledge of the system to bias. We describe two workflows which both couple large ensembles of relatively short MD simulations with analysis tools that automatically analyse the generated trajectories and identify molecular conformational structures that are then used as new starting points for further iterations. One of the workflows leverages the Locally Scaled Diffusion Map technique; the other makes use of Complementary Coordinates techniques to enhance sampling and generate the next generation of simulations. We show that the ExTASY tools can be deployed on a range of HPC systems, including ARCHER (a Cray), Blue Waters (a Cray), and Stampede (a Linux cluster), and that good strong scaling can be obtained independent of the size of each simulation. We discuss how ExTASY can be easily extended or modified by end-users to build their own workflows, and ongoing work to improve its usability and robustness.

I. INTRODUCTION AND MOTIVATION

A significant fraction of the compute cycles on XSEDE is devoted to research on biomolecular systems using molecular dynamics simulations. Much of this computational cost comes from the need for adequate sampling of the conformational space accessible to complex and flexible systems. In order to answer a particular research question, for example to calculate free energies, one needs an adequate sample of the Boltzmann-weighted ensemble of states of the system in order to estimate the thermodynamic quantity of interest; another example is the study of kinetic processes such as association. In both cases, the trajectories required to build a statistically meaningful model of the dynamical process are demanding. The high dimensionality of macromolecular systems and the complexity of the associated potential energy surfaces, creating multiple metastable regions connected by high free-energy barriers, pose significant challenges to adequately sampling the relevant regions of configurational space. In other words, besides the curse of dimensionality associated with the large number of degrees of freedom, trajectories easily get trapped in a low free-energy state and fail to explore biologically relevant states: the waiting time to escape a local free-energy minimum increases exponentially with the height of the free-energy barrier that needs to be crossed to reach another state. Metastable states separated by free-energy barriers of several tens of kT at physiological temperature are not uncommon in biologically relevant systems and at present cannot be routinely sampled by standard MD simulations.

In practice, better sampling of the relevant regions of a macromolecule's configurational space can be achieved with methodologies able to bias the sampling towards scarcely visited regions, reducing the waiting time inside a metastable state by artificially flattening the energy barriers between states, as in metadynamics or accelerated dynamics. Although the results can usually be reweighted to reproduce the correct Boltzmann statistics, kinetic properties cannot easily be recovered from biased simulations unless they are used in combination with unbiased simulations. In addition, the design of an effective bias usually requires a priori information about the system of interest, for instance a suitable choice of collective variables describing the slow-timescale processes.

An alternative approach to tackle the sampling problem is the development of ensemble ("swarm of simulations") strategies, in which data from large numbers of simulations, which may be weakly coupled, coupled, or integrated, are combined, for example by replica exchange or Markov state models (MSMs). This last class of methods is attracting increasing interest for a variety of reasons. Firstly, the hardware roadmap is based almost entirely on increasing core counts rather than clock speeds. In the face of these developments, which favour weak-scaling problems (larger and larger molecular systems simulated) over strong-scaling problems (getting data faster for a system of fixed size), running ensembles of simulations over many cores and integrating the data using MSM approaches allows timescales far in excess of those sampled by any individual simulation to be effectively accessed. In the last years, several studies have been published using MSM methods in which processes such as protein folding and ligand binding were completely and quantitatively characterized, thermodynamically and kinetically, from simulations orders of magnitude shorter than the process of interest. It is becoming increasingly clear that the application of ensemble simulation strategies on computational facilities has unparalleled potential to permit accurate simulations of the largest, most challenging, and generally most biologically relevant biomolecular systems.

The main challenge in the development of an ensemble approach for the faster sampling of complex macromolecular systems is the design of strategies to adaptively distribute the trajectories over the relevant regions of the system's configurational space, without using a priori information on the system's global properties. The definition of smart "adaptive sampling" approaches that can redirect computational resources towards unexplored-yet-relevant regions is currently of great interest. In light of the challenge posed by the trends in computer architecture and the need to improve sampling with a range of existing MD codes and analysis tools, we have designed and implemented the Extensible Toolkit for Advanced Sampling and analYsis (ExTASY). ExTASY provides three key features within a single framework to enable the development of applications requiring advanced sampling in a flexible and highly scalable environment. Firstly, it is an extensible toolkit: ExTASY allows a wide range of existing software to be integrated, leveraging the significant community investment in highly optimised software packages and enabling users to continue to work with the tools they are familiar with. Support for specific MD codes and analysis tools is provided in order to demonstrate how ExTASY may be used, and users can easily add further tools as needed. Secondly, ExTASY is flexible, providing a programmable
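The MSM idea invoked above — pooling many short trajectories to estimate kinetics on timescales far beyond any single trajectory — can be illustrated with a toy estimator. This is a minimal sketch, not the analysis code used in the paper: the function names are hypothetical, and the naive symmetrization used to enforce detailed balance is an illustrative simplification.

```python
import numpy as np

def transition_matrix(dtrajs, n_states, lag):
    """Estimate a row-stochastic MSM transition matrix from
    state-index trajectories at the given lag time (in steps)."""
    C = np.zeros((n_states, n_states))
    for traj in dtrajs:
        for i, j in zip(traj[:-lag], traj[lag:]):
            C[i, j] += 1.0
    C = C + C.T  # naive detailed-balance symmetrization (illustrative)
    return C / C.sum(axis=1, keepdims=True)

def implied_timescales(T, lag):
    """Implied timescales from the non-unit eigenvalues of T; these can
    greatly exceed the length of any individual input trajectory."""
    evals = np.sort(np.linalg.eigvals(T).real)[::-1]
    lam = np.clip(evals[1:], 1e-12, 1.0 - 1e-12)
    return -lag / np.log(lam)
```

Feeding many short discretized trajectories into `transition_matrix` and reading off `implied_timescales` is the essence of how ensemble data is integrated kinetically.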
interface to link individual software components together to construct sampling workflows. Workflows capture the sequence of execution of the individual tools, as well as data transfers and dependencies. First-class support is provided for defining large ensembles of independent simulations, so complex calculations may be scripted and executed without the need for user intervention. Thirdly, ExTASY workflows may be executed either locally or on remote high-performance computing systems: the complexities of the batch queueing system and of data transfer are abstracted away, making it easy for users to make use of whatever compute resources they have access to. In addition, this abstraction allows workflows to be executed without exposing the user to component queue waiting times; respecting the dependencies defined in the workflow, as many simulations as possible are scheduled and executed in parallel.

The rest of the paper is organized as follows. We first discuss related work, then the design and implementation of ExTASY, with a brief discussion of the two distinct applications used to design and validate it, before a careful analysis of the performance and scalability of ExTASY. Given the complex interplay between functionality and performance when designing an extensible production tool, we perform a wide range of experiments aimed at investigating, inter alia, the strong and weak scaling properties of ExTASY on a set of heterogeneous HPC platforms. We conclude with a discussion of the scientific impact as well as lessons for sustainable software development.

II. RELATED WORK

The need for better sampling has driven developments in methodology, algorithms, hardware, and software for biomolecular simulation. One of the features of the popular metadynamics method, as of accelerated dynamics, is that constant analysis of what has been sampled so far is used to bias future sampling towards unexplored regions. A range of alternative approaches is emerging that likewise alternate segments of MD with analysis that informs the direction of future sampling, but in a more coarsely grained, iterative manner. An advantage over the metadynamics method is that the identity of the interesting directions for enhanced sampling need not be defined a priori but can emerge from, and respond flexibly to, the developing ensemble. Another advantage is that the sampling process and the analysis method need not be implemented within one executable; having two executables permits greater flexibility. Many such methods make use of collective variables (CVs) to define the directions in which to promote sampling, and a variety of novel and established algorithms for the unsupervised, adaptive construction of CVs exists, in addition to the work of Preto and Clementi on interleaving cycles of simulation with data analysis by locally scaled diffusion maps. Related methods include the PaCS-MD method of Harada and Kitao (and variants thereof) and the method of Peng and Zhang.

Better sampling can also come from faster sampling enabled by hardware developments. Anton-style special-purpose computers enable much faster calculations of the different contributions to the forces along the trajectories, thus speeding up the clock time required to perform a time-integration step and allowing the execution of significantly longer trajectories. The cost of such special-purpose computers ensures that, in spite of their potential, they are not accessible to the wider scientific community. Unlike general-purpose approaches, Anton requires a customized ecosystem of bespoke MD engines and Anton-specific data analysis middleware (HiMach); thus Anton-style approaches to simulation science cannot take advantage of the rich, community-driven advances in methods.

Replica exchange and metadynamics require tight coupling between the simulation and analysis processes, and thus are typically implemented as additional facilities within a core MD code. Replica exchange methods are implemented in AMBER, CHARMM, GROMACS, LAMMPS, and NAMD, for example, or are provided by a separate package that communicates in some manner with the running MD executable, generally through hooks; an example of this approach is the PLUMED package, which provides metadynamics capabilities (amongst others) to AMBER, GROMACS, LAMMPS, and NAMD, and also to Quantum ESPRESSO. In contrast, to our knowledge, no established and generally available package so far supports the types of adaptive workflows described here.

III. EXTASY: REQUIREMENTS, DESIGN AND IMPLEMENTATION

In this section we first present the requirements considered in the design and implementation of ExTASY, then discuss how these requirements lead to a consistent design. As with many new software systems and tools, we analyze functionality, performance, and usability requirements.

Functionality: specific to advanced sampling is the need to couple two distinct computational stages, each short compared to the typical duration of a monolithic simulation. Furthermore, the two stages differ significantly in their resource requirements: one stage is characterized by multiple compute-intensive simulations, the other by a single analysis program that operates on data aggregated from the multiple simulations. The "X" in ExTASY references the extensible nature of the framework; thus the coupling must be an abstraction over stages, not tied to specific codes or a fixed time duration. Scientists may have access to multiple systems and wish to submit jobs to any of them, or may be forced to migrate from one system to another due to CPU-time allocations; it is imperative that the software support interoperability across heterogeneous systems with minimal changes. Simulations may be executed over multiple nodes depending on system size; thus the software must also support the ability to execute tasks over multiple nodes.

Performance: in order to obtain good sampling, a large number of simulations must be executed. In many cases, the aggregate number of cores required by the set of simulations in the ensemble is much higher than the total number of cores available to a given instance on an allocated system; the framework must therefore decouple the aggregate (peak) resource requirement of the workload from the number of cores actually available. Whatever cores are available should be utilized; on the other hand, when access to a large number of cores is available, the framework should be able to use them effectively. In this regard, both the strong and the weak scalability of the framework must be investigated.

Usability: depending on the application, a user might need to change the MD engine being used or the set of input data, modify simulation parameters, or replace the simulation or analysis tools. The framework should offer easy application setup, minimizing user time and effort in the process. The user should only be concerned with decisions about what workflow is executed; the details of how deployment and execution occur should be hidden by the underlying software. Thus the framework should use tools that abstract the complexities of deployment and execution of the user workflow. Users and developers should be able to concentrate their efforts on the application logic and expect the underlying software to provide a level of transparent automation of aspects such as data movement and job submission.

Design: we identify the following primary design objectives: ExTASY should support a range of HPC systems; abstract task execution and data movement from the user; provide flexible resource management and coupling capabilities across the different stages as well as within a stage; and provide users an easy method to specify and change workload parameters without delving into the application internals. These design objectives, put together, lead to a simple software architecture: middleware for resource and execution management beneath the ExTASY framework. ExTASY is aimed at providing an easy method to run advanced sampling algorithms on HPC systems; in the process, the framework must abstract from its users the complexity of application composition, workload execution, and resource management, hence the need for middleware that provides resource and execution management. Such middleware provides many of the required functionality and performance: the details of how tasks are mapped to and executed on resources are abstracted from the ExTASY user, who is only required to provide the details of the workload and identify an accessible resource. This design choice thus acknowledges a separation of concerns between workload description and execution on HPC systems. Composing the advanced sampling algorithms discussed in the previous sections directly from the components provided by a particular middleware stack requires specific knowledge of that middleware; the ExTASY framework bypasses this complexity by adding one more level of abstraction. It provides scripts, in accordance with the advanced sampling methods discussed in this paper, that are configurable via configuration files; in these configuration files, the user is exposed to meaningful parameters that can be modified as necessary.

Fig.: the SAL pattern common to the sampling algorithms. The crux of the pattern is the iteration over stages of simulation and analysis; the number of simulation and analysis instances may differ, and the pattern may also contain further processing stages. Fig.: the design of the ExTASY framework using the Ensemble Toolkit middleware. The ExTASY framework provides scripts created using the components provided by Ensemble Toolkit; parameters related to the resource and workload are exposed via configuration files, the only files users interact with. Within Ensemble Toolkit, the workload is converted into executable units and submitted to the resource using execution plugins.

APPLICATIONS

We illustrate the capabilities of the ExTASY approach via two exemplar applications: two different advanced sampling algorithms, a diffusion-map-based technique and CoCo, implemented with ExTASY. Both algorithms have a common execution pattern: an ensemble of simulation tasks followed by an analysis stage, performed over multiple iterations, following the pattern shown in the figure. In the case of the first algorithm the simulation stage consists of GROMACS and LSDMap is the analysis stage, whereas for the CoCo algorithm the simulation stage consists of GROMACS runs plus trajectory conversions and the analysis consists of CoCo. The individual simulation and analysis tools might
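The simulation-analysis loop (SAL) pattern described above can be sketched in plain Python. This is not the Ensemble Toolkit API: `run_sal`, `simulate`, and `analyse` are hypothetical stand-ins, and a local thread pool takes the place of the pilot-based execution on an HPC resource.

```python
from concurrent.futures import ThreadPoolExecutor

def run_sal(start_points, simulate, analyse, n_iterations, max_workers=4):
    """Simulation-Analysis Loop: each iteration runs an ensemble of
    independent simulation tasks concurrently, then a single analysis
    task that picks the start points for the next iteration.  A thread
    pool stands in for the HPC pilot that the real framework delegates to."""
    points = list(start_points)
    for _ in range(n_iterations):
        with ThreadPoolExecutor(max_workers=max_workers) as pool:
            trajectories = list(pool.map(simulate, points))  # simulation stage
        points = analyse(trajectories)                       # analysis stage
    return points
```

The key structural point the sketch captures is that the simulation tasks within an iteration are independent (and so may run concurrently), while the analysis stage is a single task that sees all of their output.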
differ depending on the algorithm chosen, but the overall pattern is observed in both.

Implementation: as mentioned previously, ExTASY requires middleware for resource and execution management. We chose to use Ensemble Toolkit as the middleware component, since it provides several relevant features: the ability to support MPI tasks; dynamic resource management, so that more than one type of task is able to execute on the resources available; support for heterogeneous HPC systems; and strong and weak scalability guarantees (Ensemble Toolkit has been tested at large task counts, with short- and long-term plans to support more). Ensemble Toolkit is in turn based upon the pilot abstraction; implementations of the pilot abstraction provide much more flexible and scalable resource management capabilities. Ensemble Toolkit exposes three components with which the user can express many applications: kernel plugins, execution patterns, and a resource handle. The scripts that are part of the ExTASY framework use these components to describe the application logic.

Configuration files: while the application logic is expressed via the components of Ensemble Toolkit, the resource and workload specifications are exposed via configuration files. The ExTASY framework has two types of configuration files. The resource configuration consists of details of the resource where the application is executed: the resource name, runtime, username, and account details used to access the resource. The kernel configuration defines the workload parameters: the location of input files for the molecular dynamics simulation and analysis tools, parameters for those tools, and workflow parameters such as the number of simulations.

Diffusion-map-directed MD: this technique improves the efficiency with which computational resources are used by choosing which replicas of the protein are used for the MD runs. Replicas that are close to each other produce trajectories with similar information, so the gain from simulating close replicas is small; part of the replicas that are close together are therefore deleted and, to hold the total number of replicas constant, replicas that are far apart are duplicated. The dimensionality-reduction technique Locally Scaled Diffusion Map (LSDMap) is used to calculate the distance between the different replicas. Deletion and duplication of replicas alone would destroy the correct sampling of the protein, so the weights of the individual replicas are changed in a reweighting step, after which correct sampling of the protein is again obtained. The technique requires only a protein starting structure; no additional information about the protein is necessary. The user can fine-
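As a concrete illustration of the two configuration-file types described above, a resource configuration and a kernel configuration might look as follows. Every field name and value here is hypothetical — the actual ExTASY configuration keys may differ — but the split between "where to run" and "what to run" is the point being illustrated.

```python
# Illustrative resource configuration ("where to run").
# All keys and values are hypothetical examples, not ExTASY's real schema.
RESOURCE = "archer"            # name of the target HPC system
RUNTIME = 60                   # requested wall-clock budget, minutes
USERNAME = "myuser"            # remote account used to access the resource
ALLOCATION = "my-project-id"   # placeholder account/allocation string

# Illustrative kernel (workload) configuration ("what to run").
NUM_SIMULATIONS = 64           # size of the simulation ensemble
NUM_ITERATIONS = 4             # simulation-analysis iterations
MD_INPUT_DIR = "inputs/"       # location of MD input files
MD_ENGINE = "gromacs"          # simulation kernel
ANALYSIS_TOOL = "lsdmap"       # analysis kernel
```

The separation means a user can move the same workload to a different machine by editing only the resource file, which is exactly the interoperability requirement stated earlier.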
tune the sampling, mainly by varying the total number of replicas and the way the local scale for LSDMap is calculated. At the beginning of the method, replicas are generated from the protein starting structure. After each MD step, LSDMap is calculated; LSDMap requires only the final structure of each replica from the MD step. Based on the LSDMap results, the new replicas for the next iteration are chosen from the current replicas, with reweighting to ensure correct sampling. It has been shown that the technique is at least one order of magnitude faster compared to plain MD, with the comparison done on the alanine dipeptide model system.

HPC systems used: one of the requirements is that ExTASY be interoperable and usable on several different HPC systems. In our experiments we characterised the performance of ExTASY on three machines. Stampede is a Dell Linux cluster located at the Texas Advanced Computing Center and part of the Extreme Science and Engineering Discovery Environment (XSEDE); it consists of compute nodes with Intel Xeon (Sandy Bridge) processors, as well as an Intel Xeon Phi that was not used in our experiments, and uses the SLURM batch scheduler for job submission. ARCHER is a Cray supercomputer hosted by EPCC and operated on behalf of the Engineering and Physical Sciences Research Council (EPSRC) and the Natural Environment Research Council (NERC); its compute nodes have Intel Xeon (Ivy Bridge) processors, and it uses the Portable Batch System (PBS) for job submission. Blue Waters is a Cray system operated by the National Center for Supercomputing Applications on behalf of the National Science Foundation and the University of Illinois; the partition used in this work consists of compute nodes with AMD Interlagos processors, and Blue Waters uses its own workload manager for job submission.

CoCo workflow: the CoCo (Complementary Coordinates) technique was designed originally as a method to enhance the diversity of ensembles of molecular structures of the type produced by NMR structure determination. The method involves the use of PCA in Cartesian space to map the distribution of the ensemble in a low-dimensional space, and the identification of regions that are poorly sampled. CoCo generates new conformations for the molecule that would correspond to these regions. The number of new structures generated is under the user's control: the algorithm divides the space into bins at a chosen resolution and marks the bins already sampled; it first returns the structure corresponding to the centre of the bin furthest from any sampled one, marks that bin as sampled too, and iterates as many times as desired. In the workflow, the ensemble of structures
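The LSDMap analysis stage can be illustrated with a bare-bones diffusion map. This sketch assumes a precomputed local scale per replica and omits the refinements of the real LSDMap code; `diffusion_map` is a hypothetical helper, not the published implementation.

```python
import numpy as np

def diffusion_map(X, local_scale, n_evecs=2):
    """Bare-bones locally-scaled diffusion map: Gaussian kernel with a
    per-replica scale, density normalisation, then the slowest
    eigenvectors of the row-stochastic diffusion operator.  Distances
    between replicas in the returned coordinates drive the
    delete/duplicate decisions described in the text."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    eps = np.sqrt(np.outer(local_scale, local_scale))  # pairwise scales
    K = np.exp(-(D ** 2) / (2.0 * eps ** 2))
    q = K.sum(axis=1)
    K = K / np.outer(q, q)                    # remove sampling-density bias
    P = K / K.sum(axis=1, keepdims=True)      # row-stochastic operator
    evals, evecs = np.linalg.eig(P)
    order = np.argsort(-evals.real)
    keep = order[1:n_evecs + 1]               # skip the trivial eigenvector
    return evals.real[keep], evecs.real[:, keep]
```

The returned eigenvectors give each replica a low-dimensional coordinate; replicas that land close together in this embedding are the redundant ones the method prunes.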
from the simulations is analysed using the CoCo method, the new conformations become the start points for a new round of simulations, the latest data is added to the previous set, and CoCo is repeated. The method is agglomerative, in that all the data generated so far is used in the analysis, but also adaptive, in that a fresh PCA is performed each time. Applied to simulations of alanine pentapeptide, the workflow was able to reduce mean first-passage times from the extended state to local-minimum states by factors of ten or greater compared to conventional MD simulations.

Evaluation of individual components: since the performance of the entire workflow depends on the performance of its component parts, we investigated the scaling of the simulation code (GROMACS) and of the analysis tools in isolation on the three target platforms, using the same system as used in the full sampling workflows. Simulation tools: the parallel efficiency of GROMACS with respect to a single core of each machine is shown in the figure. The efficiencies measured on ARCHER, Stampede, and Blue Waters suggest that the scaling of this relatively small simulation is far from ideal, but that using a single node per simulation makes good use of the available hardware; beyond a single node the efficiency drops, so although multiple-node simulation tasks are supported by Ensemble Toolkit, they are not a useful benchmark case here. Analysis tools: due to the nature of the two workflows — many parallel simulation tasks but a single analysis task — the analysis task may be configured to run on as many cores as are available to the simulations. Both CoCo and LSDMap are parallelised using MPI and consist of parts that are independent, such as reading the trajectory files in CoCo, and parts that involve communication, such as the diagonalisation of the covariance matrix in CoCo or of the diffusion matrix in LSDMap, so perfect parallel scaling is not expected. The performance of CoCo is also strongly dependent on I/O, since it reads the entire trajectory files rather than only the final configurations like LSDMap.

PERFORMANCE EVALUATION

Experiment setup — physical system: a small, well-studied mixed protein, simulated with its water included, was chosen as the physical system for the experiments. Its experimentally measured folding time and folding process have been extensively studied by experiment and by simulations, both by means of the Folding@home distributed computing platform coupled with MSM analysis and on the Anton supercomputer. Its relatively small size and the existence of previous simulation results over long timescales make this protein an ideal candidate for testing and benchmarking our approach: albeit small, it is much
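The bin-and-pick-furthest loop described above can be sketched on a 2-D projection. This is an illustrative reimplementation of the idea, not the CoCo code; `coco_pick` and its parameters are assumptions, and the real method maps the selected cell centres back to molecular conformations.

```python
import numpy as np

def coco_pick(Z, n_new, bins=8):
    """CoCo-style selection on a 2-D projection Z of the ensemble:
    grid the projection, mark the cells already visited, then
    repeatedly return the empty cell centre furthest from every
    sampled cell, marking it sampled before the next pick."""
    lo, hi = Z.min(axis=0), Z.max(axis=0)
    step = (hi - lo) / bins
    # centres of all bins x bins grid cells (x-major ordering)
    ax = [lo[d] + (np.arange(bins) + 0.5) * step[d] for d in range(2)]
    centres = np.array([(x, y) for x in ax[0] for y in ax[1]])
    # cells already occupied by the ensemble
    idx = np.clip(((Z - lo) / np.where(step > 0, step, 1.0)).astype(int),
                  0, bins - 1)
    sampled = centres[idx[:, 0] * bins + idx[:, 1]]
    picks = []
    for _ in range(n_new):
        d = np.linalg.norm(centres[:, None, :] - sampled[None, :, :],
                           axis=-1).min(axis=1)
        best = centres[int(d.argmax())]   # empty cell furthest from sampled
        picks.append(best)
        sampled = np.vstack([sampled, best])
    return np.array(picks)
```

Because each pick is immediately marked as sampled, successive picks spread out over the unexplored part of the projection rather than clustering in one empty region.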
larger than a simple peptide and exhibits a folding process with two competing routes, thus presenting a real test for adaptive sampling.

Fig.: measurement of the overhead of ExTASY on ARCHER, Blue Waters, and Stampede using the simulation stage of the SAL pattern; the overhead of ExTASY was measured on the three machines at various core counts. Fig.: GROMACS parallel efficiency on ARCHER, Blue Waters, and Stampede; a single GROMACS simulation of the system was performed using various core counts on the three machines and the execution time measured.

Characterization of overheads: in addition to the general necessity of characterizing the performance overhead introduced by any new system, approach, or implementation, measuring the overheads imposed by ExTASY is important in order to instill confidence in potential users. The objective is to discern the contributions of the different aspects of the ExTASY framework as opposed to the MD and analysis components; the time taken to create, launch, and terminate simulations, and to instantiate the ExTASY framework on a resource, are examples of the former. The overhead experiments use a fixed Ensemble Toolkit version and ran a single iteration of the workflow with null workloads: each task does no work but is otherwise configured as a normal simulation task, launched using MPI and taking a whole node on each machine. The number of tasks, all run concurrently, was varied. The figure shows that the overheads on Stampede and Blue Waters are relatively small, growing only at the largest task counts; on ARCHER the overheads are much larger, which on investigation is due to the aprun job launcher, which performs poorly when multiple concurrent tasks are launched.

Strong scaling: to test the strong scaling of the ExTASY workflows, we fix the number of executing instances (independent simulations) and vary the total number of CPU cores used to execute the workflow; the largest experiments use enough CPU cores for all simulations to execute concurrently, for example on ARCHER with each instance executing on a single node, the maximum is one node's cores times the number of instances. Since the simulations are independent and may be executed concurrently by ExTASY, we expect that as the number of cores is increased the simulation time will decrease proportionally, while the time spent in the analysis part is expected to be constant, since the total amount of data is constant and, despite the parallelisation of the analysis tools, their time to completion plateaus at fairly low core counts, as seen in the component figures. The figure (bottom) shows the results of this experiment for the workflow executed on Blue Waters. Fig.: strong scaling of the CoCo analysis tool on
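The null-workload methodology can be mimicked locally: time a run whose tasks do nothing, so that all elapsed time is management cost rather than computation. This toy harness — a thread pool instead of a remote MPI launcher — only illustrates the measurement idea; `measure_overhead` is not an ExTASY function.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def measure_overhead(n_tasks, workers=4):
    """Run a 'null workload': n_tasks tasks that do no work, dispatched
    through the same concurrent-execution path a real ensemble would
    use.  All elapsed time is therefore task-management overhead."""
    t0 = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(lambda _: None, range(n_tasks)))
    return time.perf_counter() - t0
```

Sweeping `n_tasks` and plotting the result against task count is exactly the shape of the overhead experiment described in the text: any growth with `n_tasks` is launcher cost, as seen with aprun on ARCHER.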
ARCHER, Blue Waters, and Stampede; a fixed total of simulations was analyzed using various core counts on the three machines and the execution time measured. Fig.: parallel efficiency of LSDMap on ARCHER, Blue Waters, and Stampede; a fixed total of structures was analyzed using various core counts on the three machines and the execution time measured.

These figures show the strong scaling of CoCo for a fixed set of input simulations. We see that CoCo is able to scale to large core counts on ARCHER and Blue Waters and somewhat less far on Stampede; thus in the following experiments we configure the workflow to run CoCo on as many cores as there are input trajectories. LSDMap, however, does not scale efficiently much beyond a single node on any machine, even for large numbers of input structures. Nevertheless, we run LSDMap on as many cores as are available, even though it cannot use them fully: due to the structure of the workflow, those cores would otherwise sit idle during the analysis phase.

Evaluation of ExTASY. Fig.: strong scaling of the first workflow on ARCHER (top) and Blue Waters (bottom); the number of simulations and the number of cores per simulation are held constant while the total number of cores used is varied for a constant workload, hence measuring the strong scaling performance of the framework. Fig.: strong scaling of the second workflow on ARCHER (top) and Stampede (bottom), measured in the same way.

On Blue Waters the simulation time decreases as cores are added, with the speedup at the largest core counts yielding a scaling efficiency below ideal, while the analysis time is essentially constant, as expected. The loss of scaling efficiency in the simulation part comes from two sources: firstly, the fixed overhead, discussed in the previous section, associated with the execution of many concurrent tasks; secondly, the actual computation that occurs within a task takes longer when many simulations run concurrently, due to the fact that they all write to a shared filesystem — on average, a simulation takes measurably longer when many instances run concurrently than when few do. When these effects are removed, the effective scaling efficiency at the largest core counts rises. Similar results are obtained on ARCHER, although the scaling of the simulation part tails off at the largest core counts and the LSDMap analysis takes somewhat longer, with higher variability, than on Blue Waters, due to the fact that the analysis involves significant I/O and it is known that opening many small
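The speedup and efficiency numbers discussed here follow the standard strong-scaling definitions, which can be computed from raw timings as below (`strong_scaling` is a hypothetical helper for illustration, not part of ExTASY):

```python
def strong_scaling(times):
    """Speedup and parallel efficiency for a fixed workload, computed
    from a {core_count: wall_time} map relative to the smallest core
    count measured.  Efficiency 1.0 means ideal strong scaling."""
    base_cores = min(times)
    base_time = times[base_cores]
    table = {}
    for cores in sorted(times):
        speedup = base_time / times[cores]                 # T(base) / T(p)
        table[cores] = (speedup, speedup / (cores / base_cores))
    return table
```

Subtracting a measured fixed overhead from each wall time before calling this function gives the "effective" efficiency the text refers to when the launcher and filesystem effects are removed.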
files concurrently is slow, as the metadata servers of the parallel Lustre filesystem become a bottleneck. On Stampede (figure, bottom) there is similar strong scaling of the simulation part: the simulation time decreases as cores are added, with the speedup increasing with the number of cores, albeit at reduced efficiency. However, the analysis time for CoCo does not scale, due to the fact that the parallelisation of CoCo is limited by the number of input trajectories, in this case fewer than the cores available. The workflow on ARCHER (figure, top) does not show the good scaling seen on the other platforms. The reason lies in the fact that, besides the actual molecular dynamics calculation, a trajectory conversion step is required to prepare the data for CoCo; this step takes only a fraction of a second to execute, so the large per-task overhead caused by aprun, which allocates resources and launches each individual task, dominates. This does not occur on Blue Waters, which uses an ORTE-based implementation that is not yet the default on ARCHER.

Weak scaling: to investigate the weak scaling properties of ExTASY, we fix the ratio of the number of instances to CPU cores and vary the number of instances, with the constraint that all simulations execute concurrently. For example, on ARCHER each instance is executed on one node, giving a total core count proportional to the number of instances; as the number of instances is increased, so is the number of cores. Since the simulations run concurrently and the length of each simulation does not change, we expect the simulation time to be constant. The analysis part, however, may increase, since the performance of the analysis tools is a function of the input data size (which depends on the number of instances) as well as of the number of cores available: even though the number of cores is proportional to the data size, the amount of work grows faster than linearly with data size. For the LSDMap workflow on Blue Waters (figure, top) we observe a small increase in the simulation part across the range of core counts, similar to the strong scaling results: a combination of the overhead due to the increased number of tasks and the slowdown of the individual tasks. The analysis part is found to increase as discussed above: the LSDMap computation consists of parts that are linear and parts that are quadratic in the size of the input data, which, combined with the increasing number of cores made
total number simulations varied cores used increased proportionally keeping ratio workload number resources constant observe weak scaling performance framework available tool scaling result similar behaviour observed stampede figure bottom although different weighting simulation analysis part reflecting fact performance kernel depends well optimised application binary execution platform archer figure top shows clearly aprun bottleneck discussion section effect increases number concurrent tasks grows however analysis part scales better linearly expected since coco consists parts weak scale ideally independent operations per trajectory file parts construction diagonalisation covariance matrix grow data size squared stampede weak scaling simulation part workflow figure bottom much better archer simulation time grows around compared archer range cores tested coco scales almost identically archer effect larger ensembles distinguish effects caused strong scaling increasing parallelism fixed amount work weak scaling increasing parallelism proportionally amount work also measured effect increasing amount work fixed number compute cores available figure shows results workflow running blue waters vary number instances keeping total number cores available since task runs single node cores per instance simulation tasks run concurrently ideally would expect simulation time increase linearly number instances practice see time taken grows fig workflow stampede workload increased instances keeping number cores constant within overheads increase execution time proportion increase workload number instances increases factor due fact overheads related managing tasks occur execution task case overheads overheads hidden done concurrently agent execution remaining tasks number instances greater scaling analysis part consistent discussed section close linear scaling since larger ensemble size parallelism available lsdmap dynamic simulations important characteristic lsdmap coco based 
workflows number instances typically changes iteration thanks extasy framework supports flexible mapping number concurrent instances total number cores agnostic number cores per instance functionality used workflow depending progress extasy designed implemented provide significant step direction extasy allows simulate many parallel trajectories means standard engines extract long timescale information trajectories means different dimensionality reduction methods use information extracted data adaptively improving sampling extasy utilized different engines analysis algorithms localized changes section iii formally identified functional performance usability requirements support coupling simulations advanced sampling algorithms presented design implementation extasy extensible portable scalable python framework building advanced sampling workflow applications achieve requirements establishing accurate estimates overhead using extasy section consisted experiments designed validate design objectives performed experiments characterized extasy along traditional scaling metrics also investigated extasy beyond single weak strong scaling performance exception machine specific reasons extasy displayed linear scaling strong weak scaling tests various machines simulation instances nodes workflows order keep footprint new software small extasy builds upon understood abstractions efficient interoperable implementation ensemble toolkit provides double duty core functionality extasy provided simple higher level extensions complex system software allowing build upon performance optimization underlying system software layers also allows extasy employ good systems engineering practice welldefined good base performance amenable optimizations using orte blue waters design extasy reuse existing capabilities extensibility different codes sampling algorithms providing well defined functionality performance essential features ensure sustainability extasy compared existing software tools 
libraries advanced sampling extasy provides much flexible approach agnostic individual tools compute platforms architected enable efficient scalable performance simple general user interface extasy toolkit freely available http extasy toolkit used deliver two handson computational science training exercises tutorials simulations community focus advanced sampling participants given opportunity utilize hpc systems real time advanced sampling problems details events found http link lessons experience first workshop found https fig support dynamic workload extasy algorithm dictates number instances every iteration number instances iteration total iterations starting instances presented conformation space system explored lsdmap may decide spawn less trajectories next iteration sampling figure illustrates capability ran workflow blue waters three configurations initial instances see initial growth phase number instances seems stablise remaining iterations although difference starting configuration number iterations taken stabilise algorithmically systematically predictable flexible resource utilization capabilities extasy built upon prove critical summary experiments shown illustrative performance data two different applications based different analysis methods three distinct hpc platforms archer blue waters stampede overall scaling simulations clearly demonstated analysed scaling behaviour extasy framework overheads individual simulation analysis programs constitute workflow iscussion onclusion computational biophysics approaches aim balance three different requirements accuracy advanced sampling capabilities rigorous fast data analysis points strongly interconnected particular becoming clear advanced sampling data analysis need tightly coupled work efficiently accurately configurational space already sampled needs analyzed inform proceed sampling furthermore many advanced sampling algorithms biomolecular simulations require flexible scalable support multiple 
simulations, for which no final solution yet exists. As the best strategy for adaptive sampling of different physical systems may require a combination of strategies, we need to allow the combination of different MD engines with different analysis tools, easily extended or modified to build workflows for the development of new strategies and applications to the study of complex biomolecular systems.

VII. ACKNOWLEDGMENTS

This work was funded by NSF SSI awards and by EPSRC. This work used ARCHER, the UK national supercomputing service. We acknowledge access to XSEDE computational facilities and to Blue Waters. We gratefully acknowledge the input of the various people who helped in the development of the ExTASY workflows and everyone else involved in the ExTASY project, as well as the attendees of the ExTASY tutorials and beta-testing sessions, particularly David Dotson and Gareth Shannon, who provided comments and suggestions.

REFERENCES

- NSF XSEDE annual report.
- Using XDMoD to facilitate XSEDE operations, planning and analysis. In Proceedings of the Conference on Extreme Science and Engineering Discovery Environment: Gateway to Discovery.
- Barducci, Bonomi, Parrinello. Metadynamics. Wiley Interdisciplinary Reviews: Computational Molecular Science.
- Pierce, Oliveira, McCammon, Walker. Routine access to millisecond time scale events with accelerated molecular dynamics. J. Chem. Theory Comput.
- Paul, Wehmeyer, et al. Markov models of molecular thermodynamics and kinetics. Proc. Natl. Acad. Sci. USA, in press.
- Sugita, Okamoto. Replica-exchange molecular dynamics method for protein folding. Chemical Physics Letters.
- Chodera, et al. Markov state models of biomolecular conformational dynamics. Current Opinion in Structural Biology (theory and simulation / macromolecular machines).
- Schütte, Reich, Weikl, et al. Constructing the equilibrium ensemble of folding pathways from short simulations. Proc. Natl. Acad. Sci. USA.
- Voelz, Bowman, Beauchamp, Pande. Molecular simulation of ab initio protein folding for a millisecond folder. J. Am. Chem. Soc.
- Buch, Giorgino, De Fabritiis. Complete reconstruction of a binding process by molecular dynamics simulations. Proc. Natl. Acad. Sci. USA.
Published as a conference paper at ICLR 2017

METACONTROL FOR ADAPTIVE IMAGINATION-BASED OPTIMIZATION

Jessica B. Hamrick (UC Berkeley and DeepMind), Oriol Vinyals (DeepMind), Andrew J. Ballard (DeepMind), Nicolas Heess (DeepMind), Razvan Pascanu (DeepMind), Peter W. Battaglia (DeepMind)

ABSTRACT

Many machine learning systems are built to solve the hardest examples of a particular task, which often makes them large and expensive to run, even with respect to the easier examples, which might require much less computation. For an agent with a limited computational budget, this approach may result in the agent wasting valuable computation on easy examples while not spending enough on hard examples. Rather than learning a single, fixed policy for solving all instances of a task, we introduce a metacontroller which learns to optimize a sequence of "imagined" internal simulations over predictive models of the world in order to construct a more informed and economical solution. The metacontroller is a component of a reinforcement learning agent which decides how many iterations of the optimization procedure to run, as well as which model to consult on each iteration. The models (which we call "experts") can be state transition models, value functions, or any other mechanism that provides information useful for solving the task, and can be learned in parallel with the metacontroller. When the metacontroller, controller, and experts were trained, with interaction networks (Battaglia et al., 2016) as expert models, our approach was able to solve a challenging problem with complex dynamics. The metacontroller learned to adapt the amount of computation it performed to the difficulty of the task, and learned how to choose which experts to consult, factoring in both their reliability and their individual computational resource costs. This allowed the metacontroller to achieve a lower overall cost (task loss plus computational cost) than traditional fixed-policy approaches. These results demonstrate that our approach is a powerful framework for using rich forward models for efficient reinforcement learning.

1 INTRODUCTION

Despite significant recent advances in deep reinforcement learning (Mnih et al., 2015; Silver et al., 2016) and control (Lillicrap et al., 2015; Levine et al., 2016), most efforts train a network that performs a fixed sequence of computations. Here we introduce an alternative in which an agent uses a metacontroller to choose how many computations to perform: it imagines the consequences of potential actions proposed by an actor module, and refines them internally, before executing them in the world. The metacontroller adaptively decides which expert models to use to
evaluate candidate actions, and when it is time to stop imagining and act. The learned experts may be state transition models, value functions, or any function relevant to the task, and they may vary in accuracy and computational cost. The metacontroller's learned policy can exploit the diversity of its pool of experts by trading off their costs and reliabilities, allowing it to automatically identify which expert is most worthwhile.

We draw inspiration from research in cognitive science and neuroscience, which has studied how people use a meta-level of reasoning in order to control the use of their internal models and the allocation of their computational resources. Evidence suggests that humans rely on rich generative models of the world for planning and control (Wolpert and Kawato, 1998) and reasoning (Hegarty, 2004; Battaglia et al., 2013), that they adapt the amount of computation they perform with their model to the demands of the task (Hamrick et al., 2015), and that they trade off between multiple strategies of varying quality (Lee et al., 2014; Lieder et al., 2014; Lieder and Griffiths, under revision; Kool et al., in press). Our optimization approach is related to classic artificial intelligence research on metareasoning (Horvitz, 1988; Russell and Wefald, 1991; Hay et al., 2012), which formulates an MDP for selecting computations to perform, where the computations have a known cost. We also build on classic work by Schmidhuber (1990), which used a controller with a recurrent neural network (RNN) world model to evaluate and improve upon candidate controls online. More recently, Andrychowicz et al. (2016) used a fully differentiable deep network to learn to perform gradient descent optimization, and Tamar et al. (2016) used a convolutional neural network for performing value iteration online in a deep learning setting. In similar work, Fragkiadaki et al. (2016) made use of visual imaginations for action planning. Our work is also related to the recent notions of conditional computation (Bengio, 2013; Bengio et al., 2015), which adaptively modifies network structure online, and adaptive computation time (Graves, 2016), which allows variable numbers of internal "pondering" iterations in order to optimize computational cost. Our work's key contribution is a framework for learning to optimize via a metacontroller which manages an adaptive, imagination-based optimization loop: it represents a hybrid system in which the metacontroller constructs its decisions using an actor policy and a set of managed experts. Our experimental results demonstrate that the metacontroller can flexibly allocate computational resources on a per-example basis and achieve greater performance than rigid fixed-policy approaches, using only as much computation as each task requires, however difficult.

2 MODEL

We consider a class of
fully observed tasks which are continuous contextual bandits. The performance objective is to find a control c which, given an initial state x0, minimizes a loss L between a known future goal state x* and the state that results from the forward process f; the performance loss is thus the negative utility of executing the control in the world, and the related optimal solution is as follows:

    c* = arg min_c L(f(x0, c), x*)

However, this only defines what is optimal, not how to achieve it.

2.1 OPTIMIZING PERFORMANCE

Consider an iterative optimization procedure which takes (x0, x*) as input and returns an approximation of c* in order to minimize the performance loss. The optimization procedure consists of a controller, which iteratively proposes controls, and an expert, which evaluates how good those controls are. On the n-th iteration, the controller takes as input information about the history of previously proposed controls and their evaluations, and returns a proposed control c_n which aims to improve on the previously proposed controls. The expert takes the proposed control and provides information about its quality, which we call its "opinion". The opinion is added to the history, which is passed back to the controller, and the loop continues; after N steps, the final control c_N is proposed. Standard optimization methods use principled heuristics for proposing controls. In gradient descent, for example, controls are proposed by adjusting c_n in the direction of the gradient of the reward with respect to the control. In Bayesian optimization, controls are proposed based on selection criteria such as "probability of improvement", or by choosing among

Figure 1: Metacontroller architecture and task. (a) Everything shown is a component of the metacontroller agent (box) except the scene and the world, which are part of the agent's environment. The manager takes the scene and history and determines which action to take: whether to execute, or to ponder with a particular expert (denoted by orange lines). The controller takes the scene and history and computes a control, e.g. the force to apply to the spaceship (denoted by blue lines). The orange line ending in a circle at the switch reflects the fact that the manager's action affects the behavior of the switch, which routes the controller's control either to an expert (a simulation model of the spaceship's trajectory, a value function, etc.) or to the world, which returns an outcome and reward. The expert's opinion, along with the history, action, and control, is fed to the memory, which produces the next history. The history is fed back into the controller on the next iteration in order to allow
it to propose controls based on those it has already tried. (b) Scenes consisted of a number of planets (depicted as colored circles) of different masses, as well as a spaceship, which also had a variable mass. The task was to apply a force to the spaceship for one time step of simulation (depicted by the solid red arrow) such that the resulting trajectory (dotted red arrow) would put the spaceship at the target (the bullseye) after a fixed number of steps of simulation. The white ring of the bullseye corresponds to a smaller performance loss, the black ring to a larger loss, the blue ring to a larger loss still, the red ring to an even larger one, and the yellow center to the smallest loss of all. One panel depicts an easy scene and the other a difficult scene.

several basic selection criteria (Hoffman et al., 2011; Shahriari et al., 2014). Rather than choosing one of several controllers, our work learns a single controller and instead focuses on selecting among multiple experts (see Sec. 2.2). In cases where the performance loss is known and inexpensive to compute, the optimization procedure can simply set the expert to be the loss itself. However, in many settings the loss is expensive, and it is advantageous to use an approximation instead, e.g. a state transition model, a value function, or any quantity that gives information about the loss.

2.2 OPTIMIZING COMPUTATIONAL COST

Given a controller and one or more experts, two important decisions must be made. First, how many optimization iterations should be performed? The approximate solution usually improves with more iterations, but each iteration costs computational resources. Traditional optimizers, however, either ignore the cost of computation or select the number of iterations using simple heuristics for balancing the cost of computation against the performance loss; the overall effectiveness of these approaches is subject to the skill and preferences of the practitioners who use them. Second, which expert should be used on each step of the optimization? Experts may be more accurate but expensive to compute, in terms of time, energy, or money, while others may be crude yet cheap; moreover, the reliability of the experts may not be known a priori, limiting the effectiveness of the optimization procedure. We use a metacontroller to address both issues by jointly optimizing the choices of how many steps to take and which experts to use. Consider a family of optimizers which all use the same controller but vary in their expert evaluators. Assuming the controller and experts are deterministic functions, the number of iterations N and the sequence of experts k1, ..., kN exactly determine the final control and hence the performance loss, which can therefore be treated as a function of the optimizer itself.
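To make this joint choice concrete, here is a toy exhaustive-search sketch of the trade-off just described. All numbers, names, and the per-consultation loss model are hypothetical illustrations, and the paper optimizes this objective with a learned manager rather than by enumeration:

```python
from itertools import product

def total_loss(perf_loss, ponder_costs, expert_seq):
    """Total loss for a fixed controller: task performance of the final
    control plus the summed cost of each expert consultation."""
    return perf_loss(expert_seq) + sum(ponder_costs[k] for k in expert_seq)

def best_plan(perf_loss, ponder_costs, max_steps):
    """Exhaustively search over the number of iterations N and the sequence
    of experts k_1..k_N (tractable only for tiny toy problems)."""
    experts = list(ponder_costs)
    best, best_seq = float("inf"), ()
    for n in range(max_steps + 1):
        for seq in product(experts, repeat=n):
            loss = total_loss(perf_loss, ponder_costs, seq)
            if loss < best:
                best, best_seq = loss, seq
    return best, best_seq

# Toy performance model: each consultation of the accurate expert halves
# the remaining task loss; the cheap expert only reduces it by 20%.
def perf(seq, base=1.0):
    for k in seq:
        base *= 0.5 if k == "accurate" else 0.8
    return base

loss, seq = best_plan(perf, {"accurate": 0.05, "cheap": 0.01}, max_steps=6)
```

Under these toy costs, the optimum mixes both experts: it is cheaper to refine first with the accurate expert and then finish with the inexpensive one than to use either alone.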
The computational resource loss of running the optimization is likewise exactly determined by these choices, and the total loss is the sum of the two functions, with the optimal solution defined as the arg min over N and k1, ..., kN. Optimizing this directly is difficult because of the recursive dependency on the history, and the discrete choices mean the objective is not differentiable. We therefore recast the problem as a reinforcement learning objective that jointly optimizes task performance and computational cost. As shown in Figure 1, the metacontroller agent is comprised of a controller, a pool of experts, a manager, and a memory. The manager is a policy (Russell and Wefald, 1991; Hay et al., 2012) over actions, indexed by k, which determine whether to terminate the optimization procedure or to perform another iteration of it with a particular expert. Specifically, on the n-th iteration the controller produces a new control based on the history of controls and expert evaluations; the manager, also relying on the history, independently decides whether to end the optimization procedure and execute the control in the world, or to perform another iteration and evaluate the proposed control with the k_n-th expert ("ponder", after Graves, 2016). The memory updates the history by concatenating it onto the previous history. Coming back to the notion of optimization, we suggest that this iterative optimization process is analogous to imagining what will happen using one or more approximate world models before actually executing an action in the world (details in the appendix; for an algorithmic illustration of the metacontroller agent, see Algorithm 1 in the appendix). We also define two special cases of the metacontroller as baseline comparisons: the iterative agent, whose manager uses a single expert and a single, fixed number of iterations, and the reactive agent, the special case of the iterative agent in which the number of iterations is fixed at zero, which implies that proposed controls are executed immediately in the world and never evaluated by an expert (for algorithmic illustrations of the iterative and reactive agents, see Algorithms 2 and 3 in the appendix).

3 NEURAL NETWORK IMPLEMENTATION

We use standard deep learning building blocks, e.g. multi-layer perceptrons (MLPs) and RNNs, to implement the controller, experts, manager, and memory, because they are effective at approximating complex functions; reinforcement learning approaches could be used as well. In particular, we constructed an implementation able to make control decisions for complex dynamical systems, such as controlling the movement of a spaceship (Figure 1), though we note the approach is not limited to physical reasoning tasks. We used mean squared error (MSE) losses and the Adam optimizer (Kingma and Ba, 2014) for training. Experts: we implemented the experts as MLPs and as interaction networks.
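A minimal, non-authoritative Python sketch of the manager/controller/expert/memory loop just described (Algorithm 1 in the appendix); all function names here, and the toy manager, controller, and expert used to exercise the loop, are hypothetical stand-ins rather than the paper's implementation:

```python
def metacontroller_episode(scene, goal, manager, controller, experts, memory,
                           max_ponders=10):
    """One episode: the manager repeatedly decides whether to execute the
    current control in the world or to 'ponder', i.e. evaluate a proposed
    control with one of the experts; the memory folds each
    (control, opinion, expert) triple into the history."""
    history = None                              # empty initial history h_0
    control = controller(scene, goal, history)
    for _ in range(max_ponders):
        action = manager(scene, goal, history)  # 'execute' or an expert index
        if action == "execute":
            break
        opinion = experts[action](scene, goal, control)
        history = memory(history, (control, opinion, action))
        control = controller(scene, goal, history)
    return control

# Tiny stand-ins: a manager that ponders twice with expert 0, a controller
# that follows the latest opinion, and an expert that nudges the control
# halfway toward the goal.
def manager(scene, goal, history):
    n = 0 if history is None else len(history)
    return 0 if n < 2 else "execute"

def controller(scene, goal, history):
    return 0.0 if history is None else history[-1][1]

def expert(scene, goal, control):
    return control + 0.5 * (goal - control)

def memory(history, entry):
    return (history or ()) + (entry,)

final = metacontroller_episode(None, 1.0, manager, controller,
                               experts=[expert], memory=memory)
```

With these stand-ins, two ponder steps move the control from 0.0 to 0.5 and then to 0.75 before the manager chooses to execute.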
Interaction networks (INs; Battaglia et al., 2016) are well suited to predicting complex dynamical systems like the one in our experiments. The expert parameters may be trained either using the outputs of the controller or, as in the case of this paper, from data of (state, control, future state or reward outcome) pairs, with an objective L_e^k that may be different depending on what the expert outputs: for example, the objective could be a loss between the goal and predicted future states, or, as we use in some experiments, a loss on a function that predicts the performance loss directly (see the appendix for details). Controller and memory: we implemented the controller as an MLP and the memory as a long short-term memory network (LSTM; Hochreiter and Schmidhuber, 1997) which embeds the history into a fixed-length vector. The controller and memory are trained jointly to optimize the total loss; however, because the objective includes terms that are often unknown or non-differentiable, we overcame this by approximating it with a differentiable critic, analogous to those used in policy gradient methods (Silver et al., 2014; Lillicrap et al., 2015; Heess et al., 2015) (see the appendices for details). Manager: we implemented the manager as a stochastic policy that samples from a categorical distribution whose weights are produced by an MLP. We trained the manager to minimize the total loss using REINFORCE (Williams, 1992); deep RL algorithms could be used instead (see the appendix for details).

4 EXPERIMENTS

We evaluated the metacontroller agent by measuring its ability to learn to solve a class of physics-based tasks which are surprisingly challenging. Each episode consisted of a scene which contained a spaceship and multiple planets (Figure 1). The spaceship's goal was to rendezvous with its mothership near the center of the system in exactly a fixed number of time steps, but it only had enough fuel to fire its thrusters once. The planets were static, but the gravitational force they exerted on the spacecraft induced complex dynamics in its motion over the steps of simulation. The spacecraft's action space was continuous, up to a maximum magnitude, and was represented as an instantaneous Cartesian velocity vector imparted by its thrusters (details in the appendix). We trained the reactive, iterative, and metacontroller agents on five versions of the spaceship task involving different numbers of planets. The iterative agent was trained to take anywhere from zero (the reactive agent) to ten ponder steps, and the metacontroller was allowed to take a maximum of ten ponder steps. We considered three different experts: a differentiable MLP expert, which used an MLP to predict the final location of the spaceship; an IN expert, which used an interaction network (Battaglia et al., 2016) to predict the full trajectory of the spaceship; and the true simulation
expert, i.e. the world model itself. In some conditions the metacontroller could use exactly one expert, while in others it was allowed to select between the MLP and IN experts. In experiments with the true simulation expert, the expert was used to backpropagate gradients to the controller and memory; in experiments with the MLP expert, a learned critic was used; and in experiments with one or more experts, the critic shared parameters with an expert. We trained the metacontroller with a range of different ponder costs for the different experts (details of the training procedure are available in the appendix).

4.1 REACTIVE AND ITERATIVE AGENTS

Figure 2 shows the performance on the test set of the reactive and iterative agents with different numbers of ponder steps. The reactive agent performed poorly on the task, especially when the task was difficult, as in the five-planets dataset (see Figure 1 for a depiction of the magnitude of the loss). In contrast, the iterative agent with the true simulation expert performed much better, reaching ceiling performance. The datasets are available at https.

Figure 2: Test performance of the reactive and iterative agents as a function of the number of ponder steps. Each line corresponds to the performance of an iterative agent with either the true simulation expert, the MLP expert, or the interaction net expert, trained with a fixed number of ponder steps on one of the five datasets (one to five planets); the line color indicates the dataset the controller was trained on, and in all cases "performance" refers to the performance loss. Left: the MLP expert struggles with the task due to its limited expressivity, but still benefits from pondering. Middle: the interaction net expert performs almost as well as the true simulation expert, even though it is not a perfect model. Right: the true simulation expert does quite well on the task, especially with multiple ponder steps, achieving low performance loss even on the datasets with only one or two planets.

The MLP and IN experts also improved over the reactive agent on the five-planets dataset. Figure 2 also highlights the importance of the choice of expert: while using the true simulation and IN experts the iterative agent performs well, with the MLP expert its performance is substantially diminished. Despite the poor performance of the MLP expert, however, there is still a benefit to pondering with it: with even a couple of steps, the MLP iterative agent outperforms its reactive
counterpart. However, comparing the reactive agent with the iterative agent is somewhat unfair, because the iterative agent has more parameters due to its expert and memory. However, given that this tends to increase performance only for the first one or two ponder steps and not beyond, it is clear that pondering with even a highly inaccurate expert can still lead to better performance than the reactive approach.

4.2 METACONTROLLER WITH ONE EXPERT

Though the iterative agents achieve impressive results, they expend more computation than necessary. For example, in the one- and two-planet conditions, the true-simulation iterative agents received little performance benefit from pondering more than two or three steps, while the four- and five-planet conditions required at least five to eight steps before performance converged. When computational resources have a cost, the number of steps becomes a concern, and it is important to be economical: the metacontroller learns to choose the number of pondering steps to balance the performance loss against the cost of computation. Figure 3 (top row, middle and right subplots) shows that, with the true simulation expert, the metacontroller takes fewer ponder steps as the ponder cost increases, tracking closely the minimum of the iterative agents' cost curves; the metacontroller's points are always near the iterative agent curves' minima. This adaptive behavior emerges automatically from the manager's learned policy and avoids the need to perform a hyperparameter search to find the best number of iterations for a given ponder cost. Nor does the metacontroller simply choose an average number of ponder steps to take per episode: it actually tailors this choice to the difficulty of each episode. Figure 4 shows how the number of ponder steps the metacontroller chooses in each episode depends on the episode's difficulty, as measured by the loss of a reactive agent on that episode. For more difficult episodes, the metacontroller tends to take more ponder steps, as indicated by the positive slopes of the best-fit lines, and this proportionality persists across different levels of the ponder cost (subplots).

Figure 3: Test performance of the metacontroller with a single expert on the five-planets dataset, shown as task loss and total cost for the MLP, interaction net, and true simulation experts as a function of the number of ponder steps. Each column corresponds to a different expert. Lines indicate the performance of iterative agents with different numbers of ponder steps; points indicate the performance of metacontrollers, with each point corresponding to a different value of the ponder cost and placed at the average number of ponder
steps and average loss. The top row shows the total cost, rather than the performance on the task alone, i.e. including the computation cost. Different colors show the results for different ponder costs, with different lines showing the cost of the iterative controller under those different values; error bars on the metacontroller points indicate confidence intervals. A point below its corresponding curve means the metacontroller was able to achieve a better cost than is achievable by the iterative agent; line colors of increasing brightness correspond to increasing values of the ponder cost. The bottom row shows the performance loss without the computational cost, with each point corresponding to a different ponder cost. The fact that points lie below the curve means that the metacontroller agent learns to perform better than an iterative agent with the equivalent number of ponder steps.

The ability to adapt the choice of the number of ponder steps on a per-episode basis is valuable because it allows the metacontroller to spend additional computation only on those episodes which require it. The total costs of the true-simulation metacontrollers were lower than the median and best achievable costs of the corresponding iterative agents, respectively, across the range of ponder-cost values we tested (see the figure in the appendix for details). There is even a benefit to using the metacontroller when computational resources have no cost: consider the rightmost points of Figure 3 (bottom row, middle and right subplots), which show the performance loss of the true-simulation metacontrollers when the ponder cost is very low. Remarkably, these points still outperform the best achievable iterative agents, which suggests that there is an advantage to stopping pondering once a good solution is found, and more generally demonstrates that the metacontroller's learning process can lead to strategies superior to those available to less flexible agents. Because the metacontroller with the MLP expert had poor average performance and high variance in the five-planet condition (Figure 3, top-left subplot), we restricted our focus in this section to metacontrollers with the true simulation expert. How the metacontroller behaves when an expert such as the MLP performs poorly is crucial, however, and the following section analyzes how the multiple-expert metacontroller manages experts that vary in reliability.

4.3 METACONTROLLER WITH MULTIPLE EXPERTS

When we allow the manager to additionally choose between two experts, rather than relying on a single expert, we find a similar pattern of results in terms of the number of ponder steps (Figure 5, left). Additionally, the metacontroller is successfully able to identify the more reliable IN network, and consequently uses it the majority of the time, except in cases where the cost of the IN network is
extremely high relative to the cost of the MLP network (Figure 5, right). This pattern of results makes sense given the good performance, described in the previous section, of the metacontroller with the IN expert compared to the poor performance of the metacontroller with the MLP expert: the manager should generally not rely on the MLP expert, which is simply a less reliable source of information. However, the metacontroller had difficulty finding the optimal balance between the two experts on a per-episode basis, and the addition of the second expert did not yield much improvement for the metacontroller, with only some of the versions trained at different cost values for the two experts achieving lower loss than the best iterative controller. We believe the mixed performance of the metacontroller with multiple experts is partially due to the entropy term used to encourage exploration in the manager's policy (see the appendix). In particular, for high ponder-cost values the optimal thing to do is to always execute immediately without pondering, but the entropy term encourages the manager toward a non-deterministic policy, which is therefore more likely to ponder and use the experts even when that is suboptimal in terms of the total loss. Despite the fact that the metacontroller with multiple experts did not result in a substantial improvement over the one that uses a single expert, we emphasize that the manager was able to identify and use the more reliable expert the majority of the time, while still being able to choose a variable number of steps according to how difficult the task is (Figure 5, left). This is an improvement over traditional optimization methods, which would require the expert to be chosen ahead of time and the number of steps to be determined heuristically.

Figure 4: The relationship between the number of ponder steps and task difficulty for the metacontroller at low, medium, and high ponder cost. In each subplot, the x-axis represents episode difficulty, measured by the reactive controller's loss, and the y-axis represents the number of ponder steps the metacontroller took. Points are individual episodes, and the line is the best-fit regression line with confidence intervals; the different subplots show different ponder-cost values, labeled in the title. In each case there is a clear positive relationship between the difficulty of the task and the number of ponder steps, suggesting that the metacontroller learns to spend more time on hard problems and less time on easier problems. At the bottom of each plot are the fitted slope and correlation coefficient values, along with their confidence intervals in brackets.
Figure 5: Test performance of the metacontroller with multiple experts on the five-planets dataset, as a function of the costs of the MLP and interaction net experts. Left: the average number of total ponder steps for different values of the two costs; fewer ponder steps are taken when the costs are high, and more are taken when the costs are low. Right: the fraction of ponder steps taken with the MLP expert relative to the IN expert, along with the total number of samples. In the majority of cases the metacontroller favors using the IN expert, which is much more reliable; the exceptions (red squares) are cases where the cost of the IN expert was much higher relative to the cost of the MLP expert.

5 DISCUSSION

In this paper we presented an approach to adaptive, imagination-based optimization in neural networks. Our approach is able to flexibly choose which computations to perform as well as how many computations need to be performed, approximately solving for how much computation each example deserves depending on the difficulty of the task. In this way, our approach learns to rely on whatever source of information is most useful and most efficient. Additionally, by consulting the experts, our approach allows agents to test out actions to ensure that their consequences are not disastrous before actually executing them. While the experiments in this paper involve a one-shot decision task, our approach lays a foundation that can be built upon to support more complex situations. For example, rather than applying a force only at the first time step, we could turn the problem into one of trajectory optimization for continuous control by asking the controller to produce a sequence of forces. In the case of planning, our approach could potentially be combined with methods like Monte Carlo tree search (MCTS; Coulom, 2006), where the experts would be akin to several different rollout policies to choose from, and the controller would be akin to the tree policy. While most MCTS implementations run rollouts until a fixed amount of time has passed, our approach would allow the manager to adaptively choose the number of rollouts to perform and which policies to perform the rollouts with. Our method could also be used to naturally augment existing approaches such as DQN (Mnih et al., 2015) with online optimization, by using the policy as a controller and adding additional experts in the form of models. An interesting extension would be to compare our metacontroller architecture to a controller that performs gradient-based optimization to produce the final control: we expect the metacontroller architecture might require fewer model evaluations and be more robust to model inaccuracies than such a method, because our method has access to the full history of proposed controls and evaluations, whereas traditional methods
do not. And although we do not rely on differentiable experts, the metacontroller architecture can also utilize gradient information from the experts; an interesting extension of this work would be to pass gradient information to the manager and controller as well (as in Andrychowicz et al., 2016), which would likely improve performance, especially in more complex situations than those discussed here. Another possibility would be to train the experts inline with the controller and metacontroller rather than independently, which could allow their learned functionality to be more tightly integrated with the rest of the optimization loop, at the expense of generality and of the ability to be repurposed for other uses. To conclude, we have demonstrated that neural agents can use metareasoning to adaptively choose what to think about, how to think about it, and for how long to think. The method is directly inspired by human cognition and suggests a way to make agents much more flexible and adaptive than they currently are, both in decision-making tasks such as the one described here, as well as in planning and control settings more broadly.

ACKNOWLEDGMENTS

We would like to thank Matt Hoffman, Andrea Tacchetti, Tom Erez, Nando de Freitas, Guillaume Desjardins, Joseph Modayil, Hubert Soyer, Alex Graves, David Reichert, Theo Weber, Jon Scholz, Will Dabney, and others on the DeepMind team for helpful discussions and feedback.

REFERENCES

Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. 2016.
Peter Battaglia, Razvan Pascanu, Matthew Lai, Danilo Jimenez Rezende, and Koray Kavukcuoglu. Interaction networks for learning about objects, relations and physics. In Advances in Neural Information Processing Systems, 2016.
Peter W. Battaglia, Jessica B. Hamrick, and Joshua B. Tenenbaum. Simulation as an engine of physical scene understanding. Proceedings of the National Academy of Sciences, 2013.
Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, and Doina Precup. Conditional computation in neural networks for faster models. 2015.
Yoshua Bengio. Deep learning of representations: looking forward. 2013.
Rémi Coulom. Efficient selectivity and backup operators in Monte-Carlo tree search. In International Conference on Computers and Games. Springer, 2006.
Katerina Fragkiadaki, Pulkit Agrawal, Sergey Levine, and Jitendra Malik. Learning visual predictive models of physics for playing billiards. In Proceedings of the International Conference on Learning Representations (ICLR), 2016.
Jan Gläscher, Nathaniel Daw, Peter Dayan, and John
O'Doherty. States versus rewards: dissociable neural prediction error signals underlying model-based and model-free reinforcement learning. Neuron, 2010.
Alex Graves. Adaptive computation time for recurrent neural networks. 2016.
Jessica B. Hamrick, Kevin A. Smith, Thomas L. Griffiths, and Edward Vul. Think again? The amount of mental simulation tracks uncertainty in the outcome. In Proceedings of the Annual Conference of the Cognitive Science Society, 2015.
Nicholas Hay, Stuart Russell, David Tolpin, and Solomon Eyal Shimony. Selecting computations: theory and applications. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, 2012.
Nicolas Heess, Gregory Wayne, David Silver, Tim Lillicrap, Tom Erez, and Yuval Tassa. Learning continuous control policies by stochastic value gradients. In Advances in Neural Information Processing Systems, 2015.
Mary Hegarty. Mechanical reasoning by mental simulation. Trends in Cognitive Sciences, 2004.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 1997.
Matthew Hoffman, Eric Brochu, and Nando de Freitas. Portfolio allocation for Bayesian optimization. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, 2011.
Eric Horvitz. Reasoning about beliefs and actions under computational resource constraints. In Uncertainty in Artificial Intelligence, 1988.
Philip Johnson-Laird. Mental models and human reasoning. Proceedings of the National Academy of Sciences, 2010.
Diederik Kingma and Jimmy Ba. Adam: a method for stochastic optimization. 2014.
Wouter Kool, Fiery Cushman, and Samuel Gershman. When does model-based control pay off? PLOS Computational Biology, in press.
Sang Wan Lee, Shinsuke Shimojo, and John O'Doherty. Neural computations underlying arbitration between model-based and model-free learning. Neuron, 2014.
Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. Journal of Machine Learning Research, 2016.
Falk Lieder and Thomas L. Griffiths. Strategy selection as rational metareasoning. Under revision.
Falk Lieder, Dillon Plunkett, Jessica B. Hamrick, Stuart Russell, Nicholas Hay, and Thomas L. Griffiths. Algorithm selection by rational metareasoning as a model of human strategy selection. 2014.
Timothy Lillicrap, Jonathan Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. 2015.
Volodymyr Mnih, Koray
Kavukcuoglu, David Silver, Andrei Rusu, Joel Veness, Marc Bellemare, Alex Graves, Martin Riedmiller, Andreas Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 2015.
Stuart Russell and Eric Wefald. Principles of metareasoning. Artificial Intelligence, 1991.
Jürgen Schmidhuber. An on-line algorithm for dynamic reinforcement learning and planning in reactive environments. In Proceedings of the International Joint Conference on Neural Networks (IJCNN), 1990.
Jürgen Schmidhuber. Reinforcement learning in Markovian and non-Markovian environments. In Advances in Neural Information Processing Systems, 1991.
Bobak Shahriari, Ziyu Wang, Matthew W. Hoffman, Alexandre Bouchard-Côté, and Nando de Freitas. An entropy search portfolio for Bayesian optimization. 2014.
David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. In Proceedings of the International Conference on Machine Learning, 2014.
David Silver, Aja Huang, Chris Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 2016.
Aviv Tamar, Sergey Levine, and Pieter Abbeel. Value iteration networks. In Advances in Neural Information Processing Systems, 2016.
Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 1992.
Daniel Wolpert and Mitsuo Kawato. Multiple paired forward and inverse models for motor control. Neural Networks, 1998.

APPENDIX: METACONTROLLER DETAILS

Here we give precise definitions of the metacontroller agent described in the main text; the iterative and reactive agents are special cases of the metacontroller agent and are therefore covered by the same discussion. The metacontroller agent is comprised of the following components. The controller is a policy which maps goal and initial states and a history to controls, and whose aim is to minimize the performance loss. The pool of experts contains experts which map goal states, input states, and actions to opinions, where the opinions may be either states or rewards; each expert corresponds to an evaluator of the optimization routine, i.e. an approximation of the forward process. The manager is a policy which decides whether to send the proposed control to the world or to an expert for evaluation, in order to minimize the total loss. This formulation is based on those used in metareasoning systems (Russell and Wefald, 1991; Hay et al., 2012); details of the corresponding
meta-level MDP are given below. Finally, the memory is a function which maps the prior history, as well as the most recent manager choice, proposed control, and expert evaluation, to an updated history, which is made available to the manager and controller on subsequent iterations. The history at step n is recursively defined as a tuple concatenating the prior history with the most recently proposed control, expert evaluation, and expert identity, with h_0 representing the empty initial history (and, similarly, with a finite set of possible histories at each step). At each step the metacontroller produces a function of this history, as summarized in Algorithm 1; the iterative and reactive agents mentioned in the main text, which are simpler versions of the metacontroller agent, are summarized in Algorithms 2 and 3.

META-LEVEL MDP

To implement the manager of the metacontroller agent, we draw inspiration from the metareasoning literature (Russell and Wefald, 1991; Hay et al., 2012) and formulate the decision of whether to perform another iteration of the optimization procedure or to execute the control in the world as a Markov decision process (MDP). The state space consists of goal states, external states, and internal histories. The action space contains discrete actions which correspond to "execute" and "ponder with expert k", where "ponder" (Graves, 2016) refers to performing an iteration of the optimization procedure with a particular expert. The deterministic state transition model updates the history when a ponder action is taken and terminates the episode otherwise, and the deterministic reward function maps the current state, current action, and next state to a reward: the negative performance loss when the execute action is taken, and the negative cost of the chosen expert otherwise. An approximate solution to this MDP is a stochastic manager policy under which the manager chooses actions in proportion to the immediate reward of taking the action in the current state plus the expected sum of future rewards. By construction, this imposes a trade-off between accuracy and resources, incentivizing the agent to ponder for longer, and with more accurate but potentially expensive experts, when the problem is harder.
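The meta-level MDP just described can be sketched as a single step function. This is an illustrative reconstruction rather than the paper's code, and the state layout, helper signatures, and toy controller/expert/loss used below are assumptions:

```python
def meta_step(state, action, controller, experts, loss_fn, tau):
    """One step of a meta-level MDP in the spirit of the one above: a
    'ponder' action k consults expert k and extends the history, paying
    the expert's cost; 'execute' is terminal and incurs the negative task
    performance loss of the current control."""
    goal, world_state, history, control = state
    if action == "execute":
        reward = -loss_fn(world_state, control, goal)   # performance loss
        return None, reward, True                       # terminal state
    opinion = experts[action](world_state, control, goal)
    history = history + ((control, opinion, action),)
    control = controller(world_state, goal, history)
    next_state = (goal, world_state, history, control)
    return next_state, -tau[action], False              # pay the ponder cost

# Toy usage: one ponder step with expert 0, then execute.
controller = lambda world, goal, history: history[-1][1]
expert = lambda world, control, goal: 0.5 * (control + goal)
loss_fn = lambda world, control, goal: abs(goal - control)

state = (1.0, None, (), 0.0)
state, r1, done1 = meta_step(state, 0, controller, [expert], loss_fn, tau=[0.1])
_, r2, done2 = meta_step(state, "execute", controller, [expert], loss_fn,
                         tau=[0.1])
```

Pondering here costs 0.1 but halves the distance to the goal, so the terminal reward improves from -1.0 (executing immediately) to -0.5.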
potentially expensive experts problem harder published conference paper iclr history scene manager controller reinforce control action history scene manager controller control action predicted performance loss switch switch critic world expert expert expert world expert opinion expert expert opinion performance loss resource loss memory memory scene scene history history history history history scene manager scene manager controller control action history predicted performance loss controller action control switch switch critic world expert expert expert world expert expert expert opinion outcome performance loss outcome performance loss opinions memory memory scene scene history history history history figure training part network subplot red arrows depict gradients dotted arrows indicate backward connections part forward pass colored nodes indicate weights updated backpropagation occurs end full forward pass control executed world training controller memory bptt beginning critic flowing controller memory relevant expert controller training manager using einforce williams training experts note expert may different loss respect outcome world training critic radients xperts training experts straightforward supervised learning problem figure gradient published conference paper iclr expert lek loss function expert example case function expert loss function might lek case expert predicts final state using model system dynamics might lek ritic critic approximate model performance loss used backpropagate gradients controller memory means critic either function approximates directly model system dynamics composed known loss function goal future states train critic using procedure experts trained figure good expert may even used critic ontroller emory shown figure trained controller memory using backpropagation time bptt architecture specifically rather assuming known differentiable use critic backpropagate heess critic maximum number iterations controller use using 
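The manager is trained with REINFORCE (Williams). A standalone NumPy sketch of the score-function gradient for a softmax policy is given below, including an entropy bonus of the kind the authors add to discourage near-deterministic policies; the array shapes and the coefficient value are assumptions, not the paper's settings.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()
    p = np.exp(z)
    return p / p.sum()

def reinforce_grad(logits, action, ret, entropy_coef=0.01):
    """Gradient w.r.t. logits of the loss -(return * log pi(action)) - c * H(pi)."""
    p = softmax(logits)
    one_hot = np.zeros_like(p)
    one_hot[action] = 1.0
    grad_logp = one_hot - p                 # d log pi(action) / d logits
    # For a softmax, dH/dlogits_j = -p_j * (log p_j - sum_i p_i log p_i)
    grad_entropy = -p * (np.log(p) - np.sum(p * np.log(p)))
    return -(ret * grad_logp) - entropy_coef * grad_entropy
```

At a uniform policy the entropy gradient vanishes, so the update reduces to the plain REINFORCE direction scaled by the return.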
notation indicate summed gradients following pascanu since already produced manager treated constant produce unbiased estimate gradient convenient allows training controller manager separately testing controller behavior arbitrary actions anager discussed main text used einforce algorithm williams train manager figure one potential issue however training controller manager simultaneously controller result high cost early training thus manager learn always choose execute action discourage manager learning essentially deterministic policy included regularization term based entropy williams peng mnih log log full return given strength regularization term paceship task datasets generated five datasets containing scenes different number planets ranging single planet five planets dataset consisted training scenes testing scenes target scene always located origin scene always sun mass units sun located distance units away target distance sampled uniformly random planets mass units located distance units away target sampled uniformly random spaceship mass units located distance units away target planets always fixed could move spaceship always started beginning episode zero velocity published conference paper iclr nvironment simulated scenes using physical simulation gravitational dynamics planets always stationary acted upon objects scene acted upon spaceship force force vector planet spaceship gravitational constant mass planet mass spaceship distance centers masses planet spaceship location planet location spaceship simulated environment using euler method dvs acceleration velocity position spaceship respectively damping constant control force applied spaceship step size note set zero timesteps except first mplementation etails used tensorflow abadi implement train versions model rchitecture implementation controller used mlp units first layer used relu activations second layer used multiplicative interaction similar van den oord found work better practice 
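The Euler-method simulation described above can be sketched as follows. The gravitational constant, step size, damping value, and the tuple representation of planets are placeholders rather than the paper's actual settings; as in the text, the control force is applied only on the first timestep and the planets stay fixed.

```python
import numpy as np

def simulate(pos, vel, m_ship, planets, control, G=1.0, dt=0.05, damping=0.0, steps=100):
    """Euler integration of a spaceship under gravity from fixed planets.

    planets: list of (position, mass) pairs; control: force applied at step 0 only.
    """
    pos, vel = np.array(pos, float), np.array(vel, float)
    for t in range(steps):
        force = np.zeros(2)
        for p_pos, p_mass in planets:
            d = np.asarray(p_pos, float) - pos
            r = np.linalg.norm(d)
            force += G * p_mass * m_ship * d / r**3   # |F| = G m M / r^2, toward planet
        if t == 0:
            force += np.asarray(control, float)       # control force on the first step
        acc = force / m_ship - damping * vel          # damped acceleration (assumption)
        vel = vel + dt * acc                          # Euler updates
        pos = pos + dt * vel
    return pos, vel
```

With no planets and a unit control impulse, one step of size dt produces velocity dt and displacement dt² along the control direction, which is a quick sanity check on the integrator.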
implementation memory used single lstm layer size implementation manager used mlp two fully connected layers units relu nonlinearities constructed three different experts test various controllers true simulation expert world model consisted simulation timesteps see appendix expert interaction network battaglia previously shown able learn predict dynamics accurately simple systems consists relational module object module case relational module composed hidden layers nodes outputting effects encodings size effects together relational model input used input object model contained single hidden layer nodes object model outputs velocity spaceship trained predict velocity every timestep spaceship trajectory mlp expert mlp predicted final location spaceship architecture controller discussed appendix used critic train controller memory always used expert critic except case true simulation expert used case also used true simulation critic raining rocedure weights initialized uniformly random iteration training consisted gradient updates minibatch size total ran training iterations additionally used waterfall schedule learning rates training iterations loss decreasing would decay step size trained controller memory together using adam optimizer kingma gradients clipped maximum global norm pascanu manager trained simultaneously using different learning rate controller memory mlp experts also trained simultaneously different learning rates learning rates determined using grid search small number values given table iterative agent table metacontroller one expert table metacontroller two experts published conference paper iclr iterative agent trained take fixed number ponder steps ranging reactive agent metacontrollers allowed take variable number ponder steps maximum metacontroller single expert trained manager using additional values spaced logarithmically inclusive metacontroller multiple experts trained manager grid pairs values expert could one values spaced logarithmically 
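Gradients were clipped to a maximum global norm (Pascanu et al.). A standalone NumPy sketch of the rescaling rule — all gradient arrays are scaled by the same factor whenever their joint L2 norm exceeds the threshold — is:

```python
import numpy as np

def clip_by_global_norm(grads, max_norm):
    """Rescale a list of gradient arrays so their joint L2 norm is at most max_norm."""
    global_norm = np.sqrt(sum(float(np.sum(g * g)) for g in grads))
    if global_norm > max_norm:
        scale = max_norm / global_norm
        grads = [g * scale for g in grads]
    return grads, global_norm
```

Because a single scale is shared across all arrays, the direction of the overall update is preserved; only its magnitude is capped.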
inclusive cases entropy penalty metacontroller onvergence reactive agent datasets training reactive agents straightforward converged reliably iterative agent iterative agent interaction network true simulation experts convergence also reliable small numbers ponder steps convergence somewhat less reliable larger numbers ponder steps believe scenes larger number ponder steps necessary solve task evidenced plateauing performance figure iterative agent effectively remember best control took last ponder steps complicated difficult task perform iterative agent mlp expert convergence variable especially task harder seen variable performance five planets dataset figure left believe mlp agent poor convergence would reliable better agent metacontroller single expert metacontroller agent single expert converged reliably corresponding iterative agent see bottom row figure mentioned previous paragraph iterative agent take steps actually necessary causing perform less well larger numbers ponder steps whereas metacontroller agent flexibility stopping found good control hand found metacontroller agent sometimes performed many ponder steps large values see figures believe due entropy term added einforce loss ponder cost high optimal thing behave deterministically always execute never ponder however entropy term encouraged policy nondeterministic plan explore different training regimes future work alleviate problem example annealing entropy term zero course training metacontroller multiple experts metacontroller agent multiple experts somewhat difficult train especially high ponder cost interaction network expert example note proportion steps using mlp expert decrease monotonically figure right increasing cost mlp expert believe also unexpected result using entropy term cases optimal thing actually rely mlp expert time yet entropy term encourages policy future work explore difficulties using experts complement better one wholly better experts experts always converged quickly 
and reliably, and trained much faster than the rest of the network.
References
M. Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. TensorFlow: machine learning on heterogeneous systems. Software available online.
Peter Battaglia, Razvan Pascanu, Matthew Lai, Danilo Jimenez Rezende, and Koray Kavukcuoglu. Interaction networks for learning about objects, relations and physics. Advances in Neural Information Processing Systems.
Alex Graves. Adaptive computation time for recurrent neural networks.
[Table: hyperparameter values for the iterative controller, with one row per dataset (one to five planets) and per number of ponder steps; columns give the learning rates for the controller/memory, the interaction-network expert, and the MLP expert.]
[Figure: cost of the best iterative controller compared with the managed controller, for the true-simulation and interaction-network experts. Each point represents the total cost of the best iterative agent for a particular value of the ponder cost versus the total cost achieved by a metacontroller trained with that value. The best iterative agent was chosen by computing the cost for different numbers of ponder steps and choosing whichever number of ponder steps yielded the lowest cost, i.e., finding the minimum of the curves in the top row of the earlier figure.] In almost all cases the managed controller achieves a lower loss than the iterative controller: the metacontroller
expert's cost was lower than the iterative controller's on average, and the metacontroller with the true-simulation expert was lower on average as well.
Nicholas Hay, Stuart Russell, David Tolpin, and Solomon Eyal Shimony. Selecting computations: theory and applications. Proceedings of the Conference on Uncertainty in Artificial Intelligence.
Nicolas Heess, Gregory Wayne, David Silver, Tim Lillicrap, Tom Erez, and Yuval Tassa. Learning continuous control policies by stochastic value gradients. Advances in Neural Information Processing Systems.
Diederik Kingma and Jimmy Ba. Adam: a method for stochastic optimization.
Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. Proceedings of the International Conference on Machine Learning.
Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. Proceedings of the International Conference on Machine Learning.
Stuart Russell and Eric Wefald. Principles of metareasoning. Artificial Intelligence.
Aaron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and Koray Kavukcuoglu. Conditional image generation with PixelCNN decoders.
Ronald Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning.
Ronald Williams and Jing Peng. Function optimization using connectionist reinforcement learning algorithms. Connection Science.
[Table: hyperparameter values for the metacontroller with a single expert (true-simulation and MLP variants); columns give the ponder cost and the learning rates for the controller/memory, the manager, the expert, and the MLP expert.]
[Table: hyperparameter values for the metacontroller with two experts; columns give the ponder costs for the interaction-network and MLP experts and the learning rates for the controller/memory, the manager, the expert, and the MLP expert.]
proceedings informs simulation society research workshop lee kuhl fowler robinson eds investigating output accuracy discrete event simulation model agent based simulation model mazlina abdul majid uwe aickelin school computer science nottingham university nottingham school computer science nottingham university nottingham abstract paper investigate output accuracy discrete event simulation des model agent based simulation abs model purpose investigation find simulation techniques best one modelling human reactive behaviour retail sector order study output accuracy models carried validation experiment compared results simulation models performance real system experiment carried using large department store case study determine efficient implementation management policy store fitting room using des abs overall found simulation models good representation real system modelling human reactive behaviour introduction simulation become preferred tool operation research modelling complex systems studies human behaviour modelling received increased focus attention simulation research robinson research human behaviour modelling applied various application areas manufacturing health care military many found literature researchers choose either discrete event simulation des agent based simulation abs tools investigate human behaviour problems choice simulation technique used relies individual judgment simulation model characteristics experience model representation human behaviour contains complexity variability therefore investigating systems important choose suitable modelling simulation technique research aim provide empirical study order find simulation modelling technique siebers school computer science nottingham university nottingham good representation real system validation experiment validation experiment compared results traditional des abs models performance real system main difference traditional des abs first one modelling focuses process flow abs modelling focus 
individual entities system interactions human reactive behaviour means certain individual responds certain request example sales staff provides help needed work investigate output accuracy des abs models modelling human reactive behaviour department store statistical tests used compare models content report follows background section gives taxonomy simulation techniques discussion previous related work background section explores theory characteristics three major simulation methods des abs system dynamics section define case study model design validation experimentation presented section also compare simulation models output real world output using quantitative methods addition results experiment also discussed finally section draw conclusions summarize current progress research background several tools techniques used model system modelling process abstracting real world problem modelling tools last three decades simulation become frequently used modelling tool kelton simulation defined process executing model time ability model complex systems made simulation preferred user choice compared mathematical models simulation classified majid aickelin siebers three types discrete event simulation des system dynamics agent based simulation abs des models represent system based series chronological sequencing events event changes state discrete time meanwhile models represent real world phenomenon using stock flow diagrams causal loop diagrams represent number interacting feedback loops differential equations contrast des models abs models comprise number autonomous responsive interactive agents interact order achieve objectives summarise three simulation techniques follows des abs models suitable work discrete event model changes discrete time one event another hand models suitable modelling system continuous state changes found literature performing comparisons des abs models together models terms model characteristics however none currently focusing modelling human 
behaviour one relevant papers comparing simulation technique regards model characteristics wakeland compared abs field biomedical authors found understanding aggregate behaviour models state changes individual entities abs models relevant study des comparisons field fisheries described morecroft robinson found des different approaches suitable modelling systems time existing comparisons des abs described becker field transportation found des less flexible abs difficult model different behaviours shippers des addition existing comparison des abs performed quantitative comparison des abs models outputs field transportation found one work describes three techniques discussed study owen tried establishing framework comparing different modelling techniques also found disparity quantity work comparing abs des contrasted amount work comparing des abs specifically disparity outstanded referring modelling human behavioural focus output accuracy therefore research choose study differences des abs regard study differences models choose focus management practices retail regards worker behaviour research retail previously focused consumer behaviour schenk however research management practices started evolve described siebers discussed existing comparison simulation techniques lot work done compare simulation techniques transportation supply chain management studies focused modelling characteristics hand decided compare accuracy output des abs models management practices currently developing area study retail domain case study fieldwork order achieve aim used case study approach research focused operation main fitting room womenswear department one top ten retailers see figure wanted identify potential impact fitting room performance different numbers sales staff permanently present investigated staff behaviour human reactive behaviour relates staff responding customer available requested case study exploration produced flow chart diagram des conceptual models see figure des 
focus process flow abs conceptual models see figure example state chart diagrams different types people represent case customers staff abs focus individual actors interactions fitting room operation staff reactive behaviours seen three jobs job counting number garments giving green card job providing help lastly job receiving green card unwanted garments customers case study data transformed simulation inputs experimentation experimentation two similar simulation models des abs developed using software known anylogictm conventional queuing system constructed techniques model consists arrival process customers three single queues customer entry queue customer return queue customer help queue resource sales staff run length simulation models one day replicated runs simulation models used inputs simulation model developed one member staff three jobs mentioned workload job counting garments entry workload job providing help workload job counting garments exit along developing simulation models des abs verification validation process performed simultaneously models next proceedings informs simulation society research workshop lee kuhl fowler robinson eds figure illustration main fitting room operation figure flow chart des model describe performed validation experiment validation experiment used black box validation compare simulation outputs des abs real system output using quantitative method using statistical methods comparison able find simulation model good representation real system compared output data observed department store distribution predicted output generated model assuming alternative hypothesis opposite null hypothesis tests thus paper state null hypothesis main hypothesis validation test constructed following des model good representation real system abs model good representation real system figure state chart abs model customer used mean waiting time three queues performance measurement experiment performance data able collect real system two tests 
The setup for answering the hypotheses consists of the two tests described in the following.
Test 1: comparing medians using a non-parametric test. For non-normally distributed data, the observed mean, median, and mode need not take similar values from an identical population. Since the waiting-time data were not normally distributed, we chose to compare medians, a robust measure of central tendency, rather than means, and chose a non-parametric statistical test so as to avoid assuming normality. We constructed the following hypotheses: the DES model is not significantly different from the real system in mean customer waiting time; the ABS model is not significantly different from the real system in mean customer waiting time; and, for variability, the DES model shows similar variability compared with the real system. To perform the test we used open-source statistical software; the median waiting times of the DES and ABS models and of the real system were used for the comparison. We chose a 5% level of significance: if the probability of seeing data this different from what the model predicts is smaller than that threshold we reject the null hypothesis; otherwise we fail to reject it. Put simply, we conclude that the data observed are consistent with the model's predictions, without assuming that the model must therefore be correct. For the test of the DES model against the real system, the p-value was not less than the chosen significance level, so there is insufficient evidence to reject the hypothesis. The test of the ABS model against the real system showed similar results, with a p-value larger than the chosen level, so again we fail to reject the hypothesis. For both simulation models we thus find that the distribution of median waiting times is consistent with the models. We next look at variability: do the waiting times from either simulation model match the variability we observe in the real system?
Test 2: measuring variability. To find whether the DES and ABS models show the same variation as the real system, we plotted the frequency distribution of customer waiting times for a single day, where each day represents one replication of the simulation models and of the real system. [Figure: frequency of customer waiting-time outputs over one day for the real system and the DES and ABS models.] We next calculate the variance, a measure of dispersion of the customer waiting times, which allows us to compare the variability of the simulation outputs with the data of the
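The source does not preserve the name of the non-parametric test used. As an illustration only, a Mann-Whitney U test, a common choice for comparing two independent samples of waiting times without assuming normality, can be sketched in pure Python (normal approximation, no tie correction):

```python
import math

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney U test via the normal approximation (no tie correction)."""
    n1, n2 = len(x), len(y)
    # U = number of (x_i, y_j) pairs with x_i > y_j, counting ties as 0.5
    u = sum((xi > yj) + 0.5 * (xi == yj) for xi in x for yj in y)
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mu) / sigma
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return u, p

# Decision rule matching the paper: fail to reject H0 when p >= 0.05.
```

For identical samples the statistic sits at its expected value and the p-value is 1, so the hypothesis is not rejected; clearly shifted samples drive the p-value below the 5% threshold.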
real system statistical basis study spread frequency distribution customer waiting time spread frequency distribution des abs models shown figure seem slightly different look differences compared variances outputs variance shows close simulation outputs waiting time distribution middle distribution table waiting time outputs des abs real system one day standard deviation variance models real system des abs mean waiting time minutes standard deviation variance looking result table variance des model real system significantly different difference abs model similar real system difference reason differences variance simulation models may due different operation structure des abs models entities des model strong order depending change state meanwhile agents abs model decentralised order change state independently test fail reject hypothesis abs model model produced similar variability real system however reject hypothesis des model model shown dissimilarity variability compared real system validation experiment conclusion conclusion find statistically significant differences des abs models compared real system test however test found des model produced different variability real system abs produced similar variability real system therefore based test result suggest abs model suitable representing behaviour real system operation involving human main focus system nevertheless based validation experiment simulation models good representing real system abs shows better representation real system passed test majid aickelin siebers conclusion future work validation experiments des abs models able demonstrate simulation models good representation real system modelling human reactive behaviour achieve compared output accuracy simulation outputs real system mean customer waiting time statistical methods analyse outputs helped identify statistical significance similarity difference simulation models one another testing suggested even though des abs models produce similar outputs 
test compared medians real system showed different variation model outputs test abs models reflect real system behaviour much better des model terms predicted variability waiting times system modelled des abs typical queuing system extra features complex human behaviour moreover based validation results concluded des abs models good representation real system contains human reactive behaviour investigating outputs behaviour using different sample data different scenarios carried part future work also want evolve experiment modelling human proactive behaviour abs look benefit modelling behaviour queuing system simulation study siebers using intelligent agents understand management practices retail proceedings winter simulation conference dec usa siebers simulation customer proceedings operational research society simulation workshop april worcestershire wills agentbased modelling quantitative approach aiaa aviation technology integration operation conference atio wakeland tale two simulation system biomedical context acute inflammatory response european congress systems science authors biographies mazlina abdul majid phd student school computer science university nottingham interest discrete event simulation agent based simulation email mva uwe aickelin professor school computer science university nottingham interests include simulation heuristic optimization artificial immune system email uxa references becker discrete event simulation autonomous logistic processes proceedings borutzky orsoni zobel eds european conference modelling simulation kelton simulation arena fourth edition new york isbn morecroft robinson comparing discreteevent simulation system dynamics modelling fishery proceedings society simulation workshop robinson taylor brailsford eds owen love simulation tools improving supply chain performance proceedings society simulation workshop robinson simulation pioneers present next journal operational research society schenk simulation consumer behavior 
grocery shopping regional level journal business research peer siebers research fellow school computer science university nottingham main interest includes simulation human oriented complex adaptive system email pos
Convolutional Neural Networks over Control Flow Graphs for Software Defect Prediction. Anh Viet Phan and Minh Le Nguyen, Japan Advanced Institute of Science and Technology, Nomi, Japan (email: anhphanviet, nguyenml); Lam Thu Bui, Le Quy Don Technical University, Ha Noi, Vietnam (email: lambt).
Abstract: Defects in software components are unavoidable and lead not only to a waste of time and money but also to many serious consequences. To build predictive models, previous studies focus on manually extracting features, on using tree representations of programs, and on exploiting different machine learning algorithms. However, the performance of such models is not high, since existing features and tree structures often fail to capture the semantics of programs. To explore programs' semantics more deeply, this paper proposes to leverage precise graphs representing program execution flows, together with deep neural networks, for automatically learning defect features. Firstly, control flow graphs are constructed from the assembly instructions obtained by compiling the source code; thereafter we apply directed graph-based convolutional neural networks (DGCNNs) to learn semantic features. Experiments on four datasets show that our method significantly outperforms the baselines, including several deep learning approaches.
Index terms: defect prediction, control flow graphs, convolutional neural networks.
[Figure 1: motivating example, two source files, file A and file B.]
I. Introduction. Software defect prediction is one of the most attractive research topics in the field of software engineering. Defects increase application costs, cause trouble for users, and can have even more serious consequences during deployment; thus localizing and fixing defects in the early stages of software development is an urgent requirement. In the field of machine learning, the quality of the input data directly affects the performance of the learners. Recently, several software engineering problems have been successfully solved by exploiting tree representations of programs, namely abstract syntax trees (ASTs), which contain rich information about programs; these approaches have shown significant improvements in comparison with earlier research, especially software metrics. For example, Mou et al. proposed a tree-based convolutional neural network to extract structural information from
asts classifying programs defects studies divided two directions one functionalities wang employed deep belief applying machine learning techniques data software network automatically learn semantic features ast metrics manually designed extract features tokens defect prediction kikuchi measured source code using programs tree representations similarities tree structures source code plagiarism detection deep learning automatically learn defect features traditional methods focus designing combining however defect characteristics deeply hidden profeatures programs product metrics grams semantics cause unexpected output based statistics source code example halstead specific conditions meanwhile asts show rics computed numbers operators operands execution process programs instead simply represent metrics measured function inheritance counts abstract syntactic structure source code therefore mccabe metric estimates complexity program software metrics ast features may reveal many analyzing control flow graph however according types defects programs example consider many studies surveys existing metrics often fail procedures name sumton two files file capturing semantics programs result file fig two procedures tiny although many efforts made adopting difference line file line file robust learning algorithms refining data classifier seen bug file statement causes performance high infinite loop statement case whereas ntroduction using rsm extract traditional metrics feature vectors exactly matching since two procedures lines code programming tokens etc similarly parsing procedures asts using asts identical words approaches able distinguish programs explore deeply programs semantics paper proposes combining precise graphs represent execution flows programs called control flow graphs cfgs powerful graphical neural work regarding source code converted execution flow graph two stages compiling source file assembly code generating cfg compiled code applying cfgs assembly code 
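The McCabe metric mentioned above estimates a program's complexity from its control flow graph as M = E − N + 2P (edges, nodes, and connected components or procedures). A minimal sketch, where the example CFG of a single-loop procedure is hypothetical and chosen only for illustration:

```python
def cyclomatic_complexity(nodes, edges, procedures=1):
    """McCabe's cyclomatic complexity: M = E - N + 2P."""
    return len(edges) - len(nodes) + 2 * procedures

# Hypothetical CFG of a loop such as the one in sumton: entry -> test,
# test -> body, body -> test (back edge), test -> exit.
nodes = ["entry", "test", "body", "exit"]
edges = [("entry", "test"), ("test", "body"), ("body", "test"), ("test", "exit")]
```

A single loop gives M = 2 (one decision point plus one), while straight-line code gives M = 1; the metric counts decision structure but, as the motivating example shows, cannot distinguish a correct loop bound from one that never terminates.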
beneficial than ASTs for this task. Firstly, assembly code contains atomic instructions, and CFGs indicate the execution process of programs: compiling the two source files in the figure, we observed that they are translated into different sets of instructions. Secondly, assembly code is the refined product of AST processing, since the compiler applies many techniques to analyze and optimize ASTs; for example, statements whose values can be identified at compile time may be simplified or removed. Meanwhile, ASTs describe the syntactic structures of programs and may contain many redundant branches: if a statement in the procedure sumton of file 2 is changed, the programs still behave the same, but the AST structures show many differences, including the positions of the subtrees of the two functions main and sumton, the separation of the declaration and the assignment of a variable, and the function prototype of sumton in file 2. We therefore leverage a directed graph convolutional neural network (DGCNN) on CFGs to automatically learn defect features. The advantage of DGCNN is that it can treat real-world graphs and process the complex information attached to vertices, as found in CFGs. Experimental results on four datasets show that applying DGCNN to CFGs significantly outperforms the baselines in terms of different measures.

The main contributions of this paper are summarized as follows:
- proposing the application of a graphical data structure, namely the control flow graph (CFG), to software defect prediction;
- experimentally showing that leveraging CFGs is a successful way to build defect classifiers;
- presenting an algorithm for constructing control flow graphs from assembly code;
- formulating a model for software defect prediction in which convolutional neural networks are adopted to automatically learn defect features from CFGs.

The source code, the collected datasets, and the model implementation are publicly available to motivate related studies. The remainder of the paper is organized as follows. Section II explains our approach to the software defect prediction problem, from processing the data to adapting the learning algorithm. The settings for conducting experiments on the datasets, the algorithms, and the evaluation measures are described in Section III. We analyze the experimental results in Section IV and conclude in Section V.

II. PROPOSED APPROACH

This section formulates a new approach to software defect prediction that applies convolutional neural networks to control flow graphs of binary codes. The proposed method, illustrated in the overview figure, includes two steps: generating CFGs to reveal the behavior of programs, and applying a graphical model to the CFG datasets. In the first step, to obtain the graph representation of a program, the source code is compiled into assembly code on a Linux system; a CFG is thereafter constructed to describe the execution flows of the assembly instructions. In the second step, we leverage a powerful deep neural network for directed labeled graphs, the multi-view DGCNN, to automatically build predictive models from the CFG data.

A. Control Flow Graphs

A control flow graph (CFG) is a directed graph G = (V, E), where V is a set of vertices and E is a set of directed edges. Each vertex of a CFG represents a basic block, a linear sequence of program instructions with one entry point (the first instruction executed) and one exit point (the last instruction executed); the directed edges show the control flow paths. This paper aims to formulate a method for detecting faulty source code written in the C language; we regard CFGs of the assembly code, the final product of compiling the source code, as the input from which faulty features are learned by machine learning algorithms. This choice is based on recent research showing that CFGs can be successfully applied to various problems, including malware analysis and software plagiarism detection. Since semantic errors are revealed only when programs are running, analyzing the execution flows of assembly instructions may be helpful in distinguishing faulty patterns from non-faulty ones.

Fig.: An example of a control flow graph constructed from a fragment of assembly code; each node is viewed with the line number and the name of an instruction, and each directed edge shows an execution path between instructions.

The pseudocode to generate CFGs is shown in Algorithm 1, which takes an assembly file as input and outputs a CFG. Building a CFG from assembly code includes two major steps: first, the code is partitioned into blocks of instructions based on labels; second, edges are created to represent the control flow transfers of the program. Specifically, the procedure first reads the file contents and returns the blocks, with the set of edges initially empty. The graph edges are then created by traversing the instructions of each block and considering all possible execution paths from the current instruction.
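As a companion to Algorithm 1, the following is a much-simplified, hypothetical sketch of such a builder. The line format ("label:" starts a block, otherwise "mnemonic [target]"), the uniform fall-through edges (added even after unconditional jumps and returns), and the single-file layout are simplifying assumptions, not the paper's implementation, which parses real assembly output.

```python
def build_cfg(program):
    """Build instruction-level CFG edges from a toy assembly listing.

    program: list of strings; a line ending in ':' starts a labeled block,
    any other line is 'mnemonic [target]'. Returns (instructions, edges),
    where edges connect global instruction indices.
    """
    insns, start, end = [], {}, {}
    pending = None
    for line in program:
        line = line.strip()
        if line.endswith(":"):                 # a label starts a new block
            if pending is not None:
                end[pending] = len(insns) - 1  # close the previous block
            pending = line[:-1]
            start[pending] = len(insns)
        else:
            parts = line.split()
            insns.append((parts[0], parts[1] if len(parts) > 1 else None))
    if pending is not None:
        end[pending] = len(insns) - 1
    edges = set()
    for i, (mnem, target) in enumerate(insns):
        if i + 1 < len(insns):
            edges.add((i, i + 1))              # sequential execution
        if target in start:
            edges.add((i, start[target]))      # jump/call to block's first insn
            if mnem == "call" and i + 1 < len(insns):
                # return edge: from the callee's last instruction back to the
                # instruction following the call site
                edges.add((end[target], i + 1))
    return insns, sorted(edges)

prog = [
    "main:", "mov a", "call f", "jmp L1",
    "L1:", "ret",
    "f:", "add b", "ret",
]
insns, edges = build_cfg(prog)
print(edges)
```

The call to f produces both a call edge (1, 4) and a return edge (5, 2), mirroring the two-edge modeling of function calls described above.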
Since the instructions of a block are executed in sequence, every node gets an outgoing edge to the next one. Additionally, we consider two types of instructions that may have several targets. For a jump instruction, an edge is added from the current instruction to the first one of the target block. For a function call, we use two edges to model the invocation: one from the current node to the first instruction of the called function, and one from the final instruction of the function back to the instruction following the current node. Finally, the graph is formed from the instruction and edge sets.

B. Convolutional Neural Networks on CFGs

The directed graph convolutional neural network (DGCNN) is a dynamic graphical model designed to treat graphs with complex information in the vertex labels. A CFG vertex, for instance, is not simply a token representing an instruction: it may contain many components, including the instruction name and several operands, and each instruction can in addition be considered from several perspectives (multiple views), such as the instruction itself and its type. The first layer of the DGCNN model is the embedding layer, whereby each vertex is represented by a set of vectors, one per view. Next, a set of circular windows slides over the vertices of the input layer; in the forward pass of the convolutional layers, each filter slides over the vertices of the graph and computes dot products between the entries of the filter and its input. Suppose the subgraph in a sliding window includes the current vertex and its neighbors, with vector representations x_1, ..., x_k in R^{v_f}; the output of the filters is computed as

  y = tanh( sum_i W_conv^{(i)} x_i + b_conv ),

where tanh is the activation function, the W_conv^{(i)} are the convolution weight matrices, b_conv in R^{v_c} is a bias vector, and v_f, v_c are the vector sizes. The resulting feature vector is eventually fed to a fully-connected layer and an output layer that compute categorical distributions over the possible outcomes. Sliding such windows over the entire graph extracts the local features of its substructures.
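To make the forward pass concrete, here is a minimal single-view sketch of one such convolution step, distinguishing incoming and outgoing neighbors, followed by the dynamic max pooling used to produce one fixed-size graph vector. The sizes, the random initialization, and the toy three-node CFG are assumptions for illustration, not the paper's settings.

```python
import math
import random

random.seed(0)
DIM = 4  # embedding / feature size (assumed)

def rand_matrix(rows, cols):
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def vadd(*vs):
    return [sum(t) for t in zip(*vs)]

# Three weight matrices: for the current, outgoing, and incoming nodes.
W_cur, W_out, W_in = (rand_matrix(DIM, DIM) for _ in range(3))
b_conv = [0.0] * DIM

def convolve_vertex(x_cur, out_neighbors, in_neighbors):
    """y = tanh(W_cur x + sum W_out x_out + sum W_in x_in + b_conv)."""
    acc = matvec(W_cur, x_cur)
    for x in out_neighbors:
        acc = vadd(acc, matvec(W_out, x))
    for x in in_neighbors:
        acc = vadd(acc, matvec(W_in, x))
    return [math.tanh(a + b) for a, b in zip(acc, b_conv)]

def dynamic_max_pool(vertex_features):
    """Collapse a variable number of vertex vectors into one fixed-size vector."""
    return [max(col) for col in zip(*vertex_features)]

# Toy CFG: 0 -> 1 -> 2 and 0 -> 2.
embed = {v: [random.uniform(-1, 1) for _ in range(DIM)] for v in range(3)}
out_adj = {0: [1, 2], 1: [2], 2: []}
in_adj = {0: [], 1: [0], 2: [0, 1]}

conv = {v: convolve_vertex(embed[v],
                           [embed[u] for u in out_adj[v]],
                           [embed[u] for u in in_adj[v]]) for v in embed}
graph_vec = dynamic_max_pool(list(conv.values()))
print(len(graph_vec))  # fixed length, regardless of the number of vertices
```

The pooled vector has length DIM no matter how many vertices the CFG contains, which is exactly what lets graphs of varying size feed one fully-connected output layer.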
By stacking several convolutional layers on top of the embedding layer, the input graphs are explored at different levels, and substructures of different depths are seen by the filters across the views; in the current work we apply DGCNN with two layers of convolution, so that the receptive field of a vertex includes the vertices covered at each stage of convolution. As the window moves, all vertices are considered, so features are gathered from all parts of the graph.

Because the structures of graphs are arbitrary and the numbers of neighbors vary from vertex to vertex, determining one weight matrix per neighbor position is unfeasible. To deal with this obstacle, we divide the vertices in a window into groups and treat the items of each group in a similar way with regard to the parameters: each convolution uses three weight matrices, W_cur, W_out, and W_in, for the current, outgoing, and incoming nodes, respectively. The pooling layer faces a similar problem: on dynamic graphs the numbers of nodes vary across programs, and since the convolutions preserve the structures of the input graphs during feature extraction, the number of items to be pooled into the final feature vector cannot be fixed in advance. An efficient solution is dynamic pooling, which normalizes the features to one dimension; here, max pooling is adopted to gather information from all parts of a graph into one fixed-size vector regardless of the graph shape and size.

Fig.: Overview of our approach to software defect prediction using convolutional neural networks on control flow graphs of assembly code.

III. EXPERIMENTS

A. Datasets

The datasets for conducting our experiments were obtained from CodeChef, a popular programming contest site. We created four benchmark datasets, each of which involves source code submissions (written in C, C++, Python, etc.) solving one of the following problems.
- SUMTRIAN (Sums in a triangle): given a lower triangular matrix of n rows, find the longest path among all paths starting from the top towards the base, where the movement from one cell is either directly below or diagonally right below, and the length of a path is the sum of the numbers appearing on it.
- GCD-LCM: find the greatest common divisor (GCD) and the least common multiple (LCM) of each pair of input integers.
- MNMX (Minimum and maximum): given an array of n distinct integers, find the minimum total cost of converting the array into a single element by repeatedly applying the following operation: select a pair of adjacent integers and remove the larger one of the two; the cost of each operation equals the smaller of the pair, and the size of the array decreases by one.
- SUBINC (Count subarrays): given an array of n elements, count the number of non-decreasing subarrays of the array.

Regarding the target label, each instance is assigned one of the following possibilities of source code assessment:

- accepted: the program ran successfully and gave the correct answer;
- time limit exceeded: the program compiled successfully but did not stop within the time limit;
- wrong answer: the program compiled and ran successfully, but the output did not match the expected output;
- runtime error: the code compiled and ran but encountered an error, due to reasons such as using too much memory or dividing by zero;
- syntax error: the code was unable to compile.

We collected the submissions, written up to March, for the four problems above. The data was preprocessed by removing the source files that are empty or unable to compile. Table I presents the statistics of the number of instances in each class of the datasets. The datasets are imbalanced; taking the MNMX dataset as an example, the ratios between the classes differ considerably. To conduct the experiments, each dataset was randomly split into three folds for training, validation, and testing.

B. Experimental Setup

We compare our model with approaches that have been successfully applied to programming language processing tasks, in which the source code is represented by abstract syntax trees (obtained using a parser) and different machine learning techniques are employed. Table II shows the structures and the numbers of hyperparameters of the neural networks, where each layer is presented in the form of its name followed by its number of neurons (emb: embedding layer; rv, tb, and conv stand for recursive, tree-based convolutional, and convolutional layers, respectively).

C. Evaluation Measures

The approaches are evaluated based on two widely used measures: accuracy and the area under the receiver operating characteristic (ROC) curve, known as AUC. Predictive accuracy has long been considered an important criterion to evaluate the effectiveness of classification algorithms and is estimated by the average hit rate. AUC estimates the discrimination ability between classes; it is an important measure for judging the effectiveness of algorithms and is equivalent to the nonparametric Wilcoxon test for ranking classifiers. According to previous research, AUC has proved to be a statistically consistent and more discriminating criterion than accuracy across different machine learning techniques, especially in the case of imbalanced data.
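The equivalence between AUC and the rank statistic mentioned above can be made concrete: AUC equals the probability that a randomly chosen positive sample is scored above a randomly chosen negative one. The following sketch computes AUC directly from that definition; the labels and scores are made up for illustration.

```python
def auc(labels, scores):
    """AUC as the fraction of positive/negative pairs ranked correctly
    (ties count one half), i.e. the normalized Mann-Whitney U statistic."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.7, 0.4, 0.8, 0.3, 0.2, 0.1]
print(auc(labels, scores))
```

A perfect ranking yields 1.0 and a random one 0.5, which is why a per-class curve hugging the diagonal indicates a classifier no better than chance on that class.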
When building predictive models on such data, classes with many more samples dominate: standard algorithms are biased towards the major classes and ignore the minor ones, so the hit rates of the minor classes are low although the overall accuracy may be high. Meanwhile, in practical applications, accurately predicting minority samples may be more important: software defect prediction is essentially the task of detecting faulty modules, and in many defect datasets the faulty instances are rare and belong to minority classes. Since our experimental datasets are imbalanced, we adopt further measures to evaluate the classifiers. ROC curves depict the trade-offs between hit rates and false alarm rates and are commonly used for analyzing binary classifiers; we extend their use to multi-class problems, where the average results are computed in two ways: one gives equal weight to each class, and the other gives equal weight to each decision (sample). The AUC for ranking classifiers is then estimated as the area under the ROC curves.

D. Settings and Baselines

The baselines are as follows: neural networks, including tree-based convolutional neural networks (TBCNN); SibStCNN, an extension of TBCNN in which the feature detectors are redesigned to cover subtrees including a node, its descendants, and its siblings; and recursive neural networks (RvNN). The networks use common initial settings for the learning rate and the vector sizes of tokens and structures, as shown in Table II. In adapting DGCNN to CFGs, we use two views of the CFG nodes: the instructions themselves and the instruction types (for instance, jne, jle, and jge are all assigned to the group of conditional jump instructions). Since instructions may have many operands, we replace operands that are block names, processor register names, and literal values by the symbols name, reg, and val, respectively; following this replacement, an addq instruction with a literal and the register rsp as operands is converted to "addq val reg".

IV. RESULTS AND DISCUSSION

To generate the inputs of DGCNN, the symbol vectors are first randomly initialized; the vector of each view of a node is then computed as a combination of its component symbol vectors (for example, the vector of "addq val reg" combines the vectors of the symbols addq, val, and reg). Table III shows the accuracies of the classifiers on the four datasets. As can be seen, our approaches significantly outperform the others; in particular, they improve the accuracies on MNMX, SUBINC, and SUMTRIAN in comparison with the second best. As mentioned in Section I, software defect prediction is a complicated task because semantic errors
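The operand replacement described in the settings above can be sketched as follows. The register list, the label convention, and the literal formats are my assumptions for illustration; the paper does not specify its exact normalization rules beyond the name/reg/val symbols.

```python
# Hypothetical operand normalizer: literals -> val, registers -> reg,
# block labels and other symbols -> name.
REGISTERS = {"rsp", "rbp", "rax", "rbx", "rcx", "rdx", "rsi", "rdi"}

def normalize_operand(op):
    op = op.strip().lstrip("%$")          # drop AT&T-style prefixes
    if op.lstrip("-").isdigit() or op.startswith("0x"):
        return "val"
    if op in REGISTERS:
        return "reg"
    return "name"

def normalize_instruction(line):
    parts = line.strip().split(None, 1)
    if len(parts) == 1:
        return parts[0]                   # e.g. "ret" has no operands
    mnemonic, operands = parts
    return " ".join([mnemonic] + [normalize_operand(o)
                                  for o in operands.split(",")])

print(normalize_instruction("addq $32, %rsp"))  # -> addq val reg
print(normalize_instruction("jne .L3"))         # -> jne name
```

This keeps the vocabulary of CFG node labels small while preserving the shape of each instruction.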
are hidden deeply in the source code: even when a defect exists, a program may only reveal it when the application runs under specific conditions. It is therefore impractical to manually design a set of good features able to distinguish faulty from non-faulty samples. Two further groups of baselines are built on such hand-designed representations:

- k-nearest neighbors (kNN): we apply the kNN algorithm with tree edit distance (TED) and Levenshtein distance over ASTs, which represent the structures of the source code; the number of neighbors is fixed in advance.
- Support vector machines (SVMs): SVM classifiers are built on bag-of-words (BoW) features, a representation that has shown good performance on software engineering tasks such as classifying programs by functionalities; the feature vector of a program is determined by counting the number of times each symbol appears. The SVMs use an RBF kernel with two tuned parameters.

In contrast to ASTs, CFGs of assembly code are precise graphical structures that show the behaviors of programs. As a result, applying DGCNN to CFGs achieves the highest accuracies on the experimental datasets for software defects. Table III compares the classifiers according to accuracy, where DGCNN_1 and DGCNN_2 mean that the CFG nodes are viewed from one or two perspectives, and "noop" means using instructions without operands; a companion table reports the performance of the classifiers in terms of AUC. The last four rows of Table III show that our learner is efficient in general: viewing graph nodes from two perspectives (instructions and instruction groups) helps boost the DGCNN classifiers in most cases, and taking the components of instructions (the operands) into account is similarly beneficial; in these cases the DGCNN models achieve the highest accuracies on the experimental datasets.

We also assess the effectiveness of the models in terms of the discrimination measure AUC, equivalent to the Wilcoxon test for ranking classifiers on imbalanced datasets. Many learning algorithms are biased towards the majority classes: due to the objective of error minimization (equivalently, accuracy maximization), models mostly predict an unseen sample as belonging to the majority classes and ignore the minority ones. The AUC table presents the results for the probabilistic classifiers, which produce probabilities or scores for their outputs. Our approaches show better performance, and it is worth noticing that, along with accuracy, the approach based on CFGs also enhances the distinguishing ability of classifiers on imbalanced data categories. Even on imbalanced data, DGCNN notably improves on the second best in average AUC, while TBCNN has a lower ability to detect minority instances than the others; for some minority classes its scores are even equivalent to a random classifier, as observed from the ROC curves. A similar problem is found for the other tree-based approaches on the experimental datasets; thus AUC is an essential measure for evaluating classification algorithms, especially in the case of imbalanced data. From this analysis, we conclude that leveraging precise control flow graphs of binary codes is suitable for software defect prediction, one of the difficult tasks in the field of software engineering.

Fig.: ROC curves of TBCNN and DGCNN (noop) on the MNMX dataset, illustrating the discrimination ability of the two classifiers on imbalanced data (per-class curves, with true positive rate against false positive rate, and the area under each curve).

E. Error Analysis

We analyze cases of source code variations that may cause mistakes for the methods, based on observations of the outputs on the training and test data. A figure shows source code examples from one dataset: file 1 is a sample of the training set, and the remaining files are samples of the test set; symbols denote whether a sample is correctly or incorrectly classified by each approach. We found that the performance of RvNN degrades as tree sizes increase, a problem also pointed out in research on natural language processing tasks. Since the weight matrices of a node are determined based on its position, SibStCNN and TBCNN are likewise affected by larger trees and by changes of tree shape and size; accordingly, RvNN obtains lower accuracies and AUCs than the other approaches, especially on the SUMTRIAN dataset, while our approaches are able to handle such changes. Although SibStCNN and TBCNN obtain higher performance than the other tree-based baselines, thanks to learning features from subtrees, the following effects are observed.

Effect of changing statements. Loop statements like for, while, and do-while have different tree representations, but in the assembly instructions they are similar, using jump instructions to control the loop. Likewise, moving a statement to other possible positions may result in notable changes of the AST while barely affecting the assembly code; the tree-based approaches suffer from such varying AST structures.

Effect of code structures. There are many ways to reorganize source code without changing its behavior, for example changing the positions of statements, grouping a set of statements to form a procedure, or replacing statements by equivalent ones. Grouping statements into procedures is also captured by CFGs, which use edges to simulate procedure invocation. However, some modifications lead to reordering of branches: files with similar ASTs may produce rather different assembly codes, because similar language statements can be translated into different sets of assembly instructions; for example, the sets of instructions manipulating the data types int and long int are dissimilar. Moreover, statements can be replaced by others without changing the program outcomes: to show values, one can select either printf or cout. Since the contents of the CFG nodes then change significantly, DGCNN may fail in predicting these types of variations.

Effect of using library procedures. When writing source code, a programmer may use procedures from libraries. In the example figure, one file applies a procedure from the library algorithm, while the others use ordinary statements for computing the greatest common divisor of an integer pair. Neither ASTs nor CFGs contain the contents of external procedures, which are embedded in neither the source code nor the generated assembly code; as a result, the approaches are unsuccessful in capturing program semantics in such cases.

V. CONCLUSION

This paper presents
a model for solving software defect prediction, one of the difficult tasks in the field of software engineering. By applying precise representations of programs, namely CFGs, and a graphical deep neural network, the model explores the behavior of programs in depth to detect faulty source code. Specifically, the CFG of each program is constructed from the assembly code obtained by compiling the source code, and DGCNN is leveraged to learn various information from the CFG data and build predictive models. The evaluation on four datasets indicates that learning on graphs can significantly improve the performance of defect prediction according to both accuracy and discrimination measures, with our method improving the accuracies in comparison with the other approaches.

ACKNOWLEDGMENTS

This work was supported in part by a JSPS KAKENHI grant. The first author would like to thank the scholarship of the Ministry of Education and Training (MOET), Vietnam.

REFERENCES

[1] F. E. Allen, "Control flow analysis," ACM SIGPLAN Notices.
[2] B. Anderson, D. Quist, J. Neil, C. Storlie, and T. Lane, "Graph-based malware detection using dynamic analysis," Journal in Computer Virology.
[3] D. Bruschi, L. Martignoni, and M. Monga, "Detecting self-mutating malware using control-flow graph matching," in Proc. International Conference on Detection of Intrusions and Malware, and Vulnerability Assessment, Springer.
[4] C. Catal, "Software fault prediction: A literature review and current trends," Expert Systems with Applications.
[5] D.-K. Chae, J. Ha, S.-W. Kim, B. Kang, and E. G. Im, "Software plagiarism detection: a graph-based approach," in Proc. ACM International Conference on Information and Knowledge Management.
[6] S. R. Chidamber and C. F. Kemerer, "A metrics suite for object oriented design," IEEE Transactions on Software Engineering.
[7] T. Fawcett, "An introduction to ROC analysis," Pattern Recognition Letters.
[8] M. H. Halstead, Elements of Software Science, Elsevier, New York.
[9] C. Jones, "Strengths and weaknesses of software metrics," American Programmer.
[10] H. Kikuchi, T. Goto, M. Wakatsuki, and T. Nishino, "A source code plagiarism detecting method using alignment with abstract syntax tree elements," in Proc. SNPD, IEEE.
[11] C. X. Ling, J. Huang, and H. Zhang, "AUC: a statistically consistent and more discriminating measure than accuracy," in Proc. IJCAI.
[12] K. C. Louden, Programming Languages: Principles and Practices, Cengage Learning.
[13] T. J. McCabe, "A complexity measure," IEEE Transactions on Software Engineering.
[14] T. Menzies, J. Greenwald, and A. Frank, "Data mining static code attributes to learn defect predictors," IEEE Transactions on Software Engineering.
[15] L. Mou, G. Li, L. Zhang, T. Wang, and Z. Jin, "Convolutional neural networks over tree structures for programming language processing," in Proc. AAAI Conference on Artificial Intelligence.
[16] V. A. Phan, N. P. Chau, and M. L. Nguyen, "Exploiting tree structures for classifying programs by functionalities," in Proc. International Conference on Knowledge and Systems Engineering (KSE), IEEE.
[17] D. Rodriguez, I. Herraiz, R. Harrison, J. Dolado, and J. C. Riquelme, "A preliminary comparison of techniques for dealing with imbalance in software defect prediction," in Proc. International Conference on Evaluation and Assessment in Software Engineering, ACM.
[18] R. Socher, E. H. Huang, J. Pennington, A. Y. Ng, and C. D. Manning, "Dynamic pooling and unfolding recursive autoencoders for paraphrase detection," in Proc. NIPS.
[19] R. Socher, J. Pennington, E. H. Huang, A. Y. Ng, and C. D. Manning, "Semi-supervised recursive autoencoders for predicting sentiment distributions," in Proc. EMNLP, Association for Computational Linguistics.
[20] R. Socher, A. Perelygin, J. Wu, J. Chuang, C. D. Manning, A. Y. Ng, and C. Potts, "Recursive deep models for semantic compositionality over a sentiment treebank," in Proc. EMNLP.
[21] M. Sokolova and G. Lapalme, "A systematic analysis of performance measures for classification tasks," Information Processing & Management.
[22] X. Sun, Y. Zhongyang, Z. Xin, B. Mao, and L. Xie, "Detecting code reuse in Android applications using component-based control flow graph," in Proc. IFIP International Information Security Conference, Springer.
[23] S. Wang, T. Liu, and L. Tan, "Automatically learning semantic features for defect prediction," in Proc. International Conference on Software Engineering, ACM.
[24] M. White, C. Vendome, M. Linares-Vásquez, and D. Poshyvanyk, "Toward deep learning software repositories," in Proc. Working Conference on Mining Software Repositories (MSR), IEEE.
Independence number and the number of maximum independent sets in the pseudofractal scale-free web and the Sierpiński gasket

Liren Shan, Huan Li, Zhongzhi Zhang
School of Computer Science, Fudan University, Shanghai, China
Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai, China

Abstract

As a fundamental subject of theoretical computer science, the maximum independent set (MIS) problem is not only of purely theoretical interest, but has also found wide applications in various fields. However, for a general graph, determining the size of an MIS is NP-hard, and exact computation of the number of all MISs is even more difficult. It is thus of significant interest to seek special graphs for which the MIS problem can be exactly solved. In this paper, we address the MIS problem in the pseudofractal scale-free web and the Sierpiński gasket, which have the same number of vertices and edges. For both graphs, we determine exactly the independence number and the number of all possible MISs. The independence number of the pseudofractal scale-free web is twice that of the Sierpiński gasket. Moreover, the pseudofractal scale-free web has a unique MIS, while the number of MISs in the Sierpiński gasket grows exponentially with the number of vertices.

Keywords: maximum independent set, independence number, minimum vertex cover, scale-free network, Sierpiński gasket, complex network

Preprint submitted to Theoretical Computer Science, March

1. Introduction

An independent set of a graph with vertex set V is a subset of V such that no pair of its vertices is adjacent. A maximal independent set is an independent set that is not a subset of any other independent set. The largest maximal independent set is called a maximum independent set (MIS); in other words, an MIS is an independent set of largest size (cardinality). The cardinality of an MIS is referred to as the independence number of the graph, and a graph is called a unique independence graph if it has a unique MIS. The MIS problem has close connections to many fundamental graph problems: the MIS problem on a graph is equivalent to the minimum vertex cover problem on the same graph, as well as to the maximum clique problem on its complement; in addition, it is closely related to graph coloring, maximum common induced subgraphs, and maximum common edge subgraphs.

Beyond its intrinsic theoretical interest, the MIS problem has found important applications in a large variety of areas, such as coding theory, collusion detection in voting pools, and scheduling in wireless networks. For example, it has been shown that the problem of finding the largest error-correcting codes can be reduced to the MIS problem on a graph, and the problem of collusion detection can be framed as identifying maximum independent sets. Moreover, finding a maximal weighted independent set in a wireless network is connected to the problem of organizing the vertices of the network in a hierarchical way. Finally, the MIS problem also has numerous applications in mining graph data.

In view of its theoretical and practical relevance, over the past decades the MIS problem has received much attention from different disciplines, particularly theoretical computer science and discrete mathematics. Solving the MIS problem for a generic graph is computationally difficult: finding an MIS of a graph is a classic NP-hard problem, and enumerating all MISs of a graph is even #P-complete. Due to this hardness, exact algorithms for finding an MIS in a general graph take exponential time and are infeasible for moderately sized graphs, so many local heuristic algorithms have been proposed to solve the MIS problem in massive, otherwise intractable graphs in practical applications.

Comprehensive empirical studies have unveiled that in large real networks the vertex degree typically follows a power-law distribution. This nontrivial, heterogeneous scale-free structure has a strong effect on various topological and combinatorial aspects of a graph, such as average distances, maximum matchings, and dominating sets. Although there have been concerted efforts towards understanding the MIS problem in general, significantly less work has focused on the MIS problem in scale-free graphs; in particular, exact results for the independence number and the number of MISs in scale-free graphs are still lacking, despite the fact that such exact results are helpful for testing heuristic algorithms. Moreover, the influence of the scale-free property on the MIS problem is not well understood, although it has been suggested to play an important role. The ubiquity of the scale-free phenomenon makes it interesting to uncover the dependence of MISs on this feature, which is also helpful for understanding applications of the MIS problem.

In this paper, we study the independence number and the number of maximum independent sets in a scale-free graph, called the pseudofractal scale-free web, and in the Sierpiński gasket; both are deterministic and have the same number of vertices and edges. Since determining the independence number and counting the maximum independent sets of a general graph are formidable, we choose two exactly tractable graphs; this is a fundamental route of research for such problems. For example, as has been pointed out, it is of great interest to find specific graphs for which the matching problem can be exactly solved, since the problem is intractable for general graphs. Using an analytic technique based on a decimation procedure, we find the exact independence number and the number of all possible maximum independent sets for both studied graphs. The independence number of the pseudofractal scale-free web is twice that of the associated Sierpiński gasket. In addition, there is a striking difference in uniqueness: the maximum independent set of the pseudofractal scale-free web is unique, while the number of maximum independent sets in the Sierpiński gasket increases as an exponential function of the number of vertices.

2. Independence number and the number of maximum independent sets in the pseudofractal scale-free web

In this section, we study the independence number of the pseudofractal scale-free web and demonstrate that its maximum independent set is unique.

2.1. Network construction and properties

The pseudofractal scale-free web is constructed in an iterative way. Let G_n (n >= 0) denote the network after n iterations. For n = 0, G_0 is a triangle. For n >= 1, G_n is obtained from G_{n-1} by adding, for every edge in G_{n-1}, a new vertex connected to both endpoints of that edge; a figure illustrates the networks of the first several iterations. By construction, the total number of edges in G_n is E_n = 3^{n+1}.

The network displays the striking scale-free and small-world properties observed in various real-world networks. First, the degree of its vertices obeys a power-law distribution, so that the probability of a randomly chosen vertex having a given degree decays approximately as a power of that degree. Moreover, its average distance grows logarithmically with the number of vertices, and it is highly clustered, with an average clustering coefficient converging to a constant. Of particular interest, the network is self-similar, another ubiquitous property of real networks. The three vertices generated at iteration 0 have the highest degree; we call them hub vertices and denote them by A_n, B_n, and C_n. The self-similarity is seen from an alternative construction: given G_n, the network G_{n+1} can be obtained by joining three copies of G_n at their hub vertices. Let G_n^(θ), θ = 1, 2, 3, be the three replicas, with hub vertices A_n^(θ), B_n^(θ), C_n^(θ); then G_{n+1} is obtained by merging pairs of hub vertices of different copies, and the three merged vertices become the hub vertices of G_{n+1}. Let N_n stand for the total number of vertices in G_n. The second construction gives the relation N_{n+1} = 3N_n - 3, which together with the initial value N_0 = 3 is solved to give N_n = (3^{n+1} + 3)/2.

2.2. Independence number and the number of maximum independent sets

Let α_n denote the independence number of G_n. To determine α_n, we introduce some intermediate quantities. Since the three hub vertices are pairwise connected, any independent set contains at most one hub vertex. We thus classify the independent sets of G_n into two subsets: Ω_n^0 represents the independent sets with no hub vertex, and Ω_n^1 denotes the remaining independent sets, each containing exactly one hub vertex.
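As a quick consistency check of the iterative construction and the counts N_n = (3^(n+1) + 3)/2 and E_n = 3^(n+1) stated in Section 2.1, here is a minimal sketch (mine, not from the paper) that builds G_n edge by edge:

```python
def pseudofractal_web(n):
    """Return (number of vertices, edge list) of the pseudofractal web G_n."""
    edges = [(0, 1), (1, 2), (0, 2)]   # G_0 is a triangle
    num = 3
    for _ in range(n):
        new = []
        for u, v in edges:             # every existing edge gets a new vertex
            new += [(u, num), (v, num)]
            num += 1
        edges = edges + new
    return num, edges

for n in range(5):
    num, edges = pseudofractal_web(n)
    assert num == (3 ** (n + 1) + 3) // 2
    assert len(edges) == 3 ** (n + 1)
print("N_n = (3^(n+1)+3)/2 and E_n = 3^(n+1) verified for n = 0..4")
```

Each iteration triples the edge count (every old edge survives and spawns two new ones), which is exactly the recursion behind E_n = 3^(n+1).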
For i = 0, 1, let s_n^i denote the largest cardinality (number of vertices) of an independent set in Ω_n^i. By definition, the independence number of G_n is α_n = max{s_n^0, s_n^1}. The two quantities can be evaluated using the self-similar structure of the network.

Lemma 2.1. For two successive generations of the network,
s_{n+1}^0 = 3 max{s_n^0, s_n^1},
s_{n+1}^1 = 2 s_n^1 - 1 + max{s_n^0, s_n^1}.

Proof. The lemma is proved graphically. G_{n+1} consists of three copies of G_n joined at hub vertices, so an independent set of G_{n+1} decomposes into independent sets of the three copies, and by the definition of an independent set the merged hub vertices belong to the set for either all or none of the copies sharing them. For an independent set of G_{n+1} with no hub, each copy contributes an independent set containing at most its non-merged hub, giving the first relation. For an independent set with exactly one hub of G_{n+1}, the two copies sharing that hub each contain exactly one of their hubs, and the shared vertex is counted once, while the third copy contains at most its non-merged hub; this gives the second relation. The figures illustrate the possible configurations of the independent sets that contain no hub vertex and exactly one hub vertex, where filled vertices belong to the independent sets and open vertices do not.

Lemma 2.2. For the network G_n with n >= 1, s_n^0 = 3^n and s_n^1 = 3^n - 2^n + 1; in particular s_n^0 > s_n^1.

Proof. We prove the lemma by mathematical induction. For n = 1 one checks directly that s_1^0 = 3 and s_1^1 = 2, so the basis step holds. Assume the statement holds for generation n; applying Lemma 2.1 and the induction hypothesis yields s_{n+1}^0 = 3^{n+1} and s_{n+1}^1 = 3^{n+1} - 2^{n+1} + 1, and comparing the two expressions shows s_{n+1}^0 > s_{n+1}^1. Thus the lemma is true.

Theorem 2.3. The independence number of the network G_n (n >= 1) is α_n = 3^n = (2N_n - 3)/3.

Proof. Lemma 2.2 indicates that a maximum independent set contains no hub vertices, so α_n = s_n^0 = 3^n; the expression in terms of N_n follows from N_n = (3^{n+1} + 3)/2.

Corollary 2.4. The largest number of vertices of an independent set that contains exactly one hub vertex is s_n^1 = 3^n - 2^n + 1, obtained by solving the recursive equation s_{n+1}^1 = 2 s_n^1 + 3^n - 1 (a consequence of Lemmas 2.1 and 2.2) together with the boundary condition s_0^1 = 1.

Theorem 2.5. The network G_n (n >= 1) has a unique maximum independent set, namely the set of 3^n vertices generated at iteration n.

Proof. Since s_n^0 > s_n^1, a maximum independent set of G_{n+1} is the union of maximum hub-free independent sets of the three copies of G_n constituting it. By induction, the unique such set in each copy is its set of newest vertices; thus the maximum independent set of G_{n+1} is determined by those of the copies and is unique. Furthermore, the unique maximum independent set is in fact the set of vertices generated at the last iteration. Theorem 2.5 indicates that the pseudofractal scale-free web is a unique independence graph.
sets including resp excluding resp figure illustration possible configurations independent sets contain note illustrate independent sets includes two outmost vertices excludes outmost vertex similarly illustrate independent sets including resp excluding resp figure illustration possible configurations independent sets contain proof prove lemma induction easy check thus result holds let suppose statement true induction assumption lemma difficult check relation true theorem independence number gasket proof according lemmas obtain considering obvious holds theorems theorem show independence number gasket larger one corresponding pseudofracal web former half latter large corollary largest possible number vertices independent vertex set contains exactly outmost vertices respectively proof theorem shows lemma obtain results obtained immediately number maximum independent sets comparison network unique maximum independent set number maximum independent sets increases exponentially number vertices theorem number maximum independent sets gasket proof let denote number maximum independent sets gasket let number independent sets maximum number vertices including excluding initial condition prove two quantities obey following relations first prove definition number different maximum independent sets contains three outmost vertices according lemma fig two configurations fig maximize thus contains exactly one outmost vertex contains three outmost vertices establish using rotational symmetry gasket proved analogously using lemma fig since eqs show obtain recursion relation together initial value solved yield acknowledgements work supported national natural science foundation china grant references hopkins staton graphs unique maximum independent sets discrete math robson algorithms maximum independent sets algorithms berman approximating maximum independent set bounded degree graphs proceedings annual acmsiam symposium discrete algorithms radhakrishnan greed good approximating 
independent sets sparse graphs algorithmica karp reducibility among combinatorial problems complexity computer computations springer pardalos xue maximum clique problem global optim liu yang xiao wei towards maximum independent sets massive graphs proceedings international conference large data base butenko pardalos sergienko shylo stetsyuk finding maximum independent sets graphs arising coding theory proceedings acm symposium applied computing acm araujo farinha domingues silaghi kondo maximum independent set approach collusion detection voting pools parallel distrib comput joo lin ryu shroff distributed greedy approximation maximum weighted independent set scheduling fading channels trans netw basagni finding maximal weighted independent set wireless networks telecom syst chang zhang computing independent set linear time proceedings acm international conference management data acm murat paschos priori optimization probabilistic maximum independent set problem theoret comput sci xiao nagamochi confining sets avoiding bottleneck cases simple maximum independent set algorithm graphs theoret comput sci agnarsson losievskaja algorithms maximum independent set problems hypergraphs theoret comput sci hon kloks liu liu poon wang maximum independent set categorical product ultimate categorical ratios graphs theoret comput sci lozin monnot ries maximum independent set problem subclasses subcubic graphs discrete algorithms chuzhoy ene approximating maximum independent set rectangles proceedings ieee annual symposium foundations computer science ieee stewart maximum independent set maximum clique algorithms overlap graphs discrete appl math xiao nagamochi exact algorithm maximum independent set graphs discrete appl math mosca sufficient condition extend polynomial results maximum independent set problem discrete appl math valiant complexity computing permanent theor comput sci valiant complexity enumeration reliability problems siam comput fomin kratsch exact exponential 
algorithms springer science business media tarjan trojanowski finding maximum independent set siam compu beame impagliazzo sabharwal resolution complexity independent sets vertex covers random graphs comput complex andrade resende werneck fast local search maximum independent set problem heuristics dahlum lamm sanders schulz strash werneck accelerating local search maximum independent set problem proceedings international symposium experimental algorithms springer lamm sanders schulz strash werneck finding independent sets scale heuristics newman structure function complex networks siam rev albert emergence scaling random networks science chung average distances random graphs given expected degrees proc natl acad sci liu slotine controllability complex networks nature zhang pfaffian orientations perfect matchings scalefree networks theoret comput sci nacher akutsu dominating networks variable scaling exponent heterogeneous networks difficult control new phys gast hauptmann karpinski inapproximability dominating set power law graphs theoret comput sci zhang domination number minimum dominating sets pseudofractal web graph theoret comput sci ferrante pandurangan park hardness optimization graphs theoret comput sci dorogovtsev goltsev mendes pseudofractal scalefree web phys rev zhang zhou xie guan exact solution mean time pseudofractal web phys rev plummer matching theory volume annals discrete mathematics north holland new york vannimenus properties collapse transition branched polymers exact results fractal lattices phys rev lett zhang zhou chen evolving pseudofractal networks eur phys song havlin makse complex networks nature
| 8 |
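The row above describes the iterative construction of the Sierpinski gasket: generation n+1 is obtained by amalgamating three copies of generation n at their outmost vertices. A minimal sketch of the resulting vertex/edge counts follows; the closed forms V_n = 3(3^n + 1)/2 and E_n = 3^(n+1) are standard facts about the gasket and are stated here as an assumption recovered from the garbled row, not quoted verbatim from it.

```python
def gasket_counts(n):
    """Vertex and edge counts of the Sierpinski gasket at generation n.

    Generation n+1 glues three copies of generation n together,
    identifying three pairs of outmost vertices (so 3 vertices are
    counted twice and must be subtracted); edges are simply tripled.
    """
    v, e = 3, 3  # generation 0: a single triangle
    for _ in range(n):
        v = 3 * v - 3
        e = 3 * e
    return v, e
```

For example, generation 2 has 15 vertices and 27 edges, matching the closed forms above.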
note lattices eloir strapasson aug school applied sciences abstract shown given lattice lattice sequence lattice converging less equivalence also discuss conditions faster convergence keywords subortoghonal lattice dense packing spherical code msc introduction large class problems coding theory related properties lattices special sublattices generated orthogonal basis suborthogonal lattices several authors investigated relationship suborthogonal spherical codes codes see course restrict problems general problems lattices concentrated obtaining certain parameters shortest vector packing radius packing density radius coverage radius coverage density points lattices interpreted elements code thus finding efficient coding decoding schemes essential several schemes literature establish relationship linear codes lattices good reference preprint submitted elsevier february article organized follows section present notations definitions small properties section present new scheme obtaining lattices section present case study special lattices leech lattice background definitions results lattice discrete additive subgroup generator matrix full rank said rank determinant lattice det det bbt gram matrix lattice volume lattice det volume parallelotope generate rows minimum norm lattice min kvk center density packing two lattices generator vol matrices equivalence matrix integer matrix det orthogonal matrix oot identity matrix dual lattice lattice obtained vectors span span vector space generated rows generator matrix bbt particular subset also lattice generator matrix formed orthogonal row vectors say lattice since lattice group remember quotiente lattice sublattice finite abelian group elements ratio volume volume vol vol elements lattice seen orbit vector torus essentially establishes relationship central spherical class codes well class linear codes track construction similar constructions see details suborthogonal sequences consider lattice rank contain orthogonal 
equivalence generator matrix oot let generator matrices respectively assuming integer entries lattice generator matrix adj det lattice generator matrix det ratio volume measured quantities points case vol vol det det det want build codes large number points observe want increase number points must increase determinant matrix dual lattice proposition let lattice dual generator matrices respectively assuming integer entries define generator matrix integer integer matrix lattices generator matrices adj respectively satisfy continuity matrix inversion process det proof proof trivial fact convergence entry generator matrix convergence generating matrix defines convergence groups recalling cardinality points quotient polynomial specifically det corollary let lattice dual generator matrices respectively define generator matrix words rounded entries lattices generator matrices adj respectively satisfy continuity matrix inversion process det corollary allows extend use proposition whose lattices dual less equivalence integer generator matrix following propositions establish speed convergence proposition faster dual convergence let proposition faster convergence obtained minimizing inputs considering antisymmetric matrix whose parameters minimized course identically zero best convergence error proof better sequence vectors closest sizes angles desired lattice must analyse sequence formed gram matrix therefore say convergence linear quadratic constant identically zero unless change variable assume conditions antisymmetric integer matrix case convergence quadratic convergence coefficient also depends inputs must minimized proposition faster convergence let proposition faster convergence obtained minimizing inputs gpbt considering antisymmetric matrix whose parameters minimized course identically zero best convergence error proof recall inverse sum matrix matrix identity calculated neumann series dual generator matrix lattice sequence inverse transpose adj det det therefore 
gram matrix approximated gpbt desirable follows antisymmetric integer matrix case convergence quadratic convergence coefficient also depends inputs gpbt must minimized structure group obtained quotient lattice sequence respective orthogonal sublattice determined extended applying theorem particular lower triangular matrix cyclic perturbation otherwise quotient cyclic group although convergence nearly quadratic lattices rank less equivalence sublattices integer lattice play interesting role regard convergence discussed case study next section case study section present construction applied special cases show best perturbations found although null perturbation optimal never associated cyclic quotient groups offer best solution terms spherical codes shall see cyclical optimal unlikely addition results presented complement results obtained considering initial vector problem extend simplify results obtained root lattice consider generate matrix good perturbation good perturbation case quotient cyclic case odd performance illustrated tables root lattice less equivalence assuming generated matrices good perturbation group group group table show performance case perturbations respectively group group group group table show performance case perturbations performance illustrated table note density ratio deployed close amount associated points points dual lattice dual lattice points second case table illustrates performance applied spherical codes details perturbation better case representation moreover point performance similar second representation null perturbation group group group table show performance case representations perturbations leech lattices laminate lattice generally dense respective dimensions special dimensions admit integer representation less equivalence cases analyse fast convergence consider matrix generator leech lattice unless equivalence distance points distance points distance points distance points table show spherical code performance case 
different representations consider distance distance table show spherical code performance case two different representations respectives perturbations know literature leech lattice regarded lattice case use perturbation first case second case null perturbation analyse performance point view spherical codes vide table first two columns refer first case lattice case lattices full size representation example lattices construction done matrix cholesky decomposition gram dual lattice according corollary exemplify lattice note case different perturbations induce number points distinct distances see table table show density rate case integer integer conclusions conclude lattice less scale approximated sequence lattices orthogonal furthermore degree freedom quadratic convergence freedom induces quotient group different number generators make convergency fast certain applications example case spherical codes reticulated target multiple minimum vectors canonical vector find pertubation efficient present method finding lattices suborthogonal method simpler general efficient one presented references alves costa commutative group codes bound discrete math berstein sloane wright sublattices hexagonal lattice discrete mathematics biglieri elia codes gaussian channel ieee transactions information theory campello strapasson sequences projections cubic lattice comput appl math campello strapasson costa projections arbitrary lattices linear algebra appl cohen course computational algebraic number theory new york conway sloane sphere packings lattices groups new york costa muniz augustini palazzo graphs tessellations perfect codes flat tori ieee transactions information theory ingemarsson group codes gaussian channel lecture notes control information sciences vol springer verlag loeliger signals sets matched groups ieee transactions information theory meyer matrix analysis applied linear algebra society industrial mathematics siam philadelphia usa micciancio goldwasser complexity 
lattice problems cryptographic perspective kluwer academic publishers norwell massachusetts usa siqueira costa flat tori lattices bounds commutative group codes designs codes cryptography slepian group codes gaussian channel bell system technical journal sloane vaishampayan costa note projecting cubic lattice discrete comput geom torezzan costa vaishampayan spherical codes torus layers proceedings ieee international symposium information theory torezzan strapasson costa siqueira optimum commutative group codes des codes cryptogr vaishampayan costa curves sphere dynamics error control continuous alphabet sources ieee transactions information theory
| 7 |
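The lattice row above mentions that the inverse of the dual generator matrix of the perturbed lattice sequence is computed via a Neumann series ("inverse sum matrix matrix identity calculated neumann series"). A hedged sketch of that standard identity, (I - A)^{-1} = sum_k A^k for spectral radius of A below 1, written with NumPy; the function name is hypothetical and this is not the paper's actual construction of the suborthogonal sequence.

```python
import numpy as np

def neumann_inverse(M, terms=50):
    """Approximate M^{-1} via the Neumann series.

    Writing M = I - A, we have M^{-1} = I + A + A^2 + ... whenever
    the spectral radius of A is strictly below 1 (true for M close
    to the identity, as in the paper's small-perturbation regime).
    """
    n = M.shape[0]
    A = np.eye(n) - M
    total = np.eye(n)
    power = np.eye(n)
    for _ in range(terms):
        power = power @ A
        total = total + power
    return total
```

For a matrix near the identity the truncated series converges very quickly, which is why the paper can read off the approximate Gram matrix of the dual from the first-order terms.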
universal groups oct maurice chiodo zachiri mckenzie abstract set group elements recursively enumerable give two independent proofs exists universal presented group one contains presented groups also show recursively enumerable set presentations groups kleene arithmetic hierarchy introduction torsion object group theory let denote order group element recalling torsion write tor torsion restrict type torsion looking perhaps concerned torsion elements prime order set define group written torx torx clear case torn tor usual sense set define factor completion say factor complete say group torx equivalently tord use following notation set orders torsion elements group tord tor one famous consequence higman embedding theorem fact universal finitely presented group finitely presented group finitely presented groups embed recently belegradek chiodo independently showed exists universal group group groups embed paper generalise result context follows ams classification keywords groups torsion torsion quotients embeddings universal horn theory work part project received funding european union horizon research innovation programme marie grant agreement date march maurice chiodo zachiri mckenzie theorem let recursively enumerable set integers universal finitely presented group finitely presented group two key steps proving theorem first construct free product groups algorithmic way theorem let set integers countably generated recursive presentation group contains embedded copy every countably generated recursively presentable group construct finitely presented example using following theorem higman theorem uniform algorithm input countably generated recursive presentation constructs finite presentation tord tord along explicit embedding prove theorem following two independent ways reasons outline shortly firstly section generalise construction theorem universal group using arguments group theory section generalise construction theorem universal group using arguments model 
theory results section provide framework proving might group properties exist universal examples main result section apply directly theory groups prove theorem following theorem tgrp universal horn lgrp exists recursively presented group every recursively presented group embeds reason providing proofs theorem twofold one hand proof section direct gives clear picture theorem holds explicit algorithmic hand proof section general might lead showing group properties possess universal examples indeed looking arguments theorem realised could consider object results could generalised would never made connection otherwise universal groups would interesting work see grouptheoretic properties possess universal examples potential proof technique would follows show property satisfies conditions theorem closed free products identity subgroups show universal horn lgrp thus allowing apply theorem show possessed finitely generated free groups preserved free products hnn extensions needed show preserved higman embedding theorem course satisfying quite difficult one easily find many group properties satisfy also satisfy quite trivially properties groups satisfy therein lies problem finish paper generalisation result complexity recognising show following theorem let set integers set finite presentations groups remarkable even finite set stronger still single prime say set finite presentations groups still form set universality section generalisation definitions results section universal quotients universal finitely presented groups notation group presentation denote group presented group element represented word presentation said recursive presentation finite set recursive enumeration relations said countably generated recursive presentation instead recursive enumeration generators group said finitely respectively recursively presentable finite respectively recursive presentation group presentations denote free product presentation given taking disjoint union generators relations 
elements group write subgroup generated elements normal closure elements let denote smallest infinite ordinal let denote cardinality set set let set cardinality disjoint along fixed bijection write set finite words make use sets sets see introduction maurice chiodo zachiri mckenzie groups set integers surjective homomorphism universal homomorphism homomorphism following diagram commutes note exists unique indeed also satisfies hence surjection thus rightcancellative moreover unique isomorphism called universal quotient denoted observe exists identity map idg universal property standard construction showing exists every group done via taking quotient radical intersection normal subgroups generalisation radical follows immediately properties universal quotient present alternative construction though isomorphic lends easily effective procedure finitely recursively presented groups definition given group set integers inductively define torx follows torx torx torn tor torn torx torn torx thus set elements annihilated upon taking successive quotients normal closure elements torx union lemma group torx proof suppose torx tor torx thus torx hence tori tor tori thus proposition group torx proof clearly torx definition fact lemma remains show torx proceed contradiction assume torx universal groups along minimal torx clearly definition tori fact normal exists torx tor else tori since torx minimality element contradicting hence torx corollary group torx follows standard result state without proof lemma let countably generated recursive presentation set words lemma let countably generated recursive presentation set integers set words tora proof take recursive enumeration using lemma start checking proceeding along finite diagonals come across add enumeration procedure enumerate words tora words tora thus set words representing elements tora use show following lemma given countably generated recursive presentation set integers set tia tora uniformly presentations moreover union 
tia precisely set tora proof proceed induction clearly tora normal closure tora lemma assume tora normal closure tor tora induction hypothesis lemma rest lemma follows immediately proposition uniform algorithm input countably generated recursive presentation group set integers outputs countably generated recursive presentation generating set sets associated surjection given extending idx proof corollary group tora notation lemma seen countably uniformly constructed generated recursive presentation maurice chiodo zachiri mckenzie universality complexity machinery prove main technical result section theorem result section using tools model theory theorem let set integers countably generated recursive presentation group contains embedded copy every countably generated recursively presentable group proof take enumeration countably generated recursive presentations groups construct countably generated recursive presentation countably infinite free product universal quotient countably generated recursively presentable groups repetition uniformly constructible proposition construction indeed effective hence countably generated recursive presentation also proposition shows group successfully annihilated free product factors free product groups moreover contains embedded copy every countably generated recursively presentable group universal quotient group detailed lemma theorem following implicit rotman proof theorem higman embedding theorem theorem uniform algorithm input countably generated recursive presentation constructs finite presentation tord tord along explicit embedding prove main result theorem let set integers finitely presentable group contains embedded copy every countably generated recursively presentable group proof construct theorem use theorem embed finitely presentable group construction tord tord finally embedded copy every countably generated recursively presentable group since taking completes proof groups recursively presentable following corollary 
theorem let set integers universal finitely presented group finitely presented group following unexpected strong generalisation theorem classifying computational complexity recognising groups universal groups theorem let set integers set finite presentations groups moreover set presentations proof follows proofs lemma theorem first must element given form recursive presentation xai form finite presentation using higman embedding theorem theorem tord tord note atorsion free latter set lemma set finite presentations groups moreover also set following description thus note recursive presentation constructed uniformly whenever find simply begin enumeration take first output finish section generalising notion torsion length first introduced definition definition given define length torlenx smallest ordinal torx rather theory developed simply state main results theorem theorem generalised xtorsion going work straightforward see results generalise thus refrain theorem given family finite presentations groups satisfying torlenx torx theorem given exists recursive presentation torlenx algorithmically construct finite presentation presentations purpose section theorem using arguments follow idea theorem notation throughout section lgrp used denote language group theory language logic supplemented binary function symbol whose intended interpretation group operation constant symbol whose intended interpretation identity element unary function symbol whose intended interpretation function sends maurice chiodo zachiri mckenzie elements inverses use standard abbreviation writing instead set constant symbols endowed elements write language obtained adding constant symbols language set new constant symbols endowed implicit canonical bijection witnessing write obtained interpreting constant symbols elements say generated obtained closing constants applications interpretation functions model theory preliminaries begin recalling definitions results chapter definition let language say 
basic horn form possibly infinite set atomic either atomic say universal horn write horn form basic horn say universal horn axiomatisation consists horn sentences let tgrp obvious lgrp axiomatises class groups clear tgrp written finite set finitary horn sentences definition let write lgrp axioms tgrp clear finitary universal horn theory axiomatises class groups definition let language let class tuple set new constant symbols endowed implicit called generators set atomic write instead clear context finite say finitely presented say recursively presented definition let language let class let say model definition let language let class let say presents universal groups model generated iii every model exists homomorphism say admits presentations every presents structure note presents model since generated homomorphism whose existence guaranteed definition iii unique following lemma lemma let set atomic set constants let let structure following equivalent presents generates every atomic formula every every structure model lemma lemma let language let class closed isomorphic copies following equivalent closed products substructures admits presentations iii axiomatised universal horn theory language proof theorem using model theory following adaptation proof theorem theorem tgrp universal horn lgrp exists recursively presented group every recursively presented group embeds remark group property universal horn theory imply finite presentations groups indeed set universal horn theory definition theorem set finite presentations groups proof let tgrp universal horn lgrp let class lgrp satisfy let class lgrp satisfy tgrp lemma admit presentations presentation write element presented element presented let effective enumeration recursive presentations let disjoint union therefore recursive presentation claim desired universal group satisfying immediate recursive presentation embeds shows universal remains show maurice chiodo zachiri mckenzie recursively presented let let 
interpretations constant symbols let set atomic lgrp language obtained adding new constants generators hold lemma implies presents need show let claim atomic lgrp since follows atomic lgrp need show converse let suppose note since atomic form words generators let group interpretation constant symbols therefore model follows iii definition map defined implicit ordering must lift homomorphism map since contradiction therefore holds since yields recursive enumeration elements definition shows universal horn theory extends tgrp immediately theorem application theorem theorem time done work using techniques way could applied group properties discussed introduction references belegradek endomorphisms relatively hyperbolic groups appendix belegradek internat algebra brodsky howie universal image group israel math chiodo finding elements splittings groups algebra chiodo torsion finitely presented groups groups complex cryptol chiodo vyas note torsion length comm algebra higman subgroups finitely presented groups proc roy soc ser hodges model theory encyclopedia mathematics applications cambridge university press rogers theory recursive functions effective computability mit press rotman introduction theory groups new york department pure mathematics mathematical statistics university cambridge wilberforce road cambridge department philosophy linguistics theory science gothenburg university olof wijksgatan gothenburg sweden
| 4 |
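The group-theory row above defines tor_X(G) as the set of elements annihilated by successively quotienting by the normal closure of the elements whose order lies in X, and a "torsion length" counting how many quotients are needed. A toy illustration for cyclic groups Z_n only, assuming the standard facts that in Z_n the element k has order n/gcd(n, k) and that the subgroup generated by a set of elements is generated by their gcd with n; the function name is hypothetical and this is far simpler than the general (non-abelian, recursively presented) setting of the paper.

```python
from math import gcd

def x_torsion_length(n, X):
    """Iteratively quotient Z_n by the subgroup generated by its
    elements of order in X, until no such elements remain.

    Returns (number_of_quotient_steps, order_of_final_quotient).
    """
    steps = 0
    while True:
        tors = [k for k in range(1, n) if n // gcd(n, k) in X]
        if not tors:
            return steps, n
        g = n
        for k in tors:
            g = gcd(g, k)  # subgroup generated by tors is <g>
        n = g  # Z_n / <g> is cyclic of order g
        steps += 1
```

For instance Z_12 with X = {2} needs two quotients (Z_12 -> Z_6 -> Z_3) before becoming 2-torsion-free, while Z_15 already is.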
feb ardusoar thermalling controller autopilots samuel tabor iain guilliard andrey kolobov samuelctabor australian national university canberra australia iainguilliard microsoft research redmond soaring capability potential significantly increase time aloft uavs paper introduce ardusoar first soaring controller integrated major autopilot software suite small uavs describe ardusoar algorithmic standpoint outline integration arduplane autopilot discuss tuning procedures parameter choices ntroduction autonomous soaring long considered promise extending range time air uninhabited aerial vehicles uavs two uav characteristics crucial range scenarios aerial mapping crop monitoring package delivery currently limited combination uav powerplant limited energy available onboard exploiting energy sun moving air masses latter focus work potential increase drone range many times many atmospheric phenomena help aircraft gain altitude without help motor ubiquitous thermals rising plumes air generated certain areas ground giving heat atmosphere including first feasibility study autonomous thermalling researchers proposed least approaches modeling thermals computationally soaring evaluating techniques simulation live tests significant number intricacies related implementation parameter tuning since none implementations written different languages different platforms setups publicly available none techniques properly compared paper introduces describes implementation soaring controller called ardusoar part popular opensource uav autopilot system ardupilot ardupilot built run many different autopilot hardware platforms including pixhawk rasberry well simulation part project ardusoar software components reused inspected serve inspiration basis benchmark soaring controller designs fig convection simulation showing two rising thermal plumes red color indicating hotter air lower boundary simulation heats adjacent air hotter ambient air temperature hotter air gains buoyancy rises turbulent 
plumes mix cooler air small black vertical bars indicate vertical air speed altitude show typical bell shaped distribution uplift center thermal oaring hermals hermal stimation soaring usually refers process using rising air mass gain altitude birds aircraft technically dynamic soaring source lift wind gradient rising air focus static soaring always relies air pushed upwards way commonly encountered type rising air regions thermal thermals start immediately parts earth surface accumulate heat surrounding areas emit heat back atmosphere warming air thermals form darker terrain asphalt plowed fields may also arise forested areas invisible frequently pictured air columns possibly slanted due wind vertical air velocity center column higher becomes lower towards column fringes eventually turns sink reality even realistic simulation models thermals always conform idealized concept figure screenshot simulation see kadanoff standard convection model gives accurate idea thermals irregular chaotic nature existing work autonomous soaring thermals controllers use thermal models estimated observing aircraft vertical air velocity combining data information aircraft position changes soaring controllers try fit thermal model data fit good enough synthesize trajectory exploits thermal lift distribution predicted model gain altitude model heart simulations used way learn sailplane trajectories simulated aircraft expensive computationally nontrivial tune therefore computational reasons ardusoar relies thermal model proposed wharington herszberg modified allen lin makes number implicit assumptions make explicit apply thermal horizontal given altitude thermal stationary position distribution lift within air vertical velocity change time note however assumption fixed altitude model allows thermal properties vary altitude due air cooling due thermal getting affected wind thermal single center called core position xth vertical air velocity within thermal horizontal distribution thermal 
core strongest xth speed thus lift strength position given exp xth rth rth parameter describing thermal radius parameters rth xth continually estimated using kalman filter assume gaussian distribution keep updating distribution sensor observations hazard used unscented kalman filter ukf purpose ardusoar due computational constraints hardware platforms typically runs pixhawk apm use extended kalman filter ekf instead ekf type kalman filter allows maintaining gaussian estimate system state given sequence observations system transition function observation function nonlinear specifically suppose start state distribution want compute new estimate system makes transition one time step receive observation begin linearize transition function current state estimate computing jacobian update state estimate using state transition linearize around new state estimate computing jacobian calculate approximate kalman gain final step state distribution far needs corrected observation process matrices process observation noise covariances respectively tunable parameters model iii lgorithmic spects rdu oar gaining altitude thermal help broken four stages thermal detection identification exploitation exit abstract level aircraft traveling air motor shut tend lose altitude certain rate depends aircraft airspeed via function called polar curve discuss shortly autopilot knows polar curve current airspeed therefore point easily determine fast aircraft would descending air around still aircraft flying thermal air around still rising lifting aircraft autopilot detect comparing aircraft actual altitude change rate one predicted polar curve enter thermalling mode thermalling mode autopilot continually keeps refitting altitude change rate data receiving thermal model model equation using ekf trying keep aircraft within thermal gain altitude altitude change rate becomes close natural sink rate predicted polar curve indication thermal weak time exit reasoning straightforward operationalizing 
significantly complicated inherent uncertainty thermal parameters also subtleties attributing aircraft altitude gain thermal lift opposed factors ardusoar soaring controller integrated opensource uav autopilot called arduplane overcomes difficulties implements intuition using following data streams arduplane attitude heading reference system ahrs altitude roll bank angle airspeed abs gps latitude abs gps longitude wind wind velocity describe ardusoar operation stage polar curve variometer data key ardusoar operation stream variometer data synthesized data streams described first approximation variometer aims measure vertical speed air around aircraft thereby helping autopilot determine whether aircraft thermal practice however onboard instrument direct access true vertical speed air variometer tries deduce aircraft vertical speed ground case sailplane motor turned imparted sailplane thermal pitching therefore instead measuring vertical airspeed implement variometer measures rate change sailplane total specific energy total energy sailplane glide motor present mgh specific energy meters results dividing zero lift drag coefficient aspect ratio span efficiency constants airframe lift coefficient defined lift force generated wings air density wing area airspeed since pitch angle typically shallow glide approximate gives setting assume constant airframe least low altitudes gives expression correct effect load factor steady turn bank angle dividing cos setting airframe constant gives new expression equation cos glide ratio ratio distance traveled height lost equivalently ratio airspeed sink rate given rearranging gives expression sink rate bcl netto variometer signal given note measurement units computing variometer data involves estimating approximated computed using successive data points altitude airspeed data streams recover true vertical speed air mass surrounding sailplane need correct aircraft natural sink current airspeed bank angle corrected variometer known 
The netto variometer value in a steady glide in still air is zero once the sink rate is subtracted; the sink rate is derived from the sailplane's aforementioned drag polar curve (Anderson gives a standard simplified formula for the drag polar). Finally, substituting the polar equation into the energy-rate expression gives the sink rate in terms of the rate of change of specific energy over time; estimation of the airframe constants is described in Section V.

Thermal detection. ArduSoar constantly monitors a first-order low-pass filtered version of the vario signal; when it exceeds the manually settable threshold parameter SOAR_VSPEED (see Table I for ArduSoar's parameters), ArduSoar switches into thermalling mode. Using a filtered version of the vario signal helps remove turbulence that might otherwise falsely trigger the algorithm.

Thermal identification and exploitation. ArduSoar naturally interleaves these two stages: it uses observation data — variometer readings and their associated GPS locations — to update a distribution over possible thermal parameters, and at the same time chooses a trajectory (a circle centered at some location, with some radius) based on that thermal parameter distribution. Recall that ArduSoar's thermal model, equation (1), has parameters that need to be estimated: W, R_th, x_th, and y_th. To simplify the math, we make the following assumption: given the wind velocity vector, the aircraft velocity vector, and the thermal center and aircraft locations in the Earth reference frame at a given time, both the aircraft and the thermal location are affected by the wind in the same way. This assumption is not completely accurate — thermals tend to move slower than the surrounding air mass — but nonetheless, for small, high-frequency updates it is a good approximation of reality. Moreover, it allows us to easily carry out calculations in the aircraft's reference frame, since under this assumption the relative displacement of the aircraft and the thermal is entirely due to the aircraft's velocity vector. As in Hazard's work, we define the aircraft to always be at the origin of its reference frame, and the thermal state vector tracked is [W, R_th, x, y], where (x, y) is the thermal center's offset from the aircraft (distance north and distance east). We need only perform wind correction to obtain the thermal's motion relative to the aircraft; since this effectively constitutes the process dynamics, the observation function for variometer readings relevant to the thermal state vector — the vertical air velocity at the aircraft's position — is

h([W, R_th, x, y]) = W · exp( −(x² + y²) / R_th² )

with partial derivatives ∂h/∂W = exp(−(x² + y²)/R_th²), ∂h/∂R_th = 2W(x² + y²)/R_th³ · exp(−(x² + y²)/R_th²), ∂h/∂x = −2Wx/R_th² · exp(−(x² + y²)/R_th²), and ∂h/∂y = −2Wy/R_th² · exp(−(x² + y²)/R_th²). Note two aspects that make ArduSoar's EKF update especially efficient.
First, because the process dynamics are trivial, the covariance prediction is simply additive. Second, since the observation vector has a single component (the vario reading), computing the Kalman gain involves dividing by a scalar rather than inverting a matrix. Assuming the aircraft has travelled a known distance north and east since the last step, one ArduSoar EKF update — prediction, linearization of the observation function, gain computation, and correction — runs in a handful of small matrix operations, giving ArduSoar a fresh stream of thermal parameter estimates at every time step.

To use this information to gain height, ArduSoar enters the autopilot's LOITER mode (see the next section on ArduPlane), which ensures that the aircraft orbits a specified location in a circle of a specified radius. ArduSoar sets the mean thermal position estimate as the center of this circle. Setting the radius is less straightforward: in theory, ArduSoar could use the mean R_th estimate, but despite its apparent simplicity this computation involves an important subtlety — the stated aim would be to choose a radius that strikes an optimal balance between the estimated lift (greater near the thermal center) and the lift loss due to the bank angle required to stay in a tight turn. In practice, the orbit radius was instead made a manually settable parameter, for reasons described at the end of Section V. Note also that ArduSoar does not have direct access to the aircraft's displacement: positions x_abs and y_abs in the Earth reference frame come from GPS, and since the aircraft is affected by wind, the displacement between two successive x_abs (resp. y_abs) measurements must be wind-corrected.

Fig. 3. ArduPlane's typical flight profile with ArduSoar enabled: throttle active (climb) between SOAR_ALT_MIN and SOAR_ALT_CUTOFF, throttle suppressed (glide/thermalling in AUTO or LOITER) up to SOAR_ALT_MAX. (Image reproduced with permission.)

Fig. 4. State transition diagram of the combined autopilot; normal transitions are shown with thick lines.

Thermal exit. ArduSoar constantly monitors the strength of the thermal to decide whether to exit it and search for a better one. The sailplane is assumed to be currently circling at some distance from the thermal center, with that distance calculated using the current state estimate. The lift estimate at this distance is adjusted using a constant correction factor k_sink representing the extra sink due to drag at the thermalling bank angle. Thus, ArduSoar computes W · exp(−r²/R_th²) − k_sink and compares it against the SOAR_VSPEED threshold.
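The exit test just described can be sketched directly from the thermal model; the function name and the particular numbers are illustrative assumptions.

```python
import math

def should_exit_thermal(W_est, R_est, orbit_r, k_sink, soar_vspeed):
    """Exit when the model-predicted climb at the current orbit radius,
    corrected by k_sink for the extra sink in the banked turn, falls below
    the entry threshold SOAR_VSPEED."""
    predicted_climb = W_est * math.exp(-orbit_r**2 / R_est**2) - k_sink
    return predicted_climb < soar_vspeed
```

Because the test uses the model rather than instantaneous vario readings, momentary lulls in measured lift do not immediately trigger an exit.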
If the corrected model-predicted climb falls below the SOAR_VSPEED threshold (Table I), the thermal is deemed too weak to continue with. Note that the exit condition does not rely on actual lift measurements, only on lift predicted by the thermal model; this approach has the advantage of not being overly sensitive to the speed of the controller's response to fluctuations in thermal size and position.

IV. ARDUPLANE INTEGRATION

Although the soaring controller can enable an aircraft to stay aloft by extracting energy from the atmosphere, by itself it lacks a lot of useful autopilot functionality, including an AHRS system for estimating aircraft state, waypoint-following capability, and safety mechanisms such as geofencing. Integrating ArduSoar into ArduPlane resulted in a flight controller for UAVs with the rich feature set of a major autopilot in addition to thermalling functionality.

To integrate ArduSoar into ArduPlane, we augmented ArduPlane with a number of parameters configurable via ground control station software such as Mission Planner; the parameters are listed in Table I. SOAR_VSPEED through the SOAR_POLAR parameters pertain to ArduSoar itself; the rest, particularly the SOAR_ALT parameters, control ArduSoar's interoperation with ArduPlane. In particular, ArduPlane has two modes relevant to ArduSoar: AUTO and LOITER. In AUTO mode, given a set of waypoints, a target airspeed, and several other parameters, ArduPlane's default behavior is to fly the aircraft fully autonomously, following the specified waypoint course. In LOITER mode, ArduPlane's default behavior is to orbit a point with specified GPS coordinates at a specified radius.

We modified ArduPlane's AUTO mode to opportunistically use the thermalling functionality, as shown in Figure 3. After the aircraft takes off, or descends to the altitude given by the new SOAR_ALT_MIN parameter (stage 1 of the flight in Figure 3), it climbs under motor to the SOAR_ALT_CUTOFF altitude (stage 2), following its waypoints. Upon reaching SOAR_ALT_CUTOFF, it turns off the motor and glides at ArduPlane's target airspeed (stage 3). During this stage, ArduSoar keeps monitoring the aircraft's state; if it detects a thermal, it takes over navigational control and starts thermalling, putting the aircraft into LOITER mode (stage 4). At this point ArduSoar commands the aircraft to orbit the current mean thermal center estimate at the radius given by a special ArduPlane parameter for LOITER mode called LOITER_RAD. While thermalling, the aircraft does not follow its waypoint course and may deviate from it. It may also reach the altitude given by the SOAR_ALT_MAX parameter, a safety parameter that terminates thermalling to prevent the aircraft from climbing dangerously high.
Once thermalling terminates, ArduSoar gives navigational control back to ArduPlane and the aircraft starts another motorless glide (stage 5). This process can repeat many times without human intervention.

Several of the newly introduced parameters are meant to give a human operator the ability to experiment with ArduSoar's behavior. Specifically, SOAR_VSPEED determines how strong a thermal needs to be for ArduSoar to catch it; SOAR_MIN_THML allows forcing ArduSoar to stay in thermalling mode for a minimum time period, even if ArduSoar would otherwise decide to leave the thermal; and SOAR_MIN_CRSE prevents ArduSoar from starting on a thermal too soon after it exited one, to make sure it does not keep entering and exiting thermals. The state transition diagram of the resulting autopilot is shown in Figure 4. Since in AUTO mode ArduPlane's default behavior is to use the motor to follow waypoints, implementing ArduSoar involved overriding ArduPlane's throttle setting and navigation waypoints, with code inserted into the main ArduPlane vehicle update functions to override the relevant variables while the soaring controller is active.

TABLE I: ArduSoar parameters settable from ArduPlane (several parameter names are abbreviated in the source and shown here as SOAR_…):

- SOAR_ENABLE — enables/disables ArduSoar functionality.
- SOAR_VSPEED — threshold vertical air velocity determining whether the aircraft observes air moving upwards fast enough to enter thermalling mode.
- SOAR_… — standard deviation of the process noise for thermal strength.
- SOAR_… — standard deviation of the process noise for thermal position and radius.
- SOAR_… — standard deviation of the observation noise.
- SOAR_DIST_AHEAD — initial guess of the distance to the thermal center along the aircraft's heading.
- SOAR_MIN_THML — minimum time the aircraft keeps trying to soar once it has entered thermalling (LOITER) mode, in seconds.
- SOAR_MIN_CRSE — minimum time the aircraft avoids entering thermalling (LOITER) mode after last exiting it, in seconds.
- SOAR_POLAR_… (three parameters) — polar curve parameters (see Section V).
- SOAR_ALT_MIN — minimum altitude at which soaring/gliding is allowed, in meters; if the aircraft descends to SOAR_ALT_MIN, it turns the motor on and starts climbing in AUTO mode.
- SOAR_ALT_CUTOFF — altitude in meters above ground at which the aircraft turns off the motor after a climb from SOAR_ALT_MIN and starts gliding in AUTO mode, trying to detect thermals.
- SOAR_ALT_MAX — maximum altitude in meters the aircraft is allowed to
reach; upon reaching it, the aircraft exits thermalling mode and, if necessary, starts a glide in AUTO mode.

The soaring controller uses ArduPilot's methods to trigger flight mode changes. The variometer provides a total energy rate estimate from height, airspeed, and roll measurements; the extended Kalman filter provides a thermal parameter estimate from the total energy rate estimate and the aircraft position at each instant. The system has two primary state variables that determine its behaviour: the flight mode (an ArduPilot feature that selectively enables control features) and a THROTTLE_SUPPRESSED flag within the soaring controller that determines whether the throttle signal is forced to zero. Transitions between states are triggered in update_soaring based on altitude and the thermal parameter estimates; the transition diagram is illustrated in Figure 4. A number of interactions are required between ArduSoar and ArduPlane: for example, rapid changes in operating regime when switching between the THROTTLE_SUPPRESSED and THROTTLE_ACTIVE states require large changes in trim pitch angle, mandating a reset of the pitch integrator on state transitions to avoid overshoots.

V. FLIGHT TESTING

Flight testing was conducted in two phases: (1) airframe characterization, in which the polar curve parameters of the Parkzone Radian Pro, a popular sailplane, were estimated, and (2) controller tuning, in which key parameters affecting the performance of ArduSoar were adjusted. The Radian Pro was equipped with a Pixhawk flight controller (ARM processor), GPS, compass, pitot tube airspeed sensor, a telemetry radio, and an FrSky receiver so that a remote pilot could intervene at any point using an FrSky transmitter; power was provided by a single LiPo battery. The equipment diagram is shown in Figure 5.

The main controller logic is divided into modules. A single function, update_soaring, is called by the task scheduler; depending on the current flight mode, it updates the soaring controller and checks for required changes to the flight mode and navigation target. It exists in the main ArduPlane namespace, allowing it to set the required variables directly. The soaring controller module provides the main controller logic, including methods to update the thermal estimator, the variometer calculation, and the navigation target.

Fig. 5. Radian Pro electronic equipment, with canopy temporarily removed.

Controller tuning. The EKF noise parameters (see Table I) were tuned largely by analysis of filter performance in thermal encounters.
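The altitude-driven part of the state transitions described above can be sketched as a small state machine. The function and state names are illustrative assumptions, not ArduPlane identifiers.

```python
def next_throttle_state(state, alt, alt_min, alt_cutoff):
    """Sketch of the altitude-driven throttle transitions: climb under power
    to SOAR_ALT_CUTOFF, then glide; motor back on at SOAR_ALT_MIN."""
    if state == "THROTTLE_ACTIVE":
        # climbing under motor: cut the throttle once the cutoff altitude is reached
        return "THROTTLE_SUPPRESSED" if alt >= alt_cutoff else state
    # gliding/thermalling with throttle suppressed: motor back on if too low
    return "THROTTLE_ACTIVE" if alt <= alt_min else state
```

The real controller layers further conditions (thermal detection, SOAR_ALT_MAX, minimum thermalling/cruise times) on top of this altitude logic.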
The underlying model assumes a static updraft distribution, so the observation-noise parameter captures variations in vario readings due to actual measurement noise, unmodelled aircraft dynamics, and turbulence; it was therefore taken as the variance of readings in circling flight in turbulent air during the initial seconds of a thermal encounter. It is essential to maintain enough uncertainty in the position estimates (x_th, y_th) to allow movement of the estimated thermal center. This is achieved with large initial variances on x_th and y_th: as the aircraft follows the estimated center, yielding additional information, the filter rapidly localizes the thermal center and reduces the state variances. Since the initial variance of R_th can also affect the ability of x_th and y_th to change, it was selected relatively smaller to balance the initial filter transients. The process-noise parameters determine the steady-state magnitude of the elements of the state covariance matrix: small values lead to low state covariances and small responses to innovations, so the states become almost constant over time; conversely, large values increase state uncertainty, giving the filter more ability to refine the thermal position estimate. The elements were selected to give good filter response characteristics while thermalling: strength and radius estimates change slowly, on the order of the period of a thermal orbit, relative to the position estimates.

As mentioned previously, despite the EKF providing ArduSoar with estimates of the thermal radius, we opted to set the orbit radius used in LOITER mode via the manually specified parameter LOITER_RAD. In test flights on thermals at low altitudes we observed large variance in the R_th estimates, indicating they were unreliable; moreover, we observed large covariances between R_th and the other state estimates, indicating that a fixed loiter radius would give more predictable performance.

Polar curve. The vertical-speed polar curve was estimated by conducting test glides in calm air. ArduPlane's AUTO mode was used to track a target airspeed around a triangular course; over a range of target airspeeds the descent rate was measured, and two of the SOAR_POLAR parameters were fitted to the resulting data using a least squares fit to the polar equations. The remaining SOAR_POLAR parameter was determined directly from the wing dimensions and weight of the Radian Pro; sea-level air density was assumed, since flight testing was conducted near sea level. Controller tuning additionally required deriving estimates of the variometer measurement noise and the initial state values.
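The least-squares polar fit can be sketched as follows, under the common simplifying assumption that sink decomposes into a parasitic term growing as v² and an induced term shrinking as 1/v²; the coefficient names A and B are illustrative, not the SOAR_POLAR parameter names.

```python
import numpy as np

def fit_polar(speeds, sinks):
    """Least-squares fit of sink = A*v**2 + B/v**2 (parasitic + induced drag)
    to measured glide data, as a sketch of the polar-curve fitting step."""
    v = np.asarray(speeds, dtype=float)
    X = np.column_stack([v**2, 1.0 / v**2])          # design matrix
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(sinks, dtype=float), rcond=None)
    return coeffs                                     # [A, B]

def predicted_sink(v, A, B):
    """Sink rate predicted by the fitted polar at airspeed v."""
    return A * v**2 + B / v**2
```

Because the model is linear in A and B, a handful of steady glides at different target airspeeds suffices for a stable fit.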
It also required choosing the initial state variances (x_init, P_init), the state process noise, and a suitable loiter radius relative to the thermal radius R_th.

Fig. 6. Simulation results showing soaring performance using three different fixed loiter radii over a wide range of thermal sizes R_th, compared against using the optimal loiter radius for each thermal size.

At least for the irregular thermals at low altitudes, ArduSoar has difficulty distinguishing strong narrow thermals from weak large ones. This is not surprising given ArduSoar's lack of exploration: once it is confident in a center location, it orbits it, making no attempt to reduce the variance of the R_th estimates. An exploration strategy that deliberately varies the orbit radius could alleviate this problem. At the same time, we observed that the smallest turn radius the Radian Pro can reliably maintain in LOITER mode works well across a wide range of thermals. Figure 6, based on simulated thermals, shows soaring performance versus the choice of loiter radius: a small loiter radius extends performance to smaller thermals at little expense in performance on large thermals. Additionally, trying to orbit tightly at speeds close to stall speed — where soaring performance is best — is challenging for the autopilot; for the Radian Pro, the chosen radius and airspeed provide a good compromise. We found that setting the loiter radius too tight resulted in occasional tip stalls leading to dangerous flat spins that required manual intervention to recover from.

VI. RELATED WORK

Besides autonomous dynamic soaring and soaring in thermals, researchers have considered many other ways of extracting energy from the atmosphere to maintain flight, such as exploiting gusts, wind fields, and orographic lift. ArduSoar has been successfully used for soaring in orographic lift, generated by wind blowing against a vertical obstacle such as a tree line or a hill and forcing air to move upwards. Although the true shape of the rising air mass in this case is almost arbitrary, and certainly far from a column, ArduSoar's strategy of circling in order to stay within the air mass is a valid approach for exploiting this kind of lift as well. Techniques for flight path planning for increased endurance due to natural lift sources have also received attention in the literature, but are beyond the topic of this work. Among previously proposed autonomous soaring approaches, those of Wharington, Allen and Lin, and Edwards are the closest to ours. All three use the Gaussian thermal model first proposed by Wharington and later modified by Allen and Lin; the differences lie in the set of hypotheses each approach maintains about the thermal
and in the exploitation strategy. In simulation scenarios, Wharington maintains a fixed grid of candidate thermal center locations over the flight area and identifies the node where the thermal parameters fit best as the center; the control strategy was developed with the help of reinforcement learning, trained using a simple simulator, but the slow convergence of the solver prohibited practical implementation. Allen and Lin place the thermal center at the centroid of the recent history of observation locations, weighted by the square of each measurement; the thermal radius is fitted to the observations using gradient descent, and the thermal is exploited using a controller based on a form of Reichmann's rules. Edwards mitigated the problem of thermal center estimates biased toward the inside of the turn by augmenting Allen's centroid locater with a surrounding grid of candidate thermal locations, similar to Wharington's but with adaptive grid scale and placement: an evolutionary search recursively refines the thermal parameters, choosing the grid point that best fits the observation history at each step using least squares. Edwards and Silberberg also present a weighted least squares method that simultaneously fits the parameters of a generalized ellipsoidal thermal model without the need for a search grid; in flight testing, however, this method was found unreliable and sensitive to observation noise.

Bird and Langelaan present a method to map the vertical wind field around a sailplane by fitting the weights and parameters of a spline surface to observations using an extended Kalman filter. To exploit an arbitrary uplift map, a contour solver extracts the optimal lift isoline, along which the sailplane is guided; an augmented sinusoidal dither signal promotes exploration and refinement of the map. Simulation results show significant performance improvement over previously described controllers, even on irregularly shaped thermals. Reddy et al., like Wharington, employ a reinforcement-learning method to learn a policy; however, they utilize a much more detailed thermal simulation based on convection. Like Wharington's, RL-based approaches require a random trial-and-restart capability that is so far impractical in the real world, limiting training to simulation without flight testing, and it is not clear how well the learnt control strategies would carry over to the real world. Edwards, on the other hand, conducted a live flight test in which his controller kept a sailplane airborne for hours along a closed course. In his tests,
the thermal centering ran on a laptop on the ground; due to its use of an adaptive grid of thermal hypotheses, Edwards's approach is still expensive for a computationally constrained device such as a Pixhawk. As a more efficient alternative means of thermal identification, for simulated autonomous thermalling Hazard derives a thermal tracking approach and evaluates its performance in determining thermal parameters under various environment exploration policies, such as flying a horizontal circular or sinusoidal pattern. Our work uses several elements of Hazard's work, particularly the thermal model and the insights regarding tracking it with a Kalman filter variant. In another implementation, an EKF estimator was used to identify thermal dynamics with state variables thermal strength, radius, distance to the aircraft, and bearing to the aircraft; state dynamics were introduced to cause decay of the thermal strength, and state estimation was separated into two EKFs so that the dynamics of thermal strength and radius were decoupled from the position states, with the formulation executed iteratively on an embedded autopilot. It differs from our approach in that the more complex state dynamics increase computational complexity, and the decoupled estimation cannot capture the coupling of uncertainties between the states; in our approach, the coupling terms are captured in the covariance matrix of the estimator.

VII. CONCLUSIONS

In this paper we presented ArduSoar, an embedded thermal soaring controller implemented and tested in the codebase of ArduPlane, a popular autopilot for UAVs. ArduSoar is based on an estimator that uses an extended Kalman filter with trivial state dynamics to achieve a computational cost low enough for ArduSoar to be executable on highly constrained drone autopilot hardware such as the Pixhawk or even the APM. Our implementation demonstrates good performance and has been used by several research groups as well as many individuals for leisure, research, and commercial purposes.

ACKNOWLEDGEMENTS

We would like to thank Peter Braswell for making available his soaring controller code, from which the ArduSoar implementation was derived.

REFERENCES

- ArduPilot project website.
- ArduPlane.
- Mission Planner.
- Pixhawk project website.
- Raspberry Pi project website.
- Allen. Updraft model development for autonomous soaring of uninhabited air vehicles. In Proc. AIAA Aerospace Sciences Meeting and Exhibit.
- Allen and Lin.
Guidance and control of an autonomous soaring UAV. Technical report.
- Anderson. Introduction to Flight.
- Bird, Langelaan, Montella, Spletzer, and Grenestedt. Closing the loop in dynamic soaring. In Proc. AIAA Guidance, Navigation and Control Conference and Exhibit.
- Bird and Langelaan. Spline mapping to maximize energy exploitation of thermals. Technical Soaring.
- Chakrabarty and Langelaan. Path planning for UAVs. Journal of Guidance, Control, and Dynamics.
- Chung, Lawrance, and Sukkarieh. Learning to soar: exploration in reinforcement learning. International Journal of Robotics Research.
- Depenbusch, Bird, and Langelaan. The AutoSOAR autonomous soaring aircraft: hardware implementation and flight results. Journal of Field Robotics.
- Edwards. Implementation details and flight test results of an autonomous soaring controller. In Proc. AIAA Guidance, Navigation and Control Conference and Exhibit.
- Edwards and Silberberg. Autonomous soaring: the Montague challenge. Journal of Aircraft.
- Fisher, Marino, Clothier, Watkins, Peters, and Palmer. Emulating avian orographic soaring with a small autonomous glider. Bioinspiration & Biomimetics.
- Hazard. Unscented Kalman filter for thermal parameter identification. In Proc. AIAA Aerospace Sciences Meeting and Exhibit.
- Kadanoff. Turbulent heat flow: structures and scaling. Physics Today.
- Kahn. Atmospheric thermal location estimation. Journal of Guidance, Control, and Dynamics.
- Kalman. A new approach to linear filtering and prediction problems. Transactions of the ASME, Journal of Basic Engineering.
- Langelaan. Long-distance trajectory optimization for small UAVs. In Proc. AIAA Guidance, Navigation and Control Conference and Exhibit.
- Langelaan. Gust energy extraction for uninhabited aerial vehicles. In Proc. AIAA Aerospace Sciences Meeting and Exhibit.
- Lawrance and Sukkarieh. A guidance and control strategy for dynamic soaring with a gliding UAV. In Proc. IEEE International Conference on Robotics and Automation (ICRA).
- Lawrance and Sukkarieh. Path planning for autonomous soaring flight in dynamic wind fields. In Proc. IEEE International Conference on Robotics and Automation (ICRA).
- Oettershagen, Melzer, Mantel, Rudin, Stastny, Wawrzacz, Hinzmann, Leutenegger, Alexis, and Siegwart. Design of small hand-launched UAVs: from concept study to a world endurance record flight.
- Reddy, Sejnowski, and Vergassola. Learning to soar in turbulent environments. Proceedings of the National Academy of Sciences (PNAS).
- Reichmann. Cross-Country Soaring.
- Sutton and Barto. Reinforcement Learning: An Introduction. MIT Press.
- Telford. Convective plumes in a convective field. Journal of the Atmospheric Sciences.
- Wharington. Autonomous Control of Soaring Aircraft by Reinforcement Learning. Doctorate thesis, Royal Melbourne Institute of Technology, Melbourne, Australia.
- Wharington and Herszberg. Control of a high endurance unmanned air vehicle. In Proc. ICAS Congress.
A Short-Term Voltage Stability Index and Case Studies

Wenlu Zhao et al.

Abstract — The short-term voltage stability (SVS) problem of power systems is getting more serious due to increasing load demand and the increasing use of electronically controlled loads, and serious blackouts have been considered related to voltage instability. In China, the East China Grid (ECG) is especially vulnerable to voltage instability due to its increasing dependence on power injection from external grids through HVDC links. However, the SVS criteria used in practice are qualitative, and the SVS indices proposed in previous research are mostly based on those qualitative criteria. A short-term voltage stability index (SVSI) that is continuous and quantitative is proposed in this paper. The SVSI consists of three components, reflecting the transient voltage restoration, the transient voltage oscillation, and the recovery ability of the voltage signal, respectively, after a contingency is cleared. The theoretical backgrounds and affected factors of the three components of the SVSI are analyzed, together with feasible applications. For verification, the validity of the SVSI is tested in cases based on the ECG. Additionally, a simple case of selecting candidate locations to install dynamic var using the SVSI is presented, showing its feasibility for solving the optimization problem of dynamic var allocation.

Index Terms — voltage stability, short-term voltage stability index (SVSI), case studies.

I. INTRODUCTION

According to its definition, short-term voltage stability (SVS) involves the dynamics of fast-acting load components such as induction motors, electronically controlled loads, and HVDC converters. Modern power systems are operating in increasingly stressed situations due to growing load demands and electricity exchange between regions. Additionally, their dynamic responses are complicated by the increasing use of induction motor loads, electronically controlled loads, HVDCs, and renewable power generation. As a result, power systems are more vulnerable to voltage instability, which is considered part of the reason for the North American blackout of August 2003 and may be related to SVS. In China, energy resources such as coal, hydro, and natural gas are distributed in the west while loads are distributed in the east, so long-distance transmission is needed to deliver electric power from west China to east China. The East China Grid (ECG) is a typical instance: the system receives power through HVDC links, and according to the requirements of Chinese environmental protection
policies, any increase of local generation is strictly controlled unless it comes from combined heat and power (CHP) units. The dependence on power injection from external grids therefore increases year by year, and the SVS of the Yangtze River Delta is getting serious; the SVS of the ECG, especially the Yangtze River Delta, must be taken seriously, and indices describing voltage stability are required to evaluate it.

Currently, indices describing transient stability can be divided into three categories: indices based on the transient energy function, indices based on critical clearing time (CCT), and indices based on voltage signals. The former two categories are used to study rotor angle stability; only the last category can be used to study SVS. The SVS criteria used in practice are listed below (the specific numerical thresholds vary by operator):

- Western Electricity Coordinating Council (WECC): the voltage dip shall not exceed a specified percentage at load buses, nor exceed a smaller specified percentage for more than a specified number of cycles at load buses, for single contingencies; tighter dip and duration limits apply at any bus and at load buses for multiple contingencies.
- Tennessee Valley Authority (TVA): the transmission system voltage shall recover to a specified fraction of nominal system voltage within a specified time of fault clearing for fault contingencies, and within a specified time for unit-tripping contingencies.
- State Grid Corporation of China (SGCC): the voltage shall restore to a specified level within a specified time after the fault is cleared.
- China Southern Power Grid (CSG): the voltage shall not drop below a certain level for longer than a certain duration.

The SVS criteria used in practice can therefore only determine whether the system is stable or not; they cannot reflect the degree of stability. Furthermore, these criteria are merely dependent on operating experience and lack theoretical backgrounds, which is why, under similar circumstances, criteria such as WECC's and TVA's take different forms with different thresholds. With the increasing complexity of modern power systems, the current SVS criteria can hardly satisfy the demands of evaluation and optimization.

The SVS indices introduced in previous research are mostly dependent on the qualitative SVS criteria whose lack of theoretical backgrounds was discussed above. Therefore, a short-term voltage stability index (SVSI) that is continuous and quantitative is proposed in this paper to evaluate SVS, and it is tested in SVS evaluation of the ECG to verify its effectiveness. The SVSI consists of three components with theoretical backgrounds and identified affected factors; moreover, the proposed SVSI can additionally be used in research on the evaluation and optimization of dynamic vars.

II. SVS EVALUATION USING THE SHORT-TERM VOLTAGE
STABILITY INDEX

A. Definition of the SVSI

The short-term voltage stability index (SVSI) proposed in this paper consists of three components: SVSI_R, SVSI_O, and SVSI_S. The three components reflect the transient voltage restoration, the transient voltage oscillation, and the ability to reach quasi-steady state, respectively, of the bus voltage signals. All voltage signals considered in this paper are normalized: namely, the voltage signal V(t) in the transient process at a bus is divided by the pre-contingency voltage of that bus, giving the normalized voltage signal. The advantages of normalization are stated in the Discussion subsection.

Based on the SVS criteria listed in Section I, a new definition of transient voltage restoration is proposed as follows: if the average of the normalized voltage over a time period after the contingency is cleared exceeds a given level, the bus is considered to have its transient voltage recovered; otherwise it is considered transient-voltage-unrecovered. Under this definition, the algorithm for the SVSI differs between the transient-voltage-recovered scenario and the transient-voltage-unrecovered scenario, so SVSI_R, SVSI_O, and SVSI_S are presented separately for the two scenarios.

1) SVSI_R in the transient-voltage-recovered scenario. Two important arguments, V_s and T_SVSIR, need to be defined for the calculation of SVSI_R. The first argument, V_s, is defined to represent the quasi-steady-state voltage in the transient process. Since the simulation time is limited, its true value is hard to obtain; logical judgments and approximation are therefore proposed. In the common circumstance, the voltage signal after the contingency is cleared first surges to a relatively high level, oscillates over a damping period, and finally settles; V_s is then computed from the signal's last peak and last valley (the max and min over the tail of the signal after the last extreme value). In addition, two special circumstances, in which no fluctuation cycle or fluctuation amplitude can be recognized, need to be considered: first, if the slope tends to be gentle after the last extreme value, V_s tends to the end value of the simulation; second, if the slope tends to be steep after the last extreme value, V_s is taken from the max/min over the tail of the signal.

The second argument, T_SVSIR, is defined as the moment when V(t) first reaches V_s in the ramping period after the contingency is cleared. In common circumstances,

T_SVSIR = min { t | V(t) ≥ V_s, t_clear ≤ t ≤ t_end }

where t_clear is the moment in the simulation when the contingency is cleared and t_end is the end time of the simulation (the SVS time window considered in this paper). In the special circumstance where the inequality V(t) ≥ V_s holds throughout [t_clear, t_end], we take T_SVSIR = t_clear. Based on
the arguments defined above and the simulation result, SVSI_R is computed from T_SVSIR and t_flt, where t_flt is the moment the contingency starts; it corresponds to the area of the yellow region in Fig. 1.

Fig. 1. Schematic of SVSI_R in the transient-voltage-recovered scenario; the area of the yellow region corresponds to SVSI_R. The voltage maintains a depressed level for much of the interval and then experiences a continuous rise up to T_SVSIR.

In this paper, the time period [t_flt, T_SVSIR] is defined as the transient voltage restoration stage, and the time period [T_SVSIR, t_end] as the post-restoration stage. SVSI_R, the transient voltage restoration component of the SVSI, reflects the dynamic reactive power balance during transient voltage restoration in the short time after the contingency is cleared. It is closely related to the dynamic response of induction motors and the limits of generators, dynamic vars, and HVDCs; thus SVSI_R reflects key factors of SVS. The greater its value, the greater the possibility of voltage instability. In the process of transient voltage restoration, induction motors and HVDCs consume massive reactive power and thus threaten voltage stability; on the contrary, generators and dynamic vars such as STATCOMs and SVCs rapidly send out reactive power to support SVS. After the transient voltage restoration process, with the variation of operating states, the threatening reactive power consumption of induction motors and HVDCs returns to its normal level, and the threat to SVS is reduced.

Induction motors consume massive reactive power when operating in low-voltage or stalled status, and thus threaten SVS during the transient voltage restoration stage while the bus voltage remains low. The operating states of induction motors depend on the balance between the electromagnetic torque, which is roughly proportional to the square of the bus voltage, and the mechanical torque; the speed of a motor usually increases once the bus voltage reaches a sufficient level. By the end of the transient voltage restoration stage in the transient-voltage-recovered scenario, the voltage recovers to a level sufficient to maintain this balance; the variation of operating states of induction motors is then no longer threatening, and with the coupled effect of damping, the induction motors eventually return to normal operation.

HVDCs consume massive reactive power in the restoration process, and commutation failures are harmful to SVS. The operating states of HVDCs are closely related to the voltage at the coupling point with the AC system: when the extinction angle decreases — as may be caused by a voltage dip — below a
threshold value, a commutation failure of the HVDC occurs. While the bus voltage remains low during the transient voltage restoration stage, commutation failures may occur and thus threaten SVS. Regardless of whether a commutation failure occurred during the transient voltage restoration stage, the HVDC can operate normally after that stage in the transient-voltage-recovered scenario: it either operates normally under a control strategy with commutation failure prevention, or is restored from the commutation failure. A relatively strong ability to withstand voltage disturbance reduces SVSI_R; to enhance SVS from this perspective, efforts can focus on modification of induction motor loads, adjustment of HVDC controllers, optimization of AVR parameters, and increase of dynamic var reservation.

2) SVSI_O in the transient-voltage-recovered scenario. The definition of V_SVSIO is needed for the calculation of SVSI_O. V_SVSIO reflects the oscillation component of the voltage signal, shown in Fig. 2. Oscillation is normally considered an area of power system small-signal analysis, but the transient process studied here is beyond the scope of small-signal stability analysis. After the voltage recovers to a certain level and the system tends toward quasi-steady state following contingency clearance, the aperiodic low-frequency fluctuations are retained in V_SVSIO, while high-frequency fluctuations are considered noise, as shown in Fig. 2. SVSI_O is then computed as the average magnitude of V_SVSIO over the interval [t_clear, t_end].

Fig. 2. Schematic of SVSI_O in the transient-voltage-recovered scenario; the area of the yellow region corresponds to SVSI_O.

SVSI_O, the transient voltage oscillation component of the SVSI, reflects the balance of active power between generators and loads and the damping ability of the system. It is closely related to the dynamic responses of devices such as generator speed control systems and PSSs. The greater SVSI_O, the slower the oscillation attenuates after the contingency is cleared, and thus the greater the possibility of voltage instability. To reduce SVSI_O and enhance SVS, efforts can focus on optimizing the dynamic control of generators and the performance of PSSs.

3) SVSI_S in the transient-voltage-recovered scenario. The definition of t_stable is needed for the calculation of SVSI_S: t_stable is the moment when the voltage reaches quasi-steady state. Since the simulation time is limited, its true value is hard to obtain; logical judgments and approximation are therefore proposed. In common circumstances,

t_stable = min { t ∈ [t_clear, t_end] | |V(τ) − V_s| ≤ V_wth for all τ ∈ [t, t_end] }

where V_wth is the amplitude threshold determining the quasi-steady-state voltage band; referring to the requirements of
practical voltage judgement, V_wth is set to a small value. In addition, two special circumstances are considered: first, if continuous oscillation with increasing amplitude is observed, t_stable = t_end; second, if |V(t) − V_s| ≤ V_wth holds throughout [t_clear, t_end], then t_stable = t_clear. Based on the arguments defined above and the voltage signal, the computational expression of SVSI_S is

SVSI_S = (t_stable − t_clear) / (t_end − t_clear)

Fig. 3. Schematic of SVSI_S in the transient-voltage-recovered scenario; the shaded region corresponds to SVSI_S.

SVSI_S, the recovery-ability component of the SVSI, reflects the speed with which the voltage reaches the quasi-steady-state level at the bus. It is closely related to the topology and operating mode of the system, such as the tripping of transmission lines and the switching of shunt compensations. The greater SVSI_S, the slower the voltage reaches the (possibly lower) quasi-steady-state level at the bus, and thus the greater the possibility of voltage instability. To reduce SVSI_S and enhance SVS, efforts can focus on modification of the operating mode of the system.

B. SVSI in the transient-voltage-unrecovered scenario

The main differences for SVSI_R, SVSI_O, and SVSI_S in the transient-voltage-unrecovered scenario are the definitions of T_SVSIR, t_stable, and V_s; the definitions of the rest of the arguments and indices are the same as in the transient-voltage-recovered scenario. The new definitions set T_SVSIR = t_end and t_stable = t_end, with V_s determined over the tail of the simulation, so that SVSI_R extends over the whole window from t_flt. The new schematics of SVSI_R, SVSI_O, and SVSI_S are shown in Figs. 4–6 (schematics in the transient-voltage-unrecovered scenario).

C. Discussions

1) Motivation of the normalization. A major factor in voltage instability cases is contiguous buses suffering voltage dips. For SVS, buses whose voltage is originally at a lower level are worse off; in contrast, buses whose voltage is originally at a higher level are better off. Normalization increases the SVSI corresponding to buses whose voltage is originally lower than the base level and decreases the SVSI corresponding to buses whose voltage is originally higher than the base level, which makes the result of SVS evaluation more reasonable.

2) Probable applications of the SVSI. Ranking according to the SVSI: since the SVSI describes the response of the voltage after a contingency is cleared, rankings can be made according to the SVSI of the corresponding buses over a set of credible contingencies. The results are mainly affected by the topology and the set of credible contingencies of the system, which usually change little over a long time.
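The t_stable and SVSI_S logic can be sketched as follows, assuming the band test |V − V_s| ≤ V_wth and the normalization by t_end − t_clear described above; the band width and signal shapes are illustrative.

```python
import numpy as np

def t_stable(t, v, v_s, t_clear, v_wth=0.05):
    """First time >= t_clear from which the signal stays within v_s +/- v_wth
    until the end of the window; returns t_end if it never settles."""
    for i in range(len(t)):
        if t[i] >= t_clear and np.all(np.abs(v[i:] - v_s) <= v_wth):
            return t[i]
    return t[-1]

def svsi_s(t, v, v_s, t_clear, v_wth=0.05):
    """Recovery-ability component: settling time normalized by the window length."""
    return (t_stable(t, v, v_s, t_clear, v_wth) - t_clear) / (t[-1] - t_clear)
```

A signal that settles quickly yields a value near 0, while one still outside the band at t_end yields 1, matching the intended monotonic relation between SVSI_S and instability risk.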
Therefore the result can be used for SVS evaluation until the next change of the topology or the set of credible contingencies of the system, thus reducing the computational burden of evaluation. Optimization of the dynamic var reserve: since SVSI quantitatively evaluates SVS as a sensitivity function of the dynamic var, SVSI can be used in the evaluation and optimization of the dynamic var reserve. Evaluation of SVS: based on the theoretical backgrounds and affected factors of SVSI stated above, supervised learning can be set up with the affected factors as input and SVSI as output, and an artificial neural network (ANN) can be constructed on this basis. The constructed ANN model evaluates SVS much faster than time-domain simulation and can be used in the evaluation of SVS.

In summary, SVSI consists of three components that reflect key features of the voltage signals after the contingency is cleared. Moreover, the components have theoretical backgrounds and identifiable affected factors. Therefore SVSI is a comprehensive criterion to evaluate SVS and can be applied in various research directions.

III. Case Studies

The validity of SVSI is verified on the ECG system, which contains the buses and transmission lines of the grid. A typical operating mode in summer is chosen for analysis; since the load level in summer is high, the SVS of the system is vulnerable. Cases are carried out for verification; four typical ones and one practical instance are shown here due to limited space.

Case I: modify the proportion of induction motor loads.
Fig. Voltage signals and SVSI in scenarios with different proportions of induction motor loads.
The figure shows the voltage signals and SVSI as the proportion of induction motor loads changes. For the voltage signals (left), the restoration ability of the voltage signals becomes weaker as the proportion of induction motors increases, which conforms to the statement about the impact of induction motors on SVS. For SVSI (right), SVSIr increases significantly as the proportion of induction motors increases; in contrast, SVSIo and SVSIs change little while the proportion of induction motors is small. As the proportion of induction motors increases, the voltage signal becomes nearly unstable, and when the proportion increases further, voltage instability occurs; SVSIr, SVSIo and SVSIs are then all increased significantly. The three components of SVSI thus increase significantly when the voltage signal is nearly unstable or already unstable. This conclusion remains correct in the following cases.
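The surrogate idea above (affected factors in, SVSI out, so a new operating point can be scored without a time-domain simulation) can be sketched as follows. The paper constructs an ANN; this dependency-free sketch substitutes an ordinary least-squares line, and the training pairs and the feature `p` (induction-motor load proportion) are synthetic assumptions.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# synthetic training pairs: SVSI grows with the induction-motor share
p_train = [0.1, 0.2, 0.3, 0.4, 0.5]
svsi_train = [0.11, 0.19, 0.32, 0.41, 0.52]
slope, icpt = fit_line(p_train, svsi_train)

def predict(p):
    # fast surrogate evaluation for a new operating point
    return slope * p + icpt
```

An ANN would replace `fit_line`/`predict` with a trained network over all affected factors; the point of the sketch is only the input/output arrangement.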
Case II: modify the installation of dynamic var.
Fig. Voltage signals and SVSI in scenarios with different configurations of dynamic var.
The figure shows the voltage signals and SVSI as the installation of dynamic var changes. For the voltage signals (left), the restoration ability of the voltage signals becomes better when a dynamic var is installed; moreover, the voltage restoration ability is better when the dynamic var is located near the measurement bus, in contrast to when the dynamic var is located remote from the measurement bus. For SVSI (right), the three components decrease as the dynamic var arrangement changes in the sequence of no dynamic var, dynamic var remote, and dynamic var nearby.

Case III: modify the configuration of PSS.
Fig. Voltage signals and SVSI in scenarios with different configurations of PSS.
The figure shows the voltage signals and SVSI as the configuration of the PSS changes. For the voltage signals (left), the oscillation of the voltage signals is obvious when the system is configured with an improper PSS, in contrast to the system configured with a proper PSS. For SVSI (right), SVSIo is smaller when the PSS is proper. The reason why SVSIr is larger with a proper PSS in this case may be that the PSS restricts the response to prevent sustained oscillation during voltage restoration.

Case IV: modify the strategy.
Fig. Voltage signals and SVSI in scenarios with different strategies.
The figure shows the voltage signals and SVSI as the strategy changes. For the voltage signals (left), the ability of the voltage signals to reach the required voltage level becomes better as the proportion increases. For SVSI (right), SVSIs decreases as the proportion increases.

Case V: select the installation locations of dynamic var. When the contingency occurs at a given location, the SVSI of the corresponding part is smaller when the dynamic var is installed at the location closer to the contingency location, in contrast to the dynamic var being installed at the other location; therefore the closer location is the better choice for dynamic var installation. Furthermore, the optimal locations to install dynamic vars can be solved based on this procedure.

IV. Conclusion

Fig. Diagram of part of the ECG (buses anonymized due to confidentiality considerations). This part locates at the end of the East China Grid and connects to the rest of the grid; the red numbers represent candidate locations of contingencies and the blue letters represent candidate locations to install dynamic vars.
A continuous, quantitative and multidimensional index, SVSI, is presented in this paper for the evaluation of SVS. SVSIr, SVSIo and SVSIs, the three components of SVSI, reflect the transient voltage restoration, the transient voltage oscillation, and the recovery ability of the voltage signal, respectively, after the contingency is cleared. The theoretical backgrounds and affected factors of the three components are analyzed in detail to reveal the wide applicability of SVSI. Cases based on the ECG are carried out to verify the effectiveness of SVSI; the five cases presented in the paper show that SVSI reflects the key characteristics of the voltage signal after the contingency is cleared, and that the three components of SVSI adjust with modifications of their respective affected factors. SVSI is applied to determining the optimal locations to install dynamic vars; moreover, SVSI can also be applied to the evaluation and optimization of the dynamic var reserve and to the evaluation of SVS, which are among the following research contents of this study.

Fig. Effect of dynamic var when the contingency occurred at the first candidate location.
Fig. Effect of dynamic var when the contingency occurred at the second candidate location.
The figures show the composite value of SVSI when the contingency occurred at the two candidate locations; when the contingency occurred at a given location, the SVSI of the corresponding part is much smaller when the dynamic var is installed at the location closer to the contingency location.

References
[1] Definition and classification of power system stability, IEEE Transactions on Power Systems (PWRS).
[2] Potamianakis and Vournas, Voltage instability and the effects of synchronous and induction machines, IEEE Transactions on Power Systems.
[3] Fouad and Vittal, Power system response to a large disturbance: energy associated with system separation, IEEE Transactions on Power Apparatus and Systems.
[4] Llamas and Mili, Clarifications of the BCU approach for transient stability analysis of power systems, IEEE Transactions.
[5] Xue and Van Cutsem, Extended equal area criterion: justifications, generalizations, applications, IEEE Transactions on Power Systems.
[6] Yixin, Study on dynamic security regions of power systems, Proceedings of Electric Power System and Automation.
[7] Ardyono Priyadi, Yorino and Tanaka, A direct method for obtaining the critical clearing time for transient stability using critical generator conditions, IEEE Transactions on Power Systems.
[8] NERC, Planning standards.
[9] TVA, Transient stability planning criteria.
[10] National Energy Board, Technique specification of power system security and stability calculation, Electric Power.
[11] SGCC, GDW: Security and stability computing specification of the national grid, China Electric Power Press.
[12] CSG, Guide on security and stability analysis of CSG.
[13] Dong and Meng, Dynamic var planning against voltage instability using an evolutionary algorithm, IEEE Transactions on Power Systems.
[14] Paramasivam, Salloum and Ajjarapu, Dynamic optimization based reactive power planning to mitigate slow voltage recovery and short term voltage instability, PES General Meeting, Conference and Exposition, IEEE.
[15] Zhang and Zhao, Assessing SVS of electric power systems by a hierarchical intelligent system, IEEE Transactions on Neural Networks and Learning Systems.
[16] Dong, Xie and Zhou, An integrated high side control strategy to improve SVS of power systems, IEEE Transactions on Power Systems.
[17] Tiwari and Ajjarapu, Optimal allocation of dynamic var support using mixed integer dynamic optimization, IEEE Transactions on Power Systems.
[18] Wildenhues, Rueda and Erlich, Optimal allocation and sizing of dynamic var sources using heuristic optimization, IEEE Transactions on Power Systems.
[19] Kundur, Power System Stability and Control, New York.
[20] Brown, deMello and Lenfest, Effects of excitation, turbine energy control and transmission on transient stability, IEEE Transactions on Power Apparatus and Systems.
[21] Vournas, Unstable frequency oscillations of a generator, Power Engineering Society Winter Meeting, IEEE.
[22] China Machinery Industry Standard (CMIS), Power quality: voltage fluctuation and flicker.
Set Membership with Bit Probes

Mohit Garg (Tokyo Institute of Technology, Tokyo), Jaikumar Radhakrishnan (Tata Institute of Fundamental Research, Mumbai)

Abstract. We consider the bit-probe complexity of the set membership problem: a set S of size at most n from a universe of size m is to be represented as a short bit vector in order to answer membership queries of the form "Is x in S?" by probing the bit vector at t places. Let s(m, n, t) be the minimum number of bits of storage needed for such a scheme. Buhrman, Miltersen, Radhakrishnan and Srinivasan, and Alon and Feige investigated s(m, n, t) for various ranges of the parameters. We show the following. (a) A general upper bound for odd t, which improves the corresponding result of Buhrman et al. for odd t. (b) For small values of n and odd t, adaptive schemes that use a little less space. (c) For three probes, a lower bound with an improved constant over the result of Alon and Feige. (d) The complexity of a scheme might in principle depend on the function used to determine the answer based on the three bits read; since one may assume that all queries use the same function, let s_f(m, n) be the minimum number of bits of storage required for a scheme where f is the function used to answer queries. We show that for a large class of functions, including the majority function on three bits, schemes that use such query functions cannot give asymptotic savings over the trivial characteristic vector once n is on the order of log m.

1. Introduction. The set membership problem is a fundamental problem in the area of data structures and information compression and retrieval. In its abstract form, given a subset S of size at most n of a universe of size m, we are required to represent it as a bit string so that membership queries of the form "Is x in S?" can be answered using a small number of probes into the bit string. The characteristic function representation provides a solution to this problem where only one probe is needed to answer queries; however, for small sets, representing them using m-bit strings is wasteful, and one is promised a small number of bits in the representation for a given number of probes. This trade-off has been the subject of several previous works, studied as early as Minsky and Papert's book Perceptrons. More recently, Buhrman, Miltersen, Radhakrishnan and Venkatesh showed the existence of randomized schemes that answer queries with one bit probe and use near optimal space; in contrast, they showed results for deterministic schemes that answer queries by (Footnote: Part of this work was done while the first author was at the Tata Institute of Fundamental Research.)
viola lewenstein munro nicholson raman garg radhakrishnan sets element included probability makhdoumi huang polyanskiy showed particular savings characteristic vector obtained case schemes work focus deterministic schemes probes probes made parallel equivalently location probes depend value read previous probes schemes studied several previous works let minimum number bits storage required order answer membership queries probes definition consists storage function query scheme storage function form takes set size returns representation query scheme associates element probe locations function require iff let denote minimum discussion use without subscript denote minimum space required adaptive schemes using notation describe results relation known asymptotic claims hold large general schemes theorem result schemes odd comparison odd buhrman showed exponent upper bound result roughly four times exponent appearing lower bound result schemes use majority function answer membership queries exhibit schemes still use majority need less space buhrman also show lower bound valid even adaptive schemes note exponent result twice exponent appearing lower bound result schemes well scheme due alon feige implications problem studied makhdoumi unlike case significant savings possible even using similar proof idea obtain slightly better upper bound adaptive schemes small theorem result adaptive schemes odd exp technique justify claim need describe query scheme query function use majority bits odd locations probed element obtained using probabilistic argument query scheme fixed need show assignment memory obtained describe sequential algorithm show random assignment locations ensures sufficient expansion allowing start greedy argument grateful tom courtade ashwin pananjady observation arrange queries answered correctly use hall bipartite graph matching theorem find required assignment remaining elements versions argument used previous works three probes one probe easy show space 
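The Hall's-theorem step sketched above, which assigns each stored element its own probed location once the neighbourhoods expand enough, amounts to finding a system of distinct representatives, i.e. a bipartite matching. A minimal augmenting-path sketch follows; the probe map `neigh` is a hypothetical toy instance, not drawn from the paper.

```python
def find_matching(neigh):
    """neigh[x] = list of memory locations probed by element x.
    Returns a dict assigning each element a distinct private location
    if one exists (Hall's condition holds), else None."""
    match = {}  # location -> element currently owning it

    def augment(x, seen):
        # classic Kuhn augmenting-path search
        for loc in neigh[x]:
            if loc in seen:
                continue
            seen.add(loc)
            if loc not in match or augment(match[loc], seen):
                match[loc] = x
                return True
        return False

    for x in neigh:
        if not augment(x, set()):
            return None  # some subset of elements has too few locations
    return {x: loc for loc, x in match.items()}

assignment = find_matching({'x': [0, 1], 'y': [1, 2], 'z': [0, 2]})
```

In the proof, the existence of such a matching is what lets the storage function set the bits owned by elements of S independently of the rest.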
saved characteristic vector representation two probes special case trivial savings characteristic vector representation possible buhrman showed smallest number probes complexity problem probes settled three observe scheme two adaptive probes converted scheme three probes two probe decision tree three nodes thus using two adaptive probes upper bound result garg radhakrishnan thus savings space characteristic vector representation possible consequently question whether space saved upper bound tight aware scheme manages space sets size alon feige show following lower bound lgmn order obtain better lower bounds schemes proceed follows scheme query scheme specifies element three locations probe three variable boolean function applied three values read principle different elements query scheme specify different boolean functions since finite number boolean functions three variables set least elements universe use common function may thus restrict attention part universe assume function employed answer queries always furthermore may place functions obtained one another negating permuting variables common equivalence class restrict attention one representative class three variable boolean functions counting yields equivalence classes classification functions classes already available literature show following theorem result query function equivalent iff query function equivalent iff query function equivalent best upper bounds schemes four probes use majority function answer membership queries result implies three probes queries answered computing majority space required least constant fact similar lower bound holds membership queries answered using boolean functions results yield similar lower bound iff types two types query functions get slightly better lower bound implied thus investigations three probes schemes need focus iff query functions technique mentioned types functions need prove lower bound seven classes contain functions represented decision tree height two 
thus functions two probe adaptive lower bound implies result functions constant constant dictator function function complement fifteen classes remain functions eleven remaining fifteen classes admit density argument similar spirit adaptive proof streamline argument classify eleven classes two parts first part contains majority function second part contains function allequal function functions complements functions second part deal two function single proof proofs produce sets size storing storing leads contradiction proof complement function works small twist storing storing leads contradiction thus eleven cases handled six proofs proofs roughly argue sometimes probabilistically scheme valid must conceal certain dense graph avoids small cycles standard graph theoretic results moore bound relate density girth gives lower bound remaining four classes employ arguments representatives chosen classes parity iff iff parity iff show using standard dimension argument space used smaller universe size element linearly dependent elements storing elements leaves scheme choice thus leading contradiction iff modification algebraic argument radhakrishnan sen venkatesh implies lower bound interestingly need choose appropriate characteristic field based function deal improve argument employing random restrictions results together improve previous best lower bound due alon feige irrespective query function used general upper bound section prove general upper bound result theorem definition bipartite graph vertex sets partitioned disjoint sets vertices every unique neighbour naturally gives rise scheme follows view memory array bits indexed vertices receiving query answer yes iff majority locations neighbourhood contain say query scheme satisfiable set assignment memory locations correctly answers queries form restrict attention odd first identify appropriate property underlying guarantees satisfiable sets size show graph exists definition admissible graph say admissible sets size 
following two properties hold set neighbors theorem follow following claims lemma admissible sets size scheme satisfiable every set size lemma admissible every set size proof lemma fix admissible graph thus satisfies fix set size show assignment memory queries answered correctly know let hence hall theorem may assign element set disjoint assign value locations assign value locations since queries answered correctly assign locations result queries elements answered correctly majority evaluates one proof lemma following set show suitable random admissible sets size positive probability graph constructed follows recall one neighbor chosen uniformly independently holds fails fix set size size let elements thus last inequality consequence conclude using union bound choices fails probability tes last inequality holds chosen large enough holds fail must exist set size fix set size fix last inequality holds choice large thus conclude bounded high probability use following version chernoff bound random variable independently large using union bound conclude fails thus probability least random graph admissible general upper bound adaptive section prove general adaptive upper bound result theorem use following definitions previous works definition storing scheme query scheme scheme systematic scheme method representing subset size size string formally scheme map deterministic scheme family boolean decision trees depth internal node decision tree marked index indicating address bit data structure internal node one outgoing edge labeled one labeled leaf nodes every tree marked yes tree induces map yes map also referred scheme scheme together form let minimum say systematic value returned trees equal last bit reads interpreting order show small exhibit efficient adaptive schemes store sets size exactly imply bound allow sets size may pad universe additional elements extend adding additional elements get subset size exactly universe size definition adaptive bipartite graph vertex 
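The concentration step in the arguments above uses a Chernoff bound; one standard multiplicative form, stated here for completeness (the exact variant invoked may differ), is:

```latex
\textbf{Chernoff bound (multiplicative form).}
Let $X_1,\dots,X_k$ be independent random variables with $X_i \in \{0,1\}$,
let $X = \sum_{i=1}^{k} X_i$ and $\mu = \mathbb{E}[X]$.
Then, for all $\delta > 0$,
\[
\Pr[X \ge (1+\delta)\mu]
  \le \left( \frac{e^{\delta}}{(1+\delta)^{1+\delta}} \right)^{\mu},
\qquad
\Pr[X \le (1-\delta)\mu] \le e^{-\delta^{2}\mu/2},
\]
and, in particular, $\Pr[X \ge R] \le 2^{-R}$ for every $R \ge 6\mu$.
\]
```

Combined with a union bound over the relevant sets, such a tail bound shows that a random graph has the required expansion with positive probability.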
sets partitioned disjoint sets one vertices exactly one edge let naturally gives rise systematic scheme follows view memory array bits indexed vertices query element first probes resulted values probe made location indexed unique neighbor particular probe made location answer yes iff last bit read addition use following notation refer leaves let leaves let leaves say query scheme satisfiable set assignment memory locations correctly answers queries form assume odd show exp scheme two parts part adaptive part respective parts based appropriate adaptive respectively decide set membership check set membership two parts separately take answer yes iff bits read last bit read refer scheme first identify appropriate properties underlying graphs guarantee queries answered correctly sets size show graphs exist exp use following constants calculations note total number nodes adaptive decision tree decision tree every choice nodes every choice answer possible assign values nodes decision tree returns answer definition say adaptive form admissible pair sets size following conditions hold survivors let survivors lemma adaptive form admissible pair sets size query scheme satisfiable every set size lemma let odd number let exist admissible pair graphs consisting adaptive exp proof lemma fix admissible pair thus satisfies satisfies fix set size show assignment answers questions form correctly assignment constructed follows assign locations remaining locations thus answers yes query elements answers query elements outside survivors however incorrectly answers yes elements survivors argue false positives eliminated using scheme using hall theorem may assign element set disjoint set false positives observed may set values locations value returned query element precisely since disjoint may take action independently partial assignment remains ensure queries elements survivors remaining false positives return consider definition location assigned value partial assignment assign 
unassigned locations thus returns answer queries survivors proof lemma following let exp construct randomly show positive probability pair admissible constructed proof lemma analysis similar recall one neighbor chosen uniformly independently holds fix set size using chernoff bound conclude union bound fails last inequality follows choice fix graph suchsthat holds random graph constructed follows recall one neighbor chosen uniformly independently holds establish need show sets form expand restrict choices first show claim high probability small using direct calculations show whp required expansion available random graph claim let let let proof claim part follows routine application chernoff bound several previous proofs set size last inequality holds choice next consider part hold fix set size size let elements conclude using union bound choices probability hold last inequality holds choice finally justify part bound probability fails consider set size subset size say subset size size define event justified requirement every least one neighbour factor justified remaining neighbours must lie use last factor justified neighbors elements lie use complete argument apply union bound choices note may restrict attention choice thus probability fails hold factor ranges sets size size survivors size subset size evaluate sum follows show quantity inside square brackets since quantity brackets decomposed product two factors bound separately factor consider following contributions since enough quantity lge thus large exp factor next bound contribution remaining factors justify recall quantity exp thus bounded thus since exp product factors required three probes lower bound section prove three probe lower bound result theorem definition equivalent two boolean functions called equivalent one obtained negating permuting variables proposition let equivalent minimum bits space required query functions respectively three variable boolean functions equivalence classes see prove 
theorem provide proofs query functions different class many proofs assume memory consists three arrays size three probes made different arrays given scheme uses space always modify meet assumption expanding space factor decision trees height two seven classes contain functions represented decision tree height two thus functions two probe adaptive lower bound implies result functions constant constant dictator function function complement majority let majority query function memory bit array length element three distinct locations probed determine whether set set size assignment elements maj iff maj majority bits definition third vertex meet let majority query function fix graph edge labels lab unique edge label called edge labelled set endpoints example graph model graph let set endpoints edge label element singleton defined third vertex two cycles said meet exist elements third vertices vertex edges labelled different cycles respectively definition majority query function said forced least one following three conditions hold odd cycles lengths intersect vertex edge disjoint even cycles lengths meet even cycle length two edges labelled say even number edges traversing edges cycle order third vertices vertex lemma scheme majority query function forced majority query function lemma forced lemmas follows majority used query function proof lemma fix majority query function fix assume forced satisfies case holds implies cycles let note claim represent set particular represent set assume represents claim assigned assigned since must assigned otherwise maj maj bit assigned location since assigned must assigned similarly since must assigned finally must assigned contradiction hence assigned claim assigned assigned since must assigned since must assigned finally must assigned contradiction since neither assigned get contradiction remark proofs often encounter similar arguments cycle dependencies assigning particular bit location force assignment next location along cycle 
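The satisfiability notion above, an assignment of memory bits under which the majority of the three probed bits answers every membership query correctly, can be checked by brute force on toy instances. The probe positions below are arbitrary illustrations, not taken from any scheme in the paper.

```python
from itertools import product

def satisfiable(probes, S, s):
    """Is there a 0/1 assignment to s memory bits such that, for every
    element x, the majority of its three probed bits equals [x in S]?
    Exhaustive search over all 2**s assignments (toy sizes only)."""
    for bits in product((0, 1), repeat=s):
        if all((bits[a] + bits[b] + bits[c] >= 2) == (x in S)
               for x, (a, b, c) in probes.items()):
            return True
    return False

# universe of 4 elements, memory of 5 bits, hypothetical probe positions
probes = {0: (0, 1, 2), 1: (1, 2, 3), 2: (2, 3, 4), 3: (0, 3, 4)}
ok = satisfiable(probes, {1}, 5)
```

The lower-bound proofs above argue the contrapositive: if the model graph of a majority scheme contains certain cycle configurations, some small set S makes this search fail.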
case holds implies cycles third vertices vertex let note claim represent set particular represent set assume represents since either location location memory must assigned otherwise maj maj maj assume assigned since set using similar argument must assigned similarly must assigned finally must assigned similarly assigned must assigned thus maj maj hence must assigned since either assigned assigned must assigned finally must assigned similarly assigned assigned therefore maj maj maj since must assigned contradiction case holds implies cycle third vertices vertex let note claim represent set particular represent set assume represents since either must assigned assume assigned since must assigned since must assigned locations must assigned locations must assigned else assigned locations must assigned locations must assigned maj maj maj since must assigned similarly maj maj maj since must assigned contradiction order prove lemma make use following proposition consequence theorem alon hoory linial see also ajesh babu radhakrishnan proposition fix graph average degree cycle proof lemma fix uses majority query function note implies come forced one holds start initial observe average degree high invoke proposition find small cycle odd bin odd delete repeat even third vertices labels edges distinct bin even delete edges repeat otherwise either discover property holds modify find odd cycle bin odd delete repeat moment sum lengths deleted cycles exceeds know either sum lengths odd even cycles exceeds two odd cycles intersect even cycles distinct third vertices meet means either holds formally procedure described maintain following invariant even contain cycles length even third vertices labels cycle distinct odd contain cycles length odd furthermore even odd always step initialization even odd observe step end ensures either holds else using proposition fix cycle step odd odd odd goto step step even third vertices labels edges distinct even even goto step step even third 
vertices labels two edges even number edges traversing edges order end note means holds step even third vertices labels two edges odd number edges traversing edges order represent third vertices vertex modify modelgraph changing endpoints edges appearing labels respectively thus obtaining shorter odd cycle observe odd even continues odd length cycle length odd odd goto step step average degree least implies proposition cycle length claim procedure terminates encountering end statement step step observe procedure finds cycle step exactly one four conditions steps holds procedure encounter end statement step procedure moves step steps end goto step statement procedure encounters end statement step holds procedure encounters end statement step pigeonhole principle either first case holds second case since edge cycle even distinct third vertex two cycles even meet finally observe procedure terminates procedure terminate step procedure repeatedly finds edge disjoint cycles deletes number edges deleted cycle exceeds procedure terminate encounters end statement step density argument query functions considered section admit density argument query function valid scheme supports sets large universe using small space must conceal certain dense graph avoids certain forbidden configurations hand standard graph theoretic results moore bound would imply dense graphs must least one forbidden configurations would contradict existence scheme section provide lower bounds following ten query functions function function complements functions deal two function single proof proofs produce sets size storing storing leads contradiction proof complement function works small twist storing storing leads contradiction develop common framework prove lower bounds mentioned query functions assume query function could ten functions fix scheme query function memory consists three distinct bit arrays element scheme probes three distinct locations determine set given set size assignment memory 
elements iff need following definitions definition scheme subset elements define bipartite graph follows vertex sets element edge labelled end points similarly define bipartite graphs definition private vertex private vertex element say private vertex three probe locations also probe location another element exists elements case private vertex say private vertex element say private vertex probe location shared element proposition every private vertex proposition least private vertex let query function function proposition every private vertex done otherwise fix element private vertex let elements let clearly show represent set particular represent set show consider assignment set memory since function implies thus contradiction argument works try represent set let query function proposition least elements private vertex done otherwise fix set size least element private vertex assume least prove contradiction average degree vertices large since proposition exists cycle length since private vertex fix element let clearly show represent set particular represent set assignment since location assigned otherwise assignment exactly one assigned see let assume assigned bit since must assigned bit otherwise assigned similarly must assigned bit must assigned bit thus location must assigned bit location must assigned bit contradiction thus argument works try represent set let query function proposition least elements private vertex done otherwise fix set size least element private vertex make disjoint pairs distinct elements share location make many pairs possible many one per location elements remain unpaired delete unpaired elements thus true immediately done pair define reserved partner let set unreserved elements clearly find elements triple elements exists element least one probe locations done shared element thus let reserved partner clearly show represent set particular represent set assignment must assign location since reserved partner shares vertex assignment must 
assign since assigned together fact assigned either evaluate happen since contradiction argument works try represent set let query function function assume derive contradiction lemma contain cycles respectively size labels scheme represent sets size lemma contain cycles respectively length labels claim follows immediately two lemmas proof lemma let two cycles promised lemma let clearly show represent set particular represent set let assignment assign bit location locations must assigned bit since assigned bit since must assigned bit similarly since assigned bit since must assigned bit thus locations must assigned bit arguing similarly since assigned bit since must assigned bit thus locations must assigned bit since assigned bit three probe locations element function evaluates contradiction proof lemma graph long number edges least find cycle length proposition since average degree would least since repeatedly remove cycles graph length using proposition delete edges appearing picked cycle till cycle remains number remaining edges graph since similarly following argument remove cycles graph length cycle till cycle remains number remaining edges graph pick random element probability edge label appears cycle edge label appears cycle least using union bound thus exists element cycles length respectively containing edge labelled argument works lemma try represent set let query function definition forced say scheme forced graph least one following two conditions hold exists cycle length two elements appear labels two edges exist cycles lengths two elements appear labels edge edge respectively probe location lemma scheme forced represent sets size lemma forced claim follows immediately two lemmas proof lemma assume either holds derive contradiction case holds let cycle length contains two edges labelled let clearly show represent set particular represent set assignment assign bit locations see first observe element query function evaluate must assigned bit assigned bit 
since must assigned bit assigned bit must assigned bit thus locations assigned bit since locations locations assigned bit query function evaluate value contradiction case holds let cycles length let clearly show represent set particular represent set following similar reasoning earlier case assignment must assign locations bit locations bit since locations assigned since assigned bit must since assigned must assigned assigned locations assigned bit implies contradiction proof lemma scheme uses space less hold show must hold graph long number edges least find cycle length proposition since average degree would least since repeatedly remove cycles graph length using proposition till cycle remains number remaining edges graph since thus sum lengths removed cycles exceeds two edges labelled removed cycle otherwise holds thus sum probe locations probe locations labels edges appearing cycles exceeds hence two cycles must edge labels respectively argument works lemma try represent set dimension argument proofs make use algebraic arguments query functions considered section admit dimension argument argument follows element universe associate vector space used scheme small vectors reside vector space small dimension turn force linear dependence argue keep one element aside store elements appearing linear dependence scheme left choice leading contradiction develop common framework prove lower bounds query parity iff assume query function could either two functions fix scheme query function memory bit array consisting locations element scheme probes three distinct locations determine set given set size assignment elements iff definition fields vector spaces vector let field modulo arithmetic let field real numbers usual arithmetic let vector space dimensional vectors field let vector space dimensional vectors field element define vector vector contains three positions everywhere else clearly vector parity fix parity function proposition set size assignment dot product two 
vectors vector iff set vectors vector linearly dependent thus exists distinct elements vector vector taking dot product sides assignment vector singleton vector vector last step follows proposition fact contradiction thus iff fix iff function proposition set size assignment dot product two vectors vector iff set vectors vector linearly dependent thus exists distinct elements vector vector taking dot product sides assignment vector singleton vector vector last step follows proposition fact taking dot product sides equation assignment empty set vector vector last step follows proposition fact contradiction thus degree argument section provide lower bound proofs query functions iff let scheme query function memory consists three distinct bit arrays element scheme probes three locations determine set treat location boolean variable given set size assignment elements iff first prove specializing lower bound proof case definition field vector space polynomials let denote field mod arithmetic query function let vector space field multilinear polynomials total degree variables coefficients coming set define polynomial variables coefficients coming field follows make multilinear reducing exponents variable using identity variable identity holds since considering assignment variables prove theorem use following two lemmas lemma set multilinear polynomials linearly independent vector space lemma spanning set size using two lemmas first prove theorem provide proofs lemmas later proof since size linearly independent set size spanning set using lemmas fact assignments memory storing different sets size different implies space required least prove two lemmas proof lemma first observe size polynomial factors degree hence degree sets size evaluation polynomial assignment since exists thus assignment factor evaluates factor evaluates particular proves size use observation topprove lemma let show linearly independent need show consider arbitrary set size consider assignment variables 
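The counting step behind the dimension argument above — each element contributes a vector in F_2^s with exactly three 1s (its probe positions), and more than s such vectors must be linearly dependent, i.e., some nonempty subset sums to zero over F_2 — can be checked numerically. The sizes s and n below are illustrative assumptions:

```python
import random

def gf2_rank(vectors):
    """Rank over GF(2) of bitmask-encoded vectors (Gaussian elimination)."""
    rows, rank = list(vectors), 0
    while rows:
        pivot = rows.pop()
        if pivot == 0:
            continue
        rank += 1
        low = pivot & -pivot                    # lowest set bit = pivot column
        rows = [r ^ pivot if r & low else r for r in rows]
    return rank

def dependent_subset(vectors):
    """Brute-force a nonempty subset whose XOR (sum over GF(2)) is zero."""
    n = len(vectors)
    for mask in range(1, 1 << n):
        acc = 0
        for i in range(n):
            if mask >> i & 1:
                acc ^= vectors[i]
        if acc == 0:
            return [i for i in range(n) if mask >> i & 1]
    return None

random.seed(0)
s, n = 10, 12                                   # s memory bits, n > s elements
vecs = [sum(1 << p for p in random.sample(range(s), 3)) for _ in range(n)]

assert gf2_rank(vecs) <= s < n                  # rank can never exceed s
print(dependent_subset(vecs))                   # a nonempty subset XOR-ing to 0
```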
identity since since proof lemma monomials total degree form spanning set polynomial written linear combination monomials thus size spanning set last inequality follows fact onto map iff lower bound proof iff similar lower bound proof difference instead looking query function field consider query function field set three elements mod arithmetic field query function iff degree polynomial accordingly multilinear polynomial corresponding set size defined reduce exponents using identity variable identity holds consider assignments notice degree rest proof improved bound prove part theorem extend idea lower bound proof query function lgmm get better lower bound continue use framework section scheme first define bipartite graph vertex sets sets edges determined follows initially let element add edge labelled goal ensure edges graph vertices degree least get lose repeatedly delete vertices degree less edges also delete corresponding elements universe appeared labels deleted edges restricted remaining elements universe gives assume remaining elements scheme elements without loss generality form set proving lower bound lgnm space restricted scheme prove theorem thus edges labelled distinct elements universe vertex sets minimum degree least edge labelled endpoints assume show contradiction definition parameters relations following assume define clearly using definitions minimum degree vertex least assumption max max min large relations used proving various lemmas theorem definition trap gain good gainer restriction bipartite graph two edges said trap edge bipartite graph set said gain exists two edges either intersect end point trap edge bipartite graph set edges size called good gainer every subset size gains defined set define restriction set assignments memory specified storage scheme sets contain element restriction lemma fix bipartite graph minimum degree least uniformly random set edges size good gainer probability least large lemma lab good gainer exist polynomial 
degree set every restriction using two lemmas first prove theorem provide proofs lemmas later proof part set size lab set good gainer associate polynomial degree every restriction possible due lemma let random subset element independently included probability let indicator random variable event expected sum indicator random variables good gainer first inequality factor appears least many good gainers lemma factor appears given good gainer elements must fall outside elements must fall inside second inequality follows fact holds indicator random variables thus exists least fix let good gainer first prove collection polynomials linearly independent first observe implies restriction restriction thus restriction restriction since disjoint assignment restriction consequently property used prove linearly independent lemma since polynomials linearly independent degree holds holds holds contradiction proof lemma isq good gainer get desired running following procedure recall consists factors degree initially set run procedure steps step maintain following invariants assignments restriction end step delete two elements two intersecting edges labels say multiply two factors corresponding get degree factor steps delete else two edges intersect since run procedure elements end step invoke lemma find edges labels say traps edge label say note promised lemma also otherwise would intersecting satisfy part add element set observe assignment restriction use relation simplify product two factors corresponding get degree factor trapped edge trapped edge delete delete repeat steps clearly degree end procedure assignments restriction prove lemma require following lemma prove later lemma fix bipartite graph minimum degree least uniformly random set edges size gains probability least exp large proof lemma uniformly random set edges probability good gainer lower bounded using union bound lemma follows good gainer gain gain exp exp exp since therefore since therefore therefore using bounds 
get good gainer exp exp exp last inequality holds second last inequality holds since since proof lemma aid calculations pick uniformly random set edges two steps step randomly pick vertices replacement vertex picked probability proportional degree deg pick edge incident uniformly random thus edge whose endpoint probability deg deg deg way distinct edges picked note strictly less step delete edges picked step pick set edges uniformly random without replacement remaining edges let bad denote event gain step sampling vertices get fixed multiset vertices distinct vertices enough show conditioning choice conditional probability bad exp large conditioning source randomness calculations comes sampling edges steps upper bound probability bad occurs following two cases let neighbourhood case event bad occur none edges picked step incident vertex neighbourhood edge picked step incident vertex neighbourhood either trap edge endpoints intersects edge endpoints since vertex degree least number edges incident least deleting edges picked step least edges remain since least edges picked step edges probability bad occurs upper bounded follows bad using fact exp get bad exp exp last inequality holds since define case stands neighbourhood function event bad occur following event bad must occur bad incident vertex bad smallest number bad occur gains case incident vertex must neighbour vertex either trap edge endpoints intersects case smallest number clearly intersect vertex thus bound probability bad follows bad bad bad equality follows fact conditional space bad independent events bad deg deg else bad deg using upper bound bad bad deg exp deg exp exp second last inequality shown follows since deg deg deg deg deg deg last inequality follows fact large thus combining two cases large probability random set edges gains least exp max exp exp exp last two inequalities follow fact references noga alon uriel feige power two three four probes proceedings twentieth annual symposium discrete 
Algorithms (SODA), New York, USA.
N. Alon, S. Hoory, and N. Linial. The Moore bound for irregular graphs. Graphs and Combinatorics.
A. Babu and J. Radhakrishnan. An entropy based proof of the Moore bound for irregular graphs. CoRR.
H. Buhrman, P. B. Miltersen, J. Radhakrishnan, and S. Venkatesh. Are bitvectors optimal? SIAM J. Comput.
M. Garg and J. Radhakrishnan. Set membership with a few bit probes. Proceedings of the Annual Symposium on Discrete Algorithms (SODA), San Diego, USA.
M. Goodrich and M. Mitzenmacher. Invertible Bloom lookup tables. Annual Allerton Conference on Communication, Control, and Computing, Allerton Park and Retreat Center, Monticello, USA.
M. Lewenstein, J. I. Munro, P. K. Nicholson, and V. Raman. Improved explicit data structures in the bit-probe model. Algorithms - ESA, Annual European Symposium, Wroclaw, Poland.
M. Luby, M. Mitzenmacher, M. A. Shokrollahi, and D. Spielman. Efficient erasure correcting codes. IEEE Trans. Information Theory.
A. Makhdoumi, S.-L. Huang, M. Médard, and Y. Polyanskiy. Locally decodable source coding. IEEE International Conference on Communications (ICC), London, United Kingdom.
M. Minsky and S. Papert. Perceptrons. MIT Press, Cambridge.
J. Radhakrishnan, V. Raman, and S. S. Rao. Explicit deterministic constructions for membership in the bit-probe model. Algorithms - ESA, Annual European Symposium, Aarhus, Denmark.
J. Radhakrishnan, P. Sen, and S. Venkatesh. The quantum complexity of set membership. Algorithmica.
J. Radhakrishnan, S. Shah, and S. Shannigrahi. Data structures for storing small sets in the bitprobe model. Algorithms - ESA, Annual European Symposium, Liverpool (M. de Berg and U. Meyer, eds.), Lecture Notes in Computer Science, Springer.
E. Viola. Bit-probe lower bounds for succinct data structures. SIAM J. Comput.
Wikiversity. Boolean functions. Online.
Feb

CM_t bipartite graphs

Hassan Haghighi, Siamak Yassemi, Rahim Zaare-Nahandi

Abstract. Let G be an unmixed bipartite graph of dimension d, and let t be the minimum dimension of a maximal complete bipartite subgraph of G. We determine, in terms of d and t, the codimension in which G is Cohen-Macaulay. This generalizes the characterization of Cohen-Macaulay bipartite graphs by Herzog and Hibi and the result of Cook and Nagel on unmixed Buchsbaum graphs. Furthermore, we show that an unmixed bipartite graph which is Cohen-Macaulay in a given codimension is obtained from a Cohen-Macaulay graph by replacing certain edges with complete bipartite graphs, and we provide examples.

Introduction

Cohen-Macaulay simplicial complexes are among the central research topics in combinatorial commutative algebra. The characterization of Cohen-Macaulay complexes is a far reaching problem, so one appeals to the study of specific families of simplicial complexes. Flag complexes are among the important families of complexes recommended for study. However, it is known that any simplicial complex has a barycentric subdivision which is a flag complex; therefore the characterization of Cohen-Macaulay flag complexes is equivalent to the characterization of Cohen-Macaulay simplicial complexes. Nevertheless, the Stanley-Reisner ideal of a flag complex is generated by quadratic monomials, which is simpler compared with arbitrary monomial ideals. Furthermore, it seems that expressing many combinatorial properties in terms of graphs is convenient: evidences are the characterization of unmixed bipartite graphs by Villarreal and of Cohen-Macaulay bipartite graphs by Herzog and Hibi, both well expressed in terms of graphs. On the other hand, in the hierarchy of families of graphs with respect to the Cohen-Macaulay property, Buchsbaum complexes appear as the right ones: unmixed bipartite Buchsbaum graphs were characterized by Cook and Nagel. Natural families of graphs in this hierarchy are bipartite graphs whose independence complexes are pure and Cohen-Macaulay in codimension t. The concept of CM_t simplicial complexes was introduced earlier; the pure version of complexes Cohen-Macaulay in codimension was studied by Miller, Novik and Swartz. In this note we give characterizations of CM_t bipartite graphs in terms of the dimension and the minimum dimension of maximal nontrivial complete bipartite subgraphs.

Mathematics Subject Classification: primary, secondary. Key words and phrases: flag complex, CM_t complex, codimension. Emails: haghighi, yassemi, rahimzn.

Cook and Nagel showed that unmixed Buchsbaum bipartite graphs are
fact unmixed bipartite graphs arbitrary codimension next section gather necessary definitions known results used rest paper section improve results joins simplicial complexes disjoint unions graphs respect property section devoted two characterizations bipartite graphs examples preliminaries basic definitions general facts simplicial complexes refer book stanley complex always mean simplicial complex let simple graph vertex set edge set inclusive neighborhood set consisting vertices adjacent independence complex complex ind vertex set faces consisting independent sets vertices sets vertices two elements adjacent complexes called flag complexes ideal generated quadratic monomials dimension graph mean dimension complex ind graph said unmixed ind pure integer complex called pure every face link pure complexes codimension accordingly complexes precisely buchsbaum complexes respectively clearly complex complex dimension always one uses convention cmt would mean graph called ind basic tool checking property complexes following lemma lemma lemma let let nonempty complex following equivalent cmt complex pure link every vertex straightforward identity link ind ind lemma graphs would following lemma let let graph following equivalent cmt graph unmixed graph every vertex recall basic relevant facts bipartite graphs graph called bipartite disjoint union partition complete bipartite graph interested unmixed complete bipartite graphs unmixed bipartite graphs characterized villarreal following result theorem theorem let bipartite graph without isolated vertex unmixed partition vertices codimension bipartite graphs edge edges distinct edge case partition ordering called pure order edges called perfect matching edges pure order said cross edges otherwise order called see unmixed bipartite graphs independent ordering vertices precisely cross pure ordering cross every pure ordering lemma immediate consequence theorem following useful lemma lemma let unmixed bipartite graph pure order 
vertices let complete bipartite subgraph xin yin yik edge yil edge xik edge xil edge proof assertion immediate theorem xik yil edge also follows xil yik edge also least two nice characterization bipartite graphs theorem theorem let bipartite graph without isolated vertices pure ordering vertices implies ordering theorem called macaulay order vertices proposition proposition let bipartite graph pure order bipartite buchsbaum graphs also classified first recall complex buchsbaum pure link vertex thus graph buchsbaum unmixed vertex bipartite graphs sharper result complete bipartite graphs buchsbaum see proposition indeed converse also true theorem see theorem let bipartite graph buchsbaum complete bipartite graph joins complexes disjoint unions graphs known join two complexes see complex dimension complex dimension join complex max proposition however one hassan haghighi siamak yassemi rahim zaare nahandi complexes result could strengthened combine relevant known results theorem let two complexes dimensions respectively join complex independent sharp particular cone iii max conversely proof statement proved sava assertion iii proved theorem prove using induction let singleton thus link link link thus lemma let let link link link dimension less thus induction hypothesis link link link link hence link therefore prove result sharp proceed induction indeed case link dimension less hence induction hypothesis link link therefore let denote disjoint union graphs fact ind ind theorem graphs following theorem let two graphs disjoint sets vertices dimensions respectively graph iii max conversely two characterizations bipartite graphs restrict case bipartite graphs since bipartite graphs characterized herzog hibi theorem also different version cook nagel proposition consider case theorem let unmixed bipartite graph dimensions let maximal complete bipartite subgraph minimum dimension codimension bipartite graphs proof prove assertions induction assume show every let pure order 
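Villarreal's characterization quoted above — an unmixed bipartite graph admits a labelling x_1,...,x_g, y_1,...,y_g such that every x_i y_i is an edge, and edges x_i y_j and x_j y_k with i, j, k distinct force the edge x_i y_k — is easy to check mechanically for a given labelling. A small sketch (the pair encoding of edges is my own convention):

```python
def is_pure_order(edges, g):
    """Check Villarreal's two conditions for a given labelling of a bipartite
    graph on {x_1..x_g} and {y_1..y_g}; edges is a set of pairs (i, j),
    each meaning that x_i y_j is an edge."""
    E = set(edges)
    # (i) the perfect matching x_i y_i must be present
    if any((i, i) not in E for i in range(1, g + 1)):
        return False
    # (ii) x_i y_j and x_j y_k edges with i, j, k distinct force x_i y_k
    for (i, j) in E:
        for (jj, k) in E:
            if jj == j and len({i, j, k}) == 3 and (i, k) not in E:
                return False
    return True

# K_{2,2} passes; dropping the edge x_1 y_3 in the second example breaks (ii)
print(is_pure_order({(1, 1), (1, 2), (2, 1), (2, 2)}, 2))          # True
print(is_pure_order({(1, 1), (2, 2), (3, 3), (1, 2), (2, 3)}, 3))  # False
```

Note that the theorem asserts the existence of such a labelling; the sketch only verifies one candidate labelling.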
let vertex maximal bipartite subgraph disjoint union isolated vertices unmixed bipartite graph dimension graph unmixed ind link ind link pure complex pure xic unmixed observe vertex maximal bipartite subgraph lemma vertices subgraph belong thus crosses proposition otherwise minimum dimension maximal complete bipartite subgraphs less minimum dimension subgraphs hence induction hypothesis theorem belong maximal bipartite subgraph positive dimension disjoint union isolated vertices unmixed bipartite graph dimension hence theorem similar argument reveals graph therefore lemma proceed induction step show result sharp let let maximal bipartite subgraph minimum dimension take first assume adjacent vertex consider let disjoint union isolated vertices unmixed bipartite graph dimension contains hence induction hypothesis sharp sharp therefore assume purity order adjacent adjacent vertex otherwise maximal case consider proceed similar previous case second characterization bipartite graphs show graph obtained graph replacing perfect matching edges complete bipartite graphs statement precise next theorem first provide definition lemma definition let unmixed bipartite graph pure order fixed replacing edge complete bipartite graph kni xini yini mean bipartite graph vertex set xini yini preserving adjacencies xik iii yik hassan haghighi siamak yassemi rahim zaare nahandi lemma let unmixed bipartite graph pure order vertex set let positive integers let graph obtained replacing edge complete bipartite graph kni xini yini also unmixed proof let kni xini yini xdnd ydnd pure order fact xir yir also xir yjs xjs ykt hence thus construction xir ykt theorem let bipartite graph macaulay order vertex set let positive integers list one let graph obtained replacing edge complete bipartite graph kni let min exclusively graph furthermore bipartite graph obtained replacement complete bipartite graphs unique bipartite graph proof first claim follows lemma theorem settle second claim let bipartite 
graph pure order vertices let knd maximal bipartite subgraphs observe maximality complete subgraphs disjoint choose one edge subgraph kni let induced subgraph vertex set lemma independent choice particular edge kni hence unique since ordering vertices pure order restriction also pure thus unmixed bipartite graph maximality complete bipartite subgraphs kni construction therefore proposition edge replace kni preserving adjacencies let resulting graph construction required remark let bipartite graph let bipartite graph obtained replacing process described assume using results section following observations immediate first dimh dimh replace one strictly hand dimh dimh one replaced replacing least two strictly dimh replacing one arbitrary hence dimension dimh number replacements least one replacement would dimh maximum number replacements dimh may occur replacing codimension bipartite graphs dimh maximum size replaced also dimh may occur two replacements using remarks may easily distinguish bipartite graphs example bipartite graphs buchsbaum using notation remark dimh two bipartite graphs dimension one replacing process produce two types bipartite graphs buchsbaum arbitrary dimensions precisely one graph disjoint union edge one consists first graph together edges second graph could depicted figure igure example bipartite graphs graphs dimh dimh example two bipartite graphs replacing two edges perfect matching case dimg see figure figure dimh bipartite graphs dimension replacing one perfect matching edge arbitrary size graph produce types bipartite graphs note depending choice edge replaced case may get bipartite graphs case dimg hassan haghighi siamak yassemi rahim zaare nahandi igure igure example bipartite graphs graphs dimh dimh two bipartite graphs obtained replacing two edges perfect matching case dimg similarly two others obtained replacing one edge another edge case dimg dimh bipartite graphs dimension replacing two perfect matching edges graph produce bipartite 
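The replacement operation defined above — blow up each matching edge x_i y_i into a complete bipartite graph K_{n_i,n_i} while preserving all other adjacencies — can be sketched as follows. Encoding vertices as pairs (i, r) is my own convention; as a sanity check, replacing inside K_{2,2} with sizes 2 and 1 yields the complete bipartite graph on 3 + 3 vertices:

```python
def replace_edges(edges, g, sizes):
    """Blow up each matching edge x_i y_i of G into K_{n_i, n_i}.
    edges: set of (i, j) meaning x_i y_j in G; sizes: {i: n_i}.
    Returns the edges of H over vertices x_(i,r) and y_(j,s)."""
    H = set()
    for i in range(1, g + 1):
        for r in range(1, sizes[i] + 1):        # block i becomes K_{n_i, n_i}
            for s in range(1, sizes[i] + 1):
                H.add(((i, r), (i, s)))
    for (i, j) in edges:
        if i != j:                              # cross edges are inherited:
            for r in range(1, sizes[i] + 1):    # x_(i,r) y_(j,s) is an edge
                for s in range(1, sizes[j] + 1):  # iff x_i y_j was an edge
                    H.add(((i, r), (j, s)))
    return H

G = {(1, 1), (1, 2), (2, 1), (2, 2)}            # K_{2,2}
H = replace_edges(G, 2, {1: 2, 2: 1})
print(len(H))                                   # 9 edges: H is K_{3,3}
```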
graphs dimension dimh bipartite graphs dimension replacing one perfect matching edge graph produce bipartite graphs dimension bipartite graphs graphs connected acknowledgments authors would like thank herzog nagel fruitful discussions subject paper work supported center international studies collaboration cissc french embassy tehran codimension bipartite graphs framework gundishapur project homological combinatorial aspects commutative algebra algebraic geometry research partially supported research grant university tehran research yassemi part supported grant ipm references cook nagel graphs face vectors flag complexes siam discrete math note ring join suspension manuscripta math haghighi yassemi bipartite graphs bull math soc sci roumanie haghighi yassemi generalization simplicial complexes ark mat herzog hibi distributive lattices bipartite graphs alexander duality algebraic combin miller novik swartz face rings simplicial complexes singularities math ann sabzrou tousi yassemi simplicial join via tensor products manuscripta math sava ring join sti univ cuza lasi tom xxxi ser schenzel number faces simplicial complexes purity frobenius math stanley combinatorics commutative algebra second edition boston villarreal unmixed bipartite graphs rev colombiana mat minimal free resolution class monomial ideals pure appl algebra hassan haghighi department mathematics toosi university technology tehran iran siamak yassemi school mathematics statistics computer science university tehran school mathematics institute research fundamental sciences ipm tehran iran rahim zaare nahandi school mathematics statistics computer science university tehran tehran iran
Variable mixing parameter quantized kernel robust mixed-norm algorithms for combating impulsive noise

Lu Lu (a), Haiquan Zhao (a), Badong Chen (b)
(a) School of Electrical Engineering, Southwest Jiaotong University, Chengdu, China
(b) School of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an, China

Feb

Abstract. Although the kernel robust mixed-norm (KRMN) algorithm outperforms the kernel least mean square (KLMS) algorithm under impulsive noise, it still has two major problems, as follows: (1) the choice of the mixing parameter in KRMN is crucial to obtaining satisfactory performance; (2) the structure of the KRMN algorithm grows linearly as the iterations go on, so it has high computational complexity and memory requirements. To solve the parameter selection problem, two variable mixing parameter KRMN (VPKRMN) algorithms are developed in this paper. Moreover, a sparsification algorithm, the quantized VPKRMN (QVPKRMN) algorithm, is introduced for nonlinear system identification with impulsive interferences. The energy conservation relation (ECR) and the convergence property of the QVPKRMN algorithm are analyzed. Simulation results in the context of nonlinear system identification with impulsive interference demonstrate the superior performance of the proposed VPKRMN and QVPKRMN algorithms compared with the existing algorithms.

Keywords: kernel method, impulsive noise, variable mixing parameter, quantization scheme, convergence analysis

Introduction

The kernel method has received increasing attention in machine learning and adaptive signal processing. The main idea of the kernel method is to transform the input data into a feature space via a reproducing kernel Hilbert space (RKHS); its successful applications, such as the support vector machine (SVM) and kernel principal component analysis (KPCA), motivate improving the robustness of nonlinear adaptive filters. Recently, kernel adaptive filters have become popular due to their modeling capabilities in the feature space using Mercer kernels: many linear filters have been recast in reproducing kernel Hilbert spaces (RKHSs) to yield powerful nonlinear extensions, such as the kernel recursive least squares (KRLS) algorithm, the kernel least mean square (KLMS) algorithm, and the kernel affine projection algorithm (KAPA). These algorithms have been successfully applied to nonlinear active

Corresponding author: School of Electrical Engineering, Southwest Jiaotong University, Chengdu, Sichuan, China. Email addresses:
lulu hqzhao swjtu zhao chenbd chen preprint submitted elsevier february noise cancellation nonlinear acoustic echo cancellation nlaec although kernel adaptive filters achieve improved performance suitable online applications structures grow linearly number processed patterns past years sparsification techniques constrain growth network size proposed quantized klms qklms algorithm successfully applied static function estimation time series prediction mechanism utilize redundant input data helpful achieve better accuracy compact network fewer centers hand signal processing applications impulsive noise exists widely well known impulsive noises infinite variance makes traditional algorithms diverge thus family norm stochastic gradient adaptive filter algorithms proposed least mean absolute third lmat algorithm lmf algorithm lmmn algorithm robust rmn algorithm developed based convex function error norms underlie least mean square lms least absolute difference lad algorithms therefore rmn algorithm robustness performance presence impulsive noise achieve improved performance impulsive noise several variants kernel adaptive filter proposed particularly krmn algorithm proposed deriving rmn algorithm rkhs unfortunately unsuitable selection mixing parameter degrades performance krmn algorithm overcome problem paper proposed two adaptation rules krmn algorithm called variable mixing parameter krmn vpkrmn based vpkrmn algorithms proposed quantized vpkrmn qvpkrmn algorithm curb growth networks furthermore energy conservation relation ecr convergence property qvpkrmn algorithm analyzed paper organized following manner section introduces brief description kernel method krmn algorithm section two novel vpkrmn algorithms proposed adapt mixing parameter qvpkrmn algorithm proposed control growth kernel structure section analysis convergence property performed simulations context nonlinear system identification conducted section finally conclusions found section kernel method krmn 
algorithm kernel method kernel method useful nonparametric modeling tools deal nonlinearity problem power idea transform input data input space feature space using certain nonlinear mapping expressed feature vector kernel method based mercer theorem mercer kernel expressed nonnegative eigenvalue corresponding eigenfunction eigenvalues eigenfunctions constitute feature vector well known mercer kernel continuous symmetric kernel commonly used gaussian kernel expressed exp kernel bandwidth using feature space calculated inner product consequently output adaptive filter expressed inner product transformed test data training data coefficient discrete time inner product operation respectively krmn algorithm desired input signal corrupted impulsive noise performance klms algorithm degrades overcome problem krmn algorithm proposed using kernel method input data rmn algorithm transformed rkhs weight vector feature space defined error signal defined krmn algorithm based minimization following error statistical expectation operator limited range cost function krmn algorithm linear combination klms klad algorithms combination norm norm gradient vector respect sign sign denotes sign function sign sign returns otherwise sign hence adaptive rule krmn solved iteratively new example sequence sign klmad algorithm easily derived casting lad algorithm rkhs mixing parameter vpkrmn equal zero vpkrmn becomes klad reusing sign using mercer kernel filter output calculated kernel evaluations sign sake simplicity define sign codebook refer center set time observed kernel function replaced radial kernel krmn algorithm produces growing radial basis function rbf network allocating new kernel unit every new example input main bottleneck krmn algorithm network size grows number processed data overcome severe drawback quantization scheme used curb growth network proposed algorithms vpkrmn algorithm unsuitable mixing parameter selection lead performance degradation krmn algorithm circumvent problem 
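A minimal sketch of the KRMN recursion described above: the filter output is a kernel expansion over past inputs, and each new sample appends a center whose coefficient mixes the KLMS term e(n) and the KLAD term sign(e(n)) through the parameter lam (scaling constants are absorbed into the step size eta; all numeric settings below are illustrative). The unbounded growth of `centers` is exactly the bottleneck that the quantization scheme later addresses:

```python
import numpy as np

class KRMN:
    """Kernel robust mixed-norm filter (sketch):
    f_n = f_{n-1} + eta * (lam * e_n + (1 - lam) * sign(e_n)) * kappa(u_n, .)"""
    def __init__(self, eta=0.2, lam=0.5, sigma=0.5):
        self.eta, self.lam, self.sigma = eta, lam, sigma
        self.centers, self.coeffs = [], []

    def _kernel(self, u, v):
        # Gaussian (Mercer) kernel with bandwidth sigma
        return np.exp(-np.sum((u - v) ** 2) / (2 * self.sigma ** 2))

    def predict(self, u):
        return sum(a * self._kernel(c, u)
                   for c, a in zip(self.centers, self.coeffs))

    def update(self, u, d):
        e = d - self.predict(u)
        self.centers.append(u)                  # the network grows every step
        self.coeffs.append(
            self.eta * (self.lam * e + (1 - self.lam) * np.sign(e)))
        return e

f = KRMN()
u = np.array([0.0])
for _ in range(30):
    f.update(u, 1.0)                            # repeated toy sample, target 1
print(len(f.centers), f.predict(u))             # 30 centers for one input
```

Even this trivial run shows both behaviours: the output settles near the target, while the center list keeps growing linearly with the number of processed samples.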
mixing parameter automatically adjusted use instead derive variable mixing parameter algorithm considering minimize error krmn algorithm iteration cycle obtain restricted add scaling factor control steepness result adaptive update rules krmn algorithm obtained name new algorithm vpkrmn algorithm mixing parameter adjusted switching two types error norm mixing parameter tends one klms algorithm plays dominate role filter mixing parameter tends zero klad algorithm plays dominate role filter figure cost functions different mixing parameter settings cost function vpkrmn algorithm unimodal function see fig unimodal character preserved chosen second term keeps small value hence adaptation vpkrmn algorithm sensitive choice avoid limitation new adaptive update approach called proposed adapt mixing parameter krmn algorithm vpkrmn algorithm exponential weighting parameters range control quality estimation algorithm positive constant filtered estimation note mixing parameter fixed value two reasons account use update error autocorrelation generally good measure proximity optimum environment divided two cases error autocorrelation impulsive environment impulsive environment objective ensure large vpkrmn algorithm far optimum decreasing large value leads error plays critical role provides accurate final solution less misadjustment impulsive noise environment conversely algorithm suffers severely outlier problems small error offers stable convergence characteristic krmn algorithm qvpkrmn algorithm qvpkrmn algorithm incorporates idea quantization vpkrmn algorithm provide efficient learning performance impulse interference general quantization scheme similar sparsification method fact almost computational complexity main difference quantization scheme method quantization scheme utilizes redundant data locally update coefficient closest center quantization method summarized learning strategy input space quantized current quantized input already assigned center new center added 
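The error-autocorrelation rule above uses a low-pass estimate p(n) of E[e(n)e(n-1)] to drive the mixing parameter up when the filter is far from the optimum and to let it decay otherwise. The concrete form below follows common variable step-size practice and is only one plausible reading of the rule; all constants are illustrative, not the paper's:

```python
import numpy as np

def update_lambda(lam, p_prev, e, e_prev, alpha=0.97, beta=0.99, gamma=0.05,
                  lam_min=0.0, lam_max=1.0):
    """One plausible error-autocorrelation update for the mixing parameter:
    p(n)     = beta * p(n-1) + (1 - beta) * e(n) * e(n-1)
    lam(n+1) = clip(alpha * lam(n) + gamma * p(n)^2, [lam_min, lam_max])"""
    p = beta * p_prev + (1 - beta) * e * e_prev
    lam = float(np.clip(alpha * lam + gamma * p * p, lam_min, lam_max))
    return lam, p

# persistently large, correlated errors (far from optimum): lam saturates high
lam, p = 0.5, 0.0
for _ in range(50):
    lam, p = update_lambda(lam, p, 5.0, 5.0)
print(lam)          # driven to lam_max

# vanishing errors (near optimum): lam decays toward lam_min
lam2, p2 = 0.9, 0.0
for _ in range(200):
    lam2, p2 = update_lambda(lam2, p2, 0.0, 0.0)
print(lam2)         # close to 0
```

Because p is a squared, low-pass quantity, an isolated impulsive outlier moves lam far less than a persistent error trend does, which is the intended robustness property.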
coefficient center updated merging new coefficient feature vector quantization scheme expressed sign quantization operator feature space owing high dimensionality feature space quantization scheme usually used input space therefore learning rule qvpkrmn algorithm given sign quantization operation input space throughout paper notation replaced notation jth element euclidean norm feature space threshold distance qvpkrmn algorithm reduces vpkrmn algorithm proposed qvpkrmn algorithm summarized table table summarizes computational complexity algorithms training times length filter elements index set affordable computation complexity vpkrmn algorithm behaves much better klms klad algorithms impulse noise environment since qklms qvpkrmn algorithms developed using quantization scheme algorithms lower computation complexity klms klad krmn vpkrmn algorithms convergence analysis section establish energy conservation relation ecr qvpkrmn algorithm analyze mean convergence behavior convergence property qvpkrmn algorithm difficult analyze exactly theorem independence assumption introduced throughout analyses energy conservation relation consider adaptation qvpkrmn algorithm rkhs sign define weight deviation vector second moment misalignment vector qvpkrmn algorithm table proposed qvpkrmn algorithms proposed qvpkrmn algorithms initialization choose step size bandwidth parameters kernel sign computation available compute output adaptive filter size compute error compute distance dis min dis keep codebook unchanged quantize closest center updating coefficient center sign min otherwise assign new center corresponding new coefficient sign using two new update rule mixing parameter algorithm algorithm end table summary computational complexity algorithms klms computation training memory training computation test memory test klad krmn qklms vpkrmn qvpkrmn optimal weight vector update formulation weight deviation vector qvpkrmn algorithm expressed sign define posterior error priori 
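The quantization rule summarized above — if the new input lies within eps_q (in the input space) of an existing center, merge the update into that center's coefficient; otherwise add a new center — leads to a sketch like the following. It reuses the mixed-norm coefficient update; all numeric settings are illustrative:

```python
import numpy as np

class QKRMN:
    """Quantized kernel robust mixed-norm filter (sketch). If the new input
    is within eps_q of an existing center, only that center's coefficient
    is updated (the network does not grow); otherwise a center is added."""
    def __init__(self, eta=0.2, lam=0.5, sigma=0.3, eps_q=0.1):
        self.eta, self.lam, self.sigma, self.eps_q = eta, lam, sigma, eps_q
        self.centers, self.coeffs = [], []

    def _kernel(self, u, v):
        return np.exp(-np.sum((u - v) ** 2) / (2 * self.sigma ** 2))

    def predict(self, u):
        return sum(a * self._kernel(c, u)
                   for c, a in zip(self.centers, self.coeffs))

    def update(self, u, d):
        e = d - self.predict(u)
        grad = self.eta * (self.lam * e + (1 - self.lam) * np.sign(e))
        if self.centers:
            dists = [np.linalg.norm(u - c) for c in self.centers]
            j = int(np.argmin(dists))
            if dists[j] <= self.eps_q:
                self.coeffs[j] += grad          # merge into the closest center
                return e
        self.centers.append(u)                  # otherwise grow the network
        self.coeffs.append(grad)
        return e

f = QKRMN()
for _ in range(40):                             # two quantized input regions
    f.update(np.array([-0.6]), -1.0)
    f.update(np.array([0.6]), 1.0)
print(len(f.centers))                           # stays at 2, not 80
```

This is the "redundant data are used locally" idea in the text: repeated inputs refine the coefficient of an existing center instead of allocating new kernel units.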
error shown priori posteriori errors related via sign sign combining yields squaring sides get rearranging norm feature space seen qvpkrmn algorithm form qklms algorithm quantization size goes zero ecr expression qklms algorithm obtained mean convergence subsection mean convergence analysis weight vector performed taking mathematical expectation using independence assumption obtain sign sign according second term right hand side expressed sign substituting arrive easily observed converge zero vector step size satisfies following inequality hence obtain easy see mean convergence condition qvpkrmn algorithm maximum eigenvalues since denotes trace autocorrelation matrix rigorous condition gained min vector optimal weight vector expressed formula get sign thus expressed form second moment misalignment vector introducing using independence assumption given sign sign sign sign using theorem fourth term equation respectively simplified follows sign sign similarity simplified form sixth term obtained sign calculate third term fifth term substituting yield furthermore decomposed scalar form matrix defined orthonormal matrix autocorrelation matrix side given symmetric matrix diagonal matrix elements eigenvalues matrix scalar form obtained element otherwise simulation results figure block diagram kernel adaptive identification demonstrate effectiveness proposed algorithms impulsive noise environments simulation studies carried nonlinear system identification problem following simulations software matlab used program experiments computer environment amd cpu ghz block diagram kernel adaptive system identification plotted fig goal nonlinear system identification employ pairs inputs addictive noise fit function maps arbitrary system input appropriate output model coefficients moment adjusted error signal nonlinear system contains linear filter memoryless nonlinearity linear system impulse response generated nonlinearity given testing mse figure effect parameters vpkrmn algorithms 
Figure: learning curves of the KLMS, KLAD, KRMN, and VPKRMN algorithms for nonlinear system identification (keys: blue line, red line, black line, azure line for KLMS, KLAD, KRMN, VPKRMN; testing MSE). Figure: learning curves of the QKLMS, VPKRMN, and QVPKRMN algorithms for nonlinear system identification (keys: QKLMS; testing MSE).

A. Test in an impulsive-noise environment: mixture noise model. In the first example, the impulsive noise is modeled by a mixture distribution whose probability density function combines, with a given probability, a component derived from white Gaussian noise (WGN) of zero mean and small root deviation with a zero-mean Gaussian component of much larger variance. White Gaussian noise with zero mean is used as the input signal. A segment of samples is used as training data and another segment of samples as test data, and the simulation results are obtained by averaging over independent Monte Carlo trials.

Firstly, the effect of the parameters of the proposed VPKRMN algorithms is studied. The corresponding figure plots the effect of the update parameter on the algorithm; as can be seen from the figure, the VPKRMN algorithm achieves a fast convergence rate compared with the KRMN algorithm, and obtains a faster convergence speed for suitable settings. For this reason, the parameters of the proposed VPKRMN algorithms are selected accordingly. The next figures show the learning curves of the existing algorithms, with the bandwidth parameters of the kernel-based algorithms set to a common value. It can be observed from the figures that the proposed algorithms outperform the other algorithms in terms of convergence rate and steady-state error under impulsive noise: the KRMN algorithm has a fast convergence rate under impulse noise but a high MSE compared with the VPKRMN algorithms, which moreover achieve better overall performance. A further figure shows the performance of the two proposed quantization-based algorithms, and the network-size growth curves of the QVPKRMN algorithms are plotted as well. It can be seen that the proposed QVPKRMN algorithms achieve a faster convergence rate and lower MSE compared with the QKLMS algorithm, and only a slightly slower convergence rate compared with the VPKRMN algorithms. Owing to the quantization scheme, the QVPKRMN algorithms produce a compact network size for nonlinear system identification, which reduces the computational burden.

Figure: network-size growth of the QVPKRMN algorithms. Figure: effect of the parameters of the KRMN algorithms under noise (keys: blue line, black line, red line, azure line; testing MSE).

B. Test in an impulsive-noise environment: alpha-stable distribution model. In the second example, WGN is again employed as the input signal, the nonlinear system model of the first experiment continues to be used, and the impulsive noise is now modeled by a symmetric alpha-stable distribution.
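A symmetric alpha-stable noise source of the kind used in the second example can be sketched with the Chambers-Mallows-Stuck method. This is an illustrative sampler, not the paper's noise generator; the `scale` parameter follows the common parameterization in which alpha = 2 recovers a Gaussian with variance 2 * scale**2, which may differ from the dispersion convention used in the text.

```python
import math
import random

def sas_sample(alpha, scale=1.0, rng=random):
    """One symmetric alpha-stable draw via Chambers-Mallows-Stuck.

    alpha in (0, 2]: characteristic exponent; a smaller alpha gives a
    peakier, heavier-tailed (more impulsive) distribution. `scale` is an
    assumed scale parameter; alpha = 2 yields a Gaussian with variance
    2 * scale**2.
    """
    v = rng.uniform(-math.pi / 2, math.pi / 2)   # uniform angle
    w = -math.log(1.0 - rng.random())            # Exp(1); avoids log(0)
    if alpha == 1.0:                             # Cauchy special case
        return scale * math.tan(v)
    x = (math.sin(alpha * v) / math.cos(v) ** (1 / alpha)
         * (math.cos(v - alpha * v) / w) ** ((1 - alpha) / alpha))
    return scale * x
```

Sampling with, say, `alpha=1.2` produces the occasional very large outlier that makes such noise a stress test for mean-square adaptive filters.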
The standard symmetric alpha-stable distribution has a characteristic function of the form exp(-γ|t|^α), where the characteristic exponent α indicates how peaky and heavy-tailed the distribution is (a smaller α means a more impulsive noise, which is more likely) and γ is the dispersion of the noise. This model is used in the following simulation studies because it describes well the radio-frequency interference (RFI) in embedded wireless data transceivers. In addition, a signal-to-noise ratio (SNR) with respect to the alpha-stable noise is defined.

Figure: learning curves of the KLMS, KLAD, KRMN, and VPKRMN algorithms for nonlinear system identification under alpha-stable noise (keys: KLMS, KLAD, KRMN; testing MSE). Figure: learning curves of the QKLMS, VPKRMN, and QVPKRMN algorithms for nonlinear system identification under alpha-stable noise (keys: QKLMS; testing MSE).

To demonstrate the effect of the variable parameter on the proposed algorithms, one figure shows the VPKRMN algorithms under different parameter settings. It is found that a tiny change in the parameters can cause a large change in performance, so an appropriate selection of the parameters is required. Another figure illustrates a comparison of the LMS, RMN, KLMS, KLAD, KRMN, and VPKRMN algorithms for nonlinear system identification under alpha-stable noise. Obviously, the KRMN algorithm gives worse results than the two VPKRMN algorithms: compared with a fixed mixing parameter, the proposed VPKRMN algorithms achieve improved performance. Finally, the performance of the QKLMS, VPKRMN, and QVPKRMN algorithms is evaluated; as shown in the corresponding figure, the proposed QVPKRMN algorithms have identification performance similar to the VPKRMN algorithms and superior performance in the presence of alpha-stable noise compared with the QKLMS algorithm. A further figure shows the network-size growth of the QVPKRMN algorithms, from which one can see that the network size of the QVPKRMN algorithm decreases while sacrificing only a little performance, which reduces the computational complexity.

Figure: network-size growth of the QVPKRMN algorithms.

The experimental results of the two examples show that the proposed VPKRMN algorithms deliver improved performance over the existing algorithms, and that the performance of the QVPKRMN algorithm is close to that of the VPKRMN algorithm with less computational complexity. The robustness of the proposed algorithms is also confirmed by simulations with various noise settings and different bandwidth parameters. The two proposed VPKRMN variants have similar misadjustment and convergence speed in a slightly impulsive process; by using the error autocorrelation, one variant obtains a faster convergence rate than the other in the highly impulsive case. We conclude that the proposed algorithms
for nonlinear system identification provide satisfying results in the presence of impulsive interference.

Conclusion

Two VPKRMN algorithms, and their quantized forms, the QVPKRMN algorithms, have been proposed for nonlinear system identification under impulsive noises. The VPKRMN algorithms effectively solve the problem of mixing-parameter selection; to address the problem that the VPKRMN algorithm is computationally intensive, a quantization scheme was introduced into the VPKRMN algorithms to generate the QVPKRMN algorithm. Moreover, the convergence behavior of the QVPKRMN algorithms was analyzed. Simulation results showed that the proposed VPKRMN algorithms are superior to the KLMS, KLAD, and KRMN algorithms, and that the QVPKRMN algorithm preserves the robust performance under impulsive interference with low computational complexity.

Acknowledgment

This work was partially supported by a grant from the National Science Foundation of China. The first author would also like to acknowledge the China Scholarship Council (CSC) for providing financial support for study abroad.

References

- Smola et al., Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, MIT Press.
- Ishida, Oishi, Morita, Moriwaki, Nakajima, development of a support vector machine based cloud detection method for MODIS with adjustability to various conditions, Remote Sensing of Environment.
- Smola et al., nonlinear component analysis as a kernel eigenvalue problem, Neural Computation.
- Fezai, Mansouri, Taouali, Harkat, Bouguila, online reduced kernel principal component analysis for process monitoring, Journal of Process Control.
- Liu, Principe, Haykin, Kernel Adaptive Filtering: A Comprehensive Introduction, John Wiley & Sons.
- Engel, Mannor, Meir, the kernel recursive least-squares algorithm, IEEE Transactions on Signal Processing.
- Van Vaerenbergh, Via, et al., a kernel RLS algorithm with application to nonlinear channel identification, IEEE International Conference on Acoustics, Speech and Signal Processing, IEEE.
- Liu, Park, Wang, extended kernel recursive least squares algorithm, IEEE Transactions on Signal Processing.
- Santos, Barreto, a kernel RLS algorithm for nonlinear system identification, Nonlinear Dynamics.
- Liu, Pokharel, Principe, the kernel least-mean-square algorithm, IEEE Transactions on Signal Processing.
- Takeuchi, Yukawa, a better metric in kernel adaptive filtering,
European Signal Processing Conference (EUSIPCO), IEEE.
- Wang, Wang, Qian, Yang, kernel least mean square tracking, Chinese Control Conference (CCC), IEEE.
- Liu et al., kernel affine projection algorithms, EURASIP Journal on Advances in Signal Processing.
- Albu, Nishikawa, low complexity kernel affine algorithms with coherence criterion, International Conference on Signals and Systems (ICSigSys), IEEE.
- Signoretto, van Waterschoot, Moonen, Jensen, nonlinear acoustic echo cancellation based on a leaky kernel affine projection algorithm, IEEE Transactions on Audio, Speech, and Language Processing.
- Richard, Bermudez, Honeine, online prediction of time series data with kernels, IEEE Transactions on Signal Processing.
- Chen, Zhao, Zhu, Principe, quantized kernel least mean square algorithm, IEEE Transactions on Neural Networks and Learning Systems.
- Zhao, Yang, Chen, quantised kernel least mean square with desired signal smoothing, Electronics Letters.
- Wang, Zheng, Duan, Wang, Tan, quantized kernel maximum correntropy and its mean square convergence analysis, Digital Signal Processing.
- Flores, de Lamare, kernel adaptive algorithms, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE.
- Zhao, Gao, Zeng, a new normalized LMAT algorithm and its performance analysis, Signal Processing.
- Guan et al., nonparametric variable step-size LMAT algorithm, Circuits, Systems, and Signal Processing.
- Eweda, Bershad, stochastic analysis of the stable normalized least mean fourth algorithm for adaptive noise canceling with a white Gaussian reference, IEEE Transactions on Signal Processing.
- Eweda, a stable normalized least mean fourth algorithm with improved transient and tracking behaviors, IEEE Transactions on Signal Processing.
- Tanrikulu, Chambers, convergence properties of the LMMN adaptive algorithm, IEE Image and Signal Processing.
- Wang, Albu, sparse channel estimation based on a reweighted adaptive filter algorithm, European Signal Processing Conference (EUSIPCO), IEEE.
- Chambers, Avlonitis, a robust adaptive filter algorithm, IEEE Signal Processing Letters.
- Miao et al., a kernel algorithm, International Conference on Automatic Control and Artificial Intelligence.
- Wang, Feng, Chi, kernel affine projection sign algorithms
for combating impulse interference, IEEE Transactions on Circuits and Systems II: Express Briefs.
- Shi, Zhang, Chen, kernel recursive maximum correntropy, Signal Processing.
- Zhao, Zhang, Wang, projected kernel recursive maximum correntropy, IEEE Transactions on Circuits and Systems II: Express Briefs.
- Gao, Chen, kernel least mean power algorithm, IEEE Signal Processing Letters.
- Liu, Chen, kernel robust adaptive filtering, International Joint Conference on Neural Networks (IJCNN), IEEE.
- Aboulnasr, Mayyas, a robust variable step-size algorithm: analysis and simulations, IEEE Transactions on Signal Processing.
- Sayed, Fundamentals of Adaptive Filtering, John Wiley & Sons.
- Mathews, Cho, improved convergence analysis of stochastic gradient adaptive filters using the sign algorithm, IEEE Transactions on Acoustics, Speech, and Signal Processing.
- Vega, Rey, Benesty, Tressens, a new robust variable step-size NLMS algorithm, IEEE Transactions on Signal Processing.
- Haykin, Adaptive Filter Theory, Pearson Education India.
- Shao, Nikias, signal processing with fractional lower order moments: stable processes and their applications, Proceedings of the IEEE.
- Nassar, Gulati, DeYoung, Evans, Tinsley, mitigating interference in laptop embedded wireless transceivers, Journal of Signal Processing Systems.
- Zhao, Chen, an adaptive recursive algorithm with logarithmic transformation for nonlinear system identification in alpha-stable noise, Digital Signal Processing.
- Zhao, Chen, collaborative adaptive Volterra filters for nonlinear system identification in alpha-stable noise environments, Journal of the Franklin Institute.

Vitae

The first author is pursuing a degree in the field of signal and information processing at the School of Electrical Engineering, Southwest Jiaotong University, Chengdu, China, and is currently a visiting student in Electrical and Computer Engineering at McGill University, Montreal, Canada. His research interests include adaptive signal processing, kernel methods, and evolutionary computing.

Haiquan Zhao was born in Henan, China. He received a degree in applied mathematics and further degrees in signal and information processing from Southwest Jiaotong University, Chengdu, China, respectively. He has since been a professor at the School of Electrical Engineering, Southwest Jiaotong University. His current research interests include adaptive
filtering algorithms, adaptive Volterra filters, nonlinear active noise control, nonlinear system identification, and chaotic signal processing.

Badong Chen received degrees in control theory and engineering from Chongqing University, respectively, and a degree in computer science and technology from Tsinghua University. He was a researcher at Tsinghua University and a postdoctoral associate at the University of Florida Computational NeuroEngineering Laboratory (CNEL) during the period from October to September of the respective years. He is currently a professor at the Institute of Artificial Intelligence and Robotics (IAIR), Xi'an Jiaotong University. His research interests are in signal processing, information theory, and machine learning, with applications in cognitive science and engineering. He has published books, chapters, and papers in various journals and conference proceedings. He is an IEEE Senior Member, an associate editor of IEEE Transactions on Neural Networks and Learning Systems, and serves on the editorial boards of Applied Mathematics and Entropy.
| 3 |
Interacting Behavior and Emerging Complexity

Alyssa, Hector, Eduardo Hermo, Joost

Beyond Center, Arizona State University, Tempe; Department of Computer Science, University of Oxford; Department of Logic, History and Philosophy of Science, University of Barcelona, Spain; Algorithmic Nature Group, LABORES, Paris, France

Abstract. How can we quantify the change of complexity throughout evolutionary processes? We attempt to address this question through an empirical approach. In very general terms, we simulate two simple organisms on a computer that compete over limited available resources. We implement global rules that determine the interaction between two elementary cellular automata on the same grid, and study how these global rules change the complexity of the state evolution. The output suggests that some complexity is intrinsic to the interaction rules themselves: the largest increases in complexity occurred when interacting elementary rules with little complexity of their own, suggesting that such rules are most able to accept complexity through interaction. We also found that some classes of rules are fragile under some global rules while others are robust, hence suggesting intrinsic properties of the rules that are independent of the global rule choice. We provide statistical mappings of elementary cellular automata, exposed to global rules and different initial conditions, onto different complexity classes.

Keywords: behavioral classes, emergence of behavior, cellular automata, algorithmic complexity, information theory.

Corresponding author: jjoosten

Introduction

The race toward understanding and explaining the emergence of complex structures and behavior has a wide variety of participants in contemporary science, whether in sociology, physics, or biology. In all of these major sciences, one of the paths along which complexity arises is evolutionary processes, which play an important role in the emergence of complex structures: various organisms compete over limited resources, and complex behavior can be beneficial for outperforming competitors. The question remains how to quantify the change of complexity throughout evolutionary processes. The experiment we undertake in this paper addresses this question through an empirical approach. In very general terms, we simulate two simple organisms on a computer that compete over limited available resources. Since our experiment takes place in a rather abstract modeling setting, we use the term processes instead of organisms. The two competing processes evolve in time, and we measure how the complexity of the emerging patterns evolves. When the two processes
interact, the complexity of the emerging structures is compared with the complexity that the respective processes would have evolved without the interaction.

In setting up the experiment, especially one of such abstract nature, a subtle yet essential question emerges: what exactly defines a particular process, and how does a process distinguish itself from the background? For example, is the shell of a hermit crab part of the hermit crab, even though the shell does not grow with it? Are essential bacteria that reside in larger organisms part of that larger organism? Where is the line drawn between an organism and its environment? Throughout this paper we assume that each process is separate from the background in a behavioral sense; in fact, we assume that the processes have different backgrounds in a physical sense, as in resource management. This demonstrates how ontological questions are naturally prompted by an abstract modeling environment.

Methods

As mentioned already, we wish to investigate how complexity evolves over time as two processes compete over limited resources. We chose to represent the competing processes by cellular automata, defined in the subsection below: cellular automata represent a simple parallel computational paradigm and admit various analogies to processes in nature. In this section we shall first introduce elementary cellular automata and justify why they are useful for modeling interacting and competing processes.

Elementary cellular automata

We consider elementary cellular automata (ECAs), among the simplest CAs. ECAs act on a discrete space divided into countably infinitely many cells; to represent this space we use a copy of the integers, and we refer to it as the tape or simply the row. Each cell of the tape has a particular color; in the case of the ECAs we shall work with, there are two colors, represented for example by 0 and 1 respectively. We sometimes call distributions of colors over the tape the output space, or a spatial configuration. Thus we can represent the output space as a function r from the integers to the set of colors; in the case of two colors, instead of writing r(n) we shall often write r_n and call it the color of cell n of row r. An ECA acts on the output space in discrete time steps and can be seen as a function mapping spatial configurations to spatial configurations. In the case of an ECA E acting on an initial row, we denote the output space at time t by r^t, and likewise we denote the n-th cell at time t by r_n^t. ECAs are defined entirely in a local fashion: the color of a cell at the next time step depends only on r_n^t and, in the general terminology, its two direct neighboring cells. For the CAs we consider, thus an ECA with two colors, the output is entirely determined by the behavior on three adjacent cells. Since each cell has one of two colors, there are 8 possible triplets, and hence 256 possible different ECAs.
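The local update rule just defined can be written down directly. The sketch below makes one assumption not fixed by the text: a finite ring (wrap-around boundary) stands in for the bi-infinite tape.

```python
def eca_step(row, rule):
    """One synchronous update of an elementary cellular automaton.

    `row` is a list of 0/1 cells on a ring (wrap-around boundary).
    `rule` is the Wolfram rule number 0-255: bit (4a + 2b + c) of `rule`
    gives the new color of a cell whose left/centre/right neighborhood
    is (a, b, c).
    """
    n = len(row)
    return [(rule >> (4 * row[(i - 1) % n] + 2 * row[i] + row[(i + 1) % n])) & 1
            for i in range(n)]

def eca_run(row, rule, steps):
    """Evolve `row` for `steps` time steps, returning every row."""
    rows = [list(row)]
    for _ in range(steps):
        rows.append(eca_step(rows[-1], rule))
    return rows
```

For instance, rule 90 is the XOR of the two neighbors, which produces the familiar Sierpinski pattern from a single 1 cell.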
We do not consider all 256 ECAs in the experiment: for computational simplicity, we instead consider each ECA up to horizontal translation (reflection) and exchange of the colors.

Combining two interacting and competing ECAs: global rules

In our experiment, the entire interaction is modeled by a particular combination of two ECAs and something else. So far, we have decided to model a process A as an ECA with color 0 (white) and color 1; the process is modeled by the evolution of the non-white cells throughout the time steps, governed by a particular ECA rule. Having made this choice, it is natural to consider a different process B in a similar fashion, modeled also as an ECA. To tell the two apart on the grid, we choose to let B work on the alphabet {0, 2}, whereas A works on the alphabet {0, 1}; both use the symbol 0 for white. The next question is how to model the interaction between the embedded CAs. In our setting, a global rule G acts on three colors, and we choose G in such a fashion that, restricted to the alphabet {0, 1}, it behaves like A, and restricted to the alphabet {0, 2}, it behaves like B. Of course, these two requirements do not fully determine G for given A and B; thus we are left with a difficult modeling choice, reminiscent of the ontological question about organisms: how should G act on triplets that contain all three colors, or that are otherwise not specified? Since there are different ways to define such a G for a given A and B, we call any such G an extension of A and B, or the corresponding global rule. Given the number of unique ECAs and of unique combinations, this results in the stated number of possible global rules. An online program illustrating interacting cellular automata via a global rule can be visited at http

Intrinsic versus evolutionary complexity

Let us get back to the main purpose of the paper. We wish to study how two competing processes evolve over time, and we want to measure how the complexity of the emerging patterns evolves when the two processes interact, compared with the complexity that the respective processes would have generated without interacting. The structure of the experiment readily suggests itself: we pick two ECAs, A and B, defined on the alphabets {0, 1} and {0, 2} respectively, and measure the typical complexities generated by A and B with different initial conditions applied in an isolated setting. Next we pick a corresponding global rule G and measure the typical complexity it generates. How should the typical complexity of the experiment run under G be compared to that of A and B alone, and how do we interpret the results if we see a change in typical complexity? There are three possible reasons for a change: (1) the change can be attributed to an evolutionary process triggered by the interaction; (2) there can be an intrinsic complexity injection due to how G is defined on the previously undefined tuples; (3) there can be an intrinsic complexity injection due to scaling the alphabet from size 2 to size 3. For the second reason, an analogy with the cosmological constant is readily suggested.
Recall that we are supposed to take care of modeling the background: if we say that none of the background is defined by the interaction, then any possible increase attributed to the second reason is an intrinsic entropy density of the background. We shall see that this choice can affect the change in complexity upon interaction.

Characterizing complexity

Before we describe the experiment, we first need to decide how to measure complexity. A qualitative characterization of the complexity of dynamical processes in general, and ECAs in particular, was given by Wolfram, who described four classes characterized as follows:

Class 1: symbolic systems that rapidly converge to a uniform state, with well-known ECA rules as examples.
Class 2: symbolic systems that rapidly converge to a repetitive or stable state, with well-known ECA rules as examples.
Class 3: symbolic systems that appear to remain in a random state, with well-known ECA rules as examples.
Class 4: symbolic systems that form areas of repetitive or stable states but also form structures that interact with each other in complicated ways, with well-known ECA rules as examples.

Throughout this paper we use Wolfram's enumeration of ECA rules. As it turns out, one can consider a quantitative complexity measure that largely generalizes the four qualitative complexity classes given by Wolfram's definition. We shall make use of de Bruijn sequences, although different approaches are possible. A de Bruijn sequence for a given number of colors is a sequence of colors in which every possible string of a given length over those colors actually occurs as a substring; in this sense de Bruijn sequences can sometimes be considered maximally dense. By using de Bruijn sequences of increasing length for a fixed number of colors to parametrize the input, it makes sense to speak of asymptotic behavior, and one can formally characterize the Wolfram classes in terms of Kolmogorov complexity, as has been done before. For a large enough input and a sufficiently long time of evaluation, one can use Kolmogorov complexity to assign a complexity measure to a system on a given input and runtime; a typical complexity measure is defined as the lim sup, as the runtime and input length grow, of the (normalized) Kolmogorov complexity of the output. As a first approximation, one can use the values of these outcomes to classify a system into one of the four Wolfram classes.

Experiment setup

The experiment tested the influence of global rules (GRs) on two interacting ECAs. All possible GRs were explored, the total number of GRs representing every combination of two ECAs. For each one we considered different initial conditions of a fixed length, corresponding to the first segments, in canonical enumeration order, of de Bruijn sequences of that length over the alphabet. Each execution was evaluated for a fixed number of timesteps, and we use the method described in the previous section.
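The de Bruijn initial conditions and the compression-based stand-in for Kolmogorov complexity described above can both be sketched briefly. The code below is illustrative: it uses the standard FKM (Lyndon-word) construction for de Bruijn sequences, and Python's `zlib` in place of Mathematica's `Compress`, with the same normalization by the compressed length of a trivial string of equal length.

```python
import zlib

def de_bruijn(k, n):
    """de Bruijn sequence B(k, n): every length-n string over k symbols
    occurs exactly once as a cyclic substring (FKM construction)."""
    a = [0] * k * n
    seq = []
    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])   # emit Lyndon word of dividing length
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)
    db(1, 1)
    return seq

def compressed_len(s):
    """Length of the zlib-compressed string, a crude K-complexity proxy."""
    return len(zlib.compress(s.encode(), 9))

def complexity(state):
    """Compression estimate normalized by subtracting the compressed
    length of a trivial string of equal length."""
    return compressed_len(state) - compressed_len('0' * len(state))
```

A random-looking CA state then scores much higher than a repetitive one, which is the basis for the threshold classification into Wolfram classes.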
To determine the complexity of the state evolution of the two ECAs and the mixed neighborhoods evolving under a global rule, we used Mathematica's Compress function as an approximation to Kolmogorov complexity, as first suggested in earlier work. We aimed at simulating all GRs, but due to technical restrictions some did not get executed; as far as we know, no bias was introduced by this. An online program showing how compression characterizes cellular automata evolution can be found at http

It is important to note the number of computational hours it took to undergo this experiment. With all GRs, exploring the different initial conditions and combinations, there were in total on the order of a billion executions. Even with just one numerical value per execution, the total data generated by the experiment occupies terabytes of hard drive space. Many hours of computational time were used in total, with jobs running at a time on different cores, sometimes several jobs in parallel; each job took hours to complete.

The complexity estimate was normalized by subtracting the compression value of a string of equal length, which in most instances is a good approximation of the effects mentioned in the previous section. To map compression values onto Wolfram complexity classes, critical threshold values were used; the thresholds were trained according to the ECAs used in the interactions. Using the compression methods, each run was assigned an output class, and the outcomes were organized according to the complexity classes of the constituent CAs. For clarity, the results are represented as heat maps in the next section.

Results

Each output class is represented by a heat map; the figure contains four heat maps, accounting for the four different classes of outputs. Each map is composed of a grid whose axes describe the complexity classes of the two interacting ECAs. Thus, runs in which the interaction of two ECAs on a given initial condition yields an output of a certain class are represented in the heat map labeled with that class; for example, in the figure, class-1 outputs are generated mostly by class-1 and class-2 interactions. A more intense color represents a more densely populated value for that particular class. For the general global-rule effects, the outputs of every possible GR are accumulated and represented in the figures. In general, the figures show the change in complexity when a GR is used to determine the interaction of two ECAs; outputs of every complexity class can be generated by class-1 ECA interactions, which is the least expected.

Figure: heat maps of the classes of output types over all executions. Each heat map represents the percent of cases that resulted in the corresponding output complexity class.

Global rules of interest

Figure: heat maps of the classes of output types. Each heat map represents the percent of cases
that resulted in the corresponding output complexity class. Figure: heat maps of the classes of output types. Each heat map represents the percent of cases that resulted in the corresponding output complexity class. Figure: heat maps of the classes of output types. Each heat map represents the percent of cases that resulted in the corresponding output complexity class.

There are several interesting examples of how GRs affect the classification of the behavior of an interaction. The state-output heat maps in the figures show interesting outputs for various interactions; note that the constituent rules belong to low complexity classes. Three GRs in particular are shown in the figures, identified in the figure captions. GRs are enumerated according to the position of the tuples in the defining function; that is to say, we enumerate GRs by generating the tuples of symbols in lexicographical order. For some GRs, half of the outputs are of a high complexity class for low-class interactions, and the majority of high-complexity-class outputs are generated by low-complexity-class interactions. The actual outputs of these GRs are shown in the figure; note again that the rules involved are of low complexity classes.

Discussion

Unexpectedly, the biggest increases in complexity arise from interactions of low-complexity-class ECAs. This suggests that the complexity is intrinsic to the GRs rather than to the ECAs; we suspect that interacting low-class ECAs readily accept the complexity of the rules. Interactions that are prevalent in the higher complexity classes likely had that complexity without interaction and are robust to the complexity changes introduced by a GR: in those cases complexity increases or remains the same when introducing an interaction rule via a GR. In other interesting cases, global rules increase the complexity of the output by entire classes. In some cases, mixed neighborhoods are present and sustained throughout the output, a form of emergence from the interaction rule used. Because of the short number of time steps per execution, it is unclear whether the mixed neighborhoods eventually die out; nonetheless, this is a case of at least intermediate emergence under the global rule. We also found interesting cases where global rules seem to drastically change the complexity of the interacting output of what were originally low-class ECAs: for example, we found both fragile and resilient global rules. Most importantly, the greatest increases in complexity occur when the interacting ECAs are of low complexity class, and this is true for the majority of possible global rules. Although we do not yet understand the mechanisms behind these results, we are confident that this analysis is important for understanding the emergence of complex structures.

Acknowledgement. We want to acknowledge the ASU Saguaro cluster, on which the computational
work was undertaken.

References

- Chaitin, on the length of programs for computing finite binary sequences: statistical considerations, Journal of the ACM.
- Chandler, cellular automata with global control, http — Wolfram Demonstrations Project, published May.
- Wolfram, A New Kind of Science, Wolfram Media, Champaign.
- Zenil, an investigation of the behaviour of cellular automata systems, Complex Systems.
- Zenil et al., asymptotic behaviour and ratios of complexity in cellular automata rule spaces, International Journal of Bifurcation and Chaos.
| 9 |
Computational Evolution of Decision-Making Strategies

Peter Kvam (kvampete), Center for Adaptive Rationality, Max Planck Institute for Human Development, Lentzeallee, Berlin, Germany
Joseph Cesario (cesario), Department of Psychology, Michigan State University, Physics Rd, East Lansing, USA
Jory Schossau (jory), Department of Computer Science and Engineering, Michigan State University, S. Shaw Ln, East Lansing, USA
Heather Eisthen (eisthen) and Arend Hintze (hintze), Department of Integrative Biology and BEACON Center for the Study of Evolution in Action, Michigan State University, Farm Ln, East Lansing, USA

Abstract

Most research on adaptive decision-making takes a strategy-first approach, proposing a method for solving a problem and then examining whether it can be implemented in the brain and in which environments it succeeds. We present a method for studying strategy development based on computational evolution that takes the opposite approach, allowing strategies to develop in response to the environment via Darwinian evolution. We apply this approach to a dynamic decision problem in which artificial agents make decisions about the source of incoming information. In doing so, we show that the complexity of the brains and strategies of evolved agents is a function of the environment in which they develop: more difficult environments lead to larger brains and greater information use, resulting in strategies resembling a sequential sampling approach, while less difficult environments drive evolution toward smaller brains and less information use, resulting in simpler strategies.

Keywords: computational evolution; sequential sampling; heuristics

Introduction

Theories of decision-making often posit that humans and animals follow procedures that achieve maximum accuracy given a particular set of constraints. Some theories claim optimality relative to the information given, involving a process of maximizing expected utility or performing Bayesian inference (Bogacz, Brown, Moehlis, Holmes, & Cohen; Griffiths & Tenenbaum; von Neumann & Morgenstern), while others assume that behavior makes do with less than all of the information in the environment, tailoring information processing to achieve sufficient performance by restricting priors (Briscoe & Feldman), ignoring information (Gigerenzer & Todd), or sampling only enough information to satisfy a particular criterion (Link & Heath; Ratcliff). In all of these cases, the mechanisms underlying the initial development of the strategies are assumed, either explicitly or implicitly, to be the result of natural
or artificial selection pressures. In cognitive science research, however, the evolution of a strategy often takes a back seat to its performance: coherence, clarity, and intuitiveness of a theory undoubtedly play an immense role in its ability to explain and predict behavior, but whether a strategy is a plausible result of selection pressures is rarely considered. To be fair, this is largely because the process of evolution is slow and messy, and often impossible to observe for organisms in the lab. Fortunately, recent innovations in computing have enabled us to model this process with artificial agents. In this paper, we propose a method for studying the evolution of dynamic binary decision-making using artificial Markov brains (Edlund et al.; Marstaller, Hintze, & Adami; Olson, Hintze, Dyer, Knoester, & Adami), and investigate the evolutionary trajectories and ultimate behavior of the brains resulting from different environmental conditions.

In order to demonstrate the method, we investigate an interesting problem. We focus on a simple choice situation: choosing whether a source of stimulus information is signal or noise. Preferential decisions and other nonspecific choices could be substituted, as a similar decision structure underlies a vast array of the choices people and animals make. The task requires an agent to take in and process information over time to make a decision about which source yielded that information; however, the decision maker is free to vary the amount of information it uses and the processing it applies, and different theories make diverging predictions about how these vary. On one hand, it may be advantageous to use every piece of information received, feeding it into a complex processing system in order to obtain maximum accuracy; on the other, a simpler processing architecture that ignores some information may be sufficient in terms of accuracy while being more robust to random mutations and errors.

Complex models. Many prominent complex models fall within the sequential sampling framework (Bogacz et al.; Link & Heath; Ratcliff). These models assume that an agent takes in or receives samples of evidence one at a time from a distribution, each sample pointing toward the signal or the noise distribution. They posit that the agent combines these samples to process the information, for example adding a number for samples favoring one option and subtracting a number for samples favoring the other. When the magnitude of the accumulated difference exceeds a criterion value (larger or smaller), a decision is triggered in favor of the corresponding choice option. This strategy implements a particular form of Bayesian inference, allowing the decision-maker to achieve any desired accuracy.
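The accumulate-to-threshold process described above can be sketched as a simple random walk. The function below is an illustrative model rather than anything evolved by the agents; `p_right` (per-sample diagnosticity) and `threshold` (the decision criterion) are hypothetical parameters.

```python
import random

def sequential_sample_decision(p_right, threshold, rng=random):
    """Random-walk sequential sampling model (illustrative).

    Accumulate +1/-1 evidence samples until the running sum's magnitude
    reaches `threshold`; return the chosen option ('R' or 'L') and the
    number of samples used. `p_right` is the per-sample probability of
    a +1 (rightward) increment.
    """
    total, n = 0, 0
    while abs(total) < threshold:
        total += 1 if rng.random() < p_right else -1
        n += 1
    return ('R' if total > 0 else 'L'), n
```

Raising the threshold trades longer sampling for higher accuracy, which is the sense in which the criterion fixes the achievable accuracy.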
It does so by guaranteeing that the log odds of one hypothesis over the other are at least equal to the criterion value. In these models, every piece of information that is collected is used to make the decision. Although organisms may not literally add and subtract pieces of information, we would expect to observe two characteristics in organisms that implement similar strategies: first, they should be relatively complex, storing a cumulative history of the information in order to make decisions; second, they should give each piece of information they receive relatively equal weight, spreading the weights assigned to information across a long series of inputs, as we detail below.

Less complex models. Decision strategies toward the other end of the model-complexity spectrum include heuristics, which deliberately ignore information in order to obtain better performance in particular environments (Gigerenzer & Hertwig; Gigerenzer & Brighton; Gigerenzer & Todd). Many of these strategies are one-reason rules, meaning that they terminate the use of information as soon as one piece of evidence clearly favors either option. Accordingly, such a decision maker can have a relatively simple information processing architecture that copies incoming information to output indicators to give an answer. Heuristics that require ordinal information about different sources of information and their validity entail increased complexity (Dougherty, Franco-Watkins, & Thomas), but in the current problem we assume information comes from a single source, and the result is a relatively simple architecture with one-piece decision rules. We would expect to observe two characteristics in organisms that implement strategies similar to these heuristics: first, relatively simple information processing architectures, favoring short, robust pathways with little integration; second, they should appear to give the most weight to the last piece of information they receive before making a decision, yielding a relationship between the final decision and the sequence of inputs that is heavily skewed toward recently received inputs.

Of course, the real behavior of artificially evolved organisms will probably lie somewhere along the spectrum between these two poles. However, we can compare the relative leanings of different populations of organisms by varying the characteristics of the environments in which they evolve. Next we describe the task and its manipulations.

The task. The agents had to solve a binary decision problem in which they received information from one source or another. The information coming from either source included two binary numbers, and therefore could yield one of four possible inputs. One source
would yield primarily "left" inputs, while the other source would yield primarily "right" inputs; the exact proportion of inputs was varied in order to alter the difficulty of the task. For example, an easy stimulus would give left and right inputs in a strongly skewed proportion, with the two binary inputs independent, ultimately giving a heavily skewed distribution of inputs; in a more difficult environment, a stimulus might give left and right inputs in more nearly equal proportions. For the opposite source, the frequency of left and right inputs was flipped.

Rather than showing the agents the target frequencies from the start of a trial, each trial instead started at a random frequency that increased with each consecutive step until the target frequency was reached. This was done in part to emulate how agents encounter stimuli in real situations, where stimuli progressively come into sensory range with increasing strength over time rather than simply appearing, and also to avoid sticking at a local maximum where agents simply copy the first input to their outputs. The target frequency was manipulated in increments, resulting in several difficulty levels for different populations of agents. For each decision, the agents received a series of inputs, where each new input constituted one time step.

Methods. We were interested in examining the strategies and evolutionary trajectories that digital agents took to solve a simple dynamic decision problem, so we developed a binary task for the agents to solve. The fitness of an agent was defined as the number of correct decisions it made across the trials of the task, and the probability that it would reproduce was determined by this fitness value. Note that fitness was determined by the number of correct answers, reflecting the agents' propensity to respond together with their accuracy when they did respond; there was no fitness penalty or cost for agent complexity. Formally, the probability of generating a child in the next generation was given by an agent's fitness divided by the total fitness across the total population (roulette wheel selection). When an agent reproduced into the next generation, it created a copy of itself subject to random mutations. Over the course of generations, this selection and mutation process led to the evolution of agents that could successfully perform the task, and enabled us to analyze the strategies that the evolved agents ultimately developed.

As an agent processed information, it could give its answer by signaling on its outputs, one pattern to indicate one source and another to indicate the other (see below). When it did so, the decision process would come to a halt, no new inputs would be given, and the agent would be graded on its final answer: the agent received a point toward its fitness if it gave the correct answer, and no points if it was incorrect or failed to answer after all inputs were given. In addition to the difficulty manipulation, we included a non-decision time manipulation: an agent was not permitted to answer until a minimum number of time steps had elapsed.
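The fitness-proportional reproduction rule described above (roulette wheel selection) can be sketched in a few lines. The function below is illustrative, with `fitnesses` standing in for the per-agent counts of correct decisions.

```python
import random

def roulette_select(fitnesses, rng=random):
    """Fitness-proportional (roulette wheel) selection.

    Returns the index of the chosen parent; agent i is chosen with
    probability fitnesses[i] / sum(fitnesses).
    """
    total = sum(fitnesses)
    r = rng.random() * total
    acc = 0.0
    for i, f in enumerate(fitnesses):
        acc += f
        if r < acc:
            return i
    return len(fitnesses) - 1   # guard against floating-point round-off
```

Repeatedly drawing parents this way, copying each with mutation, produces the next generation used in the selection-mutation loop described in the text.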
The number of inputs required before answering was varied in increments, yielding several levels of non-decision time across the different environments. Increasing the difficulty tended to make agents evolve faster responses, while a longer non-decision time tended to allow agents to more easily implement accurate strategies regardless of difficulty level.

Figure: diagram of the structure of a sample Markov brain, with input, processing, and output nodes (circles) and connecting logic gates (rectangles). Each gate contains a corresponding table mapping input values (left) to output values (right). Note that the actual agents had twice the number of nodes shown.

Markov brain agents. The Markov brain agents (Edlund et al.; Marstaller et al.; Olson et al.) consisted of binary nodes and directed logic gates that moved and combined information from one set of nodes to another (see the figure). Two nodes were reserved for the inputs from the environment described above, and another two were used as output nodes. The output nodes could show any combination of two binary values, read such that one pattern indicated one source and another pattern indicated the other; agents were permitted to continue updating their nodes with new inputs at each time step until they answered. To update their nodes from one time step to the next, the agents used logic gates, represented as squares in the figure, which took node values and mapped them onto other nodes using a table with input nodes on one side and output nodes on the other. The mappings within the gate tables could change, as could the gates themselves, as shown in the figure.

The gates were specified by genes within a genetic code. The code consisted of nucleotides numbered to reflect the four base nucleotides present in DNA; a gene consists of a sequence of nucleotides that starts with a number sequence followed by a start codon, beginning at an arbitrary location within the genome. Genes were typically some nucleotides long, with junk sequences of nucleotides in between, resulting in a large overall genome size. The included mutation rates for point mutations, duplication mutations, and deletion mutations were consistent with previous work (Edlund et al.; Marstaller et al.; Olson et al.).

In the first generation, a population of Markov brain agents was generated from a random seed genome: the first agents were created as random variants of the seed brain, using the mutation rates described, resulting in approximately a few random connections per agent. The agents made decisions and were selected to reproduce based on their accuracy, and this process was repeated over generations, yielding, per population, agents that could perform the decision task. Each Markov brain was subject to point, insertion, and deletion mutations.
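A stripped-down version of the node-and-gate update described above can be sketched as follows. This is a toy illustration, not the paper's encoding: the node counts, the gate wiring format, and the convention that the last two nodes are the outputs are assumptions made for the example.

```python
class MarkovBrain:
    """Toy Markov brain: binary nodes updated by deterministic logic gates.

    Each gate reads some nodes and writes others through a lookup table,
    in the spirit of the agents described above. `gates` is a list of
    (input_ids, output_ids, table) tuples, where `table` maps the inputs'
    combined binary value to the outputs' combined binary value.
    """

    def __init__(self, n_nodes, gates):
        self.nodes = [0] * n_nodes
        self.gates = gates

    def step(self, inputs):
        # Sensor nodes are overwritten with the environment's input each step.
        self.nodes[0], self.nodes[1] = inputs
        nxt = [0] * len(self.nodes)
        for in_ids, out_ids, table in self.gates:
            key = 0
            for i in in_ids:                       # pack input bits into a key
                key = (key << 1) | self.nodes[i]
            val = table[key]
            for j, o in enumerate(reversed(out_ids)):
                nxt[o] |= (val >> j) & 1           # gate outputs OR together
        self.nodes = nxt
        # The last two nodes serve as the decision outputs in this sketch.
        return self.nodes[-2], self.nodes[-1]
```

Evolution then amounts to mutating the gate list (adding/removing gates, rewiring them, or flipping table entries) and selecting on task accuracy.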
mutations genetic code would cause add subtract inputs gate add subtract outputs change conditions difficulty levels nondecision times ran generations evolution different markov brains giving total populations populations random organism chosen line ancestors tracked back first generation set agents last first generation called line decent lod replicates per experimental conditions parameters fitness agents lod averaged generation lods tracked average number connections nodes see figure agents condition generation refer property agents brain size analogous properties organism number connectivity neurons show evolutionary trajectory figure finally took close look behavior generation near end ensure generation examined could solve task slightly somewhat arbitrarily removed generation ensure agents generation approaching one random dips performance random mutations generation less likely deleterious recent ones agents examined trial see information received time step step made decision decision made coded correct incorrect allowed examine relationship inputs received final answer gave giving estimate weight assigned new piece information materials agents tasks evolution implemented using visual studio desktop xcode full evolution simulations run michigan state university high performance computing center results exception high difficulty low time conditions populations conditions agents able achieve essentially perfect accuracy decision task generations however strategies implemented population varied heavily condition perhaps worth noting point tremendous amount data approach yields condition consisted populations agents made decisions generation yielding agents million decisions per generation per condition tremendous sample size renders statistical comparisons based standard error example essentially moot reason present mostly examples illustrate important findings rather exhaustive statistical comparisons brain size final brain size number connections among nodes 
varied function stimulus difficulty time focus primarily high time conditions many low time populations particularly difficult stimuli conditions unable achieve high performance groups figure shows agents faced easiest conditions tended smallest final brain size means around connections agents faced medium difficulty environments evolved approximately connections agent brain size difficult conditions approached connections appeared still climbing generations perhaps interesting though evolutionary trajectory populations conditions took shown group started connections initial generation number connections initially dropped first generations however conditions appear diverge agents easy conditions losing even connections agents medium conditions staying approximately level agents difficult conditions adding connections strategy use order examine pattern information use agents additionally examined relationship piece information received final answer given taking series inputs assigning one value information favoring inputs assigned value information favoring inputs assigned value figure mean number connections agent brains across generations three levels task difficulty sake comparison trajectories shown populations time steps others assigned value answers favoring also given value answers favoring value allowed track sequence refer trajectory leading decision correlate final answer result analysis example conditions shown figure shown trajectory correlations difficult conditions tend flatter easy conditions final answers tend correlate longer history inputs indicates agents assigning similar weight piece information use utilizing full history inputs received rather final piece note agents appeared use recent pieces information heavily case almost model generates data last pieces information tend trigger decision rule example sequential sampling piece information moves evidence across threshold always highly correlated final answer information use also varied 
somewhat across levels nondecision time effect particularly pronounced except difficult conditions however effect largely consequence agent populations failure evolve perform task well stimulus however since sometimes take several updates time steps move trigger input brain output nodes final piece information always perfectly correlated output figure example correlations inputs final decision easy blue medium purple difficult red conditions trajectories final answer right side last piece information agent received rightmost value left moving backward trajectory discriminability time low example agents difficult short time condition red left panel figure attained accuracy compared conditions higher difficulty still led larger brains longer history processing conditions effect less pronounced therefore high values time apparently made easier evolve complex strategies likely agents exposed information making decisions discussion agents strategies spanned range complexity difficult environments pushed toward complex strategies resembling sequential sampling easier environments led strategies similar heuristics therefore sequential sampling heuristics seem strategies could plausibly result different environmental demands however results run counter idea heuristics invoked decisions particularly difficult choice alternatives easily distinguished final strategies may support claim organisms primarily heuristic gigerenzer brighton still lends credence premise ecological rationality many heuristics based approach suggests different environments choice ecologies lead different strategies rather process certainly plausible agents environments mixed changing difficulty levels converge single strategy moment seems multiple strategies implemented across multiple choice environments difficult conditions led larger brains information processing perhaps critical finding simpler choice environments led simpler decision strategies architectures may initially seem like side coin 
result particularly interesting impose penalties larger brains although researchers suggested metabolic costs limit evolution large brains isler van schaik laughlin van steveninck anderson substantial real brains kuzawa necessary drive evolution toward smaller brains instead suspect drop brain size result agents response mutations mutation load imposed size genome example random mutation genome connects disconnects gate likely affect downstream elements brain uses nodes connections process information higher mutation load particularly larger ratio coding nucleotides case smaller brain would tool avoiding deleterious mutations information processing stream alternatively minimum number nodes connections required perform task likely lower easier conditions difficult ones mutations reduce brain size function might able persist easier difficult conditions either case clear larger brain offer sufficient benefits easier conditions overcome mutation load imposes another potential risk larger brain chance random mutation preventing information reaching output nodes longer chain processing nodes easier interrupt confuse shorter one agents difficult conditions evidently able overcome possibility usually answering within steps end time may barrier required substantial fitness rewards cross present easier conditions hesitate make claims broad given scope study finding brain size limited mutation load provoking may explain systems subject mutations selection pressures including neurons muscle cells reduced unused even energetic costs maintaining structure appear low seems promising direction future research examine mutation rate robustness contribute organisms fitness beyond costs associated metabolism approach hope presented method examining questions regarding adaptation evolution often arise cognitive science psychology whereas previous studies worked particular strategy examined choice environments succeeds present way answering questions environment shape evolution strategy 
strategies resulting computational evolution approach adaptive easily implemented brain result realistic natural selection pressures additionally shown approach capable addressing important questions existing models simple dynamic decisions though could undoubtedly shed light array related problems course limitations approach many computational agents used nodes reserved inputs outputs meaning could used storing memory processing information although nodes could added certainly accurate model even simple nervous systems would many times would severely slow steps required evolution might also lead problems analogous occurs parameters added model though question worth exploring conclusions paper presented computational evolution framework could used examine environments lead different behaviors framework allowed examine strategies might arisen organisms address problem dynamic agents receive information time must somehow use input make decisions affect fitness found evolutionary trajectory strategies ultimately implemented agents heavily influenced characteristics choice environment difficulty task particularly notable influence difficult environments tended encourage evolution complex information integration strategies simple environments actually caused agents decrease complexity perhaps order maintain simpler robust decision architectures despite explicit costs complexity indicating mutation load may sufficient limit brain size finally discussed results context existing models human suggesting strategies fast frugal heuristics gigerenzer todd complex ones sequential sampling link heath may provide valid descriptions least serve useful landmarks strategies implemented evolved agents provided evidence strategy use environmentdependent different decision environments led different patterns information use generally shown computational evolution approach integrating computer science evolutionary biology psychology able provide insights different strategies evolve 
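The two strategy families contrasted above — sequential evidence accumulation and simple last-input heuristics — can be illustrated with a minimal simulation. The sketch below is purely illustrative and is not the authors' Markov-brain implementation; the drift probability, threshold, trial counts, and function names are all assumptions chosen for the example.

```python
import random

def sequential_sampler(inputs, threshold=3):
    """Accumulate +1/-1 evidence over the input stream; decide as soon as a
    threshold is crossed, otherwise go with the sign of the total evidence."""
    evidence = 0
    for x in inputs:
        evidence += 1 if x == 1 else -1
        if abs(evidence) >= threshold:
            break
    return 1 if evidence >= 0 else 0

def last_input_heuristic(inputs):
    """Ignore history entirely; answer with the final input received."""
    return inputs[-1]

def accuracy(rule, p=0.7, n_steps=11, n_trials=20000, seed=0):
    """Fraction correct when the true answer is 1 and each input
    independently favours it with probability p (assumed difficulty level)."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        inputs = [1 if rng.random() < p else 0 for _ in range(n_steps)]
        correct += (rule(inputs) == 1)
    return correct / n_trials
```

Under these assumed settings the accumulator integrates over many inputs and outperforms the one-shot heuristic, mirroring the observation above that more difficult discriminations reward longer histories of information use.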
Acknowledgments

This work was supported by the Michigan State University High Performance Computing facility and by a National Science Foundation cooperative agreement grant.

References

Bogacz, R., Brown, E., Moehlis, J., Holmes, P., & Cohen, J. D. (2006). The physics of optimal decision making: A formal analysis of models of performance in two-alternative forced-choice tasks. Psychological Review.
Brandstätter, E., Gigerenzer, G., & Hertwig, R. (2006). The priority heuristic: Making choices without trade-offs. Psychological Review.
Briscoe, E., & Feldman, J. (2011). Conceptual complexity and the bias/variance tradeoff. Cognition.
Dougherty, M. R., Franco-Watkins, A. M., & Thomas, R. (2008). Psychological plausibility of the theory of probabilistic mental models and the fast and frugal heuristics. Psychological Review.
Edlund, J. A., Chaumont, N., Hintze, A., Koch, C., Tononi, G., & Adami, C. (2011). Integrated information increases with fitness in the evolution of animats. PLoS Computational Biology.
Gigerenzer, G., & Brighton, H. (2009). Homo heuristicus: Why biased minds make better inferences. Topics in Cognitive Science.
Gigerenzer, G., & Todd, P. M. (1999). Simple heuristics that make us smart. New York: Oxford University Press.
Griffiths, T. L., & Tenenbaum, J. B. (2006). Optimal predictions in everyday cognition. Psychological Science.
Isler, K., & van Schaik, C. P. (2006). Metabolic costs of brain size evolution. Biology Letters.
Kuzawa, C. W., Chugani, H. T., Grossman, L. I., Lipovich, L., Muzik, O., Hof, P. R., & Lange, N. (2014). Metabolic costs and evolutionary implications of human brain development. Proceedings of the National Academy of Sciences.
Laughlin, S. B., de Ruyter van Steveninck, R. R., & Anderson, J. C. (1998). The metabolic cost of neural information. Nature Neuroscience.
Link, S. W., & Heath, R. A. (1975). A sequential theory of psychological discrimination. Psychometrika.
Marstaller, L., Hintze, A., & Adami, C. (2013). The evolution of representation in simple cognitive networks. Neural Computation.
Olson, R. S., Hintze, A., Dyer, F. C., Knoester, D. B., & Adami, C. (2013). Predator confusion is sufficient to evolve swarming behaviour. Journal of the Royal Society Interface.
Ratcliff, R. (1978). A theory of memory retrieval. Psychological Review.
von Neumann, J., & Morgenstern, O. (1944). Theory of games and economic behavior. Princeton: Princeton University Press.
A note on convex characters, Fibonacci numbers and exponential-time algorithms

Steven Kelk and Georgios Stamoulis

Abstract. Phylogenetic trees are used to model evolution: leaves are labelled so as to represent contemporary species ("taxa") and interior vertices represent extinct ancestors. Informally, convex characters are measurements on the contemporary species in which the subset of species (both contemporary and extinct) that share a given state forms a connected subtree. Given an unrooted binary phylogenetic tree T on a set of n taxa, a closed (but fairly opaque) expression for the number of convex characters on T has been known since 1992, and this number is independent of the exact topology of T. In this note we prove that this number is actually equal to a Fibonacci number. Next, we define g_k(T) to be the number of convex characters on T in which each state appears on at least k taxa. We show that, somewhat curiously, g_2(T) is also independent of the topology of T, and is again equal to a Fibonacci number. As we demonstrate, this topological neutrality subsequently breaks down for k >= 3. However, we show that for each fixed k, g_k(T) can be computed in polynomial time, and the set of characters thus counted can be efficiently listed and sampled. We use these insights to give a simple but effective exact algorithm for the maximum parsimony distance problem that runs in time O(phi^n * poly(n)), where phi is approximately 1.618, the golden ratio, and an exact algorithm which computes the tree bisection and reconnection (TBR) distance (equivalently, a maximum agreement forest) in time O(phi^{2n} * poly(n)).

1. Introduction. Phylogenetics is the science of accurately and efficiently inferring evolutionary trees given information about contemporary species. An important concept within phylogenetics is convexity, which essentially captures the situation in which, within a phylogenetic (evolutionary) tree, each biological state emerges exactly once: once it emerges it does not die out and re-emerge elsewhere. Concretely, given a phylogenetic tree and a set of states assigned to its leaves, we ask whether states can be assigned to the internal vertices of the tree such that each state forms a connected "island" within the tree; if such an assignment is possible, the assignment of states to the leaves is known as a convex character. In this article we present a number of results concerning the enumeration of convex characters. In Section 2 we give formal definitions and describe the relevant earlier work. In Section 3 we start by showing that an earlier result on counting convex characters can be simplified to a term of the Fibonacci sequence. We then seek to count convex characters under the added restriction that each state occurs on at least two leaves, proving the somewhat surprising result that the tree topology is irrelevant here too, and that a formulation in terms of Fibonacci numbers is again possible. We give an explicit example showing that this
topological neutrality breaks section show size space counted polynomial time space using dynamic programming also permits listing sampling uniformly random noting also trees exactly vector space sizes section give number algorithmic applications problems arising phylogenetics seek quantify dissimilarity two phylogenetic trees finally section briefy discuss number open problems arising work software associated article made publicly available preliminaries general background mathematical phylogenetics refer unrooted binary phylogenetic undirected tree every internal vertex degree whose leaves bijectively labelled set often called set steven kelk georgios stamoulis taxa representing contemporary species use denote often simply write tree clear context character surjective function set states state represents characteristic species number legs say character character naturally induces partition regard two characters equivalent induce partition extension character function extension denote number edges parsimony score character denoted obtained minimizing possible extensions say character convex equivalently character convex exists extension state vertices allocated state form connected subtree call extension convex extension see figure example convexity character tested polynomial fact linear time figure given tree taxa convex characters total state appears least taxa shown character uses exactly state abcdef characters use states characters use states character shown extension verifying subtree induced state connected character convex write denote number convex characters denote number characters additional property state used character appears least taxa follows definition character define value therefore equal total number convex characters tree shown figure adopt standard convention binomial coefficient evaluates proven note convex characters fibonacci numbers algorithms hence observed expression somewhat surprisingly depend topology number taxa hence write 
g_2(n) without ambiguity.

3. Fibonacci numbers and convex characters

Theorem 1. The value of g_2(T) does not depend on the topology of T; only n is relevant, and it is given by the expression

g_2(n) = \sum_{k=1}^{\lfloor n/2 \rfloor} \binom{n-k-1}{k-1}.

Proof. We prove this by induction on n. For the base cases, note that there is only one binary tree topology (up to relabelling of taxa) possible on 2 and on 3 taxa, and the expression correctly evaluates to 1 in both cases, since every term except the k = 1 term evaluates to 0. Now consider a tree T on n >= 4 taxa, and assume the expression correctly evaluates g_2 for every tree on fewer taxa. Every binary tree on at least 4 taxa contains at least one cherry: two taxa x, y with a common parent whose third neighbour is an interior vertex. Fix such a cherry {x, y}; a similar technique is used in earlier work. Observe that in a convex character with the property that each state appears on at least two taxa, x and y must have the same state: this follows from the definition of convexity, since the islands of the states of x and of y would otherwise both have to pass through their common parent. For X' a subset of the taxa, let T|X' denote the tree on taxon set X' obtained by taking the minimum subtree of T connecting the elements of X' and suppressing vertices of degree 2. Two cases can be distinguished. In the first case the state containing x and y appears on no other taxa; such characters are in bijection with the characters counted by g_2 on T restricted to the taxa without x and y. In the second case it appears on at least one other taxon; such characters are in bijection with the characters counted by g_2 on T restricted to the taxa without y. Hence, by the inductive hypothesis,

g_2(n) = g_2(n-1) + g_2(n-2) = \sum_{k} \binom{n-k-2}{k-1} + \sum_{k} \binom{n-k-3}{k-1} = \sum_{k=1}^{\lfloor n/2 \rfloor} \binom{n-k-1}{k-1},

where the last equality follows (after re-indexing the second sum) from the identity \binom{m}{r} = \binom{m-1}{r} + \binom{m-1}{r-1}, known as Pascal's rule. This completes the proof.

Consequently, the total number of convex characters on a tree in which each state appears at least twice is independent of the topology. More specifically:

Corollary 1. g_1(n) = g_2(2n); that is, g_1 equals g_2 evaluated at even arguments.

Proof. Immediate, by observing that the known expression for g_1(n) is obtained by substituting 2n for n in the equation of Theorem 1.

Let f_n denote the nth Fibonacci number, with f_1 = f_2 = 1; for a comprehensive background on Fibonacci numbers see Koshy.

Theorem 2. g_2(n) = f_{n-1}, and hence g_1(n) = f_{2n-1}.

Proof. We use the following identity: f_{m+1} = \sum_{j \geq 0} \binom{m-j}{j}. Using the index k = j + 1 rather than j, we obtain f_{m+1} = \sum_{k \geq 1} \binom{m-k+1}{k-1}. Replacing m with n-2 yields exactly the expression of Theorem 1, so g_2(n) = f_{n-1}; applying Corollary 1 then gives g_1(n) = g_2(2n) = f_{2n-1}. (The identity is usually attributed to Lucas; it can be proven by induction, applying Pascal's rule and some algebraic manipulation.)

The question arises whether the values g_k for k >= 3 share the topological neutrality of their k in {1, 2} counterparts. This is not the case: see Figure 2, which exhibits two trees on the same taxon set {a, ..., i} that admit different numbers of characters that are convex and in which each state appears on at least 3 taxa. Hence, for k >= 3 the topology does play a role, contrasting with the situation for k in {1, 2}.
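The Fibonacci connection above rests on a classical diagonal identity in Pascal's triangle. As an illustrative check (not code from the paper), the sketch below verifies mechanically that summing binomial coefficients along a diagonal, \sum_j \binom{m-j}{j}, yields the Fibonacci number f(m+1):

```python
from math import comb

def fib(n):
    """Fibonacci numbers with f(1) = f(2) = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def diagonal_sum(m):
    """Sum of binomial coefficients along a Pascal-triangle diagonal:
    sum over j of C(m - j, j), which equals f(m + 1)."""
    return sum(comb(m - j, j) for j in range(m // 2 + 1))

# The identity underlying the Fibonacci formulation of the counts:
assert all(diagonal_sum(m) == fib(m + 1) for m in range(30))
```

With m = n - 2 this is term-by-term the closed-form count discussed above for convex characters in which each state appears on at least two taxa.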
combinatorial recurrence within dynamic programming also allow derive computable bijection characters counted using bijection straightforward list sample elements note also advance since recurrence used derive based obvious transform bijection begin rooting subdividing arbitrary edge new vertex implicitly directing edges away new vertex new vertex becomes root tree note rooting operation impact convexity characters location root irrelevant simply convenience ensures term child dynamic programming works leaves towards root helpful represent character set subsets partition corresponds state also need new definitions character valid convex consider ordered pair character call pair pair convex exists convex extension root assigned state equality pairs defined strictly say pair note pair valid vertex tree compute store following values simply subtree rooted number defined number characterroot state pairs following conditions hold semivalid also store defined slightly differently replace term taxon equal equal show compute values recursively assuming corresponding values already computed subtree rooted left child subtree rooted right child first idea behind recurrence characters counted created two ways taking union character left subtree character right subtree taking pair left subtree pair right subtree merging root states yield character characters subtrees used already valid respect subtrees characters used might valid respect steven kelk georgios stamoulis subtrees require combined obtain character valid possible respective subtrees sum cardinalities least see figure example figure character valid valid character valid validity obtained allowing merge possible could reach roots respective subtrees second note means pairs counted recurrence valid first two terms recurrence concern situation analogous specifically case assume states merged new pair created constructed combination valid character one subtree pair summation term corresponds count combinations pairs two 
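The count-then-backtrack pattern described above — compute counts bottom-up via a recurrence, then walk back down choosing branches in proportion to those counts — is a standard way to turn a counting dynamic program into a uniform sampler. As a self-contained analogy (this is not the paper's tree dynamic program; the class of objects and all names are chosen for illustration), the sketch below counts binary strings with no two adjacent ones, a Fibonacci-sized class, and samples one uniformly at random from the precomputed counts:

```python
import random

def suffix_counts(n):
    """cnt[i][prev] = number of valid ways to fill the remaining i positions
    when the previously placed bit was prev (0 or 1); no two adjacent 1s."""
    cnt = [[1, 1]]  # zero positions left: exactly one way, regardless of prev
    for i in range(1, n + 1):
        after0 = cnt[i - 1][0] + cnt[i - 1][1]  # next bit may be 0 or 1
        after1 = cnt[i - 1][0]                  # next bit must be 0
        cnt.append([after0, after1])
    return cnt

def sample_uniform(n, rng):
    """Sample a valid string uniformly by descending through the counts."""
    cnt = suffix_counts(n)
    out, prev = [], 0
    for i in range(n, 0, -1):
        w0 = cnt[i - 1][0]                      # weight of placing a 0 next
        w1 = cnt[i - 1][1] if prev == 0 else 0  # a 1 is only legal after a 0
        bit = 0 if rng.randrange(w0 + w1) < w0 else 1
        out.append(bit)
        prev = bit
    return out
```

Here `cnt[n][0]` plays the role of the root count: it is a Fibonacci number, and each downward choice is weighted by the subproblem counts, so every object in the class is produced with equal probability — the same principle behind counting, listing, and uniformly sampling the convex characters.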
subtrees cardinality merged state exactly finally final recurrence semantically similar previous one main difference counts pairs valid given vertex equation computed time assuming values already computed earlier time bound holds equations specific equation computed yielding running time bound per vertex easily improved observing single sweep values recycled computation different values vertices tree yields following theorem theorem let unrooted binary tree taxa computed time space corollary let unrooted binary tree taxa characters counted generated total time character counted sampled uniformly random time space proof critically involved equations allows impose canonical ordering characters pairs counted note convex characters fibonacci numbers algorithms equations example within equation choose place characters earlier ordering characters within characters refine order follows first character left subtree combined turn characters right subtree second character left subtree combined turn characters right subtree canonical ordering chosen dynamic programming completed start root using values computed vertices tree recursively backtrack generate uniqely defined ith character hence obtain bijection characters counted time space requirements backtracking tree evaluating bijection given element dominated time space requirements executing original dynamic program respectively bijection used list characters counted sample uniformly random space implemented dynamic programming corresponding algorithms listing sampling java downloaded http finally within section unrooted binary tree leaves define simply vector natural ask whether two trees taxa isomorphic see related discussions identifiability using code verified claim true leaves see software website proof exists see figure figure two trees leaves algorithmic applications one advantages expressing fibonacci numbers allows give tight bounds rate growth particularly useful bounding running time algorithms following 
classical expression fibonacci numbers golden ratio steven kelk georgios stamoulis observing term obtained binet formula vanishing combining theorem obtain using asymptotic notation clear convex characters convex characters state occurs least two taxa give two examples insights yield algorithms two problems arising phylogenetics computation maximum parsimony distance let two unrooted binary trees set taxa metric dmp maximum parsimony distance defined follows ranges characters defined section dmp max compute dmp used quantify dissimilarity two phylogenetic trees lower bound similarly tree bisection reconnection tbr distance denoted theorem given two unrooted binary trees set taxa dmp computed time golden ratio proof proven optimum achieved character convex state character occurs least two taxa hence simply looping characters counted separately characters counted sufficient locate optimal character note computed time using fitch dynamic programming hence scoring character easily performed quadratic time result follows leveraging corollary implemented dmp algorithm java algorithm results encouraging code freely available http single intel atom processor algorithm terminates less second seconds seconds respectively powerful machine previously fastest algorithm integer linear programming ilp approach described took seconds terminate taxa stalled completely trees taxa even using ilp software enhanced range software recently used experiments verify dmp often good lower bound dmp computation tbr distance maximum agreement forests finally note results article also give easy although cases somewhat crude upper bound number agreement forests two unrooted binary trees taxa recall unrooted binary phylogenetic tree defined unrooted binary phylogenetic tree obtained taking minimal subtree connects suppressing vertices degree agreement forest partition subsets within respectively minimal connecting subtrees induced equality explicitly takes taxa account see recent articles 
background agreement forests maximum agremeent forest agreement forest algorithm running time number taxa number states character context rise note convex characters fibonacci numbers algorithms minimum number components minimum denoted dmaf note due part definition every agreement forest induces character convex although characters convex necessarily correspond agreement hence agreement forests equal number components maximum agreement forest minus hence leveraging corollary obtain theorem given two unrooted binary trees set taxa dmaf computed time poly moreover agreement forests listed time bound conclusion number interesting open problems remain example characterize trees given give analytical lower upper bounds ranging space trees taxa acknowledgements thank mike steel helpful discussions references allen steel subtree transfer operations induced metrics evolutionary trees annals combinatorics bachoore bodlaender convex recoloring trees utrecht university technical report bordewich huber semple identifying phylogenetic trees discrete mathematics chen fan sze parameterized approximation algorithms maximum agreement forest multifurcating trees theoretical computer science dress huber koolen moulton spillner basic phylogenetic combinatorics cambridge university press fischer kelk maximum parsimony distance phylogenetic trees annals combinatorics fitch toward defining course evolution minimum change specific tree topology systematic zoology hartigan minimum mutation fits given tree biometrics pages kelk fischer complexity computing distance binary phylogenetic trees annals combinatorics appear preprint kelk fischer moulton reduction rules maximum parsimony distance phylogenetic trees theoretical computer science appear preprint koshy fibonacci lucas numbers applications pure applied mathematics wiley series texts monographs tracts wiley semple steel phylogenetics oxford university press steel complexity reconstructing trees qualitative characters subtrees journal 
Classification.
M. Steel. Classifying and counting linear phylogenetic invariants. Journal of Computational Biology.

Department of Data Science and Knowledge Engineering (DKE), Maastricht University, Maastricht, The Netherlands.
Keywords: convex characters, agreement forests.
Venture: a higher-order probabilistic programming platform with programmable inference

Vikash Mansinghka, Daniel Selsam, Yura Perov
vkm@mit.edu, dselsam@mit.edu, perov@mit.edu

Abstract. We describe Venture, an interactive virtual machine for probabilistic programming that aims to be sufficiently expressive, extensible, and efficient for general-purpose use. Like Church, probabilistic models and inference problems in Venture are specified via a probabilistic language descended from Lisp. Unlike Church, Venture also provides a compositional language for custom inference strategies, assembled from scalable implementations of several exact and approximate techniques. Venture is thus applicable to problems involving widely varying model families, dataset sizes, and performance constraints. We also describe four key aspects of Venture's implementation that build on ideas from probabilistic graphical models. First, we describe the stochastic procedure interface (SPI), which specifies and encapsulates primitive random variables, analogously to conditional probability tables in a Bayesian network. The SPI supports custom control flow, higher-order probabilistic procedures, partially exchangeable sequences, and stochastic simulators with custom proposals. It also supports the integration of external models that dynamically create, destroy, and perform inference over latent variables hidden from Venture. Second, we describe probabilistic execution traces (PETs), which represent execution histories of Venture programs. Like Bayesian networks, PETs capture conditional dependencies, but PETs also represent existential dependencies and exchangeable coupling. Third, we describe partitions of execution histories called scaffolds that can be efficiently constructed from PETs and that factor global inference problems into coherent sub-problems. Finally, we describe a family of stochastic regeneration algorithms for efficiently modifying PET fragments contained within scaffolds, without visiting conditionally independent random choices. Stochastic regeneration insulates inference algorithms from the complexities introduced by changes in execution structure, and its runtime scales linearly in cases where many previous approaches scaled quadratically and were therefore impractical. We show how to use stochastic regeneration and the SPI to implement general-purpose inference strategies such as Gibbs sampling,
blocked proposals based hybrids particle markov chain monte carlo variational inference techniques keywords probabilistic programming bayesian inference bayesian networks markov chain monte carlo sequential monte carlo particle markov chain monte carlo variational inference acknowledgements authors thank vlad firoiu alexey radul contributions multiple venture implementations daniel roy cameron freer alexey radul helpful discussions suggestions comments early drafts work supported darpa ppaml program grants onr aro google rethinking project opinions findings conclusions recommendations expressed work authors necessarily reflect views sponsors contents introduction contributions venture language modeling inference instructions modeling expressions inference scopes inference expressions values automatic inference versus inference programming procedural declarative interpretations markov chain sequential monte carlo architectures examples hidden markov models hierarchical nonparametric bayesian models inverse interpretation stochastic procedures expressiveness extensibility primitive stochastic procedures stochastic procedure interface definition exposed simulation requests latent simulation requests foreign inference interface optimizations sps auxiliary state probabilistic execution traces definition probabilistic execution trace families exchangeable coupling existential dependence contingent evaluation examples trick coin simple bayesian network stochastic memoization constructing pets via forward simulation pseudocode eval apply undoing simulation pet fragments pseudocode uneval unapply enforcing constraints via constrain unconstrain partitioning traces scaffolds scalable incremental inference motivation notation partitioning traces defining scaffolds constructing scaffold pseudocode constructing scaffolds absorbing applications breaking global inference problem collections local inference problems local kernels scaffolds local simulation kernels resimulation 
proposals local delta kernels reuse random choices stochastic regeneration algorithms scaffolds detaching along scaffold regenerating new trace along scaffold building invariant transition operators using stochastic regeneration weights assuming simulation kernels weights assuming delta kernels versus inference schemes inference strategies via stochastic regeneration factorization acceptance ratio auxiliary variables stochastic selection kernels via stochastic regeneration approximating optimal proposals via stochastic variational inference posing optimization problem stochastic gradient descent proposal using regen detach inference enumerative gibbs particle markov chain monte carlo mutation versus simultaneous particles memory operator constructing transition operators mixtures kernels mixmh operator kernels generating particles repeatedly applying seed kernel mhn operator special case particle methods enumerative gibbs special case using different kernels particle particle markov chain monte carlo adding iteration resampling pseudocode pgibbs simultaneous particles conditional independence parallelizing transitions markov blankets envelopes conditional independence envelopes envelope scaffold related work discussion debugging profiling probabilistic programs inference programming conclusion introduction probabilistic modeling approximate bayesian inference proven powerful tools multiple fields machine learning bishop statistics green gelman robotics thrun artificial intelligence russell norvig cognitive science tenenbaum unfortunately even relatively simple probabilistic models associated inference schemes difficult design specify analyze implement debug applications different fields robotics statistics involve differing modeling idioms inference techniques dataset sizes different fields also often impose varying speed accuracy requirements interact modeling algorithmic choices small changes modeling assumptions data requirements compute budget frequently 
necessitate a redesign of the probabilistic model and the inference strategy, in turn necessitating a reimplementation of the underlying software. These difficulties impose a high cost on practitioners, making modeling and inference approaches impractical for many problems in which only minor variations on standard templates are within reach. The cost is also high for experts: the development time and failure rate make it difficult to innovate on methodology except in simple settings. This limits the richness of probabilistic models of cognition and of artificial intelligence systems, as these are the kinds of models that push the boundaries of what is possible with current knowledge representation and inference techniques.

Probabilistic programming languages could potentially mitigate these problems. They provide a formal representation for models, often via executable code that makes a random choice for every latent variable, and they attempt to encapsulate and automate inference. Several languages and systems have been built along these lines over the last decade (Lunn et al.; Stan Development Team; Milch et al.; Pfeffer; McCallum et al.). These systems are promising, each with the domain strengths described below. However, none of the probabilistic programming languages and systems developed thus far has been suitable for general purpose use. Examples of their drawbacks include inadequate or unpredictable runtime performance, limited expressiveness, lack of extensibility, and overly restrictive or opaque inference schemes. This paper describes Venture, a new probabilistic language and inference engine that attempts to address these limitations.

Several probabilistic programming tools have sought efficiency by restricting expressiveness. For example, Microsoft's system (Minka et al.) leverages fast message passing techniques originally developed for graphical models; as a result, it restricts the use of stochastic choice in the language so that no random choice can influence control flow, since such choices would yield models over sets of random variables of varying or even unbounded size and would therefore preclude compilation to a graphical model. BUGS, arguably the first and still a widely used probabilistic programming language, has essentially the same restrictions (Lunn et al.): no random compound data types or procedures, and no stochastic control flow constructs that could lead to a priori unbounded executions. The scope of the Stan language, developed by the Bayesian statistics community, is similarly limited: it has only limited support for discrete random variables, which are incompatible with the hybrid Monte Carlo strategy it uses to overcome convergence issues with Gibbs sampling (Stan Development Team).

Other probabilistic programming tools that have seen use include FACTORIE (McCallum et al.) and Markov Logic (Richardson and Domingos), whose applications have emphasized problems in information extraction. The probabilistic models that can be defined using FACTORIE and Markov Logic are finite and undirected, specified imperatively in FACTORIE and declaratively in Markov Logic, and both systems make use of specialized, efficient approximation algorithms for inference and parameter estimation. Stan, BUGS, FACTORIE, and Markov Logic capture important modeling and approximate inference idioms, but there are also interesting models they cannot express. Additionally, a number of probabilistic extensions of classical logic programming languages have been developed (Poole; Sato and Kameya; De Raedt and Kersting), motivated by problems in statistical relational learning. Like FACTORIE and Markov Logic, these languages have interesting and useful properties, but thus far they have not yielded compact descriptions of many useful classes of probabilistic generative models from domains such as statistics and robotics.

In contrast, probabilistic programming languages such as BLOG (Milch et al.), IBAL (Pfeffer), Figaro (Pfeffer), and Church (Goodman, Mansinghka, Roy, Bonowitz, and Tenenbaum; Mansinghka) emphasize expressiveness. These languages were not designed around the needs of models whose structure can be represented using directed or undirected graphical models and to which standard inference algorithms for graphical models, such as belief propagation, directly apply. Examples of the models they target include probabilistic grammars (Jelinek et al.), nonparametric Bayesian models (Rasmussen; Johnson et al.; Rasmussen and Williams; Griffiths and Ghahramani), probabilistic models over worlds with a priori unknown numbers of objects (Milch et al.), models for learning the structure of graphical models (Heckerman; Friedman and Koller; Mansinghka et al.), models for inductive learning of symbolic expressions (Grosse et al.; Duvenaud et al.), and models defined in terms of complex stochastic simulation software lacking tractable likelihoods (Marjoram et al.). These model classes are the basis of applications in which inference in richly structured models addresses limitations of classic statistical modeling and pattern recognition techniques. Example domains include natural language processing (Manning et al.), speech recognition (Baker et al.), information extraction (Pasula et al.), multitarget tracking and sensor fusion (Arora et al.), ecology, and computational biology (Friedman et al.; Toni et al.; Dowell and Eddy). However, substantial performance engineering has been needed to turn the specialized inference algorithms for these models into viable implementations, challenging enough that direct deployment of probabilistic program implementations in applications has often been infeasible. The elaborations on these models that more expressive probabilistic languages enable have thus seemed completely impractical.

Church makes extreme tradeoffs with respect to expressiveness and efficiency. It can represent models from all the classes listed above, partly due to its support for higher-order probabilistic procedures, and it can also represent generative models defined in terms of algorithms for simulation and inference over arbitrary Church programs. This flexibility makes Church especially suitable for nonparametric Bayesian modeling (Roy et al.) as well as for artificial intelligence and cognitive science problems that involve reasoning about reasoning, such as sequential decision making, planning, and theory of mind (Goodman and Tenenbaum; Mansinghka). Additionally, probabilistic formulations of learning Church programs from data, including both program structure and parameters, can be formulated in terms of inference in an ordinary Church program (Mansinghka). Although various Church implementations provide automatic inference mechanisms that in principle apply to all these problems, these mechanisms have exhibited limitations in practice, and it is not clear how to make inference in Church sufficiently scalable for typical machine learning applications, including problems where standard techniques based on graphical models have been applied successfully. It also has not been easy for Church programmers to override the built-in inference mechanisms or to add new stochastic primitives.

This paper describes Venture, an interactive, higher-order probabilistic programming platform that aims to be sufficiently expressive, extensible, and efficient for general-purpose use. Venture includes a virtual machine, a language for specifying probabilistic models, and a language for specifying inference problems along with custom inference strategies for solving them. The Venture implementation of standard MCMC schemes scales linearly with dataset size on problems where many previous inference architectures scale quadratically and are therefore impractical. Venture also
supports a larger class of primitives, including primitives arising from complex stochastic simulators, and it enables programmers to incrementally migrate portions of a probabilistic program to optimized external inference code. Venture thus improves on Church in terms of expressiveness, extensibility, and scalability, although it remains to be seen whether these improvements are sufficient for general-purpose use. Unoptimized Venture prototypes have begun to be successfully applied to system building, Bayesian data analysis, cognitive modeling, and machine intelligence research.

Contributions. This paper makes two main contributions. First, it describes key aspects of the Venture design, including support for interactive modeling and programmable inference. Due to these innovations, Venture provides broad coverage in terms of the models and approximation strategies it supports and in its overall applicability to inference problems of varying complexities and requirements. Second, the paper describes key aspects of the Venture implementation: the stochastic procedure interface (SPI) for encapsulating primitives, the probabilistic execution trace (PET) data structure for efficient representation and updating of execution histories, and a suite of stochastic regeneration algorithms for scalable inference within trace fragments called scaffolds. Other important aspects of Venture, including the VentureScript syntax, formal language definitions, software architecture, standard library, and performance measurements of optimized implementations, are beyond the scope of this paper.

It is helpful to consider the relationships between PETs and the SPI. The SPI generalizes the notion of an elementary random procedure associated with many previous probabilistic programming languages. The SPI encapsulates Venture primitives and enables interoperation with external modeling components, analogously to a foreign function interface in a traditional programming language; external modeling components can represent sets of latent variables that are hidden from Venture and use specialized inference code. The SPI also supports custom control flow, higher-order probabilistic procedures, exchangeable sequences, and stochastic primitives that arise from complex simulations. Probabilistic execution traces are used to represent generative models written in Venture, along with particular realizations of these models and the data values they must explain. PETs generalize Bayesian networks to handle the patterns of conditional dependence, existential dependence, and exchangeable coupling that arise amongst invocations of stochastic procedures conforming to the SPI. PETs thus must handle a priori unbounded sets of random variables, since arbitrary probabilistic generative processes written in Venture may lack tractable probability densities.

Using these tools, we show how to define coherent local inference steps over arbitrary sets of random choices within a PET, conditioned on the surrounding trace. The core idea is the notion of a scaffold. A scaffold is a subset of a PET that contains the random variables that must exist regardless of the values chosen for a set of variables of interest, along with the set of random variables whose values will be conditioned on. We show how to construct scaffolds efficiently. Inference given a scaffold proceeds via stochastic regeneration algorithms that efficiently consume and either restore or resample PET fragments without visiting conditionally independent random choices. The proposal probabilities, local priors, local likelihoods, and gradients needed by several approximate inference strategies can be obtained via small variations on stochastic regeneration. We use stochastic regeneration to implement not only composite Gibbs sampling but also blocked proposals based on hybrids of conditional sequential Monte Carlo and variational techniques. This uniform implementation of such different approaches to incremental inference, along with analytical tools for converting randomly chosen local transition operators into ergodic global transition operators on PETs, constitutes another contribution.

The Venture language. Consider the following example Venture program for determining whether a coin is fair or tricky (the identifiers and numeric constants from the original listing are elided here):

ASSUME ... (bernoulli ...)
ASSUME ... (if ... (uniform ...) ...)
OBSERVE (bernoulli ...) true
OBSERVE (bernoulli ...) true
INFER (mh default one ...)
PREDICT (bernoulli ...)

We informally discuss this program before defining the Venture language more precisely. The ASSUME instructions induce the hypothesis space of a probabilistic model, including a random variable for whether or not the coin is tricky, and either a deterministic coin weight or a potential random variable corresponding to the unknown weight. This model selection problem is expressed via a stochastic predicate; the alternative models are the consequent and alternate branches. When executing the
ASSUME instructions, particular values are sampled, though the meaning of the program so far corresponds to a probability distribution over possible executions. The OBSERVE instructions describe a data generator that produces two flips of the coin generated with this weight, along with data assumed to have been generated by the given generator: the program observes, encoded as constraints, that both coin flips landed heads. The INFER instruction causes Venture to find a hypothesis, i.e. an execution trace, that is probable given the data, using iterations of the default Markov chain. INFER evolves the probability distribution over execution traces inside Venture from whatever distribution is present at the instruction, in this case the prior, to one closer to the conditional given the observations added so far. After this INFER, using the inference technique of the example program, the resulting marginal distribution over whether the coin is tricky has shifted from the prior to an approximation of the posterior given the two observed flips, increasing the probability that the coin is tricky ever so slightly; increasing the number of transitions shifts the distribution closer to the true posterior. Other inference strategies, including exact sampling techniques, are covered later. Each execution trace inside Venture is, after the instruction, a sample from this new distribution. When inference is finished, the PREDICT instruction causes Venture to report a sampled prediction of the outcome of another flip of the coin; the weight used to generate this sample comes from the current execution trace.

Modeling and inference instructions. Venture programs consist of sequences of modeling instructions and inference instructions, each given a positive integer index that serves as a global instruction ID in interactive Venture sessions, and structured as follows. Modeling instructions are used to specify the probabilistic model of interest, the conditions on the model that the inference engine needs to enforce, and requests for predictions of values from the conditioned distribution. The core modeling instructions in Venture are:

ASSUME name expr: binds the result of simulating the model expression expr to the symbol name in the global environment that is used to build the model and to interpret the data; returns the value taken by name, along with the index of the instruction.

OBSERVE expr value: adds the constraint that the model expression expr must yield the given value in every execution. Note that this constraint is not enforced until inference is performed.

PREDICT expr: samples a value for the model expression expr from the current distribution over executions; the engine returns the value.
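For the fair-vs-tricky coin program above, the conditioning can be checked in closed form. The following sketch is ours, with assumed constants that the elided listing does not supply: a prior tricky-probability of 0.1, a fair weight of 0.5, and a Uniform(0, 1) weight for a tricky coin.

```python
# Exact posterior for the fair-vs-tricky coin after observing two heads.
# Assumed constants (not from the source): P(tricky) = 0.1, fair weight 0.5,
# tricky weight ~ Uniform(0, 1).
p_tricky = 0.1
lik_fair = 0.5 ** 2            # P(two heads | fair) = 0.25
lik_tricky = 1.0 / 3.0         # integral of theta^2 over [0, 1]
evidence = p_tricky * lik_tricky + (1 - p_tricky) * lik_fair
posterior = p_tricky * lik_tricky / evidence
print(round(posterior, 3))     # 0.129: slightly above the 0.1 prior
```

Under these assumptions the exact posterior is 4/31, matching the qualitative behavior described above: two observed heads shift the marginal probability that the coin is tricky only slightly above its prior.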
As the amount of inference done since the last OBSERVE approaches infinity, this distribution converges to the conditioned distribution that reconciles all the observes. The default Markov chain is a variant of the algorithm from Church (Goodman, Mansinghka, Roy, Bonowitz, and Tenenbaum; Mansinghka): a simple random-scan algorithm that chooses random choices uniformly at random from the current execution, resimulates them conditioned on the rest of the trace, and accepts or rejects the result. Venture implementations also support labels for instructions; however, the recurrence of line numbers is reflective of ways in which the current instruction language is primitive compared to the modeling language. For example, it currently lacks control flow constructs, procedural abstraction, and recursion, let alone runtime generation and execution of statements in the instruction language itself; current research is focused on addressing these limitations.

The FORGET instruction causes the engine to undo and forget a given instruction, which must be either an OBSERVE or a PREDICT. Forgetting an observation removes a constraint from the inference problem the program represents; note that the effect may not be visible until an INFER is performed. Venture supports additional instructions for inference, including the INFER instruction, which first incorporates any observations that have occurred since the last INFER and then evolves the probability distribution over executions according to the inference strategy it is given, as described below. Two inference expressions, corresponding to exact and approximate sampling schemes, are useful to consider here:

rejection default: corresponds to the use of rejection sampling to generate an exact sample from the conditioned distribution over traces. Its runtime requirements may be substantial, and exact sampling applies to a smaller class of programs than approximate sampling; however, rejection is crucial for understanding the meaning of a probabilistic model and for debugging models without simultaneously debugging inference strategies.

mh default one: corresponds to one transition of the standard uniform mixture of transition operators used as the automatic inference scheme in many probabilistic programming systems. As the number of transitions is increased towards infinity, the semantics of the instruction approach an exact implementation of conditioning via rejection.

The SAMPLE expr instruction simulates the model expression expr against the current trace, returns its value, and then forgets the trace fragments associated with the simulation.
It is equivalent to a PREDICT expr followed by a FORGET, provided as a single instruction for convenience. FORCE expr value modifies the current trace so that the simulation of expr takes the given value; its implementation can roughly be thought of as an OBSERVE immediately followed by an INFER and a FORGET. This instruction is useful for controlling initialization and for debugging.

Modeling expressions. Venture modeling expressions describe stochastic generative processes. The space of possible executions of the modeling expressions in a Venture program constitutes the hypothesis space that the program represents; a Venture program thus represents a probabilistic model by defining a stochastic generative process that samples from it. At the expression level, Venture is similar to Scheme and to Church, though there are several differences. For example, branching and procedure construction are desugared into applications of stochastic procedures, so that only ordinary combinations need be treated. We sometimes refer to the syntax including syntactic sugar as VenChurch, and to the desugared language, represented as JSON objects corresponding to parse trees, as Venture. Among its special forms, Venture additionally supports a dynamic scoping construct called scope_include for tagging portions of the execution history so they can be referred to by inference instructions; to the best of our knowledge, analogous constructs have not yet been introduced in other probabilistic programming languages. Venture modeling expressions can be broken into a few simple cases:

Literal values: these describe constant values in the language, discussed below.

Combinations: first the subexpressions are evaluated in arbitrary order; the value of the operator position must be a stochastic procedure, which is applied to the argument values, and the value of the application is returned as the result.

Quoted expressions: (quote expr) returns the expression expr itself as its value; compared to combinations, quote suppresses evaluation.

Lambda expressions: (lambda args body) returns a stochastic procedure with the given formal parameters and procedure body, where args is a specific list of argument names arg1 ... argk.

Conditionals: an if expression evaluates its predicate; if the resulting value is true, it evaluates and returns the value of its consequent, and otherwise it evaluates and returns the value of its alternate.

Inference scope annotations: (scope_include scope block expr) provides a mechanism for naming the random choices in a probabilistic model so they can be referred to in inference programming. The scope_include form simulates its first argument to obtain a scope value and its second to obtain a block value, then simulates expr, tagging the random choices made in the process with the given scope and block.
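The default random-scan resimulation scheme described earlier can be sketched in a few lines. This is our illustration, not Venture's implementation: it targets the coin model with assumed constants (prior tricky-probability 0.1, two observed heads), and for simplicity it resimulates the whole latent state from the prior (an independence proposal), so the Metropolis-Hastings acceptance ratio reduces to a likelihood ratio.

```python
# Resimulation-style Metropolis-Hastings for the fair-vs-tricky coin.
# Illustrative sketch only; constants are assumptions, not from the source.
import random

def simulate_prior(rng):
    tricky = rng.random() < 0.1                 # assumed prior P(tricky)
    theta = rng.random() if tricky else 0.5     # weight exists only if tricky
    return tricky, theta

def likelihood(theta):
    return theta * theta                        # two observed heads

def mh_step(state, rng):
    proposal = simulate_prior(rng)              # resimulate from the prior
    ratio = likelihood(proposal[1]) / likelihood(state[1])
    return proposal if rng.random() < min(1.0, ratio) else state

rng = random.Random(0)
state = simulate_prior(rng)
hits = 0
iters = 20000
for _ in range(iters):
    state = mh_step(state, rng)
    hits += state[0]
print(hits / iters)   # hovers near the exact posterior of about 0.13
```

Because each proposal is drawn from the prior and accepted by a likelihood ratio, the chain leaves the conditioned distribution invariant, so the long-run fraction of tricky states approximates the posterior probability that the coin is tricky.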
Details on inference scopes are given next.

Inference scopes. Venture programs may attach metadata to fragments of their execution traces via the dynamic scoping construct called inference scopes. Scopes are defined in modeling expressions via the special form (scope_include scope block expr), which assigns the random choices required to simulate expr to the scope named by the value resulting from simulating scope, and to the block within that scope named by the value resulting from simulating block. A single random choice can be in multiple inference scopes, but in only one block within each scope; a random choice gets annotated with a scope at the time the choice is first simulated within the context of a scope_include form. Inference scopes can be referred to by inference expressions, thus providing a mechanism for associating custom inference strategies with different model fragments. For example, in a parameter estimation problem for hidden Markov models, it might be natural to have one scope for the hidden states, another for the hyperparameters, and a third for the parameters, with the blocks in the hidden state scope corresponding to the indexes into the hidden state sequence. We will see later how to write cycle hybrid kernels that make proposals to the hyperparameters and use either particle Gibbs or single-site transitions for the hidden states. Inference scopes also provide a means of controlling the allocation of computational effort across the different aspects of a hypothesis, for example by performing more inference in scopes whose random choices are conditionally dependent on the choices made by a given PREDICT instruction of interest. Random choices are thus tagged with scope and block pairs; blocks can be thought of as subdivisions of scopes into meaningful, potentially ordered subsets. We will see later how inference expressions make use of this block structure to provide control over inference and to enable novel inference strategies. For example, the order in which a set of random choices is traversed by conditional sequential Monte Carlo can be controlled via blocks, regardless of the order in which the choices were constructed during the initial simulation. Because scopes and blocks are produced by random choices in ordinary Venture modeling expressions, it is possible to use the random choices in one scope to control the scope and block allocation of the random choices in another scope. The random choices used to construct scopes and blocks may be auxiliary variables, independent of the rest of the model, or latent variables whose distributions depend on the interaction between modeling assumptions and data. At present we impose the restriction that inference on these random
choices cannot change the set of blocks, i.e. transitions may not add or remove random choices to or from the set of blocks, though the probability of membership can be affected. Applications of randomized scope and block assignments include variants of cluster sampling techniques beyond those for spin glass models (Swendsen and Wang) and regression (Nott and Green), and beyond the settings in which they are typically deployed. The implementation details needed to handle random scopes and blocks are beyond the scope of this paper; later, however, we will see that the analytical machinery we develop is sufficient for justifying the correctness of complex transition operators involving randomly chosen scopes and blocks.

Venture provides two built-in scopes. The default scope contains every random choice. Previously proposed inference schemes for Church, as well as concurrently developed generic inference schemes for variants of Venture, correspond to inference instructions acting on this default, global scope. The latents scope contains the latent random choices made by stochastic procedures that are hidden from Venture; using this scope, programmers can control the frequency with which external inference systems are invoked, and can interleave inference over these external variables with inference over the latent variables managed by Venture.

Inference expressions. Inference expressions specify transition operators that evolve the probability distribution over traces inside the Venture virtual machine, in contrast to instructions that extend the model or add data. Inference is initiated using a valid transition operator, and Venture provides several primitive forms for constructing transition operators that leave the conditioned distribution invariant; each is a valid inference expression. Current implementations restrict the values of scope and block names to symbols and integers for simplicity; this restriction is not intrinsic to the Venture specification. The implementations so far have also merged the default and latents scopes, with triggered inference on latents done automatically on every transition involving the default scope. In these forms, scope must be a literal scope, and block must be either a literal block within that scope or a keyword. If the block specification selects the whole scope, the union of all blocks within the scope is taken; if the block specification is one, then one block is chosen uniformly at random from the set of blocks within the given scope. The selected set of random choices that an inference expression acts on is given by the specified scope and block. The core set of inference expressions in Venture is as follows:

(mh scope block ...): propose new values for
the selected choices, either by resimulating them or by invoking custom local proposal kernels if these have been provided, and accept or reject the results via the Metropolis-Hastings rule, accounting for changes in the mapping of random choices using machinery provided later in the paper. The whole process is repeated the given number of times.

(rejection scope block ...): use rejection sampling to generate an exact sample from the conditioned distribution over the selected random choices, repeating the whole process the given number of times (potentially improving convergence when the selected set comes from a randomly chosen block). Although one such transition operator is often computationally intractable, it is optimal in the sense of the progress made per completed transition towards the conditioned distribution over traces, and the other transition operators exposed in the Venture inference language can be viewed as asymptotically convergent approximations to it.

(pgibbs scope block ...): use conditional sequential Monte Carlo to propose from an approximation to the conditioned distribution over the selected set of random choices. If the block is ordered, the blocks in the scope are sorted, and each distribution in the sequence of distributions includes the random choices from the next block; otherwise, each distribution in the sequence includes a single random choice drawn from the selected set, with the ordering arbitrary.

(meanfield scope block iters ...): use iters steps of stochastic gradient to optimize the parameters of a partial mean-field approximation to the conditioned distribution over the random choices in the given scope and block, then make a single proposal using the approximation, repeating the process the given number of times.

(gibbs scope block ...): use exhaustive enumeration to perform a transition over the selected random choices, with the proposal corresponding to the optimal conditional proposal, conditioned on the values of any newly created random variables; random choices whose domains cannot be enumerated are resimulated from the prior unless they are equipped with custom simulation kernels. If the selected random choices are discrete and no new random choices are created, this is equivalent to a rejection transition operator, and corresponds to a discrete, enumerative implementation of Gibbs sampling, hence the name. Its computational cost scales exponentially in the number of random choices, as opposed to the divergence between the prior and the conditional (Freer et al.).

pgibbs is implemented via mutation; however, there are versions of these operators that use simultaneously accessible particles to represent the alternative possibilities, and their names are given the prefix func to signal this aspect of their
implementation. The simultaneously accessible sets of particles are implemented using persistent data structure techniques typically associated with pure functional programming; these yield improvements in the order of growth of runtime as compared to pgibbs, but impose restrictions on the selected random choices.

There are also currently two composition rules for transition operators, enabling the creation of cycle and mixture hybrids. cycle produces a cycle hybrid of the transition operators represented by the given inference expressions: each transition operator is run in sequence, and the whole sequence is repeated the given number of times. mixture produces a mixture hybrid of the given transition operators using the given mixing weights, invoked the given number of times. This language is flexible enough to express a broad class of standard approximate inference strategies, as well as novel combinations of standard inference algorithm templates such as conditional sequential Monte Carlo, Gibbs sampling, and variational inference. Additionally, the ability to use random variables to map random choices to inference strategies, and to perform inference over these variables, may enable new cluster sampling techniques. That said, from an aesthetic standpoint, the current inference language also has many limitations that seem straightforward to relax. For example, it seems natural to expand inference expressions to support arbitrary modeling expressions, and thereby also support arbitrary computation to produce inference schemes; the machinery needed to support these natural extensions is discussed later in the paper.

Values. Venture values include the usual scalar and symbolic data types from Scheme, along with extended support for collections and additional datatypes corresponding to primitive objects from probability theory and statistics. Venture also supports a stochastic procedure datatype, used for primitive procedures as well as for the compound procedures returned by lambda. A full treatment of the value hierarchy is beyond our scope here; we provide a brief list of the most important values:

Numbers: roughly analogous to floating point numbers.

Atoms: discrete items with no internal structure or ordering, generated by categorical draws and also by Dirichlet processes.

Symbols: symbol values can name a formal argument passed to lambda, a name associated via an ASSUME instruction, or the result of evaluating the quote special form.
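The cycle and mixture composition rules just described can be sketched as higher-order functions over transition operators, where an operator is any function from a state (and RNG) to a new state. The names and signatures below are ours, not Venture's, and the toy operators are deterministic so the composition behavior is easy to see; each hybrid preserves a stationary distribution only if its component operators do.

```python
# Sketch of cycle and mixture hybrids of transition operators.
import random

def cycle(operators, repetitions):
    def combined(state, rng):
        for _ in range(repetitions):
            for op in operators:            # run the sequence in order,
                state = op(state, rng)      # repeating it `repetitions` times
        return state
    return combined

def mixture(weighted_operators, repetitions):
    ops, weights = zip(*weighted_operators)
    def combined(state, rng):
        for _ in range(repetitions):        # each step invokes one operator,
            op = rng.choices(ops, weights=weights)[0]   # chosen by weight
            state = op(state, rng)
        return state
    return combined

inc = lambda s, rng: s + 1
dbl = lambda s, rng: s * 2
print(cycle([inc, dbl], 2)(0, random.Random(0)))      # ((0+1)*2+1)*2 = 6
print(mixture([(inc, 1.0)], 3)(0, random.Random(0)))  # three inc steps = 3
```

Because the combinators return ordinary operators, hybrids can themselves be nested inside further cycles and mixtures, mirroring the compositional structure of the inference language.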
To support multiple simultaneous particles, stochastic procedures within the given scope and block must support a clone operation for their auxiliary state storage, or the ability to emulate one; this is feasible for standard exponential family models but may not be feasible for external inference systems hosted on distributed hardware.

Collections: vectors map numbers to values and support random access; maps map values to values and support amortized random access via a hash table, which relies on the hash function associated with each kind of value.

Stochastic procedures: these include the components of the standard library, and can also be created via lambda.

Automatic inference versus inference programming. Although Venture programs can incorporate custom inference strategies, this does not require giving up interfaces for automatic inference like those of existing probabilistic programming systems. It is straightforward to implement the Gibbs-style sampling algorithms that serve as the sole automatic inference option in many probabilistic programming systems so that they can be invoked with a single instruction, as we have already seen; global sequential Monte Carlo and mean field algorithms are similarly straightforward to describe. Support for programmable inference thus does not necessarily increase the education burden on probabilistic programmers, although it does provide a way to avoid limiting probabilistic programmers to a potentially inadequate set of inference strategies.

The idea that inference strategies can be formalized as structured, compositionally specified inference programs operating on model programs is, to the best of our knowledge, new to Venture. In this view, standard inference algorithms actually correspond to primitive inference programming operations, or to program templates that do not depend on the specific features of the model program being acted upon. This perspective suggests that far more complex inference strategies are possible, given the right primitives, means of combination, and means of abstraction, than have been identified so far. Considerations of modularity, analyzability, soundness, completeness, and reuse become central to the complicated interaction between inference programs and model programs. For example, inference programmers need to be able to predict the asymptotic scaling of inference instructions, factoring in the contribution of the computational complexity of the model expressions to which a given inference instruction is applied. Another example comes from considering abstraction and reuse: is it possible to write compound
inference procedures that can be reused across different models, and perhaps even to use inference to learn such procedures via an appropriate hierarchical Bayesian formulation?

Another view, arguably closer to the mainstream view in machine learning, is that inference algorithms are better thought of by analogy to mathematical programming and operations research, with each algorithm corresponding to a solver for a class of problems with certain structure. This perspective suggests it is likely that a small set of monolithic, opaque mechanisms will be sufficient for the most important problems. In this setting, one might hope that inference mechanisms can be matched to models and problems via simple heuristics, and that the problem of automatically generating inference strategies will prove easier than query planning for databases and vastly easier than automatic programming. It remains to be seen whether this traditional view is sufficient in practice, or whether it underestimates the richness of the interaction between inference and modeling in problem specification.

Procedural and declarative interpretations. We briefly consider the relationship between procedural and declarative interpretations of Venture programs. Venture code has a direct procedural reading: it defines a probabilistic generative process that samples hypotheses, checks constraints, and invokes inference instructions that trigger specific algorithms for reconciling the hypotheses with the constraints. These instructions significantly impact runtime and change the distribution of outputs: the divergence between the true conditioned distribution over execution traces and the distribution encoded by the program may depend strongly on which inference instructions are chosen and on how they are interleaved with the incorporation of data.

Venture code also has declarative readings that are unaffected by many procedural details. One way to formalize the meaning of a Venture program is as the probability distribution over execution traces that it induces. A second approach is to ignore the details of execution and restrict attention to the joint probability distribution over the values of all PREDICTs so far. A third approach, consistent with Venture's interactive interface, is to equate the meaning of a program with the probability distributions over the values of the PREDICTs in all possible sequences of instructions that could be executed in the future. Under the second and third readings, many programs are equivalent, in that they induce the same distribution, albeit with different scaling behavior. As the amount of inference performed by each INFER instruction increases, the interpretations
coalesce, recovering a simple semantics based on sequential Bayesian reasoning. Consider what happens when the inference instructions are replaced with exact sampling, i.e. INFER (rejection default ...), or with a sufficiently large number of transitions of the generic inference operator, INFER (mh default one ...). In this case, each INFER implements a single step of sequential Bayesian reasoning, conditioning the distribution over traces on all the observes since the last INFER. The distribution after each INFER becomes equivalent to the distribution represented by a program containing all the ASSUMEs and OBSERVEs so far, in order, followed by a single INFER. The computational complexity varies based on the ordering and interleaving of INFERs and OBSERVEs, but the declarative meaning is unchanged. Although this correspondence with a declarative, fully Bayesian semantics may require an unrealistic amount of computation in applications, close approximations to it are useful tools for debugging, and its presence in the limit may prove useful for probabilistic program analysis and transformation. Venture programs thus represent distributions by combining modeling operations that sample values for expressions, constraint specification operations that build up a conditioner, and inference operations that evolve the distribution closer to the conditional distribution induced by the conditioner. Later in the paper we will see how to evaluate partial probability densities of probabilistic execution traces under these distributions, as well as the ratios and gradients of partial densities needed by a wide range of inference schemes.

Markov chain and sequential Monte Carlo architectures. The current Venture implementation maintains a single probabilistic execution trace per virtual machine instance. The trace is initialized by simulating the code in the ASSUME and OBSERVE instructions, and it is stochastically modified during inference via transition operators that leave the current conditioned distribution over traces invariant. The prior and posterior probability distributions over traces are implicit, but can be probed by repeatedly running the program and forming Monte Carlo estimates. This Markov chain architecture was chosen for simplicity. Sequential Monte Carlo architectures based on weighted collections of traces are also possible, and indeed straightforward: the number of initial traces could be specified via an INFER instruction at the beginning of the program, and forward simulation would be nearly unchanged.
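Such a weighted-trace scheme can be sketched minimally as follows. All names here are illustrative, not Venture's: each "trace" is just a coin weight drawn from a Uniform(0, 1) prior, the observe step rescales each trace's weight by the likelihood of two observed heads, and resampling draws a fresh, equally-weighted population.

```python
# Minimal weighted-trace (sequential Monte Carlo) sketch; illustrative only.
import random

def observe(traces, weights, likelihood_fn):
    # an observe rescales each trace's weight by the likelihood (probability
    # density) of the constrained random choice under that trace
    return [w * likelihood_fn(t) for t, w in zip(traces, weights)]

def resample(traces, weights, rng):
    # multinomial resampling; weights reset to 1 afterwards
    new_traces = rng.choices(traces, weights=weights, k=len(traces))
    return new_traces, [1.0] * len(traces)

rng = random.Random(1)
traces = [rng.random() for _ in range(1000)]   # coin weight ~ U(0,1) per trace
weights = [1.0] * len(traces)
weights = observe(traces, weights, lambda theta: theta * theta)  # two heads
traces, weights = resample(traces, weights, rng)
print(sum(traces) / len(traces))   # near 0.75, the Beta(3, 1) posterior mean
```

The average of the pre-resampling weights also estimates the marginal probability density of the observes, which is one of the advantages of this architecture noted below.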
OBSERVE instructions would attach weights to the traces based on the likelihood, i.e. the probability density, of the corresponding constrained random choice for each observation. PREDICT instructions would read their values out of a single, arbitrarily chosen active trace. An (infer resample ...) instruction would implement multinomial resampling and also change the active trace, ensuring that PREDICTs are always mutually consistent. Venture inference programming instructions could then be treated as rejuvenation kernels (Del Moral et al.) and would not need to be modified. This kind of sequential Monte Carlo implementation would have the advantage that the weights could be used to estimate marginal probability densities of the given observes, and another source of parallelism would be exposed. Integrating the more sophisticated coupling strategies in the framework of Whiteley et al. into the inference language could also prove fruitful.

Running separate Venture virtual machines is guaranteed to produce independent samples from the distribution represented by a given Venture program, though this distribution will typically be approximate to the desired conditional. If they do not need to quantify uncertainty precisely, Venture programmers can instead append repetitions of a sequence of INFER and PREDICT instructions to their program. Unless the INFER instructions use rejection sampling, this choice yields PREDICT outputs that are dependent, and the resulting Markov chain or sequential Monte Carlo architecture only approximates the behavior of independent runs of the program. Application constraints determine which of these approximation strategies is appropriate for a given problem.

Examples. Here we give simple illustrations of the Venture language, including standard modeling idioms as well as the use of custom inference instructions. Venture has also been used to implement applications of several advanced modeling and inference techniques; examples include generative probabilistic graphics programming (Mansinghka, Kulkarni, Perov, and Tenenbaum) and topic modeling (Blei et al.). Description of these applications is beyond the scope of this paper.

Hidden Markov Models. To represent a hidden Markov model in Venture, one can use a stochastic recursion to capture the hidden states, indexed by sequence position, and a stochastic observation procedure. Here we give a variant with continuous observations, a binary latent state, and an a priori unbounded number of observation sequences. The model's ASSUME instructions draw unique hyperparameters from gamma priors and memoize a recursive latent-state function: the state of a sequence at each index is a bernoulli draw whose weight depends on the state at the previous index.
(One subtlety is that transition operators must not change a random choice that has been constrained, as this would require changing the weight of the trace.) Further ASSUMEs define a transition procedure that, given a state, draws the next binary state from a bernoulli whose weight depends on that state, and an observation procedure that, given a state, emits a normal draw whose mean depends on the state. The data are then incorporated by interleaving OBSERVE instructions with (mh default one ...) INFER instructions, i.e. a sequentialized variant of the default inference scheme. With the last INFER statement removed, the program would still yield the stochastic transitions of several Church implementations, but with linear rather than quadratic scaling in the length of the sequence; interleaving inference with the addition of observations improves on bulk incorporation of the observations by mitigating the strong conditional dependencies of the posterior.

Another inference strategy is particle Markov chain Monte Carlo. For example, one could combine Metropolis-Hastings moves on the hyperparameters, given the latent states, with a conditional sequential Monte Carlo approximation of Gibbs on the hidden states, given the hyperparameters and observations. One implementation of this scheme, with Metropolis-Hastings transitions done on the hyperparameters and particle-based approximate Gibbs repeated several times per cycle, is:

infer (cycle ((mh hypers one ...) (pgibbs state ordered ...)) ...)

A global particle Gibbs algorithm (Wood et al.) would be expressed as follows:

infer (pgibbs default ordered ...)

Note that these transitions can be effective where pure conditional sequential Monte Carlo has trouble handling global parameters (Andrieu et al.), as the Metropolis-Hastings moves allow hyperparameter inference constrained by the latent states; in applications, however, hyperparameter inference is sometimes skipped. A representation close to a standard particle filter, which uses randomly chosen hyperparameters and yields a single latent trajectory, is:

infer (pgibbs state ordered ...)

Hierarchical Nonparametric Bayesian Models. We now show how to implement one version of a multidimensional Dirichlet process mixture of Gaussians (Rasmussen). (One difference is that a particle filter exposes the weighted particles, so that Monte Carlo expectations can be formed by rapidly obtaining a set of approximate samples; only straightforward modifications would be needed for Venture to expose a set of weighted traces instead of a single trace and thereby literally recover particle filtering.) The model's ASSUME instructions draw a concentration parameter alpha and scale hyperparameters from gamma priors, construct a CRP with concentration alpha, and memoize a clustering function that assigns each data point to a cluster via the CRP.
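The CRP component of this model can be sketched on its own; the concentration value and seed below are illustrative choices, not constants from the source. Each new customer sits at an existing table with probability proportional to its count, or opens a new table with probability proportional to alpha.

```python
# Sketch of a Chinese restaurant process sampler; alpha and seed are
# illustrative choices.
import random

def crp_assignments(n, alpha, rng):
    counts = []                            # customers per table
    assignments = []
    for i in range(n):
        r = rng.random() * (i + alpha)
        for table, c in enumerate(counts):
            if r < c:                      # join an existing table
                assignments.append(table)
                counts[table] += 1
                break
            r -= c
        else:                              # open a new table w.p. alpha/(i+alpha)
            assignments.append(len(counts))
            counts.append(1)
    return assignments

z = crp_assignments(20, alpha=1.0, rng=random.Random(0))
print(z)   # a partition of 20 items; table labels appear in first-use order
```

Because new tables are labeled in order of first use, the labels form a contiguous range, which makes the resulting partition easy to inspect and test.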
Further memoized ASSUMEs draw the per-cluster, per-dimension parameters: means from a normal distribution, and scales from a gamma with the scale hyperparameters. Another procedure maps a cluster and dimension to a sampler, a normal draw with that cluster and dimension's parameters, and a final memoized procedure generates each data point dimension-wise from its assigned cluster. The data are incorporated via OBSERVE instructions, and a default resimulation-based Metropolis-Hastings scheme is invoked with:

infer (mh default one ...)

Note that the parameters are explicitly represented, i.e. uncollapsed, rather than integrated out as is often done in practice. The default scheme can be effective on this problem, but it is also straightforward to balance the computational effort differently:

infer (cycle ((mh hypers one ...) (mh parameters one ...) (mh clustering one ...)) ...)

In each execution of the cycle, one hyperparameter transition is made, five parameter transitions are made on the parameters of a randomly chosen cluster, and five cluster reassignments are made, with the hyperparameter, parameters, and cluster assignments chosen at random. As the number of data points grows, the ratio of computational effort devoted to inference over the hyperparameters and cluster parameters, versus the cluster assignments, is higher than under the default scheme. Note that the complexity of each inference instruction, as well as the computational effort ratio, depends on the number of clusters in the current trace. It is also straightforward to use a structured particle Markov chain Monte Carlo scheme for the cluster assignments:

infer (mixture ((... (mh hypers one ...)) (... (mh parameters one ...)) (... (pgibbs clustering ordered ...))) ...)

Due to the choice of particles for the pgibbs inference strategy, this scheme closely resembles an approximation to blocked Gibbs over the indicators based on sequential initialization of the complete model. Also note that, despite the low mixing weight on the clustering scope, the mixture inference program allocates asymptotically greater computational effort to inference over the cluster assignments than the previous strategy, because the pgibbs transition operator is guaranteed to reconsider every single cluster assignment.

Inverse Interpretation. We now describe the inverse interpretation modeling idiom, which is not possible in many other languages. Recall that Venture modeling expressions are easy to represent as Venture data objects, and that Venture models can invoke the evaluation and application of arbitrary Venture stochastic procedures. These features make it straightforward to write an evaluator, perhaps better termed a simulator, for a probabilistic programming language. This application highlights both features, and it also embodies a new and potentially appealing path to solving problems of probabilistic program synthesis.
In less expressive languages, learning the structure of programs requires custom machinery that goes beyond what is provided by the language. In Venture, the inference machinery used for state estimation and causal inference can be brought to bear on problems of probabilistic program synthesis, and the dependency tracking and inference programming machinery of Venture in general can be brought to bear on the problem of approximately Bayesian learning of probabilistic programs. We first define utility procedures for manipulating references, symbols, and environments. These include [assume make_ref (lambda (x) (lambda () x))] and [assume deref (lambda (r) (r))]; a procedure that builds an initial environment as a dict mapping the quoted symbols bernoulli, normal, plus, times, and branch to the corresponding procedures; a procedure that extends an environment by pairing a dict of symbols and values onto it; and a lookup procedure that checks whether the first frame of an environment contains a symbol, and otherwise recurs on the rest of the environment. The interesting technique here is the use of make_ref and deref: closures are used to pass references around the trace. Using this idiom avoids unnecessary growth in scaffolds. Consider an execution trace in which the value of the argument to make_ref becomes a principal node of a transition. Only the nodes at which the transition uses the reference, that is, where the value is passed to deref, become resampling nodes; the value of the reference itself is unchanged, even though the value the reference refers to may change. This permits dependence tracking through the construction of complex data structures. Although we are still far from a study of the expressiveness of probabilistic languages via definitional interpretation, in the spirit of Reynolds and of Abelson and Sussman, it seems likely that probabilistic programming formulations of probabilistic program synthesis, that is, inference over a space of probabilistic programs (possibly including inference instructions) and over interpreters for those programs, will be revealing. Given this machinery, it is straightforward to write an evaluator for a simple function language with access to arbitrary Venture primitives: an application procedure that dereferences its operator and operands before dispatching, and an eval procedure that handles self-evaluating expressions, symbol lookup, quoted lambda expressions (pairing the defining environment with the body), and applications of both primitive and compound procedures. It is also possible to generate the input exprs using another Venture program, and to use general-purpose inference mechanisms to explore the hypothesis space of expressions given constraints on their values.
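The reference idiom above can be illustrated outside of Venture. This is a minimal Python sketch, not Venture's implementation: a reference is just a zero-argument closure, so a structure built out of references does not itself depend on the referenced values; only the point of dereference does.

```python
def make_ref(value):
    # The closure itself is the reference; it is a fixed object even when,
    # in a trace, the value it wraps becomes a principal node of a transition.
    return lambda: value

def deref(ref):
    return ref()

# Passing references through list constructors: only the code that finally
# calls deref depends on the wrapped values, not every intermediate node
# the references flowed through.
xs = [make_ref(1), make_ref(2), make_ref(3)]
total = sum(deref(r) for r in xs)
```

In trace terms, the list-building nodes keep the same (closure) values under a proposal, so they need not become resampling nodes.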
We call the resulting approach to probabilistic program synthesis inverse interpretation. In contrast to approaches built on Church (e.g. Liang et al.), inverse interpretation algorithms are not limited to rejection sampling, and programmers are not limited to the options Scheme provides: in Venture it is possible to associate portions of the program source, and portions of the induced program executions, with custom inference strategies. For example, here is an expression grammar that can be used with the incremental evaluator:

[assume genbinaryop (lambda () (if (flip) (quote plus) (quote times)))]
[assume genleaf (lambda () (normal ...))]
[assume genvar (lambda () ...)]
[assume genexpr (lambda () (if (flip) (genleaf) (if (flip) (genvar) (list (genbinaryop) (genexpr) (genexpr))))]
[assume noise (gamma ...)]
[assume expr (genexpr)]

together with a memoized procedure that evaluates expr in an environment binding its argument, a procedure that adds normal noise with scale noise to the result, and observes of noisy outputs. The indirection via references substantially improves the asymptotic scaling of programs like this one: given a production-rule change to the grammar, only those portions of the execution of the interpreted program that depend on the changed source code are resimulated. A naive implementation of the evaluator would not have this property, and its scaling with the size of the symbolic expressions would suffer. Small programs like these require multiple advances before this approach is practical: overall system efficiency improvements are necessary, and inverse interpretation may also benefit from additional inference operators, such as Hamiltonian Monte Carlo for the continuous parameters, and from richer expression priors over structures. For example, a prior under which resimulation recovers the structure-search moves of Duvenaud et al. may be expressible by separately generating the symbolic expression and the contents of the environment in which it is evaluated. Longer term, it may be fruitful to explore formalizations of the knowledge taught to programmers, using probabilistic programming.

Stochastic procedures

Random choices in Venture programs arise due to the invocation of stochastic procedures (SPs). Stochastic procedures accept input arguments that are values in Venture and sample output values given those inputs. Venture includes a stochastic procedure library, which includes SPs that construct other SPs; the special form lambda gets desugared into such a constructor. Stochastic procedures can also be added via extensions to Venture; these provide a mechanism for the incremental optimization of Venture programs.
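The interpreted function language from the inverse-interpretation discussion can be sketched deterministically. This is a toy Python evaluator under assumed conventions (environments as frame/parent pairs, lambdas as tagged tuples); it is illustrative of the eval/apply structure, not of Venture's actual evaluator.

```python
def venv_lookup(sym, env):
    # env is a (frame_dict, parent) pair, mirroring the first/rest
    # environment representation described in the text.
    frame, parent = env
    if sym in frame:
        return frame[sym]
    if parent is None:
        raise KeyError(sym)
    return venv_lookup(sym, parent)

def vapply(op, operands):
    if callable(op):                      # primitive procedure
        return op(*operands)
    _, params, body, closure_env = op     # ('lambda', params, body, env)
    return veval(body, (dict(zip(params, operands)), closure_env))

def veval(expr, env):
    if isinstance(expr, (int, float)):    # self-evaluating
        return expr
    if isinstance(expr, str):             # symbol lookup
        return venv_lookup(expr, env)
    if expr[0] == 'quote':
        return expr[1]
    if expr[0] == 'lambda':               # close over the environment
        return ('lambda', expr[1], expr[2], env)
    return vapply(veval(expr[0], env), [veval(e, env) for e in expr[1:]])

global_env = ({'plus': lambda a, b: a + b,
               'times': lambda a, b: a * b}, None)
result = veval(['plus', 1, ['times', 2, 3]], global_env)
```

In Venture, the analogous evaluator additionally threads references through the trace, so that changes to the source expression only force resimulation of the dependent parts of the interpreted execution.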
Model fragments for which Venture delivers inadequate performance can be migrated to native inference code that interoperates with the enclosing Venture program.

Expressiveness and extensibility. Many typical random variables, such as draws from a Bernoulli or a Gaussian distribution, are straightforward to represent computationally. One common approach is to use a pair of functions: a simulator that maps the input space (plus values from a stream of random bits) to the output space, and a marginal density that maps input/output pairs to densities. This representation corresponds to the elementary random procedures supported by early Church implementations; repeated invocations of such procedures correspond to i.i.d. sequences of random variables whose densities are known. This simple and intuitive interface can naturally handle many useful classes of random objects, and in fact many objects that are easy to express as compound procedures in Church and Venture can be made to fit this form. The stochastic procedures in Venture, however, support a broader class of random objects. First, stochastic procedures such as mem (including stochastic memoization) and the apply and map procedures may accept procedures as arguments, apply procedures internally, and produce procedures as return values. Stochastic procedures in Venture are equipped with a simple mechanism for handling these cases; in fact, it turns out that all structural changes to execution traces, including those arising from the execution of constructs that affect control flow, are handled by this mechanism. This simplifies the development of inference algorithms and permits users to extend Venture by adding new primitives that affect the flow of control. Second, there are stochastic procedures whose applications are exchangeably coupled. Examples include collapsed representations of conjugate models from Bayesian statistics, combinatorial objects from Bayesian nonparametrics such as Chinese restaurant processes, and probabilistic sequence models such as HMMs whose hidden state sequences can be efficiently marginalized out. Support for primitives whose applications are coupled is important for recovering the efficiency of manually optimized samplers, which frequently make use of collapsed representations, whereas the exchangeable primitives in Church are thunks, which prohibits collapsing many important models such as HMMs. Venture supports primitives whose applications are partially exchangeable across different sets of arguments; the formal requirement is that the cumulative log
probability density of any sequence of input/output pairs is invariant under permutation. Third, there are stochastic procedures that lack tractable marginal densities. Complex stochastic simulations can be incorporated into Venture even if the marginal probability density of the outputs of the simulation given its inputs cannot be efficiently calculated. Models from the approximate Bayesian computation (ABC) literature, where priors are defined as the outcome of forward simulation code, are thus naturally supported by Venture. Additionally, a range of doubly intractable inference problems, including applications of Venture to reasoning about the behavior of approximately Bayesian reasoning agents, can be included using this mechanism. Fourth, there are stochastic procedures with external latent variables that are invisible to Venture. There will always be models that admit specialized inference strategies whose efficiency cannot be recovered by performing generic inference on their execution traces. One of the principal design decisions in Venture is to allow such strategies to be exploited whenever possible, by supporting a broad class of stochastic procedures with custom inference over internal latent variables hidden from the rest of Venture. The stochastic procedure interface thus serves as a flexible bridge between Venture and foreign inference code, analogous to the role foreign function interfaces (FFIs) play in traditional programming languages.

Primitive stochastic procedures. Informally, a primitive stochastic procedure (PSP) is an object that can simulate from a family of distributions indexed by its arguments. In addition to simulating, PSPs may be able to report the logDensity of an output given an input, and may incorporate and unincorporate information about the samples that have been drawn from them, using mutation, as in the case of a conjugate prior tracking its sufficient statistics. For this mutation to be admissible, the draws from the PSP must remain partially exchangeable, as discussed above. A PSP may also have custom proposal kernels, in which case it must be able to return its contribution to the acceptance rate. For example, a PSP that simulates Gaussian random variables may provide a drift kernel that proposes small steps around the previous location, rather than resampling from the prior distribution. Primitive stochastic procedures are parameterized by the following properties and behaviors:

isStochastic: does the PSP consume randomness when invoked?

canAbsorbArgumentChanges: can the PSP absorb changes to its input arguments? If true, the PSP must correctly implement logDensity, described below.
childrenCanAbsorbAtApplications: does the PSP return SPs that implement absorbing at applications? This optimization is needed to integrate optimized expressions for the log marginal probability of the sufficient statistics in standard conjugate models.

value = simulate(args): samples a value for an application of the PSP given the arguments args.

logp = logDensity(value, args): an optional procedure that evaluates the log probability of the output value given the input arguments.

incorporate(aux, value, args): after simulate(args) has returned value, the value is incorporated into the auxiliary storage aux associated with the containing SP. incorporate is used to implement SPs whose applications are exchangeably coupled. It is always sufficient to store and update the full set of values returned for the observed args, but often counts or other sufficient statistics are all that is necessary.

unincorporate(aux, value, args): removes value from the auxiliary storage, restoring it to a state consistent with the other values that have been incorporated but not unincorporated. This is done whenever an application is unevaluated.

The stochastic procedure interface

The stochastic procedure interface specifies the contract that Venture primitives must satisfy to preserve the coherence of Venture's inference mechanisms. It also serves as the vehicle by which external inference systems can be integrated into Venture. The interface preserves the ability of primitives to dynamically create and destroy internal latent variables that are hidden from Venture, and to perform custom inference over those latent variables.

Definition (stochastic procedure). A stochastic procedure is a pair of request and output PSPs, along with a latent variable simulator. Each density is implicitly defined with respect to an argument-independent choice of dominating measure; for PSPs guaranteed to produce discrete outputs, the measure is assumed to be the counting measure, so logDensity is equivalent to a log probability mass function. (A careful measure-theoretic treatment of Venture is left for future work.) The request PSP returns a list of tuples (addr, expr, env) that represent expressions whose values must be available to the output PSP before it can start simulation; we refer to requests of this form as exposed simulation requests (ESRs). It also returns a list of opaque tokens, interpreted as latent variables that the output PSP needs in order to simulate its output along with the values of the exposed simulation requests; we refer to requests of this form as latent simulation requests (LSRs). The output PSP simulates the final output of the procedure, conditioned on the inputs, the results of the exposed simulation requests, and the results of the latent simulation requests. The latent variable simulator responds to LSRs by simulating the latent variables that have been requested.
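As a toy illustration of the PSP interface for exchangeably coupled applications, here is a Python sketch of a collapsed beta-Bernoulli PSP. It is illustrative only: names follow the interface above, sufficient statistics (counts) live in the auxiliary store, and logDensity is the posterior-predictive probability, so the cumulative log probability of a sequence of draws is permutation invariant.

```python
import math

class BetaBernoulliPSP:
    """Sketch of a PSP whose applications are exchangeably coupled via
    counts in an auxiliary store. The alpha/beta hyperparameters and the
    method names are assumptions for illustration."""
    def __init__(self, alpha, beta):
        self.alpha, self.beta = alpha, beta

    def make_aux(self):
        return {'trues': 0, 'falses': 0}

    def log_density(self, value, aux):
        # Posterior-predictive probability given the incorporated draws.
        n = aux['trues'] + aux['falses']
        p_true = (aux['trues'] + self.alpha) / (n + self.alpha + self.beta)
        return math.log(p_true if value else 1.0 - p_true)

    def incorporate(self, value, aux):
        aux['trues' if value else 'falses'] += 1

    def unincorporate(self, value, aux):
        aux['trues' if value else 'falses'] -= 1

def seq_logp(psp, values):
    # Cumulative log density of a sequence of draws, incorporating as we go.
    aux, total = psp.make_aux(), 0.0
    for v in values:
        total += psp.log_density(v, aux)
        psp.incorporate(v, aux)
    return total

psp = BetaBernoulliPSP(1.0, 1.0)
aux = psp.make_aux()
for v in [True, True, False]:
    psp.incorporate(v, aux)
lp = psp.log_density(True, aux)   # predictive P(True) = (2+1)/(3+2) = 0.6
```

The permutation invariance of seq_logp is exactly the partial exchangeability requirement stated above.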
Exposed simulation requests. We want procedures to be able to pass (expr, env) pairs to Venture for evaluation and to make use of the results. A procedure may also have multiple applications that make use of a shared evaluation, as with mem; in such cases the procedure must take care to request the same addr each time, so that the (expr, env) is evaluated the first time and reused thereafter. Specifically, an ESR request (addr, expr, env) is handled by Venture as follows. First, Venture checks the requesting SP's namespace to see if there is already an entry for the address addr. If not, Venture evaluates expr in env and adds a mapping from addr to the root of the resulting evaluation tree; if the namespace does contain addr, Venture looks up the root. Either way, Venture then wires the root in as an extra argument to the output node.

Latent simulation requests and the foreign inference interface. Procedures may want to simulate, and perform inference over, latent variables that are hidden from Venture. Consider an optimized implementation of a hidden Markov model, integrated into a Venture program of the following form:

[assume hmm (make_hmm ...)]
[observe (hmm ...) ...]
[observe (hmm ...) ...]
[observe (hmm ...) ...]
[infer (default one ...)]
[predict (hmm ...)]
[predict (hmm ...)]

The constructor make_hmm generates an uncollapsed hidden Markov model, generating the rows of the transition and observation matrices at random. We make the assumption that make_hmm is primitive, although it would be straightforward to implement make_hmm as a compound procedure in Venture using variations on the examples presented earlier. The constructor returns a procedure, bound to the symbol hmm, that permits the observations of the process to be queried via (hmm index). The program adds a sequence fragment of length three, then requests predictions of the next observation in the sequence as well as of the initial observation of an entirely new sequence. This probabilistic program captures a common pattern: integrating Venture with a foreign probabilistic model fragment that is dynamically queried and contains latent variables hidden from Venture. It is useful to partition the random choices in this program as follows. The transition and observation matrices of the hmm could be viewed as part of the value of hmm, and are therefore returned by the output PSP of make_hmm. The observations are managed by Venture, as applications of hmm. The hidden states, however, are fully latent from the standpoint of Venture, yet they need to be created, updated, and destroyed as invocations of hmm are
created and destroyed, or as the arguments to make_hmm change. Venture makes it possible for procedures to instantiate latent variables only as necessary to simulate a given program. The mechanism is similar to exposed simulation requests, except that in this case the requests, which we call latent simulation requests (LSRs), are opaque to Venture; Venture simply calls the appropriate methods at the appropriate times to ensure that the bookkeeping is handled correctly. This framework is straightforward to apply to make_hmm. The stochastic procedure it returns is one whose request PSP, when the hmm is queried for the observation at some time, returns that time as an LSR. Venture then tells the hmm to simulate the LSR, and the hmm either does nothing, if the requested state already exists in its internal store of simulations, or else continues simulating from its current position. The output PSP then samples an observation conditioned on the latent state at that time. If the application is ever unevaluated, Venture will tell the hmm to detach the LSR, which causes the hmm to place the latents that are no longer necessary to simulate the program into a latentDB that it returns to Venture. Later, Venture may tell the hmm to restore the latents from the latentDB, for example if a proposal is rejected and the starting trace must be restored. The main reason to encapsulate latent variables in this way, as opposed to requesting them via ESRs, is to be able to use optimized implementations of inference over their values, potentially utilizing external inference methods. For example, an uncollapsed HMM can implement an efficient block sample of its latent variables conditioned on the observations. Such procedures can be integrated into Venture by defining an arbitrary ergodic kernel (AEKernel) that Venture may call during inference. Note that this mechanism may also be used by SPs that make no latent simulation requests, with all latent variables instantiated upon creation; the hmm, being uncollapsed, could implement this functionality that way as well. Stochastic procedures must implement three procedures, in addition to the procedures needed for the ESR requestor, LSR requestor, and output PSPs:

simulateLatents(aux, lsr, shouldRestore, latentDB): simulate the latents corresponding to the request lsr, using the tokens in latentDB, indexed by lsr, to find the previous values if shouldRestore is true.

detachLatents(aux, lsr, latentDB): signal that the latents corresponding to the request lsr are no longer needed, and store enough information in latentDB that their values can be recovered later.

AEInfer(aux): trigger the external implementation to perform inference over its latent variables, using the contents of aux.

It is often convenient for simulateLatents to store the latent variables in aux, and for incorporate to store the return values of applications in aux, along with the arguments that produced them.
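The simulate/detach/restore life cycle of latent simulation requests can be sketched with a toy lazily simulated latent chain. This is an assumed, simplified Python model of the pattern, not Venture's interface: latents are extended on demand, detached latents are stashed in a latentDB, and restoring replays them so a rejected proposal recovers the original trace.

```python
import random

class LazyLatentChain:
    """Toy latent store: a binary-state chain simulated lazily up to the
    largest index requested. The transition model (flip with prob. 0.1)
    is illustrative."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.states = []                  # aux: latents hidden from the host

    def simulate_latents(self, t, should_restore=False, latent_db=None):
        while len(self.states) <= t:
            i = len(self.states)
            if should_restore and latent_db is not None and i in latent_db:
                self.states.append(latent_db[i])   # replay a detached latent
            else:
                prev = self.states[-1] if self.states else 0
                flip = self.rng.random() < 0.1
                self.states.append(1 - prev if flip else prev)
        return self.states[t]

    def detach_latents(self, keep_up_to, latent_db):
        # Move latents beyond keep_up_to into latent_db for possible restore.
        for i in range(keep_up_to + 1, len(self.states)):
            latent_db[i] = self.states[i]
        del self.states[keep_up_to + 1:]

chain = LazyLatentChain(seed=1)
x3 = chain.simulate_latents(3)            # lazily simulate states 0..3
db = {}
chain.detach_latents(1, db)               # states 2,3 move into the latentDB
restored = chain.simulate_latents(3, should_restore=True, latent_db=db)
```

Detaching and then restoring yields the same latent values, which is what lets a rejected transition roll back without resimulating the chain.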
Examples of the use of the stochastic procedure interface can be found in the current releases of the Venture system.

Optimizations for higher-order procedures. Venture provides a special mechanism that allows certain SPs to exploit the ability to quickly compute the logDensity of many applications at once. Consider the following program:

[assume alpha (gamma ...)]
[assume coin (make_beta_bernoulli alpha alpha)]
[observe (coin) false]
[observe (coin) true]
(repeat many times)
[observe (coin) false]
[infer ...]

An efficient inference scheme would keep track of the counts of the observations and could then perform rapid proposals to alpha by exploiting conjugacy. On the other hand, a naive generic inference scheme might visit up to one billion observation nodes just to compute the acceptance ratio for a single proposal. We achieve the efficient inference scheme by letting make_beta_bernoulli be responsible for tracking the sufficient statistics of the applications of the collapsed coin, and for evaluating the log density of all of its applications as a block. Stochastic procedures that return stochastic procedures implementing this optimization are said to absorb at applications, often abbreviated AAA; we discuss techniques for implementing this mechanism later in this section.

Auxiliary state. SPs are stateless, but each may be associated with an auxiliary store, called its spaux, which carries its mutable information. spauxs have several uses. If an SP makes exposed simulation requests, Venture uses the spaux to store the mappings from the addresses of the ESRs to the nodes that store the results of the simulations. If an SP keeps track of sample counts or other sufficient statistics, e.g. a collapsed beta-Bernoulli SP stores the number of trues and falses, it stores this information in its spaux. If an SP makes latent simulation requests, the latent variables it simulates to respond to those requests are stored in its spaux. SPs may also optionally store part of their value directly in the spaux when necessary, so as to easily store the values of the latents and the outputs separately.

Probabilistic execution traces

Bayesian networks decompose probabilistic models into nodes, random variables equipped with conditional probability tables, and edges, their conditional dependencies. A Bayesian network can be interpreted as a probabilistic program via the ancestral simulation algorithm (Frey): the network is traversed in an order consistent with its topology, and each random variable is generated given its immediate parents. A Bayesian network can also be viewed as expressing a function for evaluating the joint probability density over its nodes, in terms of a factorization given by the graph structure. Here we describe
probabilistic execution traces (PETs), which serve analogous functions for Venture programs and address the additional complications that arise in the presence of higher-order probabilistic procedures. We also describe the recursive procedures for constructing and destroying probabilistic execution traces as Venture modeling language expressions are evaluated and unevaluated.

Definition (probabilistic execution trace). A probabilistic execution trace consists of a directed graphical model representing the dependencies in the current execution, along with the stateful auxiliary data of each stochastic procedure in the Venture program and metadata for existential dependencies and exchangeable coupling. We typically identify executions with their PETs and denote both by the same symbols. PETs contain the following nodes: one constant node for the global environment; one constant node for every value bound in the global environment, which includes all built-in SPs; one constant node for every call to eval whose expression is either self-evaluating or quoted; one lookup node for every call to eval that triggers a symbol lookup; and one request node and one output node for every call to eval that triggers an application. We refer to the operator nodes, operand nodes, request nodes, and output nodes of an application, though note that operator and operand are roles rather than special node types. PETs also contain the following edges: one lookup edge to every lookup node from the node it is looking up; one operator edge to every request node from its operator node, and one operator edge to every output node from its operator node; one operand edge to every request node from each of its operand nodes, and one operand edge to every output node from each of its operand nodes; one requester edge to every output node from its corresponding request node; and one ESR edge from the root node of every family that an application requests to the output node of the requesting application. Note that every node represents a random variable whose value may change over the course of an execution. A PET also includes the spaux of every SP that needs one; unlike the values of nodes, spauxs may be mutated during execution, for example to increment the number of trues. We divide traces into families: one Venture family for every assume, predict, or observe directive, and one family for every unique ESR requested during forward simulation, giving a uniform treatment of conditional simulation. Executions of programs satisfy the following property: the structure of every family is a function of its expression and does not depend on the random choices made while evaluating that expression; the only part of the topology of the graph that can change is which ESRs are requested.
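The node and edge inventory in the definition above can be made concrete with a schematic Python sketch. The class and field names are assumptions chosen to mirror the definition; a real trace carries much more bookkeeping.

```python
from dataclasses import dataclass, field

# Minimal stand-ins for the four PET node kinds from the definition.
@dataclass
class ConstantNode:
    value: object

@dataclass
class LookupNode:
    source: object            # lookup edge: the node being looked up

@dataclass
class RequestNode:
    operator: object          # operator edge
    operands: list            # operand edges

@dataclass
class OutputNode:
    operator: object          # operator edge
    operands: list            # operand edges
    requester: object         # requester edge to the request node
    esr_parents: list = field(default_factory=list)  # ESR edges

# Trace fragment for an application like (plus x 1):
# one lookup node for x, plus a request/output node pair.
op = ConstantNode('plus')
x = ConstantNode(4)
lk = LookupNode(x)
one = ConstantNode(1)
req = RequestNode(op, [lk, one])
out = OutputNode(op, [lk, one], req)
```

Every application contributes exactly one request/output pair wired by a requester edge, which is the invariant the inference procedures later rely on.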
Exchangeable coupling. Given the exchangeability assumptions on SPs and a generalized de Finetti result (Orbanz and Roy), we can conclude that, in addition to the observed random variables explicitly represented in the PET, there is one latent random variable for every SP, corresponding to its unobserved de Finetti measure, and one latent variable for every set of arguments args on which the SP is applied, with edges connecting these variables to every node corresponding to an application of the SP to args. However, these variables are marginalized out by way of mutation of the spaux, effectively introducing a hyperedge that indirectly couples all of the application nodes. We have chosen not to represent these dependencies in the directed graphical structure of the PET for the following reasons. First, to integrate out the latent variables while encoding the dependencies with directed edges, we would have to fix a specific ordering of the applications. Second, for the orderings we are interested in, a graph that combines the two types of directed edges would be cyclic, which would complicate future efforts to develop a causal semantics for PETs and Venture programs.

Existential dependence. Contingent evaluation is mediated by families: the nodes of a family requested as part of an ESR are existentially dependent on the requesting application, in the sense that at some point during the simulation of the PET the requested nodes of the family would not have been computed. Existential dependence is handled by a garbage collection semantics, whereby a family that is no longer part of the PET, because it is no longer requested, has its selected nodes unevaluated.

Examples. We briefly give example PETs for simple Venture programs.

Trick coin. A variant of our running example defines the model used for inferences about whether a coin is tricky, where a trick coin is allowed to have an arbitrary weight. The version we use includes one observation, that a single flip of the coin came up heads. To keep the PET as simple as possible, we give the program in a form that has already been desugared: an application (if predicate consequent alternate) has been replaced with (branch predicate (quote consequent) (quote alternate)), where branch is an ordinary stochastic procedure.
Figure: Two different PET structures corresponding to the trick coin program: an execution trace in which the coin is fair, and an execution trace in which the coin is tricky, containing additional nodes that depend existentially on the coin flip.

[assume ... (bernoulli ...)]
[assume weight (branch ... (quote (beta ...)) (quote ...))]
[observe (bernoulli weight) true]

The figure shows the two PET structures that arise when simulating this program, along with arbitrarily chosen values.

Simple Bayesian network. The figure below shows the PET for the following program, which implements a simple Bayesian network:

[assume rain (bernoulli ...)]
[assume sprinkler (bernoulli (branch rain ... ...))]
[assume grasswet (bernoulli (branch rain (branch sprinkler ... ...) (branch sprinkler ... ...)))]
[observe grasswet true]

Note that each Venture family in the PET corresponds to a node in the Bayesian network, and that a coarsened version of the PET contains the same conditional dependence and independence information as the Bayesian network would.

Stochastic memoization. To illustrate, consider a program that exhibits stochastic memoization: it constructs a stochastically memoized procedure with [assume ... (pymem bernoulli ...)] and applies it three times. Unlike deterministic memoization, the request PSP of a stochastically memoized procedure is stochastic: it sometimes returns a previously sampled value and sometimes samples a fresh one, with the random choices following a Pitman-Yor process. The figure below shows the PET corresponding to a typical simulation; note the overlapping requests.
Figure: PET corresponding to the Bayesian network program. The three numbered families each correspond to a node in the Bayesian network, capturing the execution history needed to simulate that node given its parents.

Figure: PET corresponding to an execution of the program with stochastic memoization, in which a stochastically memoized bernoulli is applied twice, based on the value requests arising from the three invocations [predict ...] [predict ...] [predict ...].

Constructing PETs via forward simulation

Venture's primary inference strategies require an execution with positive probability before they can even begin. Therefore, the first thing Venture does is simply evaluate the program, interpreting it in a fairly standard way, except that conditional evaluation is handled uniformly via the ESR machinery presented above. For simplicity, we elide details related to inference scoping.

Pseudocode for eval and apply. Evaluation is generally similar to a pure Scheme, with some noteworthy differences. First, evaluation creates nodes for every recursive call and connects them together to form the directed graph structure of the probabilistic execution trace. Second, there are no distinctions between primitive procedures and compound procedures; we call the evaluation procedure evalFamily to emphasize the family block structure of PETs. Third, a scaffold and a database of random choices are threaded through the recursions to support their use in inference, including restoring trace fragments when a transition is rejected and reusing random choices. Note that we use the environment model evaluator of Abelson and Sussman even though the underlying language is pure, with the PET storing the environment structure of the recursive invocations of eval. In outline:

evalFamily(trace, exp, env, scaffold, db):
  if isSelfEvaluating(exp): return (0, createConstantNode(exp))
  elif isQuoted(exp): return (0, createConstantNode(textOfQuotation(exp)))
  elif isVariable(exp):
    sourceNode = lookup(env, exp)
    regenerate(trace, sourceNode, scaffold, false, db)
    return (0, createLookupNode(sourceNode))
  else:
    weight = 0
    (w, operatorNode) = evalFamily(trace, operator(exp), env, scaffold, db); weight = weight + w
    for operand in operands(exp):
      (w, operandNode) = evalFamily(trace, operand, env, scaffold, db); weight = weight + w
    (requestNode, outputNode) = createApplicationNodes(operatorNode, operandNodes, env)
    weight = weight + apply(trace, requestNode, outputNode, scaffold, false, db)
    return (weight, outputNode)

The call to regenerate ensures that evalFamily is only called in contexts where the PET is traversed in an order compatible with the dependence structure of the program; we sometimes refer to such orders as evaluation orders. From the standpoint of forward simulation, however, regenerate can be safely ignored.

apply(trace, requestNode, outputNode, scaffold, shouldRestore, db):
  weight = applyPSP(trace, requestNode, scaffold, shouldRestore, db)
  weight = weight + evalRequests(trace, requestNode, scaffold, shouldRestore, db)
  weight = weight + regenerateESRParents(trace, outputNode, scaffold, shouldRestore, db)
  weight = weight + applyPSP(trace, outputNode, scaffold, shouldRestore, db)
  return weight

Stochastic procedures are allowed to request evaluations; this enables higher-order procedures as well as the encapsulation of custom control flow constructs. For example, compound procedures evaluate their bodies in their environment extended with their formal parameters bound to the argument values. To prevent this flexibility from introducing arbitrary dependencies between expressions and environments, the requests are restricted to those constructible from the arguments of the procedure and the values the procedure was constructed with.

evalRequests(trace, node, scaffold, shouldRestore, db):
  weight = 0
  for esr (addr, exp, env) in requestsAt(trace, node):
    if shouldRestore and db has the family for esr:
      esrParent = restoreFamily(trace, db, esr, scaffold)
    elif spauxAt(trace, node) does not contain addr:
      (w, esrParent) = evalFamily(trace, exp, env, scaffold, db); weight = weight + w
      register esrParent under addr in the spaux
    else:
      esrParent = the root registered under addr
      weight = weight + regenerate(trace, esrParent, scaffold, shouldRestore, db)
    add an ESR edge from esrParent to the output node
  for lsr in latentRequestsAt(trace, node):
    weight = weight + simulateLatents(spaux, lsr, shouldRestore, latentDB from db)
  return weight

Stochastic procedures are also allowed to perform opaque operations to produce outputs given the values of their arguments and requested evaluations; note that this may result in random choices being added to the record maintained by the trace.

applyPSP(trace, node, scaffold, shouldRestore, db):
  psp, args = pspAt(trace, node), argsAt(trace, node)
  oldValue = db value for node, if any
  (determine the new value)
  if shouldRestore: newValue = oldValue
  elif node has a kernel in scaffold: newValue = kernel.simulate(trace, oldValue, args)
  else: newValue = psp.simulate(args)
  (determine the weight)
  if node has a kernel in scaffold: weight = kernel.weight(trace, newValue, oldValue, args)
  else: weight = 0
  setValue(node, newValue)
  psp.incorporate(newValue, args)
  if isSP(newValue): processMadeSP(trace, node)
  return weight

The functionality that supports linking a new stochastic procedure into the trace, and registering any customizations it implements with the trace so that inference transitions can make use of them, is:

processMadeSP(trace, node):
  if the made SP is AAA, register node with the trace as an AAA maker
  replace the value at node with a reference to the SP and install its spaux

Undoing the simulation of PET fragments. Venture also includes, and requires, unevaluation procedures that are dual to the evaluation procedures described earlier. PET fragments need to be removed from the trace in two situations. First, forget instructions can be triggered, whereby the expression corresponding to a directive is removed. Second, inference can change the values of certain request PSPs, which may cause families to no longer be requested. In the first case, the random choices are permanently removed; in the second case, the random choices may need to be restored if the proposal is rejected.

Pseudocode for uneval and unapply. When a trace fragment is unevaluated, we must visit all the application nodes in the fragment, so that the PSPs have a chance to unincorporate their input/output pairs and perform whatever other operations are needed. These procedures are essentially inverses of the simulation procedures described above, and are designed to visit nodes in the reverse order of evaluation, to ensure compatibility with exchangeable coupling. We give pseudocode for these operations, eliding details related to garbage
treatment component model parameters algorithm neal theory delay garbage collection multiple copies trace fragment maintained branch point could support adaptation posterior detailed empirical evaluation strategies pending comprehensive benchmark suite venture well implementation napply psp trace node scaffold psp args node node node issp node teardown ade trace node node oldvalue node oldvalue args node weight node trace oldvalue args else weight node oldvalue node return weight handling requests unevaluation involves two additional subtleties first latent random choices must handled appropriately second requests must unevaluated application refers neval equests trace node scaffold spaux node node weight request node node lsr reversed weight weight spaux lsr exp env reversed esrparent esrparent node esrparent weight weight neval family trace esrparent scaffold return weight finally destroyed output psp maker application unapplied destroyed auxiliary storage garbage collected unapplication maker must happen applications made unapplied due constraint unevaluation visits nodes reverse order evaluation regeneration information auxiliary storage needs preserved latent eardown ade trace node isaaa node node node isaaa node node enforcing constraints via constrain unconstrain unfortunately observations may positive probability given program execution venture may happen sample right data noise added data generating expression ensure positive probability may take long time venture incorporate observed data hand general problem finding program execution output positive probability intractable general encode sat without even making use contingent evaluation special primitives venture designed attempt invert programs venture implements middle ground designed sufficiently stochastic probabilistic programs mind particular however order support common cases bayesian statistics support inverting simple kind determinism introduce method constrain directive value recursively walks 
backwards along edges find outermost application different kind psp constrained value positive probability psp application constrain value done otherwise unevaluate entire program evaluate hopes execution constrainable simple versions constrain unconstrain implemented follows onstrain trace node value return onstrain trace value elseif node return onstrain trace node value else psp args node node node args weight value args node value value args node return weight nconstrain trace node return nconstrain trace elseif node return nconstrain trace node else oldvalue node psp args node node node oldvalue args weight oldvalue args oldvalue args return weight instructive consider following program assume bernoulli assume bernoulli assume bernoulli observe xor true xor happens true chance constrain adjust random choices pet sampled already drawn true conditioned distribution however probability decreases rapidly xor deterministic repeated rejection may appear option avoid problem frequently restrict observes whose expressions form applications fixed stochastic procedure always successfully initialize begin inference venture designed sufficiently stochastic probabilistic programs mind see freer rationale also note ensure ergodicity periodic wholetrace proposals attempt restart prior via infer directive sufficient although kind independence sampler unlikely efficient note without additional restrictions observations presence constrain could introduce significant complexity constraining node used multiple places would necessarily give execution positive probability since probability children changed resolve simplify algorithms make following strong assumption venture programs venture program valid must guarantee node gets constrained inference opposed call observe every node along every outgoing directed path must either deterministic output psp psp lookup node moreover observed nodes path must agree observed value addition code given implementation constrain also propagates 
Partitioning traces into scaffolds for scalable incremental inference

Scalable inference algorithms for Bayesian networks take advantage of the network's representation of conditional dependence. As we have seen, probabilistic execution traces make analogous scalability gains possible for probabilistic programs written in languages such as Venture, by representing conditional dependences as well as existential dependencies and exchangeable coupling. However, PETs and the SPI also introduce complexities that are avoided in the setting of Bayesian networks. For example, in a Bayesian network the random variables that could possibly directly depend on a given random variable are easy to identify: they are the immediate children of its node, according to the direct graph structure of the network. In a PET, when the value of a random choice changes, its children may change as well: some may cease to exist, and others may come into existence. Additionally, in a Bayesian network all nodes are equipped with conditional probability densities, while SPs that lack conditional density functions are permitted in Venture, so the joint densities of chains of random choices in Venture may not be available, let alone the conditional density of a choice given its immediate descendants. Here we describe scaffolds, the mechanism used by Venture to handle these complexities. Scaffolds carve out coherent subproblems for inference in the context of the global inference problem, analogously to the inference subproblems in a Bayesian network that arise from conditioning on a subset of nodes and querying the nodes contained within.

Motivation and notation. Consider an execution, and a set of nodes for which we wish to propose new values as a block. If we propose new values to these nodes only, we may end up with a trace that has probability zero. For example, in a program of the form [assume x (normal ...)] [predict (normal x ...)], if we propose a change to the value of x, we must propagate that change to the downstream application of normal and have it report the probability of sampling its output given its new inputs. Similarly, in a program of the form [assume x (normal ...)] [predict ...] in which a proposed new value flips a sign that determines which evaluation the requester requests, we must report the probability of the old request given the new inputs. SPs that cannot report likelihoods introduce a similar problem: in a program of the form [assume x (normal ...)] [predict ...], if we propose a new value for x and the downstream SP cannot report a logDensity, we cannot compute an acceptance ratio, since we have no way to account for the probability of the old value given its new arguments. In all three cases the problem could be solved by also proposing new values to all of the nodes downstream. However, we do not want
resimulate rest program make single proposal want find middleground rejecting running entire program default propose new values downstream nodes reach applications stochastic psps definitely compute logdensity original output given new values inputs say nodes absorb flow change call absorbing nodes note case statement may switch one branch another nodes old branch may longer exist proposed trace nodes may come existence simplify notation explicitly condition parents random variable derivations example bayesian network write although would unacceptable case bayesian networks marginalize condition arbitrarily discussion probabilistic programming ever compute probabilities nodes given parents notation problematic partitioning traces defining scaffolds let execution let denote set nodes wish propose new values block call principal nodes proposal let denote proposed trace introduce following definitions nodes definitely still exist whose values might change includes well nodes want absorb deterministic nodes unable absorb applications sps note symmetry brush brush nodes may longer requested resampled includes nodes branches may abandoned requests whose operator may change regenerated subset nodes may either change value longer exist conversely consists precisely set new values proposing children equivalently absorbing nodes absorbing differences logdensity output given new value logdensity output old value appears acceptance ratio torus nodes guaranteed still exist whose values guaranteed change construction torus torus torus parents nodes excluding nodes sets nodes whose values would required simulate torus calculate new log densities definitely exist whose values change set nodes would never referenced regenerating figure scaffold partitions trace five groups gold nodes definitely still exist proposal trace whose values may change drg blue nodes definitely compute likelihoods absorbing green nodes may longer exist brush dark grey nodes parents nodes three groups 
parents light grey nodes need never visited ignored note future graphs distinguish last two groups note may since may lookup different symbols different environments may also request different simulations definitions give following partitions refer pair scaffold set principal nodes see scaffold fundamental entity inference methods consider induces set scaffold executions reached resimulating along scaffold consulting parents needed moreover scaffold scaffold yield set factorization scaffold brush constructing scaffold given trace set principal nodes construct scaffold follows first walk downstream principal nodes every path either terminates reaches application stochastic psp certain compute logdensity along mark nodes appropriate however nodes marked process may actually brush therefore second step recursively disable requests may change value may longer exist mark brush every node figure scaffold construction principal node shown red resampling nodes shown gold absorbing nodes shown blue brush shown green construct scaffold venture first walks downstream red principal node identifying nodes whose values could possibly change stopping nodes guaranteed able absorb change albeit perhaps probability density venture identifies brush nodes may longer exist depending values chosen nodes resampled using separate recursion brush identified definite regeneration graph gold border blue straightforward identify see main text additional details requested disabled requests remove every node brush figure gives overview process figure illustrates need removing brush source trace making proposal pseudocode constructing scaffolds overall construction procedure scaffolds involves several passes linear size scaffold onstruct caffold trace principalnodes kernelinfo cdrg cabsorbing caaa ind andidate caffold trace principalnodes brush ind rush trace cdrg cabsorbing caaa drg absorbing aaa removebrush cdrg cabsorbing caaa brush border ind order drg absorbing aaa regencounts ompute egen 
ounts trace drg absorbing aaa border brush lkernels loadkernels trace drg kernelinfo return scaffold drg absorbing aaa border regencounts lkernels finding candidate scaffold given set principal nodes involves walking downstream principal node possible absorb find brush venture must count number times given requested family reachable drg equals number times exp quote quote value exp value request env exp value branch exp value exp quote value exp quote value exp value request request exp value exp value request exp value exp value exp value venture families figure need detaching brush making proposal trace includes invocation make symmetric dirichlet discrete produces sps whose applications exchangeably coupled trace result one application determines whether another application made thus second application brush scaffold generated first application detach brush first may propose first application conditioned application exist new trace family requested references family originate drg therefore possible family existence depend values drg nodes ind rush trace cdrg cabsorbing caaa disablecounts map disabledrequestnodes set brush set node cdrg isable equests trace node disablecounts disabledrequestnodes brush return brush isable equests trace node disablecounts disabledrequestnodes brush node disabledrequests return esrparent disablecounts esrparent disablecounts esrparent disablecounts esrparent esrparent isable family trace esrparent disablecounts disabledrequestnodes brush isable family trace node disablecounts disabledrequestnodes brush node isable equests trace disablecounts disabledrequestnodes brush isable family trace disablecounts disabledrequestnodes brush operatornode isable family trace operatornode disablecounts disabledrequestnodes brush border consists nodes resampling stops includes absorbing nodes aaa nodes also resampling nodes children predict directives ind order trace drg absorbing aaa border union absorbing aaa node drg haschildinaord trace node 
drg absorbing node return border stochastic regeneration proceed correctly nodes scaffold must annotated counts allow determination given node longer referenced therefore removed ompute egen ounts trace drg absorbing aaa border brush regencounts map node drg node aaa regencounts node added shortly elseif node border regencounts node node else regencounts node node determine number times reference aaa node regenerated node union drg absorbing parent node aybe ncrementaaar egen ount parent node brush aybe ncrementaaar egen ount elseif aybe ncrementaaar egen ount node return regencounts aybe ncrementaaar egen ount trace node regencounts value node node regencounts regencounts bsorbing pplications discussed earlier cases need visit absorbing nodes explicitly account logdensities keep track outputs report joint logdensity applications case say application absorbing applications aaa preliminary walk find scaffold stop upon reaching aaa node figure gives graphical illustration note aaa node value may change handling aaa introduces many subtle bookkeeping challenges code computing regencounts aaa nodes seem mysterious inspecting special cases aaa nodes regen extract next section breaking global inference problem collections local inference problems principal nodes chosen random choices current trace border correspond random choices constrained observes terminal resampling nodes assumes subsequently referred well predicts definite regeneration graph contain random choices guaranteed exist every execution probabilistic program conditionally simulating entire program reduced conditionally simulating completion torus given absorbing nodes figure scaffolds without optimization repeated applications stochastic procedures common probabilistic programs machine learning statistics shows pet fragment typical repetition principal node red resampling nodes yellow constructor blue note size scaffold grows linearly number applications shows pet fragment constructor whose resulting 
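The first pass of scaffold construction can be sketched as a graph walk. The following Python is a minimal, hypothetical model (the node names, children map, and can_absorb predicate are illustrative, not Venture's data structures): starting from the principal nodes, children that are applications of stochastic PSPs able to compute a log density absorb the change, while other children join the definite regeneration graph and the walk continues through them. Brush detection and AAA handling are omitted.

```python
def find_candidate_scaffold(children, can_absorb, principal):
    """Walk downstream from the principal nodes. Nodes that can compute a
    log density of their output given new inputs absorb the change; all
    other reached nodes join the definite regeneration graph (DRG)."""
    drg, absorbing = set(principal), set()
    frontier = list(principal)
    while frontier:
        node = frontier.pop()
        for child in children.get(node, ()):
            if child in drg or child in absorbing:
                continue
            if can_absorb(child) and child not in principal:
                absorbing.add(child)        # walk stops here
            else:
                drg.add(child)              # must be resampled
                frontier.append(child)
    return drg, absorbing

# Toy graph: principal p feeds deterministic d, which feeds stochastic s.
children = {"p": ["d"], "d": ["s"], "s": []}
drg, absorbing = find_candidate_scaffold(children, lambda n: n == "s", {"p"})
```

In the toy graph the deterministic node d cannot absorb and is pulled into the DRG, while the stochastic node s terminates the walk as an absorbing node.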
repeatedly applied absorb applications pink absorbs applications implementing weight calculations needed inference via information auxiliary storage scaffold size longer scales linearly number applications equivalence important feature venture design inference strategy works entire programs run program fragment conditioned remainder vice versa venture break problem sampling conditioned distribution entire pet might extremely high dimensional collections overlapping lowerdimensional inference subproblems venture strategy applicable overall problem becomes applicable subproblem current research exploring use venture programs construct custom proposals predict directive mapped principal node resampling node border observe statement matched absorbing node local kernels scaffolds later sections describe techniques building transition operators efficiently modify contents scaffolds sampling definite regeneration graph brush distribution leaves conditional invariant transition operators built stochastic proposals individual random choices call local kernels kernels must able sample new values individual random choices well calculate ratio forward reverse transition probabilities necessary using sampled value proposal scheme venture supports local kernels two types simulation delta kernels ocal simulation kernels resimulation bottom proposals local simulation kernel random choice look previous values making proposal simplest simulation kernel one samples according simulate provided associated stochastic procedure sometimes refer resimulation kernel let denote order local kernels called sample sample returns new value along following weight let denote reverse order local kernels called unsample unsample returns following weight resimulation kernel weights local kernel identically simulation kernels also conditioned anything torus without breaking asymptotic convergence values random choices torus available proposal time also unaffected transition simulation kernels thus 
used to make proposals that take values from the source trace as input; this provides one mechanism for augmenting the top-down processing typical of probabilistic programming with custom information.

Local delta kernels: reuse of random choices. Venture also supports delta kernels; these kernels transition the trace with access to its previous state in addition to the contents of the torus. A Gaussian drift kernel is a typical example. These kernels satisfy the following contract: sample returns a new state along with a weight, and unsample returns the corresponding reverse weight. Such kernels can be applied to stochastic procedures whose applications exhibit exchangeable coupling, and can be used in particle methods. Since they receive the source state as input, they also provide a mechanism for reusing the old value of a random choice when its parent choices have changed. Delta kernels thus reduce unnecessary or undesirable resampling.

Stochastic regeneration algorithms for scaffolds. We now show how to coherently modify a trace, given a partition of it by a valid scaffold, using an algorithm we introduce called stochastic regeneration. Variations on stochastic regeneration, expressed as simple parameterizations of it, can be used to implement a wide range of stochastic inference strategies. The simplest use of stochastic regeneration is to implement a Metropolis-Hastings transition operator on PETs, as follows. First, the border of the scaffold is detached from the trace in some arbitrary order, and the random choices within the detached trace fragment are stored in an extract; we sometimes abbreviate this process as detach. Next, the border of the scaffold is regenerated and attached in the reverse order, yielding a new trace; we sometimes abbreviate this process as regen. The new trace is then accepted or rejected. If it is accepted, the algorithm is finished; if it is rejected, the detached original extract is fed back into the regeneration algorithm, which restores the trace to its original state. Assuming that simulation and density evaluation for the constituent PSPs take constant runtime, this process scales with the size of the scaffold and brush, not with the size of the entire PET: nodes that neither influence nor are influenced by the transition are never visited. PET nodes are regenerated in the opposite order from that in which they are visited during detach; see the figure for a trace that illustrates the need for this symmetry.

Detaching along a scaffold takes a scaffold and a trace and converts the trace into a torus plus a full OmegaDB. At this point the torus is ready for a new proposal, and the OmegaDB contains the information needed to restore the original trace. The first step is to disconnect (detach) the trace fragment at the border of the scaffold; the second step is to extract the random choices in that fragment and store them in the OmegaDB.
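The detach/regen round trip can be illustrated with a deliberately simplified model in which a "trace" is a dict and an "OmegaDB" is a dict of saved values (all names hypothetical). The key property shown is restoration: detaching the border in reverse order and regenerating in forward order with restore=True reproduces the original trace exactly, which is what handling a rejected proposal relies on.

```python
import random

def detach_and_extract(trace, border):
    """Remove the border nodes in reverse order, recording their old values
    (a stand-in for Venture's OmegaDB) so the trace can be restored exactly."""
    omega_db = {}
    for node in reversed(border):
        omega_db[node] = trace.pop(node)
    return omega_db

def regenerate_and_attach(trace, border, omega_db, restore, rng):
    """Reattach the border in forward order; with restore=True, replay the
    values saved in the OmegaDB instead of sampling fresh ones."""
    for node in border:
        trace[node] = omega_db[node] if restore else rng.random()

rng = random.Random(1)
trace = {"a": 0.1, "b": 0.2, "c": 0.3}
border = ["b", "c"]
omega_db = detach_and_extract(trace, border)               # detach
regenerate_and_attach(trace, border, omega_db, True, rng)  # rejection: restore
```

The reverse iteration order is what makes the extract replayable: the OmegaDB records values in exactly the opposite order from the one regen uses to reattach them.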
trace fragment store omegadb exp quote quote value exp value request env exp value branch exp value exp quote value exp quote value exp value request exp value exp value exp value exp value exp value exp value exp value exp value request exp value exp value request request request exp value exp value exp value exp value exp value venture families figure due partially exchangeable sps detach regen must visit nodes opposite orders procedure coupling applied absorbing border brush probability absorbing nodes depends order regeneration takes place venture detaching needs compute probability absorbing nodes would calculated proposed trace regenerating along scaffold particular would visited absorbing application first regen must visit absorbing application last detach venture solves problem detach always detach nodes opposite order regen would generated etach xtract trace border scaffold weight node reversed border node weight weight etach trace node scaffold else weight weight nconstrain trace node weight weight xtract trace node scaffold return weight detach node border venture must unincorpoate value psp previously responsible generating adjust weights extract parents etach trace node scaffold psp args node node groundvalue node groundvalue args weight groundvalue args weight xtract parents trace node scaffold return weight extract parents node venture loops parents correct order xtract parents trace node scaffold weight xtract esrparents trace node scaffold parent reversed node weight weight xtract trace parent scaffold return weight xtract esrparents trace node scaffold weight parent reversed node weight weight xtract trace parent scaffold return weight key idea process node longer referenced others regencount unevaluated also node request node unevaluating requests generates unevaluated may trigger trace fragments corresponding request unevaluated longer referenced xtract trace node scaffold weight value node node weight weight xtract trace scaffold node node node 
node else weight weight neval equests trace node scaffold weight weight napply psp trace node scaffold weight weight xtract parents trace node scaffold return weight regenerating new trace along scaffold often abbreviated regen takes scaffold torus omegadb well control parameter indicates whether restoring replaying random choices omegadb key idea regen ensures pet constructed via sequence pets pets parents node including choices given random choice directly depends regenerated node also takes map nodes gradient log densities use variational kernels discuss later advanced inference section egenerate ndattach trace border scaffold restore weight node border node weight weight attach trace node scaffold restore else weight weight egenerate trace node scaffold restore weight weight onstrain trace node return weight attach node border one first regenerates parents attaches trace fragment current trace making responsible generating current node border involves updating weight also incorporating value stochastic procedure newly responsible generating attach trace node scaffold restore psp args node node groundvalue node weight egenerate parents trace node scaffold restore weight weight groundvalue args groundvalue args return weight egenerate parents trace node scaffold restore weight parent node weight weight egenerate trace parent scaffold restore weight weight egenerate esrparents trace node scaffold restore return weight egenerate esrparents trace node scaffold restore weight parent node weight weight egenerate trace parent scaffold restore return weight regenerating node involves updating regeneration counts ensure regeneration detach correctly called new trace egenerate trace node scaffold restore weight node node weight weight egenerate parents trace node scaffold restore node sourcenode else weight weight pply psp trace node scaffold restore weight weight val equests trace node scaffold restore node value node node weight weight egenerate trace scaffold restore 
return weight

Building invariant transition operators using stochastic regeneration. Stochastic regeneration can be used to implement a variety of inference schemes on probabilistic execution traces. In addition to modifying the trace in an undoable fashion as described above, stochastic regeneration provides the numerical weights these inference schemes need.

Weights assuming simulation kernels. First assume that all the local kernels used for stochastic procedure applications are simulation kernels, so that each proposed value is independent of the previous value of its random choice. Let Kregen denote the kernel that proposes by calling the local kernels in regen order; by construction, detach calls the local kernels in the reverse of the order in which regen would call them. Regen then returns the weight of the proposed trace relative to Kregen, and detach returns the corresponding weight for the original trace, so detach computes the reverse proposal probability correctly: each local kernel is unapplied in the reverse of the order in which it would be applied by regen. These weights can be used to implement Metropolis-Hastings transitions. Note that when all the local kernels are resimulation kernels, only the likelihood ratio induced by the absorbing nodes is involved in the acceptance ratio.

Weights assuming delta kernels. Now assume that at least one local kernel is a delta kernel, so that the value it proposes depends on the previous value of the random choice being modified. In this case regen and detach do not return quantities that are useful individually, but their product, computed by detachAndRegen, is the same ratio as in the simulation-kernel case and is suitable for implementing Metropolis-Hastings transitions.

In our inference schemes, the contents of the scaffold and brush do not depend on the values of the principal nodes for a given transition. For example, every random choice whose existence might be affected by the value chosen for a principal node (every contingent descendant of a principal node) must be resampled: it must be included in the brush and regenerated during the transition, even if the particular value sampled for the principal node is consistent with the old execution trace. It could be asymptotically more efficient in some circumstances to exploit independencies between the values of random choices and their existence. Unfortunately, this comes at the cost of significant additional complexity in probabilistic programming systems. For example, the original implementation of Mansinghka, as well as an earlier, context-sensitive Monte Carlo version of Venture, was significantly more limited than the current Venture; we believe the earlier version of Venture achieved the correct order of growth in the scale parameters for latent Dirichlet allocation, but none of these systems supported reprogrammable inference.
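Under resimulation kernels the acceptance ratio reduces to a likelihood ratio at the absorbing nodes. The sketch below is a hypothetical one-site example (one principal node with a single Gaussian observation): the weight returned by detach plays the role of the old log likelihood, the weight returned by regen plays the role of the new one, and the proposal is accepted with probability min(1, exp(w_xi - w_rho)).

```python
import math
import random

def log_likelihood(x, obs):
    # Absorbing node: N(obs; x, 1), up to an additive constant.
    return -0.5 * (obs - x) ** 2

def mh_step(x, obs, rng):
    """One transition with a resimulation proposal x' ~ N(0, 1) (the prior).
    Prior terms cancel, leaving the absorbing-node likelihood ratio."""
    w_rho = log_likelihood(x, obs)         # weight from detach
    proposal = rng.gauss(0.0, 1.0)         # regen resimulates from the prior
    w_xi = log_likelihood(proposal, obs)   # weight from regen
    return proposal if math.log(rng.random()) < w_xi - w_rho else x

rng = random.Random(1)
x, xs = 0.0, []
for _ in range(3000):
    x = mh_step(x, 1.0, rng)
    xs.append(x)
tail = xs[1000:]
posterior_mean = sum(tail) / len(tail)     # true posterior mean is 0.5
```

With prior N(0, 1) and likelihood N(x, 1) at obs = 1, the posterior is N(0.5, 0.5), so the chain's long-run average should settle near 0.5.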
reprogrammable inference seems likely scaffold construction stochastic regeneration interleaved retain flexibility current architecture achieving efficiency update scheme sensitive independencies leave future work inference strategies via stochastic regeneration venture inference programming language provides number primitive inference mechanisms effect strategy evolution pet according transition operator leaves posterior distribution pets invariant modifications constrained lie within scaffold strategies applied scaffold including global scaffold implemented using stochastic regeneration regen detach performing mutation trace place returning weights used rejection steps necessary maintain invariance describing specific inference strategies establish preliminary results simplify reasoning transition operators pets factorization acceptance ratio let pet representing execution ordering nodes suppose visit node order application random choice computing probability density output given input passing pair incorporate method ciated handle exchangeable coupling define total probability density sequence random choices product probability densities calculated recall every satisfies property probability sequence pairs invariant permutation thus arbitrary subset nodes necessarily equivalence holds probability arbitrary fragments example consider following program assume make ccoin predict predict let execution say predicts true probability first predict depends whether weighted second predict total probability matter order proceed however probabilities equal prefix orderings view complete trace associated modified probabilistic program proposition let torus let permutation likewise proof factor disjoint subsets since set prefix orderings result follows immediately auxiliary variables stochastic selection kernels would like able use construct valid transition operators leave conditional distribution pets invariant cases also would like transition operators individually ergodic 
random choice guaranteed executions application drg global scaffold straightforward standard cycle mixture hybrid kernels see bonawitz andrieu mansinghka used sequence proposals individual variables alternately interested inference strategies choose random choices uniformly random make compound proposal first selects random variable change using seed scaffold makes within scaffold choice random choices involves simple state dependence nearly trivial integrate number paths trace another trace sequence choices straightforward accommodate single step venture supports broader range transition operators proposals proposal kernel randomly chosen distribution let possibly infinite set stochastic function traces induced density every let proposal kernel pets let current pet following cycle kernel extended state space preserves sample propose accept uniform scheme treats choice kernel transient auxiliary variable created solely duration ordinary proposal depends invariance ensured accounting effect construction used straightforwardly justify sophisticated custom proposal strategies algorithm neal without need approximation stochastic index selection makes straightforward compositionally analyze richer transition operators pets example one select scaffolds choosing single principal node uniformly random typical church implementations alternately one could estimate entropy conditional distributions random choice given containing scaffold choose scaffolds probability proportional estimate one could also select principal nodes starting query variable interest random choice representing important prediction action variable walk pet favoring random choices close query variable anticipate evolution venture inference programming language depend expansion refinement library techniques converting proposals invariant transition operators via stochastic regeneration describe basic algorithm incremental inference pet tools developed far generic version transition operator 
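The stochastic index-selection construction can be sketched concretely. Below, a hypothetical state-dependent index sampler picks which coordinate to perturb with probability depending on the current state, and the acceptance ratio includes the correction term p(k given x') / p(k given x) that the extended-state-space argument requires; the base kernel itself is a symmetric Gaussian step, so its own proposal densities cancel.

```python
import math
import random

def index_prob(k, state):
    """Hypothetical state-dependent index sampler: pick a site with
    probability proportional to its magnitude plus one."""
    ws = [abs(v) + 1.0 for v in state]
    return ws[k] / sum(ws)

def mixmh_step(state, log_target, rng):
    # Sample which site to propose to, using the current state.
    ws = [abs(v) + 1.0 for v in state]
    r, k = rng.random() * sum(ws), 0
    while k < len(ws) - 1 and r > ws[k]:
        r -= ws[k]
        k += 1
    proposal = list(state)
    proposal[k] = state[k] + rng.gauss(0.0, 1.0)   # symmetric base kernel
    alpha = (log_target(proposal) - log_target(state)
             + math.log(index_prob(k, proposal))   # reverse selection prob
             - math.log(index_prob(k, state)))     # forward selection prob
    return proposal if math.log(rng.random()) < alpha else state

rng = random.Random(2)
log_target = lambda s: -0.5 * sum(v * v for v in s)  # standard normal target
state = [0.0, 0.0]
for _ in range(3000):
    state = mixmh_step(state, log_target, rng)
```

Dropping the two index_prob terms would silently bias the sampler whenever the selection rule depends on the state; with uniform site selection they cancel and the rule reduces to plain Metropolis-Hastings.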
straightforward specify sample set principal nodes principals record probability principals sampling construct principals set principal nodes set detachandregen calculate principals accept principals principals correctness follows immediately applies broad class customized kernels stochastic regeneration visit portions pet conditionally independent random choices affected transition scalable approaches based transformational compilation particular typical machine learning problems datasets size number random choices scales linearly connections pet sparse thus inference sweep transitions enough consider random choice roughly requires time opposed time transformational compilers approximating optimal proposals via stochastic variational inference use auxiliary variable technique lift local proposal one preserves global stationarity show learn local variational approximation posterior distribution given using regen detach wrap discussed osing optimization problem let trace definite regeneration graph inducing torus let family distributions torus every defined parameters control local kernels along reverts resimulation kernel brush goal onto space involves solving following optimization problem minimize log subject tochastic gradient descent one generic approach stochastic gradient descent assumptions differentiability may hold arbitrary programs gradient objective function following nice form originally shown wingate weber entire traces log log log log log log log log log log used log log thus given setting approximate gradient using monte carlo trajectories scheme proposed wingate weber sample trajectory calculate noisy gradient estimate take noisy gradient descent step proposal etropolis astings solve optimization problem construct proposal distribution sample accept reject single proposal note solving optimization problem inspect nodes might therefore use compute probability reverse transition sing regen detach stochastic regeneration recursions easily extended 
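The Wingate-Weber score-function identity can be exercised on a one-dimensional example. The sketch below is hypothetical (q is N(theta, 1), the target p is N(3, 1)): it estimates the gradient of E_q[log p - log q] as a Monte Carlo average of (log p - log q) times the score d/dtheta log q, then follows it by stochastic gradient ascent, so theta should approach 3.

```python
import random

def elbo_grad(theta, n, rng):
    """Monte Carlo score-function estimate of
    d/dtheta E_q[log p(x) - log q(x)], with q = N(theta, 1), p ~ N(3, 1)."""
    total = 0.0
    for _ in range(n):
        x = rng.gauss(theta, 1.0)
        log_p = -0.5 * (x - 3.0) ** 2
        log_q = -0.5 * (x - theta) ** 2
        score = x - theta        # d/dtheta log q for a unit-variance Gaussian
        total += (log_p - log_q) * score
    return total / n

rng = random.Random(3)
theta = 0.0
for _ in range(500):
    theta += 0.05 * elbo_grad(theta, 30, rng)    # stochastic gradient ascent
```

The estimator only needs to simulate from q and evaluate log densities, which is why the same regen/detach machinery that computes proposal weights can also drive the variational optimization.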
carry additional metadata enables evaluate local gradients random choice store information local kernel modifies additional state capturing variational parameters done turns stochastic gradient optimization variational approximation use proposal expressed using stochastic regeneration construct detach every application node whose operator change compute gradients initialize variational parameters current arguments loop satisfied quality variational approximation attach variational kernels define call regen sample compute log well local gradients log take gradient step detach restore compute detach regen sample compute accept uniform inference enumerative gibbs particle markov chain monte carlo many inference strategies involve weighing multiple alternative states one another way approximating optimal transition operators portions hypothesis space gibbs sampling discrete variable bayesian network implemented enumerating possible values variable substituting network evaluating joint probability network configuration renormalizing sampling normalized importance sampling approximating gibbs involves proposing multiple values variable prior weighting likelihood normalizing sampling sampled set techniques applied iteratively larger subsets variables example discrete gibbs iterated sequence variables generate proposal particle filter extended via enumeration perhaps dynamic programming larger subsets random variables normalized importance sampling also iterated yield sequential importance sampling resampling embedded within markov chain yield particle markov chain monte carlo techniques useful think inference strategies instantiations idea weighted speculative execution technique within inference programming ensemble executions represented evolved stochastically weighted selected via absorbing nodes although ultimately one executions used refer alternative state particle implement gibbs sampling particle markov chain monte carlo techniques using general machinery 
inference methods allows develop common techniques handling dependent random choices brush proposal analysis apply inference strategies mutation versus simultaneous particles memory particle methods also provide new opportunities tradeoffs inference although semantics particle methods involve multiple alternative traces may always desirable even possible represent alternate traces memory simultaneously consider scaffolds whose definite regeneration graph brush invokes complex external simulator modify code simulator relies expensive external compute resource multiplexed necessary implement particle methods via mutation single pet place otherwise likely efficient represent particles memory simultaneously sharing common pet scaffold source cloning auxiliary states stochastic procedures referred multiple particles choice also enables parallel evolution collections particles via multiple independent threads synchronizing whenever resampling occurs venture supports mutation simultaneous representation particles necessary recover asymptotic scaling competitive custom sequential monte carlo techniques paper focus transition operators associated particle methods give pseudocode implementing terms generic api simultaneous particles data structures needed implement api efficiently algorithm analysis efficient implementations involve novel applications techniques functional programming persistent data structures beyond scope paper operator constructing transition operators although auxiliary variable technique introduced previously sufficient lifting local proposal global proposal helpful introduce variant technique closed application kernel results applying technique valid proposal kernel parameterized randomly selected used proposal subsequent application give significant additional flexibility explaining justifying inference schemes etropolis astings suppose distribution interest let proposal kernel create new kernel based satisfies detailed balance respect rule instead 
representing kernel keep around constituent elements applying involves sample let accept new trace iff straightforward show transition operator preserves detailed balance respect therefore leaves invariant ergodic convergence proved fairly mild assumptions andrieu tate dependent ixtures ernels would like able construct compound kernels stochastically chosen certify detailed balance compound kernel given easily verified conditions component kernel stochastic kernel selection rule let density interest conditioned density resampling nodes given absorbing nodes scaffold let possibly infinite set stochastic function traces density induced use select kernel random given trace also evaluate reversing transition let transition operator traces simulated let procedure evaluates factors acceptance ratio constructed consider following kernel parameterized scalars determined sample used choose kernel sample let accept iff goal define kernel preserves detailed balance analogously kernels described assume stochastic function satisfies following symmetry condition expresses constraint reach selecting according applying nonzero probability reaching possible selecting according applying let set indices kernels reachable set indices kernels reachable establish detailed balance kernel suffices show due symmetry condition implies therefore detailed balance satisfied following holds hypothesis satisfies detailed balance respect let multiply sides sum sides establish operator define mixmh operator follows mixmh takes input stochastic index sampler set base kernels parameterized index sampled index sampler mixmh let apply mixmh sample apply kernels goal define sketch correctness arguments transition operators first describe particle sets obtained repeated applications single kernel formulation incorporates boosted acceptance ratio designed recover special case one new particle enerating particles repeatedly applying seed kernel suppose torus pair attach local simulation kernels nodes 
define distribution torus regenandattach return let multiset weighted particles define kernel follows nds nds number duplicates isolation nds nds boosting comes cost subtle ergodicity violations transition operator used hypothesis space contains right symmetries simpler analyses particles including current trace resampled possible indeed straightforward yield acceptance rules frequently therefore explore space less efficiently avoid ergodicity issues sample kernel starting generating additional particles kregen probability generating probability sampling particles multiplied number distinct orders could sampled kregen nds claim nds kregen nds kregen proof nds kregen nds kregen nds kregen nds kregen nds nds kregen nds nds kregen nds kregen nds kregen thus apply mixmh becomes nds nds nds nds nds nds mhn operator pattern multiset weighted particles generated seed kernel sampled sufficiently common operation give name mhn mixmh use variations throughout section multiset functions index mixmh sampling step particle chosen functions base proposal kernel etropolis astings special case particle methods simulation kernel actually special case since case parallelizable extensions kind locally independent scheme also natural example one could make multiple proposals weigh one another could viewed importance sampling approximation optimal gibbs proposal scaffold proposal simulation kernel including one conditions downstream information said restriction simulation kernels significant gaussian drift kernels example permitted numerative ibbs special case using different kernels particle many ways generate weighted particle sets instance may want allow different kernel particle represent enumeration set variables way distinct tuple cartesian product domains variables correspond different kernel suppose variables want enumerate yielding total combinations assume first combination sample follows call regen times generate new particles passing except deterministic kernels replacing 
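The boosted multi-particle acceptance rule can be sketched as a multiple-try step: keep the current particle rho, simulate several fresh particles, select xi in proportion to weight, and accept with the ratio (W - w_rho) / (W - w_xi), where W is the total weight. Everything below is a hypothetical scalar model (prior proposals weighted by a Gaussian likelihood), not the Venture kernel itself.

```python
import math
import random

def mhn_step(x, log_weight, propose, n_new, rng):
    """Multiple-try step in the spirit of the mhn pattern: particle 0 is the
    current state rho; select xi proportionally to weight; accept with
    probability min(1, (W - w_rho) / (W - w_xi))."""
    particles = [x] + [propose(rng) for _ in range(n_new)]
    weights = [math.exp(log_weight(p)) for p in particles]
    total = sum(weights)
    r, i = rng.random() * total, 0
    while i < len(weights) - 1 and r > weights[i]:
        r -= weights[i]
        i += 1
    if i == 0:
        return x                                   # xi == rho: stay put
    alpha = (total - weights[0]) / (total - weights[i])
    return particles[i] if rng.random() < min(1.0, alpha) else x

rng = random.Random(4)
log_w = lambda p: -0.5 * (p - 1.0) ** 2            # Gaussian likelihood, obs = 1
propose = lambda r: r.gauss(0.0, 1.0)              # fresh draws from the prior
x, samples = 0.0, []
for _ in range(4000):
    x = mhn_step(x, log_w, propose, 4, rng)
    samples.append(x)
tail = samples[1000:]
posterior_mean = sum(tail) / len(tail)             # true posterior mean is 0.5
```

With a single fresh particle this reduces to ordinary independence Metropolis-Hastings; more particles raise the acceptance rate at the cost of extra simulation per step.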
old kernels nodes enumerating analysis even simpler case particles distinct apply mixmh becomes particle arkov chain onte arlo adding iteration resampling often useful iterate methods resample collections particles based weights stochastically allocating particles regions execution space appear locally promising insight heart particle filtering sequential importance sampling resampling doucet well sophisticated sequential monte carlo techniques example andrieu introduces particle markov chain monte carlo methods family techniques using sequential monte carlo make sensible proposals subsets variables part larger mcmc schemes also use conditional smc generate suppose local simulation kernels attached nodes determines simulation kernel generates samples propagating along via regen suppose group sinks groups turn partitions groups according regen recursion refer establish notation first let consider simpler case basic sequential monte carlo instead conditional smc propose along inductively propagate time sampling independently proposal kernel return value regen group sinks assumed conditioned parent particle instructive view sampling procedure way generating single sample auxiliary variables density suppose perform conditional smc sweep sample first sample index time step index forced resampling source particle sample particles standard csmc probability starting precisely however want index kernel multiset semantics complete particles index equivalence class agree everything last time step agree multiset complete particles nds thus apply mixmh nds nds nds nds nds nds nds nds nds nds think strict generalization mhn kernel split sequence kernels scheme called pgibbs venture inference programming language enables approximation blocked gibbs sampling arbitrary scaffolds cycles mixtures pgibbs kernels recovers wide range existing particle markov chain monte carlo schemes well novel algorithms seudocode pgibbs simultaneous particles give pseudocode implementing pgibbs 
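A stripped-down conditional SMC sweep, of the kind PGibbs iterates, can be sketched on a Gaussian random-walk state-space model (all modeling choices below are hypothetical simplifications): particle 0 is pinned to the retained trajectory at every step and always survives resampling, and the sweep returns a trajectory drawn from the final weighted particle set.

```python
import math
import random

def categorical(weights, rng):
    r, i = rng.random() * sum(weights), 0
    while i < len(weights) - 1 and r > weights[i]:
        r -= weights[i]
        i += 1
    return i

def conditional_smc(retained, obs, n_particles, rng):
    """One conditional SMC sweep for x_t = x_{t-1} + N(0,1), y_t = x_t + N(0,1).
    Particle 0 follows the retained trajectory and survives every resampling."""
    T = len(obs)
    parts = [[0.0] * T for _ in range(n_particles)]
    weights = [1.0] * n_particles
    for t in range(T):
        for i in range(n_particles):
            prev = parts[i][t - 1] if t > 0 else 0.0
            x = retained[t] if i == 0 else prev + rng.gauss(0.0, 1.0)
            parts[i][t] = x
            weights[i] = math.exp(-0.5 * (obs[t] - x) ** 2)
        if t + 1 < T:   # resample, always keeping the reference particle
            parts = [list(parts[0])] + [
                list(parts[categorical(weights, rng)])
                for _ in range(n_particles - 1)
            ]
    return list(parts[categorical(weights, rng)])

rng = random.Random(5)
traj = [0.0, 0.0, 0.0]
for _ in range(20):   # iterate the sweep as a particle Gibbs chain
    traj = conditional_smc(traj, [1.0, 1.5, 2.0], 8, rng)
```

Forcing the retained trajectory through the sweep is what distinguishes conditional SMC from a plain particle filter; it is the ingredient that makes the iterated sweep a valid Markov chain on trajectories rather than an importance sampler.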
Pseudocode for pgibbs with simultaneous particles. We now give pseudocode implementing the pgibbs transition operator using an interface of simultaneous particles. The particle interface permits particles to be constructed from source PETs and from one another, sharing the maximum amount of state possible, with stochastic regeneration used throughout. To be clear when referring to the final weight, we use notation consistent with the previous sections. The transition prepares the source trace for conditional SMC, populates an array of particles and calculates their weights, and then samples a final particle along with its Metropolis-Hastings acceptance ratio (helper names and argument order are reconstructed from the surrounding text):

    PGibbs(trace, border[0..T], scaffold, P):
        (rhoWeights, rhoDBs) <- DetachAndExtract(trace, border, scaffold)
        particles[0] <- Particle(trace)
        particleWeights[0] <- RegenerateAndAttach(particles[0], border[0], scaffold, true, rhoDBs[0])
        for p in 1..P:
            particles[p] <- Particle(trace)
            particleWeights[p] <- RegenerateAndAttach(particles[p], border[0], scaffold, false, nil)
        for t in 1..T:
            newParticles[0] <- Particle(particles[0])
            newParticleWeights[0] <- RegenerateAndAttach(newParticles[0], border[t], scaffold, true, rhoDBs[t])
            for p in 1..P:
                parentIndex <- SampleCategorical(particleWeights)
                newParticles[p] <- Particle(particles[parentIndex])
                newParticleWeights[p] <- RegenerateAndAttach(newParticles[p], border[t], scaffold, false, nil)
            particles <- newParticles
            particleWeights <- newParticleWeights
        finalIndex <- SampleCategorical(particleWeights)
        weightMinusXi <- Remove(particleWeights, finalIndex)
        weightMinusRho <- Remove(particleWeights, 0)
        alpha <- Sum(weightMinusRho) / Sum(weightMinusXi)
        return (particles[finalIndex], alpha)

This implementation illustrates the use of the versions of regen and detach that act on particles, and the initialization of particles from parent particles as well as from source traces. The last four lines simply construct the randomly chosen particle for the corresponding scaffold, proposing to replace the current trace's contents with that particle's.

Conditional independence and parallelizing transitions. We briefly describe conditional independence relationships that are easy to extract from probabilistic execution traces. The analysis clarifies the potential dependencies a PET introduces through the choice to make probabilistic closures first-class objects in the language. These independence relationships could be used to support probabilistic program analyses; they also justify the correctness of parallelized kernel composition operators in the inference programming language, and they expose parallelism distinct from the parallelism of Markov blankets, via envelopes and conditional independence.
In Bayesian networks, the Markov blanket of a set of nodes provides a useful characterization of important conditional independencies, and these independencies permit parallel simulation of Markov chain transition operators. By contrast, a scaffold provides a useful factorization of the log density of a trace but does not by itself permit parallel simulation of transition operators. In this section we show how to formulate a notion of locality that permits parallel transitions on PETs. We first review Markov blankets in Bayesian networks. Let L be a set of nodes in a Bayesian network; its Markov blanket consists of the children of L, the parents of L, and the parents of those children, excluding L itself. Proposals made to two sets of nodes whose regeneration graphs are disjoint can run in parallel, although the proposals may read values in their Markov blankets; the Markov blanket can be thought of as the set of nodes one must read or write when resampling L and computing the new probabilities of its children. The situation becomes more complicated in the case of PETs, for two main reasons. First, the parents of a node are not known a priori, since we do not know what will be simulated until we see which nodes it reads. Second, the auxiliary states that will be mutated are not known a priori, since the PSPs that will be applied are not known.

Envelopes. Let aux denote the auxiliary states of the SPs whose PSPs are applied, and define, as for Bayesian networks, the sets of nodes read and written. We define the scope of a proposal as the set of parts of the PET written or created when generating from the torus, and we define the envelope of a scope as the set of nodes that must be read but are not written by the proposal. Note that in general there may be nodes in the scope that are written but never read, such as terminal resampling nodes or the sufficient statistics of uncollapsed SPs; we ignore this distinction and make the following claim. Suppose a proposal temporally overlaps another proposal; then, barring bad luck, if the envelope of each proposal's scope is disjoint from the other proposal's scope, the transitions have not clashed. In words, this says that two proposals clash only when a node that one proposal reads differs between the two traces; if two proposals' transitions have not clashed, they could have been simulated simultaneously.

The envelope of a scaffold. We define the envelope of a scaffold as the union of the envelopes and scopes taken over all possible completions of the torus; two proposals are guaranteed not to clash if the envelope and scope of one scaffold are disjoint from the envelope and scope of the other. Let W be the random variables corresponding to the results of the invocations of regen, including the weights, for scaffolds with definite regeneration graphs. One approach to formalizing the relationship between conditional independence and parallel simulation in Venture would be first to try to show that this disjointness condition holds, and second to try to show that the corresponding random variables can be safely simulated in parallel.
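The textbook Markov-blanket computation reviewed above (parents, children, and the children's other parents) can be sketched directly from a parent map; the dictionary representation of the network is an illustrative assumption.

```python
def markov_blanket(parents, node):
    """Markov blanket of `node` in a Bayesian network given as {child: set_of_parents}:
    its parents, its children, and the children's other parents ("co-parents")."""
    children = {c for c, ps in parents.items() if node in ps}
    blanket = set(parents.get(node, set())) | children
    for c in children:
        blanket |= parents[c]   # co-parents of each child
    blanket.discard(node)
    return blanket
```

Nodes outside the blanket are conditionally independent of `node` given the blanket, which is what licenses simulating transitions on disjoint blankets in parallel.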
When these variables are conditionally independent, probabilistic programmers who can identify two scaffolds for which the condition holds can schedule the transitions simultaneously. There are also circumstances in which speculative simultaneous simulation may be beneficial, for example when clashes are rare and the procedures involved are cheap, or when an approximate transition that ignores the dependency is used as a proposal for a serial transition. It may also be possible to use the notion of the envelope to predict beforehand which transitions can be run simultaneously. This would require bounding the Markov blanket or scope: one option would be to statically analyze Venture program fragments, implementing a kind of escape analysis for random choices; another would be to introduce language constructs by which stochastic procedures come with hints about Markov blanket composition. Future work will explore the efficacy of these techniques in practice.

Related work. Venture builds on, and contains insights complementary to, a number of probabilistic programming languages, implementations, and inference techniques. For example, Monte (Bonawitz and Mansinghka), the first commercially developed prototype interpreter for a probabilistic programming language, has an architecture that is a direct antecedent of Venture's; for example, it included a decomposition of local transitions in terms of proposals, blankets, and changes, and preliminary prototypes of Venture approximate this scheme. Other languages by Perov and Mansinghka also introduced early versions of the ideas in this paper in an attempt to control the asymptotic cost of inference per transition. A salient inefficiency of many Church systems on typical machine learning problems is that, with many datapoints, a single sweep of inference over the random variables requires runtime that scales quadratically; this quadratic scaling, combined with large absolute constant factors, made it impossible to apply these systems beyond hundreds of datapoints. Lightweight implementations based on direct transformational compilation or augmented interpretation, such as Bher (Wingate et al.), suffer this as a basic consequence of an approach in which the entire program must be rerun to make a change to a single latent variable. A variety of approaches have been tried to mitigate this problem in earlier implementations. The first prototype implementation of Church (Mansinghka), built atop the Blaise system (Bonawitz), and also Monte, made attempts to control the scope of recomputation, approximating Venture's notions of PETs and their partition into scaffolds. Similarly, the Shred implementation of Church (Yang et al.) uses techniques from program analysis.
These program analysis techniques, coupled with compilation, try to avoid the recomputation involved in lightweight implementations and also to reduce their overhead. The approaches exhibit different limitations with respect to generality, scalability, absolute achievable efficiency, and runtime predictability. For example, the Blaise implementation of Church exhibited quadratic scaling in some cases and involved substantial runtime overheads. The program analysis and compilation techniques used in Shred incur runtime costs that are significant in absolute terms and difficult to predict a priori, and additionally impose constraints on the integration of custom primitives into the language. Finally, none of these approaches has been applied directly to approaches to inference from outside their host languages.

BLOG (Milch et al.), the first probabilistic programming language of this kind, is of particular relevance. For example, the original BLOG inference engine was based on infinite contingent Bayesian networks (ICBNs; Milch et al.). ICBNs characterize valid factorizations of the probability distribution and represent dependencies whose existence is contingent on the values of other variables; they thus provide a data structure for BLOG that serves a function analogous to that of PETs in Venture. There is also a Gibbs sampling algorithm introduced for BLOG (Arora et al.) that exploits a decomposition of the possible world related to the scaffolds used to define the scope of inference in Venture. However, BLOG does not support random variables generated by arbitrary probabilistic procedures, does not provide a programmable inference mechanism, and does not support collapsed primitives that exhibit exchangeable coupling, applications of primitives that lack calculable probability densities, or primitives that create and destroy latent variables while providing an external inference mechanism.

The Figaro system for probabilistic programming (Pfeffer) shares several goals with Venture, although the differences in language semantics and inference architecture are substantial. Unlike the Venture virtual machine, Figaro is based on an embedded design, implemented as a Scala library: Figaro models are represented as data objects in Scala, and model construction and inference proceed via calls to the Scala API. This enables Figaro to leverage the mature toolchain of Scala and the JVM and avoids the need to reimplement difficult but standard programming language features from scratch. It remains to be seen how to apply this machinery to models that depend on probabilistic procedures, or on primitives exhibiting exchangeable coupling.
Nor is it clear how to support primitives that enable users to extend Figaro with custom model fragments equipped with arbitrary inference schemes. IBAL (Pfeffer), the ancestor of Figaro, was based on embedding stochastic choice in a functional language and is arguably more similar to Venture in terms of its interface; however, the inference strategies around which IBAL was designed imply restrictions on stochastic choices, and it lacks analogues of Venture's hybrid inference primitives, which embody generalizations of proposals.

Global inference in probabilistic programs. The idea of global inference in probabilistic programs via stochastic optimization, implemented using repeated program re-execution, was first proposed by Wingate and Weber. Similarly, a global implementation of particle Gibbs via conditional sequential Monte Carlo, implemented via repeated forward re-execution, was independently and concurrently developed by Wood et al. Both approaches are implemented using the lightweight scheme proposed by Wingate et al., and thus lack dependency tracking and exhibit unfavorable asymptotic scaling. The mean-field and conditional SMC schemes developed in this paper are instead implemented via stochastic regeneration, building on PETs for efficiency and supporting the full SPI. The techniques in this paper can also be used on subsets of the random choices in a probabilistic program and composed with other inference techniques, yielding hybrid inference strategies that may be asymptotically more efficient than homogeneous ones. To the best of our knowledge, Venture is the first probabilistic programming platform to support compositional inference programming with a language of multiple computationally universal primitives. It is also the first probabilistic programming system to support probabilistic procedures as first-class objects, in particular the implementation of arbitrary probabilistic procedures with external latent variables as ordinary primitives, and the first to integrate exact and approximate sampling techniques based on standard Markov chain, sequential Monte Carlo, and variational inference.

Discussion. We have described Venture, along with the key concepts, data structures, and algorithms needed to implement a scalable interpreter with programmable inference. Venture includes a common stochastic procedure interface that supports probabilistic procedures, including procedures whose invocations exhibit exchangeable coupling.
The interface also supports procedures that maintain external latent variables and supply external inference code. We have seen that the probabilistic execution traces on which Venture is based generalize key ideas from Bayesian networks and support the partitioning of traces into scaffolds corresponding to inference subproblems. We have defined stochastic regeneration algorithms over scaffolds, shown their use in implementing multiple inference strategies, and given example Venture programs that illustrate aspects of the modeling and inference languages. Venture provides broad coverage in terms of models and inference strategies, but its coverage of problems needs to be carefully assessed: it remains to be seen whether the current set of inference strategies is truly sufficient for the full range of problems arising across the spectrum of Bayesian data analysis, machine learning, and robotics. However, expanding the set of inference primitives may be straightforward. One strategy rests on the analogies between PETs and Bayesian networks. For example, it may be possible to define analogues of junction trees for fragments of PETs, conditioning on random choices with existential dependencies; along similar lines, extensions of the enumerative Gibbs inference instruction could leverage insights from Koller et al., and variants based on message passing techniques such as expectation propagation might also be fruitful. Other inference techniques that could be built using PETs and the SPI include slice sampling and Hamiltonian Monte Carlo. Hamiltonian Monte Carlo has already seen use in probabilistic programming (Stan Development Team; Wingate et al.); however, those implementations do not apply to arbitrary Venture scopes, where proposals can trigger resampling of random choices due to the presence of brush. Generalizations of Hamiltonian and slice methods that use the auxiliary variable machinery presented in this paper are straightforward, and are currently included in development branches of the Venture system.

Although applications have been successfully implemented using unoptimized prototypes, the performance surface of Venture is largely uncharacterized. This is intentional: our understanding of probabilistic programming is as limited as the understanding of functional programming when Miranda was introduced by Turner. Standard graphical models from machine learning correspond to short nested loops, usually fully unrolled during execution, and therefore exercise only small subsets of the capabilities of typical expressive languages.
It thus seems premature to focus on optimizing runtime performance before better foundations are established. Theoretical principles are needed to assess the tradeoffs between inference accuracy, asymptotic scaling, memory consumption, and absolute runtime, and it is currently unclear how to make such comparisons. Runtime comparisons are easy to make but easily mislead: even small changes in problem formulation, dataset size, or required accuracy can lead to large changes in runtime; the asymptotic scaling of runtime for many probabilistic programming systems is unknown; the problems and their relevant scale parameters are hard to identify; and systematic comparisons of inference strategies that control for implementation constants and scale thresholds have not yet been performed. Although Venture's asymptotic scaling is competitive with handwritten samplers in theory, rigorous empirical assessment is needed. Mathematically rigorous cost models are, in our view, a precondition for comparative benchmarking and thus could be an important research direction. Additionally, the relationships between the asymptotic scaling of forward simulation and the asymptotic scaling of typical inference strategies should be characterized where possible, as should effective equivalences between custom inference strategies and model transformations.

Performance engineering research on Venture can build on standard techniques from programming languages as well as on exploitation of the structure of probabilistic models and inference strategies. Immediate opportunities include runtime specialization via type hints on scaffold contents, compilation of the transition operators arising from specific inference instructions, and optimizations to the SPI that minimize the copying of data. PETs could be used to support a memory manager that exploits locality of reference along the PET and its conditional independencies, and more generally to improve cache efficiency; for example, one could pre-fetch the envelope of a specific inference instruction or an entire PET fragment. Similar optimizations would be beneficial for multicore implementations. There are also opportunities for asymptotic savings: deterministic parts of PETs can be compressed, and in principle the memory requirements of inference should scale with the amount of randomness consumed by the model, not with its execution time. Preliminary explorations of these ideas have yielded promising results (Perov and Mansinghka). Much work remains to be done in practice; we believe judicious migration of performance-sensitive regions of probabilistic programs into foreign inference code will remain important, as will runtime and compiler optimizations.
Debugging and profiling probabilistic programs. Venture makes interactive modeling and inference possible by providing an expressive modeling language with scalable, programmable inference, removing the need for integrated per-model implementations of probabilistic models and inference algorithms. Venture thus removes one of the main bottlenecks to building probabilistic computing systems; it is far from the only bottleneck, however. The design, validation, debugging, and optimization of probabilistic programs, both model programs and inference programs, still requires expertise. The promise of probabilistic programming is that, instead of having to learn multiple non-overlapping fields, future modelers will be able to learn a coherent body of probabilistic programming principles and use standard probabilistic programming tools and workflows to navigate the design, validation, debugging, and optimization process. This could evolve analogously to how programmers learn to design, test, debug, and optimize traditional programs, using a mix of intuitive modeling, mathematical analysis, and experimentation supported by sophisticated tools. Venture implements a stochastic semantics for inference designed to cohere with Bayesian reasoning via exact sampling in the limit of infinite computation, which facilitates the development of mathematically rigorous debugging strategies. First, there is a precise gap between exact and approximate sampling that can sometimes be measured on small examples; this provides a crucial control for debugging. Second, as the amount of computation devoted to inference increases, the quality of the approximation should increase as a result: at potentially substantial computational cost, it is always possible to reduce the impact of approximate inference, as distinct from modeling issues and data issues. There is even a path forward at scales where the true distribution is effectively impossible to sample from or even to roughly characterize. For example, the von Mises family of theorems gives general conditions under which posteriors concentrate onto hypotheses that satisfy various invariants, and this could be exploited by debugging tools: such a tool could increase the amount of synthetic data that inference is used on, testing whether a probabilistic program's posterior concentrates. At that point the probabilistic program begins to resemble a problem with a single right answer, significantly simplifying debugging and testing compared to regimes where the true distribution is arbitrarily spread out.
Many tools and techniques will no doubt be needed to tease apart the impact of model mismatch, inadequate or inappropriate data, and insufficient inference. Traditional profilers may also have natural analogs in probabilistic programming. A profiler for a Venture program might track hot spots (scaffolds and source code locations where a disproportionate fraction of runtime is spent) as well as cold spots (scaffolds whose transitions are rejected with unusual frequency); cold spots might serve as useful warning signs of limited or purely local convergence of inference. Because Venture maintains an execution trace, it is possible to link each execution trace location to the source code location responsible for it, potentially helping probabilistic programmers identify and modify unnecessarily deterministic or constrained model fragments. Together, these could help Venture programmers identify the portions of a program that ought to be moved into optimized foreign inference code.

Inference programming. The Venture inference programming language has many known limitations that would be straightforward to address. For example, it currently lacks an iteration construct; a notion of procedural abstraction by which inference strategies can be parameterized and reused; the ability to dynamically evaluate modeling expressions or to stochastically generate scope and block names; and an analogue of eval that gives inference programs the ability to programmatically execute new directives in the hosting Venture virtual machine. Additionally, the current set of inference primitives is implemented without language support for compound statements, although an analogue of stochastic regeneration could potentially be exposed for this purpose. Future work will address these issues, reintroducing the full expressiveness of Lisp into the inference expression language and determining what extensions are needed to capture the interactions between modeling and approximate inference. More substantial inference programming extensions may also be fruitful. For example, it may be fruitful to incorporate support for inference procedures as reusable proposal schemes: for scaffolds that match certain patterns, the proposal mechanism could be implemented by another Venture program, with the scaffold contents constituting the formal arguments, and random choices in the border could be mapped to observes.
New values for the resampling nodes could then come from predicts, and once the inference proposal program is assumed to have converged, it is possible to construct a valid transition operator from this mechanism. This would enable Venture programmers to use probabilistic modeling and approximate inference to design inference strategies, turning every modeling idiom in Venture into a potential inference tool. A similar mechanism could facilitate the use of custom Venture programs as the skeleton of variational approximations. Finally, the inference programming language needs to be extended to relax its constraints on soundness: instead of restricting programmers to transition operators that are guaranteed by construction to leave the conditioned distribution on traces invariant, the language should permit experts to introduce arbitrary transition operators on traces.

It would be interesting to develop probabilistic programs that automate aspects of inference, going beyond traditional formulations of interpretation and compilation. For example, it is theoretically possible to develop probabilistic programs that work as inference optimizers: programs that would take a Venture program as input, including placeholder infer instructions, and produce as output a new Venture program whose infer instructions are likely to perform better. Such programs could also transform modeling instructions into other instructions, introducing data to improve computational and inferential performance. Depending on their architecture, these programs could be viewed as inference planners, compilers, or an integrated combination of the two. Probabilistic models of approximate inference and Bayesian learning could be deployed to augment the engineering of the planning algorithm, the compiler transformations, or even the objective function used to summarize inference performance; machine learning schemes for algorithm selection (Lagoudakis and Littman) can be viewed as natural special cases. Venture also makes it possible to integrate control of a probabilistic program interpreter with another running probabilistic program: the control program could intercept and modify the inference instructions of the running Venture program based on approximate inferences over live performance data, and perhaps influence the machine code generation process invoked by a compiler. Inference programming also supports the integration of approximate compilers based on probabilistic modeling and approximately Bayesian learning.
Consider a probabilistic program with a specific set of observes and predicts expressions. The observes define a space of possible inputs, corresponding to the set of literal values, one per observation expression; the predicts define a space of possible outputs, corresponding to an assignment of a sampled value to each predict expression. A compiler for Venture might generate an equivalent program restricted to precisely the given pattern of inputs and outputs, optimizing the implementation of the infer instructions in the program given that restriction. It is also possible to use modeling and inference to approximately emulate a program, by writing a probabilistic program that models the predicts given the observes, with parameters and structure estimated from data generated by the original probabilistic program. This can be viewed as approximate compilation implemented via inference, with the target language given by the hypothesis space of the model in the emulator program. After the creation of this kind of emulator, its use as a proposal would be natural to integrate via additional inference instructions. Probabilistic programming systems will ultimately support a complete spectrum from blackbox, truly automatic inference to highly customized inference strategies; so far, Venture has focused on the extremes. Points in the middle of the spectrum will require extensions to the Venture instruction language to encode specifications for exact and approximate inference within particular runtime and accuracy parameters. Developing a suitable specification language and cost model to enable precise control over the scope of automatic inference is an important challenge for future research.

Conclusion. The similarities between programming languages are striking given the enormous design space. Languages such as Lisp, Java, and Python support many of the same programming idioms and can be used to simulate one another without changing the asymptotic order of growth of program runtime; they also present similar foreign interfaces for interoperation with external software, are used for data analysis and system building, are deployed in fields ranging from robotics to statistics, and are also used in machine intelligence research grounded in probabilistic modeling and inference. One measure of the flexibility and interchangeability they provide is their use in probabilistic programming research: these languages have been used to implement expressive probabilistic programming languages themselves.
For example, multiple versions of Church and Venture have been written in Python, Lisp, and Java, and BLOG has been implemented in Java and Python. We do not know if it is possible to attain the flexibility, extensibility, and efficiency of Lisp, Java, or Python in a single probabilistic programming language, especially due to the complexity of inference; thus far, probabilistic programming languages have opted for a narrower scope, supporting a subset of standard approximate inference strategies and lacking a notion of inference programming. Translating between sufficiently expressive probabilistic programming languages without distorting the asymptotic scaling of forward simulation is difficult, especially given the lack of a standard cost model; providing faithful translations that do not distort the asymptotic scaling of inference runtime and accuracy is harder still. Despite these challenges, it may be possible to develop probabilistic languages that can be used to specify and solve probabilistic modeling and approximate inference problems from many fields. Ideally such languages would be able to cover the modeling idioms and inference strategies of fields such as robotics, statistics, and machine learning, while also meeting the key representational needs of cognitive science and artificial intelligence. We hope Venture, and the principles behind its design and implementation, represent a significant step towards the development of a probabilistic programming platform that is computationally universal and suitable in practice for general-purpose use.

References

- Harold Abelson and Gerald Jay Sussman. Structure and Interpretation of Computer Programs.
- Christophe Andrieu, Nando de Freitas, Arnaud Doucet, and Michael Jordan. An introduction to MCMC for machine learning. Machine Learning.
- Christophe Andrieu, Arnaud Doucet, and Roman Holenstein. Particle Markov chain Monte Carlo methods. Journal of the Royal Statistical Society, Series B (Statistical Methodology).
- Nimar Arora, Stuart Russell, Paul Kidwell, and Erik Sudderth. Global seismic monitoring as probabilistic inference. NIPS.
- Nimar Arora, Rodrigo de Salvo Braz, Erik Sudderth, and Stuart Russell. Gibbs sampling in stochastic languages. arXiv preprint.
- James Baker. Trainable grammars for speech recognition. Journal of the Acoustical Society of America.
- Bishop. Pattern Recognition and Machine Learning. Springer.
- Blei and Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research.
- Keith Bonawitz and Vikash Mansinghka. Monte: an interactive system for massively parallel probabilistic programming.
- Keith Bonawitz. Composable Probabilistic Inference with Blaise.
- Katalin Csillery, Michael Blum, Oscar Gaggiotti, and Olivier Francois. Approximate Bayesian computation (ABC) in practice. Trends in Ecology and Evolution.
- Luc De Raedt and Kristian Kersting. Probabilistic inductive logic programming. Springer.
- Pierre Del Moral, Arnaud Doucet, and Ajay Jasra. Sequential Monte Carlo samplers. Journal of the Royal Statistical Society, Series B (Statistical Methodology).
- Doucet, de Freitas, and Gordon, editors. Sequential Monte Carlo Methods in Practice.
- Robin Dowell and Sean Eddy. Evaluation of several lightweight stochastic grammars for RNA secondary structure prediction. BMC Bioinformatics.
- David Duvenaud, James Robert Lloyd, Roger Grosse, Joshua Tenenbaum, and Zoubin Ghahramani. Structure discovery in nonparametric regression through compositional kernel search. arXiv preprint.
- Cameron Freer, Vikash Mansinghka, and Daniel Roy. When are probabilistic programs probably computationally tractable? NIPS Workshop on Advanced Monte Carlo Methods with Applications.
- Brendan Frey. Bayesian Networks for Pattern Classification, Data Compression, and Channel Coding. PhD thesis.
- Friedman and Koller. Being Bayesian about network structure: a Bayesian approach to structure discovery. Machine Learning.
- Nir Friedman, Michal Linial, Iftach Nachman, and Dana Pe'er. Using Bayesian networks to analyze expression data. Journal of Computational Biology.
- Gelman, Carlin, Stern, and Rubin. Bayesian Data Analysis. Chapman and Hall, New York.
- Noah Goodman and Joshua Tenenbaum. Probabilistic models of cognition.
- Noah Goodman, Vikash Mansinghka, Daniel Roy, Keith Bonawitz, and Joshua Tenenbaum. Church: a language for generative models. Uncertainty in Artificial Intelligence.
- Peter Green, Nils Lid Hjort, and Sylvia Richardson. Highly Structured Stochastic Systems. Oxford University Press.
- Thomas Griffiths and Zoubin Ghahramani. Infinite latent feature models and the Indian buffet process. NIPS.
- Roger Grosse, Ruslan Salakhutdinov, William Freeman, and Joshua Tenenbaum. Exploiting compositionality to explore a large space of model structures. arXiv preprint.
- David Heckerman. A tutorial on learning with Bayesian networks. Springer.
- Frederick Jelinek, John Lafferty, and Robert Mercer. Basic methods of probabilistic context free grammars. Springer.
- Mark Johnson, Thomas Griffiths, and Sharon Goldwater. Adaptor grammars: a framework for specifying compositional nonparametric Bayesian models. Advances in Neural Information Processing Systems.
- Daphne Koller, David McAllester, and Avi Pfeffer. Effective Bayesian inference for stochastic programs.
- Michail Lagoudakis and Michael Littman. Learning to select branching rules in the DPLL procedure for satisfiability. Electronic Notes in Discrete Mathematics.
- Liang, Jordan, and Klein. Learning programs: a hierarchical Bayesian approach. International Conference on Machine Learning (ICML).
- Lunn, Thomas, Best, and Spiegelhalter. WinBUGS: a Bayesian modelling framework: concepts, structure, and extensibility. Statistics and Computing.
- Christopher Manning and Hinrich Schuetze. Foundations of Statistical Natural Language Processing. MIT Press.
- Vikash Mansinghka. Natively Probabilistic Computation. PhD thesis, Massachusetts Institute of Technology, Cambridge.
- Vikash Mansinghka, Tejas Kulkarni, Yura Perov, and Josh Tenenbaum. Approximate Bayesian image interpretation using generative probabilistic graphics programs. Advances in Neural Information Processing Systems.
- Mansinghka, Kemp, Griffiths, and Tenenbaum. Structured priors for structure learning. Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI).
- Paul Marjoram, John Molitor, Vincent Plagnol, and Simon Tavare. Markov chain Monte Carlo without likelihoods. Proceedings of the National Academy of Sciences.
- Andrew McCallum, Karl Schultz, and Sameer Singh. FACTORIE: probabilistic programming via imperatively defined factor graphs. Advances in Neural Information Processing Systems (NIPS).
- Brian Milch, Bhaskara Marthi, David Sontag, Stuart Russell, Daniel Ong, and Andrey Kolobov. Approximate inference for infinite contingent Bayesian networks. Proc. AISTATS.
- Brian Milch, Bhaskara Marthi, Stuart Russell, David Sontag, Daniel Ong, and Andrey Kolobov. BLOG: probabilistic models with unknown objects. Introduction to Statistical Relational Learning.
- Minka, Winn, Guiver, and Knowles. Infer.NET. Microsoft Research Cambridge.
- Radford Neal. Markov chain sampling methods for Dirichlet process mixture models. Technical report.
- David Nott and Peter Green. A Bayesian variable selection algorithm. Journal of Computational and Graphical Statistics.
- Songhwai Oh, Stuart Russell, and Shankar Sastry. Markov chain Monte Carlo data association for tracking. IEEE Transactions on Automatic Control.
- Peter Orbanz and Daniel Roy. Bayesian models of graphs, arrays, and other exchangeable random structures.
- Hanna Pasula, Bhaskara Marthi, Brian Milch, Stuart Russell, and Ilya Shpitser. Identity uncertainty and citation matching. Advances in Neural Information Processing Systems.
- Yura Perov and Vikash Mansinghka. Exploiting conditional independence for efficient, automatic multicore inference for Church.
- Avi Pfeffer. IBAL: a probabilistic rational programming language. Proceedings of the International Joint Conference on Artificial Intelligence.
- Avi Pfeffer. Figaro: a probabilistic programming language. Charles River Analytics technical report.
- David Poole. The independent choice logic for modelling multiple agents under uncertainty. Artificial Intelligence.
- Carl Edward Rasmussen. The infinite Gaussian mixture model. NIPS.
- Rasmussen and Williams. Gaussian Processes for Machine Learning. Adaptive Computation and Machine Learning.
- John Reynolds. Definitional interpreters for higher-order programming languages. Proceedings of the ACM Annual Conference. ACM.
- Matthew Richardson and Pedro Domingos. Markov logic networks. Machine Learning.
- Roy, Mansinghka, Goodman, and Tenenbaum. A stochastic programming perspective on nonparametric Bayes. ICML Nonparametric Bayes Workshop.
- Stuart Russell and Peter Norvig. Artificial Intelligence: A Modern Approach. Prentice Hall, Englewood Cliffs.
- Taisuke Sato and Yoshitaka Kameya. PRISM: a language for statistical modeling. IJCAI.
- Stan Development Team. Stan Modeling Language Users Guide and Reference Manual.
- Robert Swendsen and Wang. Replica Monte Carlo simulation of spin glasses. Physical Review Letters.
- Joshua Tenenbaum, Charles Kemp, Thomas Griffiths, and Noah Goodman. How to grow a mind: structure, statistics, and abstraction. Science.
- Sebastian Thrun, Wolfram Burgard, and Dieter Fox. Probabilistic Robotics. MIT Press.
- Tina Toni, David Welch, Natalja Strelkowa, Andreas Ipsen, and Michael Stumpf. Approximate Bayesian computation scheme for parameter inference and model selection in dynamical systems. Journal of the Royal Society Interface.
- David Turner. Miranda: a non-strict functional language with polymorphic types. Functional Programming Languages and Computer Architecture. Springer.
- Whiteley, Lee, and Heine. On the role of interaction in sequential Monte Carlo algorithms. arXiv.
- David Wingate and Theo Weber. Automated variational inference in probabilistic programming.
- David Wingate, Noah Goodman, Andreas Stuhlmueller, and Jeffrey Siskind. Nonstandard interpretations of probabilistic programs for efficient inference. Advances in Neural Information Processing Systems.
- David Wingate, Andreas Stuhlmueller, and Noah Goodman. Lightweight implementations of probabilistic programming languages via transformational compilation. Proceedings of the International Conference on Artificial Intelligence and Statistics.
- Frank Wood, van de Meent, and Vikash Mansinghka. A new approach to probabilistic programming inference. Artificial Intelligence and Statistics.
- Jeff [surname garbled in source]. Reduced traces and JITing in Church.
- Lin Xu, Frank Hutter, Holger Hoos, and Kevin [surname garbled in source]. SATzilla: algorithm selection for SAT. Journal of Artificial Intelligence Research (JAIR).
- Lingfeng Yang, Yeh, Noah Goodman, and Pat Hanrahan. Compilation of MCMC for probabilistic programs.
- Jing [surname garbled in source], Anne Smith, Paul Wang, Alexander Hartemink, and Erich Jarvis. Advances in Bayesian network inference for generating causal networks from observational biological data. Bioinformatics.
toward smart power grids communication network design power grids synchronization hojjat farhad siamak electrical engineering department shahid bahonar university kerman kerman iran faculty post telecommunications tehran iran advanced communications research institute sharif university technology tehran iran pouladi key words ant colony system acs communication network laplacian matrix smart grid swing equation introduction synchronization power grids one critical issues system stability network synchrony able deliver stable electric power disturbance avoid large scale black synchronization power networks provided without assist communication infrastructures case network physical links like transmission lines used couple generators case synchronicity determined using swing equation largest eigenvalue laplacian matrix power network side case available communication infrastructure measurements generators sent generators feedback control case synchronicity determined largest eigenvalue weighted sum laplacian matrices power network communication network communication network modeled using swing equation laplacian matrices power communication networks reduce admittance abstract smart power grids keeping synchronicity generators corresponding controls great importance simple model employed terms swing equation represent interactions among dynamics generators feedback control case communication network available control done based transmitted measurements communication network stability system denoted largest eigenvalue weighted sum laplacian matrices communication infrastructure power network work use graph theory model communication network graph problem ant colony system acs employed optimum design graph synchronization power grids performance evaluation proposed method new england power system versus methods exhaustive search rayleigh quotient approximation indicates feasibility effectiveness method even large scale smart power grids toward smart power grids 
matrix. The impact of network topology on the synchronization of power networks has been studied in the literature; in particular, a continuum model of the power network has been proposed in which partial differential equations are employed for describing the dynamics and deriving the generator coupling. The main difference of this work from the existing literature is that we model the communication network as a graph and consider the problem of finding routes with specific characteristics in it; since this problem requires a very large number of possible routes to be considered, metaheuristic algorithms are of great interest for solving it. Such algorithms are attractive for large-scale problems in both academia and industry; among their crucial benefits one can mention their nondeterministic structure, low complexity, low computational load, and short solution time for complex problems that may have no usable derivatives.

This paper investigates a new method, based on the ant colony system (ACS), for the optimum design and planning of the communication infrastructure of power grids for synchronization, using a graph model of the communication network. The method attempts to design the topology of the communication network in such a way that the power network is efficiently synchronized. The rest of the paper is organized as follows. In Section II a short survey of ACS is given. Next, the model of the power grid is presented in Section III, and the proposed method is discussed in Section IV. The performance of the proposed method is evaluated in Section V, and the paper is concluded, with mention of remaining research challenges, in Section VI.

II. ANT COLONY SYSTEM
In the natural world, ants find the shortest path between a food source and their nest using the information of a pheromone liquid discharged along the journey path. ACS bases its work on these characteristics adapted from real ants, with the addition of visibility, discrete time, and memory, yielding a computational approach to problem solving with many applications to complex problems in wireless communications and intelligent systems. A typical ACS algorithm consists of the following steps (interested readers are referred to the literature for details).

Problem graph depiction. Artificial ants are mostly employed for solving discrete problems in discrete environments; as a result of considering discrete problems, graphs with a set of nodes and routes are used.

Ants distribution. For initializing, in order to move the ants on the graph, the ants must first be placed on a set of random origin nodes. In most applications, trial and error as well as the node density of the region define the number of ants.
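The ACS mechanics surveyed in this section can be made concrete with a minimal Python sketch of the standard rules (pheromone-biased transitions, the pseudo-random-proportional selection rule, and pheromone evaporation and deposit), following the well-known ACS formulation of Dorigo and Gambardella. All function and parameter names here are illustrative, not taken from the paper:

```python
import random

def transition_probs(tau, eta, i, tabu, alpha=1.0, beta=2.0):
    """Probability of moving from node i to each feasible node j,
    proportional to tau[i][j]**alpha * eta[i][j]**beta (eta = visibility)."""
    feasible = [j for j in range(len(tau)) if j != i and (i, j) not in tabu]
    w = {j: (tau[i][j] ** alpha) * (eta[i][j] ** beta) for j in feasible}
    total = sum(w.values())
    return {j: wj / total for j, wj in w.items()}

def select_next(probs, q0=0.9, rng=random):
    """Pseudo-random-proportional rule: exploit the best edge with
    probability q0, otherwise explore by roulette-wheel sampling."""
    if rng.random() < q0:
        return max(probs, key=probs.get)      # exploitation
    r, acc = rng.random(), 0.0                # exploration
    for j, p in probs.items():
        acc += p
        if r <= acc:
            return j
    return j  # numerical safety: fall back to the last node

def global_update(tau, tours, costs, rho=0.5, q=1.0):
    """Evaporate all trails, then deposit q/cost on the edges of each tour."""
    n = len(tau)
    for i in range(n):
        for j in range(n):
            tau[i][j] *= (1.0 - rho)
    for tour, cost in zip(tours, costs):
        for i, j in zip(tour, tour[1:]):
            tau[i][j] += q / cost
            tau[j][i] += q / cost
    return tau
```

The deposit `q/cost` makes cheaper solutions reinforce their edges more strongly, which is the feedback loop that drives the colony toward good routes.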
Probability distribution rule. When ant k wants to move from node i to another node j, a transition rule is necessary. The transition probability of ant k from node i to node j is given by p_ij^k = ([tau_ij]^alpha [eta_ij]^beta) / (sum over l not in tabu_k of [tau_il]^alpha [eta_il]^beta) if j is not in tabu_k, and 0 otherwise, where tau_ij and eta_ij are, respectively, the pheromone intensity and the inverse cost of the direct route between nodes i and j, whose relative importance is controlled by the parameters alpha and beta respectively, and tabu_k is the set of edges unavailable to ant k.

Global trail update. After every ant has assembled a solution at the end of a cycle, the intensity of the pheromone is updated by the pheromone trail updating rule tau_ij^new = (1 - rho) tau_ij^old + sum over k of Delta tau_ij^k, where rho is a constant parameter named the pheromone evaporation rate, and Delta tau_ij^k is the amount of pheromone laid on the route between nodes i and j by ant k if that route was traversed by ant k in the current cycle, and 0 otherwise; its value is determined by the cost of the solution found by the ant.

Stopping procedure. As in typical algorithms, the ACS procedure is completed upon arriving at a predefined number of runs.

III. SYSTEM MODEL
To plan the optimum communication network, we first need a model of the communication-synchronized power grid. First the power grid, including the dynamics of its generators, is presented as a graph; then synchronization over a communication infrastructure is studied.

Graph model of the power network. A typical power grid consists of geographical sites as well as transmission lines. To model the network as a graph, we consider each site as a node and each transmission line as an edge of the graph. In this model, each generator supplies current and power using a constant voltage, and each site is equipped with one synchronous generator. The status of generator i is described by its rotor rotation angle, and the coupling of two adjacent generators is denoted using the swing equation. With the admittances of the transmission lines given, the voltage magnitudes of the generators considered constant, and the current flowing between two adjacent nodes obtained from Kirchhoff's current law (defining a shunt admittance and a current-conveying voltage source at each node), the electric power P_ei of generator i is determined as a sum of sine and cosine terms of the angle differences with its neighbors. The phase-angle difference between neighboring nodes of a transmission line is assumed small, so only the first-order term of the Taylor expansion is kept, and the dynamics of the power grid with respect to the angles are defined in terms of the mechanical power P_mi.
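The linearized dynamics implied by this setup can be written out explicitly; the following is a reconstruction from the definitions above (M_i, D_i and P_mi are the inertia, damping and mechanical power of generator i, and a_ij the linearized coupling coefficient), not a formula quoted verbatim from the paper:

```latex
M_i \ddot{\theta}_i + D_i \dot{\theta}_i = P_{mi} - P_{ei},
\qquad
P_{ei} \approx \sum_{j \in \mathcal{N}(i)} a_{ij}\,(\theta_i - \theta_j)
```

Stacking the angles into a vector, the coupling term is governed by the Laplacian matrix built from the coefficients a_ij, which is what the synchronization condition used later in the paper refers to.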
This equation clearly illustrates that the dynamics of different nodes are coupled through the differences of their rotation angles, letting P_mi be the mechanical power, M_i the rotor inertia constant, and D_i the mechanical damping constant of generator i, with the mechanical and electrical power defined as functions of the angles. The dynamics can thus be summarized according to the coefficients a_ij, with a_ij zero when nodes i and j are not adjacent (Fig. 1 gives an illustration of the resulting virtual network of first-order dynamics nodes). A negative constant means that the generator linearly increases its mechanical power at a rate proportional to the frequency drop, with a higher-order error when the generator operates close to its stable point.

Communication-based synchronized model. Now consider a communication network available for power grid synchronization beyond pairs of physically adjacent generators. The second-order swing equation is converted into a first-order one by introducing the frequency variable of each generator alongside its angle, so that each generator corresponds to two variables. The system dynamics with the communication infrastructure can then be rewritten in matrix form, where adjacency in the communication network means that the corresponding generators exchange measurements, and the control of each generator is formed from the information of the generators adjacent to it in the communication network. The employed variables are considered as different virtual nodes, although they actually belong to the same real node; the nodes corresponding to the frequency and the angle are therefore called the acceleration node and the angle node, respectively, of the virtual network (denoted in Fig. 1). With the Laplacian matrices of the power network and the communication network so defined, applying a unitary transformation decouples the system, and the stability of the system state is obtained by solving the characteristic equation. Since, in practice, information transmission may not catch up with physical disturbance propagation, a communication link lets generators obtain measurement information from the generators connected to the control center (connectivity here means being connected to the control center). In the proposed method, all possible combinations of generators connected to the center are considered; as the ACS moves forward, useless links are omitted, so that at the final stage only the optimum links remain. With the brief ACS description of Section II in mind, the different steps of the proposed approach are presented in the next section.
IV. OPTIMUM COMMUNICATION NETWORK DESIGN
In this section we introduce the proposed method for the optimum design of the communication infrastructure. The method employs ACS for selecting the communication lines of the network, using the corresponding graph model of the network (Fig. 2 illustrates the power network and communication network modeled as a graph problem, with the topology under study consisting of a control center and a number of generators connected to the center). The pseudocode of the proposed algorithm is illustrated in Fig. 3 and described as follows.

Initialize. In this step, the initial values of the algorithm parameters, such as the number of ants and the evaporation coefficients, are set, and the network topology is loaded.

Locate ants. All ants are located at the control center node. At any stage we call an ant blocked at a junction (as opposed to alive) when, since an ant may not traverse a node twice, the ant has no chance of continuing its transition toward another node by any possible route and may not move backward.

Construct probability. When an active ant k arrives at node i, the probability of moving from node i to node j is determined based on a cost function derived from the condition for synchronization of the power network. In this case the following proposition is used: a necessary and sufficient condition for synchronization of the power network model with a communication infrastructure is a condition on the maximum eigenvalue of a matrix considered as the Laplacian matrix of the weighted graph combining the two networks. The transition probability p_ij^k is then computed as in Section II, with lc_ij the cost of the direct route and tau_ij the pheromone intensity of the edge between nodes i and j, whose importance is controlled by alpha and beta respectively, and with tabu_k the set of blocked edges.

Update tabu list. In this step, the edge the ant has chosen is added to its tabu list, so that its selection probability is not calculated anymore; an ant blocked at a node is omitted from the active ant list. In other words, this step kills any blocked or arrived ant in the current iteration.

Update local pheromone. The traditional ACS pheromone system consists of two main rules: one applied whilst constructing solutions (the local pheromone update rule) and one applied after all ants have finished constructing a solution (the global pheromone update rule). Upon node selection, the pheromone amount of the edge between the nodes is updated by the ant.

Procedure (Fig. 3, first part): ACS approach to communication network design
  initialize
  for each iteration
    locate ants
    for each ant
      while ant is active
        construct probability
        select edge
        update tabu list
      end
      update local pheromone
    next ant
    update global pheromone
  next iteration
  select best direction
end of the ACS approach to communication network design (Fig. 3, continued).

Select edge. A random parameter q, drawn with uniform probability, is compared with a parameter q0, which is usually fixed; the comparison result picks one of two selection methods for the active ant to continue its route to the next node. If q falls below q0, the active ant selects the edge with the highest probability (arg max over the candidate nodes); otherwise the roulette wheel rule is selected to choose the next node according to the probabilities p_ij^k.

Update global pheromone. As in the traditional ACS, this step is completed out of the loop. The global pheromone updating is defined as tau_ij^new = (1 - rho) tau_ij^old + Delta tau_ij, with rho the evaporation rate; the amount of the award on an edge depends on the solution cost, so an edge belonging to a solution with less cost obtains a larger pheromone award.

Select best direction. After all iterations, the edges with the maximum amount of pheromone are selected as the communication links, giving the list of connected nodes.

V. NUMERICAL RESULTS
In this section the performance of the proposed ACS approach for communication infrastructure planning of power grids is evaluated. The New England bus power system is used, with the assumption that every node hosts an identical generator. The system is run with the chosen numbers of iterations and initial ants; the parameters are set based on trial and error as well as system performance. A desktop computer with an Intel CPU is employed, with the simulations carried out in the MATLAB environment.

In Fig. 4 the performance of the proposed ACS approach is compared with a greedy method as well as random selection of communication links. Clearly, a small largest eigenvalue results in easier synchronization of the system. The figure illustrates that the random method has the least performance, with the largest eigenvalue among the methods; ACS, with a significant difference from the other two methods, has the best performance. It also shows that as the number of links increases and the topology gets more complex, ACS remains the pioneer among the approaches for different numbers of links. In Fig. 5 the performance of the greedy algorithms (exhaustive search per step, and Rayleigh-quotient-based selection) is presented and compared with the proposed ACS approach. The stated greedy algorithms have almost similar behavior for different numbers of links; however, the proposed ACS approach, due to its heuristic characteristic as well as the search
behavior of the artificial ants, achieves the smallest eigenvalues for different numbers of links; note that per-step exhaustive search does not guarantee global optimality. As illustrated in Fig. 6, in the proposed ACS method the normalized cost, averaged over the first cycles of the algorithm, decreases as the number of iterations increases, and the system arrives at a stable value.

(Fig. 4: comparison of ACS, greedy exhaustive search, and random communication link selection for different numbers of links. Fig. 5: comparison of ACS, step-greedy exhaustive search, and the greedy Rayleigh quotient approach for different numbers of links. Fig. 6: normalized cost of the ACS approach over iterations. Fig. 7: normalized cost of the ACS approach for different numbers of ants.)

As in infrastructure planning using different techniques such as fuzzy controllers, the cycle cost converged, and convergence happened early rather than late, which demonstrates that the parameters of the ACS algorithm were set reasonably. As another evaluation, the performance of the ACS approach is evaluated for different numbers of ants in Fig. 7. The result demonstrates that, considering only one ant as the system's link finder, the average normalized costs are high; however, activating more ants decreases the average costs, converging to a specific value of the cost average.

References:
- Han et al., "Synchronization of power networks without communication infrastructures," IEEE Conference on Smart Grid Communications.
- Bullo et al., "Spectral analysis of synchronization in a lossless power network model," IEEE Conference on Smart Grid Communications.
- Thorp, Seyler, Phadke, "Electromechanical wave propagation in large electric power systems," IEEE Trans. Circuits and Systems I: Fundamental Theory and Applications.

VI. CONCLUSION AND CHALLENGES
This paper investigated a new algorithm based on the ant colony system (ACS) for optimum communication network planning of power grids as they develop toward smart grids. The dynamics of the power network were first described by the swing equation; since communication infrastructures are used for synchronization purposes, synchronization is determined by a weighted sum of the Laplacian matrices of the power network and the communication network together. In order to design the optimum communication network, the topology was mapped to a graph problem and, by employing
the ACS algorithm, the optimum communication links were selected. Simulation results show the performance of the proposed method versus typical approaches from the literature. In real scenarios, as the scale of the network grows, typical algorithms are unable to perform optimum network planning due to the NP-hard characteristics of the problem; however, as demonstrated in the literature, heuristic methods are well-performing tools for such problems. Though this paper introduces an ACS approach to the problem, several challenges remain, such as incorporating the differences among generators into the model and its management.

References (continued):
- Parashar, Thorp, Seyler, "Continuum modeling of electromechanical dynamics in large-scale power systems," IEEE Trans. Circuits and Systems I: Regular Papers.
- Salehinejad, Talebi, "Dynamic fuzzy ant colony system based route selection system," Applied Computational Intelligence and Soft Computing.
- Dorigo, Gambardella, "Ant colony system: a cooperating learning approach to the traveling salesman problem," IEEE Trans. Evolutionary Computation.
- Machowski, Bialek, Bumby, Power System Dynamics: Stability and Control, Wiley.
- Salehinejad, Pouladi, Talebi, "A metaheuristic approach to spectrum assignment for opportunistic spectrum access," IEEE International Conference on Telecommunications.
- Salehinejad, Pouladi, Talebi, "Intelligent navigation of emergency vehicles," Proc. Int. Conf. on Developments in Engineering, Dubai.
Versus Approaches to Security in Distributed Systems
Alejandro Hernandez, Flemming Nielson. Department of Informatics, Technical University of Denmark (aher, nielson).

We consider the use of aspect-oriented techniques as a flexible way to deal with security policies in distributed systems. Recent work suggests the use of aspects for analysing the future behaviour of programs and making access control decisions based on it; this gives the flavour of dealing with information flow rather than mere access control. We show in this paper that it is beneficial to augment this approach with components of the traditional reference-monitor-style approaches to mandatory access control. The developments are performed in a coordination language, aiming to describe the policy as elegantly as possible. Furthermore, the resulting language has the capability of combining policies, providing even more flexibility and power.

1. Introduction
Distributed systems designed to manage large amounts of information must be secured to provide confidentiality of the information they manage, and this is an emerging field of targeted security approaches. Recently a framework named AspectKB was proposed in which it is possible to model processes of distributed systems and capture security properties in a realistic way, by attaching security policies to each location and combining the relevant security policies whenever an interaction between locations takes place. One way of expressing security policies in the AspectKB framework refers to the traditional non-distributed style of assuring security: statically analysing the possible behaviours of the system in order to avoid potential misuse in the future. AspectKB exploits this by making access control decisions dynamically, yet considering not the state of the locations but their potential future behaviour.

In this paper we consider a multilevel access control policy model and show the complications of trying to capture this policy in a distributed framework in general, and in particular in a framework whose security policies focus on looking into the future, since multilevel policies are better suited to an analysis of the past, of how the system reached its current state. We propose an extension of the AspectKB framework allowing it to also express policies that look into the past, by adding a notion of localised state to the locations modelled. In the extended framework, security policies can access these states to make decisions, and multilevel policies are easily captured, as we show. Since the original AspectKB framework
was already intended to combine different security policies, after the extension done in this paper, policies that look into the past and policies that look into the future can be expressed and even combined. Besides the benefit of capturing the specific policy at hand, this also allows every policy expressible in the original way to still be modelled, providing flexibility in the resulting extended framework. (In Bliudze, Bruni, Grohmann, Silva (eds.): Third Interaction and Concurrency Experience (ICE), EPTCS. This work is licensed under the Creative Commons Attribution License.)

Moreover, we shall argue that there are situations where expressing a policy in the original way could not precisely capture what is intended; this insight would mean that the extended framework is more powerful as well. We shall start by discussing the latter issue in the remainder of this section. In Section 2 we present a review of the policy in its original formulation and assess the challenges of adapting it to a distributed setting. Section 3 gives a brief review of the logic used for dealing with the combination of policies. In Section 4 we present the extended framework, show that it precisely captures the policy, and also discuss why the resulting framework is as flexible as the existing one. In Section 5 we conclude.

1.1 Limitations of looking into the future
The framework we shall be dealing with throughout the paper, the formal language of the AspectKB framework, follows a process-algebraic approach: processes are modelled by their actions, which take place at specific locations, and the interacting locations are modelled as well. Furthermore, security policies are also modelled, in the following manner: the policies express their intentions by analysing the continuation, namely what a process will do after the current action it is involved in. For the processes involved, it is not possible to know in advance what another process might do in the future. This reflects the language-based style of providing security; however, this style is adequate mainly for sequential programs. Indeed, since a process is statically analysed, what one process does after the current action depends on possible outcomes that may occur due to other processes and could not be predicted. This means that, for deciding whether to allow an interaction to happen when it is necessary to look at the future of one process, there are two possible ways of obtaining imprecise decisions.

Either an over-approximating understanding: let us assume, pessimistically, that we expect a particular action to be done by the process which, as far as we know together with the other processes, could lead to an insecure state. In this situation we may disallow the process from executing the action, even though in some cases the process might not perform anything that would lead to an insecure state. Or an under-approximating understanding: let us assume, optimistically, that we expect the particular action done by the process not to lead to an insecure state, though the process may perform another related action that leads to such a state. In this situation we may allow the process to execute the action, even though in some cases the process makes the system reach an insecure state due to interactions that could have been avoided had the action been disallowed.

Let us discuss a simple example, without going into syntactic and semantic details but still thinking of distributed processes and policies. Think of a security policy with different security levels, where every location is assigned a level, and we do not want information to be leaked from one security level to lower ones. We allow a process running in a given location to read data from another location as long as the following two conditions are met: first, the location of the data has the right security level, not higher than the one of the location where the process is running; and second, the process will not try in the future to write information to locations with security levels lower than the level of the location the data was read from, since the writing may be influenced by the reading previously done. Assume a particular situation with a few locations with given security levels, ordered as values of the natural numbers. Figure 1 contains three cases of this situation, showing
state understanding let assume optimistically expect particular action done process lead insecure state process perform another related action leads state situation may allow process execute action cases might process makes system reach insecure state due interactions could avoided action disallowed let discuss simple example without going syntactic semantic details still thinking distributed processes policies let think security policy different security levels every location assigned level want information leaked security level lower ones allow process running given location read data another location long following two conditions met first location data right security level higher one process running second process try future write information locations security levels lower level location data right since writing may influenced reading previously done let assume particular situation locations say security levels say let assume security levels ordered values natural numbers let assume location security level locations security level location security level figure contains three cases situation showing hernandez nielson read write sec level sec level write sec level write read sec level sec level sec level secure case overapproximation would incorrectly disallow sec level sec level write read sec level insecure case detectable approach insecure case underapproximation would incorrectly allow figure examples situations might happen locations security levels different layers illustrated figure process location tries read information location tries write information location process clearly forbidden meet second condition policy trying capture although meets first one course done following approach looking future since know process trying read try write allowed however let think another case illustrated figure let say process running location whose first action read information location tries write information location level meet second condition policy since 
the information the process could write is not at a level lower than that of what it read; therefore it is allowed. Let us consider next an extension of the example, also illustrated in the figure. Assume a further location with a low security level, and assume the process running there writes information that the first process will later read. In this case, the future writing done by the first process should be forbidden, as it might be influenced by the new information learned from that location; but since that write happens after the first process has started running, the process-algebraic way of modelling does not permit knowing in advance what will happen. Had we taken an approach of looking into the past, we would have checked for the insecure operation at the very moment of writing, when it would be known what information would be leaked, and therefore we would avoid the write operation. Alternatively, we could take an approach of always avoiding this type of write operation using an over-approximation, but since sometimes there is nothing insecure in the write operation (as shown in the first case of the figure), this would be imprecise and too restrictive. Taking the approach of looking into the future with an under-approximation would mean allowing the process to perform the read and the subsequent write even when the write operation is insecure, while it is secure only in some of the cases of the figure. Having found possible situations where using the looking-forward approach in a distributed setting is not completely precise, another approach might be taken, for instance

2. Looking into the past
In the rest of this work we shall study how to deal with looking into the past, and how to extend the framework to achieve it. We shall see that the resulting framework allows us to combine both approaches, therefore obtaining the advantages of both; in particular we shall see that the simple example above is one possible instance of something easily and precisely captured by the policy.

2.1 Assessment of the model
In the previous subsection we saw that for distributed systems the looking-forward approach is only as adequate as it is for sequential programs. In this section we review another approach, the Bell-LaPadula (BLP for short) policy, and discuss the challenges of using it in a distributed setting, aiming to show it is adequate mainly in its original formulation, from an operating-system view.

2.2 The BLP model
The BLP model is a traditional mandatory access control model, which we briefly introduce here, inspired by the literature but abstracting unnecessary details that do not contribute to this study. In BLP, the state of a computer system is checked for security. For looking at the state, some representing sets must be introduced: a set of subjects, who may use the information stored in the system; a set of objects, the pieces of information stored in the system; the set {read, write} of operations a subject may perform on an object; and a lattice of security
levels. Every state of the system is composed of a set of tuples of the form (s, op, o); such a tuple would mean that subject s performs operation op on object o. Together with the tuples there are three functions, supposed to be total, that give, respectively, the maximum security level of a subject (its clearance), the current security level of a subject, and the security level of an object (its classification). Formally, the BLP model specifies two properties that every state must meet in order to be considered secure.

A state satisfies the ss-property (simple security) iff, for every (s, read, o) in the state, the level of o is not higher than the maximum level of s. This means an object can be read by a subject only if its level is not higher than the level the subject is able to reach; the maximum level is usually called the clearance, as a subject may log into the system at a lower security level than its corresponding clearance, and the security level cannot be changed once logged in.

The *-property consists of two parts. A state satisfies the first part (let us name it the *1-property) iff, for every (s, write, o) in the state, the level of o is not lower than the current level of s. This means an object can be written by a subject only if its level is not lower than the level the subject is currently in; this is usually called the no-write-down side. A state satisfies the second part (the *2-property) iff, for every (s, write, o) and (s, read, o') in the state with the same specific subject s, the level of o' is not higher than the level of o. Note the use of quantifications here: a subject operating on many objects must not read any object whose level is higher than that of any object it writes; this prevents a subject from reading an object and writing its content into one of a lower level. A state is said to be secure if it satisfies both properties.

2.3 Challenges of distribution
The BLP model was originally meant for operating systems, whose particular feature is that they are centralised, meaning there is a central controller, the operating system, that takes care of everything that happens in the system, and in particular can control (and in some cases restrict) the processes that try to access resources. Moreover, one key concept needed for checking BLP policy compliance is the state, and since operating systems are centralised, the state can be calculated for knowing whether the BLP policy is met. This is what we lack when there is no central controller. In a distributed setting, many locations run in parallel and share information, one location does not know all the others, and therefore once a location is allowed to access a resource there is no way other locations can forbid it from doing whatever it wants with the resource. In particular there is no notion of state, as processes interact without synchronising with a central entity that knows what has happened in the whole system so far. It is then clear that a distributed framework is not trivially able to meet security properties originally developed for simpler systems, for example centralised ones, just as was the case for sequential programs with the looking-forward approach, which as seen in the simple examples can lose precision. In the case of BLP, in the next
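As a concrete reading of the two properties above, here is a minimal sketch of a state-security check, with the three level functions represented as plain dictionaries over integer levels (a totally ordered lattice); all names are illustrative, not notation from the paper:

```python
def state_secure(state, f_max, f_cur, f_obj):
    """Check the Bell-LaPadula properties over a state of
    (subject, op, obj) tuples with integer security levels."""
    for s, op, o in state:
        # ss-property: no read up (relative to the subject's clearance)
        if op == "read" and f_obj[o] > f_max[s]:
            return False
        # *1-property: no write down (relative to the current level)
        if op == "write" and f_obj[o] < f_cur[s]:
            return False
    # *2-property: for one subject, everything written must be at a level
    # at least as high as everything read
    for s1, op1, o1 in state:
        for s2, op2, o2 in state:
            if s1 == s2 and op1 == "write" and op2 == "read" \
                    and f_obj[o1] < f_obj[o2]:
                return False
    return True
```

The quadratic *2 check mirrors the double quantification in the definition; a real monitor would of course track per-subject maxima instead, which is exactly the role the localised-state function of the next subsection plays.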
subsection we propose an extension that helps adapt the policy to a distributed setting.

2.4 Extending BLP
In its original formulation, the BLP policy relies on three functions, two applied to every subject and one to every object, computed by the operating system every time an action is to be executed, to check whether the resulting state would still be secure and decide whether to allow the action. We propose an extension of their domains to a common signature, since in a setting without a central controller we might want to apply them to any possible entity of the system, without distinguishing objects from subjects. We also propose a fourth function that captures information about the past interactions of each entity; later in the paper we will see that this latter function can be used to form a localised state. (In some formulations of the BLP model, the *-property consists of a first part that uses the clearance instead of the current level, with the second part as a consequence; however, that kind of formulation is more restrictive, since a subject could not perform read operations at levels between its clearance and the level it is logged in at.)

The types of the three existing functions are changed so that their definitions are extended in a straightforward way, all now applying to every entity. We call the new function the historic function, since it keeps track of part of the history of the system: applying it to a particular input, a subject (resp. object), we learn about the kinds of interactions the subject (resp. object) has been involved in in the past. Therefore its output is a kind of current state of its argument, capturing a notion of state. This function is not fixed as the original three functions are. Indeed, its output for a particular subject is the least upper bound of the security levels of the objects read by the subject so far, and for a particular object it is the least upper bound of the security levels of the subjects that have written the object so far; formally this can be expressed by updating the function on each read and write, assuming a virtual global state that depends on the interactions that have happened. This means that every time an interaction takes place, changing the state, the output for the input involved may become higher than or equal to what it was (actually higher or not depending on the values the entities were keeping, the resulting least upper bound being what one would expect); indeed a simple result tells us that the value never decreases for any entity. We shall use these four functions to capture an extended version of BLP in a distributed setting.

3. Brief review of Belnap logic
For granting access according to a security policy, the traditional boolean values would be enough: one grants and the other denies access. However, in a distributed setting, where policies might be
contradictory or not sufficiently informative, two values might not be enough. We shall therefore consider the extension of boolean logic proposed by Belnap, which has been used for combining security policies. This extension adds to the two boolean values two new ones, read bottom and top. The traditional true would mean that a policy accepts an interaction, whereas the traditional false would mean the policy does not accept the interaction. Since different locations might aim at different security properties, their policies could be contradictory, or they may lack information about a particular interaction; these situations are represented by the two extra values, bottom meaning no decision and top meaning contradiction or conflict. The set of these four values is called Four. Over Four it is possible to extend the usual boolean operations and define new ones, obtaining the set Four equipped with two partial orderings, the truth ordering and the knowledge ordering, as shown in Figure 2 (the Belnap bilattice).

The usual boolean conjunction and disjunction are extended by computing the greatest lower bound and the least upper bound in the truth lattice, thereby obtaining the results of boolean logic when the operands are boolean. Analogously, new operators over Four are defined by computing the greatest lower bound and the least upper bound in the knowledge ordering. The negation operator is extended leaving the two new values unchanged, and implication is extended so that it agrees with boolean implication on the boolean values and behaves suitably on the new ones. Another useful operator over Four is the priority operator, which returns its first operand unless it is bottom, in which case it returns the second operand: one would always consider what the first operand suggests, unless it gives no decision, in which case the second operand is considered.

4. A framework for security
As mentioned, the AspectKB framework allows expressing systems in a process-algebraic manner. It is achieved by extending the KLAIM coordination language: located processes interact with the locations where they try to gather or put information (maybe themselves), usually named tuple spaces, with the possibility of attaching to each location, regardless of whether it holds a process or a tuple, a security policy to govern the interactions the location may be involved in. This turns the AspectKB language into an aspect-oriented language: whenever an interaction takes place, the relevant policies are considered by the semantics to either grant or deny the interaction, using Belnap logic for deciding in a consistent way. In this section an extension of the framework is made, mixing process locations and tuple locations into entity locations, and attaching to each location, besides the aspects (security policies), some extra information
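These operators can be sketched compactly by encoding each of the four values as a pair of evidence bits (evidence-for-true, evidence-for-false), a standard bilattice encoding rather than notation from the paper; the grant function shown is one possible permissive reading (accepting both true and no-decision), flagged as an assumption:

```python
BOT, TT, FF, TOP = (0, 0), (1, 0), (0, 1), (1, 1)  # bottom, true, false, top

def conj(x, y):   # greatest lower bound in the truth ordering
    return (x[0] & y[0], x[1] | y[1])

def disj(x, y):   # least upper bound in the truth ordering
    return (x[0] | y[0], x[1] & y[1])

def join(x, y):   # least upper bound in the knowledge ordering
    return (x[0] | y[0], x[1] | y[1])

def neg(x):       # negation leaves bottom and top unchanged
    return (x[1], x[0])

def priority(x, y):  # take x unless it carries no information at all
    return y if x == BOT else x

def grant(x):
    """One possible boolean verdict: allow unless there is evidence
    against the interaction (i.e. unless false or conflict)."""
    return x in (TT, BOT)
```

Combining two agreeing policies with `join` stays at true, while combining a true verdict with a false one yields top, the conflict value, which `grant` then maps to denial.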
that refers to security levels in the sense of a multilevel security policy. Moreover, the mechanisms of the language will explicitly keep track of information, at a certain level of abstraction, regarding the interactions that have taken place, giving it a flavour of localised state. (Notice that the combination of policies could also be done by extending the truth tables of the usual boolean operators and defining new ones over the four values; this would mean, however, moving from the small truth tables of boolean logic to much larger ones, making it difficult to remember what each operator produces.)
recommendation rec belnap logic advice aspect define aspect one may refer security levels stored trapped interaction single value lattice former one write aspect naming five syntactic names specified category lev later matched semantics specific values kept trapped interaction latter one write aspect providing specific value category lev permits among choices finally operator easily defined compositional way checks whether action occurs continuation process one argue information inside locations namely tuples also gives flavour state yet information changes according processes rely information guaranteeing property hernandez nielson pol pol pol asp pol pol pol pol pol pol pol pol pol pol pol pol true false asp asp asp rec cut cond cut cut cut actt read rec rec rec rec rec rec rec rec rec rec rec rec rec true false cond cond cond true false lev table syntax aspects security policies semantics semantics given reduction relation makes use structural congruence nets defined table also operation match matching input patterns actual data could easily defined inductive way structure arguments reaction rules defined table prescribe system may evolve presence process location target location lines rule boolean condition obtained evaluating security policies locations involved computation using evaluation function formally defined subsection done either allow disallow process compute also makes use function grant also formally defined subsection turning belnap truth values boolean truth values action disallowed involved process simply terminates thereby becoming otherwise process evolves next paragraphs explain case read action process location subject substitution using result matching done match operation moreover localised state location might modified changing historic component annotation least upper bound previous value security level target location follows suggestion equation case action data stored target location however done directly actually another virtual 
location is created, with a special localised state, intended to permit the virtual location holding the process to keep running without being interfered with by the virtual location holding the data. The historic component of the annotation of that location is the least upper bound of the previous value of the location and the security level of the process location that has written the data (and, of course, the possible values of the original location; to simulate a subject with a lower level of clearance, a process can be annotated with a value lower than its clearance). The other components of the localised state never change; note also that the security policy annotating a location never changes either.

(Table 3: reaction semantics; in each rule, the interaction between a process location with policy pols and a target location with policy polt is subject to grant applied to the combination of pols and polt, with match applied to the patterns of read actions.)

Granting access. In all the rules of the semantics there is a check that tells whether the interaction is allowed. For this purpose, the policies of both locations taking part in the interaction are combined using the Belnap knowledge-join operator, and the result of evaluating this combination is passed to the function grant, a function from Four to the booleans. Its aim is to grant access whenever the result is, in the knowledge ordering, below or equal to true, that is, when the policies agree on allowing it or when the policies lack a decision (which would not mean actual evidence to forbid the interaction; this is related to the use of the priority operator for combining policies). Whenever the policies are contradictory, the result of the evaluation gives the conflict value, thereby denying access as long as at least one policy has evidence against the interaction.

The evaluation function is defined inductively on the structure of its infix policy parameter. The base cases are when the policy is a constant (true or false) and when it is a single aspect belonging to asp. In the latter case, the first postfix parameter, a specific action with its continuation, is checked against the cut of the aspect (a generic action with a continuation) using a function check, which could easily be defined in an inductive way on the structure of its arguments. This is achieved using a function extract, which produces a list of the literals that occur in an action and its continuation; for instance, extract is done by pattern matching on the components of the given parameter, pushing them onto a list. The function check determines whether there is a substitution which, when performed on the cut, matches the parameter given; this is needed as the cut may possibly consist of variables representing locations and even the arguments of the action, and the cut may be specified together with the recommendation rec. When the check fails, the evaluation follows a conservative principle and does not actually grant access, without behaving as a policy denying the interaction.
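The localised-state updates just described (the historic component raised to a least upper bound on reads and writes) can be sketched as follows, with integers standing in for the lattice of levels and all names being illustrative rather than the paper's notation:

```python
def lub(a, b):
    """Least upper bound; integers model a totally ordered lattice here."""
    return max(a, b)

def on_read(mu, subject, target, level):
    """After `subject` reads from `target`, its historic component becomes
    the lub of its old value and the target's security level."""
    mu = dict(mu)  # keep the update functional, as in a reduction semantics
    mu[subject] = lub(mu.get(subject, 0), level[target])
    return mu

def on_write(mu, writer_level, data_loc):
    """After a write, the virtual location holding the data records the
    lub of its previous value and the writing location's level."""
    mu = dict(mu)
    mu[data_loc] = lub(mu.get(data_loc, 0), writer_level)
    return mu
```

Because `lub` only ever raises the stored value, the historic component is monotone over time, matching the simple non-decreasing result noted in Subsection 2.4.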
extract cut extract fail rec cond cond pol true false table meaning policies pol cond substituted using determine result achieved using usual meaning cond could straightforwardly adapted meaning rec due semantics table first parameter always actual action taking place second postfix parameter consisting five values lattice used produce another special substitution also used together determine result recommendation rec due semantics table security levels annotated actual interacting locations given indeed taken target location ones identifying classification location historic annotation taken process location ones identifying clearance current level location historic annotation noticed substitutes according checking performed cut first parameter actual action substitutes according five syntactic names prescribed syntax table lev therefore describing system syntactic names could used describe recommendations rec later used check actual security levels interacting locations already pointed subsection capturing blp developed formal framework shall show extended blp policy subsection elegantly captured shall also show easily decide cases example subsection secure without losing precision unlike approach remember process calculus even though original formulation blp compliance state policy checked every state check transition might take insecure state also remember provides possibility describing aspects writing recommendation rec five syntactic names mentioned approaches security distributed systems later substituted evaluation function basically using distinctive names aim capture blp policy first aspects let focus first prescribes subject read object higher security level operations read information locations read actions aspects capture following read true true note aspect trapping particular operation without caring parameters trivial applicability condition whenever aspects trap action recommendation considered granting access security level subject lower object 
since two names replaced corresponding security levels actual interacting locations thanks tables prescribes subject write object lower security level level subject currently follow similar approach considering write operations since deleting data form write implicit information could communicated aspects follows true true whenever aspects trap action recommendation grant access security level object lower one subject currently note use instead let consider basically one initiated proposal made paper due difficulty capturing precisely distributed setting note semantics keep track least upper bound security levels objects read particular subject location updates whenever subject reads something lower current value similar observation done object locations let consider subject location might read high information long security level allows otherwise either aspect would denied subsequent write low location must denied principle either aspect might decide unless subject logged system low security level case using localised state subject location making use syntactic name provided syntax expressing aspects define following aspects true true understood similar way aspects difference considering localised state subject location instead level subject logged system hernandez nielson analogous considerations done object location define following aspects finishing capture whole blp policy read true true combining aspects defining eight aspects idea combine attach every location every time interaction take place semantics consider aspects allowing interaction happen since blp model says state secure properties satisfied need make sure none aspects representing properties detects possible insecure interaction would mean least one properties satisfied capturing situation belnap operator must used combine aspects attaching locations ready state first lemma lemma distributed system insecure sense section aspects deny insecure interaction converse need make extra observation 
discussed following initialising historic value aspects defined check among values historic component attached location value kept updated semantics initially one must give particular value component chosen value affect correctness aspects detecting insecure interactions fulfil requirement lose precision unlike approach value follows suggestion equation observation ready state converse previous lemma lemma aspects deny interaction hypothetical resulting global state interaction actually allowed insecure sense section furthermore easily verify three examples figure precisely captured particular taken account could happen process location writes location figure process actually influenced must explicitly read data since semantics put another virtual location higher historic component process influenced aspects actually aspect prevent write otherwise write allowed simple example let consider simple example show combine looking future past assume airline database information passengers kept historic component database location initialised process could read approaches security distributed systems data written processes could according security level data written aspect prescribes clearanceu historyairlinedb read pass true one notice special case aspect written like emphasise example one process locations allowed read data database due previous aspect government whose clearance enough satisfy rec aspect indeed historic component database high enough since data written might sensitive passengers however times heightened security due probable threats government able audit passengers therefore allowing read database necessary anyway allowed long government later give passengers data press keep satisfying right privacy passengers following aspect prescribes data pressrelease government read pass data test threatlevel high airlinedb little aspect looks see achieved presence aspect government allowed perform read action long tuple threatlevel high airline database airline 
already notified heightened security situation also long government process trying read data leak data press future one conditions set cond aspect whereas rec reason related fact aspect temporary one aim combine previous one moreover combination done way government actually allowed read database although aspect aspect might deny therefore operator needed combining two aspects priority whole security policy airline database would process location government heightened security situation declared aspect considered otherwise either action trapped aspect process location government condition cond threat level high resulting cases four aspect considering aspect example even though simple clearly shows three features framework use aspects security allows temporarily modify distributed system without dig bussiness logic processes use belnap logic allows easily combine policies providing flexibility framework combination looking past future provides even flexibility giving power express exactly intended precisely satisfying properties many realistic examples look future electronic health records domain using policy priority aspect could even remain instead temporary one since ignored cases long tuple threatlevel high removed situation normalised note hernandez nielson first two features already present aspectkb framework particular first one widely used community third one powerful provided new framework conclusion studied problem enforcing multilevel security distributed system precisely possible approach poses problem guess processes locations may thereby losing precision therefore extended existing framework deal notion localised state given power access information past performance system thereby allowing capture policy precision resulting framework provides way combine policies look future past due belnap logic gives flexibility framework capturing precisely intended security policies also gives power previous framework acknowledgements work partially funded danish 
Strategic Research Council through the project "Aspects of Security for Citizens", and partially by the EU integrated project SENSORIA. We would like to thank Alan Mycroft for his comments on an early version of this paper. Finally, we really appreciated the comments of the reviewers, which were very helpful.

References

Kiczales, G., et al.: Aspect-Oriented Programming. LNCS, Springer.
Bell, D.E., LaPadula, L.J.: Secure Computer Systems: Mathematical Foundations. Technical report, MITRE.
Belnap, N.D.: How a Computer Should Think. In: Contemporary Aspects of Philosophy. Oriel Press.
Bruns, G., Huth, M.: Access-Control Policies via Belnap Logic: Effective and Efficient Composition and Analysis. IEEE Computer Society.
Gelernter, D., Carriero, N.: Coordination Languages and Their Significance. Communications of the ACM.
Gollmann, D.: Computer Security. Wiley.
Hankin, C., Nielson, F., Riis Nielson, H.: Advice from Belnap Policies. IEEE Computer Society.
Hankin, C., Nielson, F., Riis Nielson, H., Yang, F.: Advice for Coordination. LNCS, Springer.
McCune, J.M., Berger, S., Caceres, R., Jaeger, T., Sailer, R.: Shamon: A System for Distributed Mandatory Access Control. In: ACSAC.
De Nicola, R., Ferrari, G., Pugliese, R.: KLAIM: A Kernel Language for Agents Interaction and Mobility. IEEE Transactions on Software Engineering.
Sabelfeld, A., Myers, A.C.: Language-Based Information-Flow Security. IEEE Journal on Selected Areas in Communications.
Yang, F., Hankin, C., Nielson, F., Riis Nielson, H.: Aspect-Oriented Access Control of Tuple Spaces. Manuscript submitted to a journal.
A New Numerical Abstract Domain Based on Difference-Bound Matrices

arXiv preprint, March

Antoine Miné
École normale supérieure, Paris, France

Abstract. This paper presents a new numerical abstract domain for static analysis by abstract interpretation. This domain allows us to represent invariants of the form x − y ≤ c and ±x ≤ c, where x and y are program variables and c is a constant that can be an integer, a rational, or a real. Abstract elements are represented by difference-bound matrices, widely used by the model-checking community, but we design new operators to meet the needs of abstract interpretation. The result is a complete lattice of infinite height featuring widening, narrowing, and common transfer functions. We focus on giving an efficient representation, with cost polynomial in the number of variables, and claim that this domain always performs more precisely than the interval domain. To illustrate the precision/cost tradeoff of this domain, we have implemented simple abstract interpreters for toy imperative and parallel languages, which allowed us to prove some algorithms correct.

1 Introduction

Abstract interpretation has proved to be a useful tool for eliminating bugs in software, as it allows the design of automatic and sound analyzers for programming languages. Within this general framework, we are interested in discovering numerical invariants, that is to say, arithmetic relations that hold between the numerical variables of a program. Such invariants are useful for tracking common errors such as division by zero and out-of-bound array access. In this paper, we propose practical algorithms to discover invariants of the form x − y ≤ c and ±x ≤ c, where x and y are numerical program variables and c is a numeric constant. Our method works for integers, reals, and even rationals. For the sake of brevity, we omit the proofs of the theorems in this paper; the complete proofs of all theorems can be found in the author's thesis.

Previous and Related Work. Static analysis has developed approaches to automatically find numerical invariants, based on numerical abstract domains representing the form of the invariants we want to find. Famous examples are the lattice of intervals (described, for instance, in Cousot and Cousot's ISOP paper) and the lattice of polyhedra (described in Cousot and Halbwachs's POPL paper), which represent, respectively, invariants of the form x ∈ [a, b] and α₁x₁ + ··· + αₙxₙ ≤ c. Whereas interval analysis is cheap in memory and time but not very precise, polyhedron analysis is much more precise but has a huge memory cost in the number of variables. Invariants of the form x − y ≤ c and x ≤ c are widely used by the model-checking community; a special representation, called difference-bound matrices (DBMs), was introduced, as well as many operators, in order to model-check timed automata (see Yovine's paper and Larsen, Larsson, and Pettersson's RTSS paper). Unfortunately, most of these operators are tied to model-checking and are of
little interest static analysis contribution paper presents new abstract numerical domain based dbm representation together full set new operators transfer functions adapted static analysis sections present results potential constraint sets introduce briefly matrices section presents operators transfer functions intersection adapted abstract interpretation section use operators build lattices complete certain conditions section shows practical results obtained example implementation section gives ideas improvement matrices let finite set variables value numerical set set integers set rationals set reals focus paper representation constraints form choosing one variable always equal represent constraints using potential constraints say constraints form choose program variables constant rewritten assume work potential constraints set matrices extend adding element standard operations min max extended usual use operations may lead indeterminate forms set potential constraints represented uniquely matrix assume without loss generality exist two potential constraints left member different right members matrix associated potential constraint set called matrix dbm defined follows mij elsewhere potential graphs dbm seen adjacency matrix directed graph edges weighted set nodes set edges weight function defined mij mij mij denote finite set nodes representing path node node vik cycle path call dbm denote set points satisfy potential constraints mij remember variable special semantics always equal thus interest sort denoted defined call subset respectively dbm figure shows example dbm together corresponding potential graph constraint set order order induces order set dbms mij nij order partial also complete bounds denote associated equality relation simply matrix equality converse true particular see figure fig constraint set corresponding dbm potential graph fig three different dbms figure remark even comparable respect closure emptiness inclusion equality tests saw figure 
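The encoding just described — a potential-constraint set stored as a matrix, with index 0 playing the role of the special zero variable v0 — can be sketched as follows. The direction convention (m[i][j] bounds vj − vi) is an assumption for this sketch; the mirrored convention simply transposes the matrix.

```python
INF = float('inf')

def dbm_from_constraints(nvars, constraints):
    """Build a raw DBM from potential constraints (i, j, c), read vj - vi <= c.
    Index 0 is the special variable v0 = 0, so an interval bound x <= b on
    the variable at index k becomes (0, k, b), and a <= x becomes (k, 0, -a).
    Unconstrained entries stay at +infinity."""
    m = [[INF] * nvars for _ in range(nvars)]
    for i, j, c in constraints:
        m[i][j] = min(m[i][j], c)  # keep the tightest bound if repeated
    return m
```

For example, x ∈ [2, 4] with x at index 1 is encoded by the two constraints (0, 1, 4) and (1, 0, −2).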
two different dbms represent section show exists normal form dbm present algorithm find existence computability normal form important since often abstract representations key equality testing used fixpoint computation case dbms also allows carry analysis precision operators defined next section emptiness testing following theorem theorem dbm empty exists associated potential graph cycle strictly negative total weight checking cycles strictly negative weight done using algorithm runs algorithm found cormen leiserson rivest classical algorithmics textbook closure normal form let dbm associated potential graph since cycle strictly negative weight compute shortest path closure adjacency matrix denoted defined mii min mik idea closure relies fact path constraint mik derived adding potential constraints vik mik implicit potential constraint appear directly dbm computing closure replace potential constraint mij tightest implicit constraint find diagonal element indeed smallest value reach figure instance closure dbms theorem inf saturates say theorem states smallest respect represents given thus closed form normal form theorem crucial property prove accuracy operators defined next section graph algorithm used compute closure dbm suggest straightforward described cormen leiserson rivest textbook time cost equality inclusion testing case empty easy cases use following consequence theorem theorem besides emptiness test closure may need order test equality inclusion compare matrices respect ordering done time cost projection define projection dbm respect variable interval containing possible values exists point following theorem consequence saturation property closure gives algorithmic way compute projection theorem interval bounds included finite operators transfer functions section define operators transfer functions used abstract semantics except intersection operator new operators basically extensions standard operators defined domain intervals algorithms presented either 
constant time quadratic time intersection let define intersection dbm min mij nij following theorem theorem stating intersection always exact however resulting dbm seldom closed even arguments closed least upper bound set stable introduce union operator result define least upper bound dbm max mij nij indeed least upper bound respect order following theorem tells effect operator theorem inf consequence smallest respect ordering contains closed theorem states upper bound set respect order precision concern need find least upper bound set theorem consequence saturation property close arguments applying operator get precise union one argument empty least upper bound want simply argument emptiness tests closure add time cost always convex union two may convex widening computing semantics program one often encounters loops leading fixpoint computation involving infinite iteration sequences order compute finite time upper approximation fixpoint widening operators introduced cousot thesis widening sort union every increasing chain stationary finite number iterations define widening operator mij nij mij elsewhere following properties prove indeed widening theorem finite chain property chain defined increasing ultimately stationary limit widening operator intriguing interactions closure like least upper bound widening operator gives precise results right argument closed rewarding change case first argument sometimes worse try force closure first argument changing finite chain property theorem longer satisfied illustrated figure originally cousot cousot defined widening intervals elsewhere elsewhere following theorem proves sequence computed widening always precise standard widening intervals theorem following iterating sequence fig example infinite strictly increasing chain defined sequence precise sequence following sense remark technique described cousot cousot plilp paper improving precision standard widening intervals also applied widening allows instance deriving 
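The pointwise operators above are direct to transcribe. A minimal sketch (names `meet`/`join`/`widen` are ours; `widen` follows the definition in the text, keeping stable bounds and pushing unstable ones to +∞):

```python
INF = float('inf')

def meet(m, n):
    """Intersection: pointwise min (exact, per the theorem in the text)."""
    return [[min(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(m, n)]

def join(m, n):
    """Union approximation: pointwise max, the least upper bound of DBMs."""
    return [[max(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(m, n)]

def widen(m, n):
    """Widening: keep m_ij where the new bound n_ij is not worse, else +inf,
    which guarantees stabilization of increasing chains."""
    return [[a if b <= a else INF for a, b in zip(ra, rb)]
            for ra, rb in zip(m, n)]
```

Note that, as the text stresses, precision of `join` and `widen` depends on whether the arguments are closed; these definitions are purely pointwise and leave closure to the caller.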
widening always gives better results simple sign analysis case resulting widening dbms remain precise resulting widening intervals narrowing narrowing operators introduced cousot thesis order restore finite time information may lost widening applications define narrowing operator nij mij mij elsewhere following properties prove indeed narrowing theorem finite decreasing chain property chain decreasing chain defined decreasing ultimately stationary given sequence chain decreasing partial order partial order one way ensure best accuracy well finiteness chain force closure right argument changing unlike widening forcing elements chain closed poses problem forget given dbm variable forget operator computes dbm informations lost opposite projection operator define operator min mij mik mkj elsewhere obtained projecting subspace orthogonal extruding result direction theorem guard given arithmetic equality inequality call dbm guard transfer function tries find new dbm satisfies since general impossible try theorem satisfies example definition definition min mij mij elsewhere cases settled choosing respectively case special case cases simply choose guard transfer function exact assignment assignment defined variable arithmetic expression given dbm representing possible values take variables set program point look dbm denoted representing possibles values variables set assignment possible general case assignment transfer function try find upper approximation set theorem instance use following definition definition mij mij mij elsewhere use forget operator guard transfer function case special case choose cases use standard interval arithmetic find interval define elsewhere assignment transfer function exact comparison abstract domain intervals time precision numerical abstract domains compared experimentally example programs see section example however claim dbm domain always performs better domain intervals legitimate assertion compare informally effect abstract operations 
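The forget operator can also be sketched directly from its definition: for i, j ≠ k, keep min(m_ij, m_ik + m_kj) so that information relating other variables through v_k survives, then drop every bound that mentions v_k. Diagonal handling here is a simplifying choice for the sketch.

```python
INF = float('inf')

def forget(m, k):
    """Project out variable k: first propagate constraints that pass through
    v_k (min(m_ij, m_ik + m_kj)), then reset row and column k to
    'no constraint' (+inf), keeping the diagonal entry of k itself."""
    n = len(m)
    r = [[INF] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == k or j == k:
                r[i][j] = m[i][j] if i == j else INF
            else:
                r[i][j] = min(m[i][j], m[i][k] + m[k][j])
    return r
```

Geometrically, as the text says, this projects onto the subspace orthogonal to v_k and then extrudes the result along the v_k direction.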
dbm interval domains thanks theorems definitions intersection union abstract operators guard assignment transfer functions precise interval counterpart thanks theorem approximate fixpoint computation widening always accurate standard widening intervals one could prove easily iteration narrowing precise standard narrowing intervals means abstract semantics based operators transfer functions defined always precise corresponding abstract semantics lattice structures section design two lattice structures one set dbms one set closed dbms first one useful analyze fixpoint transfer abstract concrete semantics second one allows design meaning even galois set abstract concrete lattice following abstract interpretation framework described cousot cousot popl paper dbm lattice set dbms together order relation least upper bound greatest lower bound almost lattice needs least element extend obvious way get greatest element dbm coefficients equal theorem lattice lattice complete complete however two problems lattice first easily assimilate lattice two different dbms least upper bound operator precise upper approximation union two force arguments closed closed dbm lattice overcome difficulties build another lattice based closed dbms first consider set closed dbms least element added define greatest element partial order relation least upper bound greatest lower bound elsewhere either elsewhere elsewhere thanks theorem every unique representation representation empty set build meaning function extension elsewhere theorem lattice complete lattice complete cousot cousot prop canonical galois insertion abstraction function defined lattice features nice meaning function precise union approximation thus tempting force operators transfer functions live forcing closure result however saw work widening fixpoint computation must performed lattice results algorithms dbms presented implemented ocaml used perform forward analysis parallel languages numerical variables procedure present neither 
concrete abstract semantics actual forward analysis algorithm used analyzers follow exactly abstract interpretation scheme described cousot cousot popl paper bourdoncle fmpa paper detailed author thesis theorems prove operators transfer functions defined indeed abstractions domain dbms usual operators transfer functions concrete domain shown cousot cousot sufficient prove soundness analyses imperative programs toy forward analyzer imperative language follows almost exactly analyzer described cousot halbwachs popl paper except abstract domain polyhedra replaced domain tested analyzer bubble sort heap sort algorithms managed prove automatically produce error accessing array elements although find many invariants cousot halbwachs two examples sufficient prove correctness detail common examples sake brevity parallel programs toy analyzer parallel language allows analyzing fixed set processes running concurrently communicating global variables use nondeterministic interleaving method order analyze possible control flows context managed prove automatically bakery algorithm introduced lamport synchronizing two parallel processes never lets two processes time critical sections detail example bakery algorithm initialization two global shared variables two processes spawned synchronize variables representing priority one process time enter critical section figure analyzer parallel processes fed initialization code control flow graphs figure control graph set control point nodes edges labeled either action performed edge taken assignment example guard imposing condition taking edge test example analyzer computes nondeterministic interleaving product control flow graph computes iteratively abstract invariants holding product control point outputs invariants shown figure state never reached means time critical section proves correctness bakery algorithm remark analyzer also discovered invariants holding state true done critical section done true done critical section done fig 
bakery algorithm critical section critical section fig control flow graphs processes bakery algorithm fig result analyzer nondeterministic interleaving product graph bakery algorithm extensions future work precision improvement analysis find coarse set invariants held program since finding invariants form programs possible losses precision three causes union widening loops assignment guard transfer functions made crude approximations definitions room improving assignment guard transfer functions even though exactness impossible dbm lattices complete exists precise transfer functions theorems hold however functions may difficult compute finite union one imagine represent finite unions using finite set dbms instead single one abstract state allows exact union operator may lead memory time cost explosion abstract states contain dbms one may need time time replace set dbms union approximation community also developed specific structures represent finite unions less costly sets diagrams introduced larsen weise pearson difference decision diagrams introduced lichtenberg andersen hulgaard csl paper structures made compact thanks sharing isomorphic however existence normal forms structures conjecture time writing local path reduction algorithms exist one imagine adapting structures abstract interpretation way adapted dbm paper space time cost improvement space often big concern abstract interpretation dbm representation proposed paper fixed memory number variables program actual implementation decided use graph hollow stores edges finite weight observed great space gain dbms use many algorithms also faster hollow matrices chose use complex efficient johnson closure cormen leiserson rivest textbook algorithm larsen larsson pettersson rtss paper presents minimal form algorithm finds dbm fewest finite edges representing given minimal form could useful storing used direct computation algorithms requiring closed dbms representation improvement invariants manipulate term 
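The nondeterministic-interleaving analysis of the bakery algorithm can be mimicked concretely by a small explicit-state exploration of the two-process protocol. This is only an illustrative model: ticket values are capped at an arbitrary bound (an assumption — the analyzer in the text handles the unbounded system abstractly with widening), assignments are atomic, and we check that the state with both processes in their critical sections is never reached.

```python
from collections import deque

CAP = 6  # arbitrary ticket bound: makes this concrete exploration finite

def step(pc, me, other):
    """Successor (pc', y_me') pairs for one bakery process:
    0: take a ticket; 1: wait for priority; 2: critical section, release."""
    if pc == 0:
        return [(1, other + 1)]          # y_me := y_other + 1
    if pc == 1:
        if other == 0 or me < other:     # atomic model: ties cannot arise
            return [(2, me)]             # enter critical section
        return [(1, me)]                 # keep waiting
    return [(0, 0)]                      # leave, reset ticket

def bakery_safe():
    """BFS over the interleaving product; False iff mutual exclusion fails."""
    seen, todo = set(), deque([(0, 0, 0, 0)])   # (pc1, pc2, y1, y2)
    while todo:
        s = todo.popleft()
        if s in seen:
            continue
        seen.add(s)
        pc1, pc2, y1, y2 = s
        if pc1 == 2 and pc2 == 2:
            return False                 # both in critical section
        for npc, ny in step(pc1, y1, y2):
            if ny <= CAP:
                todo.append((npc, pc2, ny, y2))
        for npc, ny in step(pc2, y2, y1):
            if ny <= CAP:
                todo.append((pc1, npc, y1, ny))
    return True
```

Within the explored (bounded) state space, no state has both processes at their critical section, matching the invariant the analyzer derives for the unbounded system.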
of precision and complexity, between interval analysis and polyhedron analysis. It would be interesting to look for domains allowing the representation of other forms of invariants than DBMs, in order to increase the granularity of numerical domains. We are currently working on an improvement of DBMs that allows us to represent, with a small time and space complexity overhead, invariants of the form ±x ± y ≤ c.

Conclusion

We presented in this paper a new numerical abstract domain, inspired by the domain of intervals and by difference-bound matrices. This domain allows us to manipulate invariants of the form x − y ≤ c and ±x ≤ c, with a quadratic worst-case memory cost per abstract state and a cubic worst-case time cost per abstract operation in the number of variables in the program. Our approach made it possible to prove the correctness of some nontrivial algorithms beyond the scope of interval analysis, for a much smaller cost than polyhedron analysis. We also proved that our analysis always gives better results than interval analysis, for a slightly greater cost.

Acknowledgments

We are grateful to Feret, Hymans, Monniaux, Cousot, Danvy, and the anonymous referees for their helpful comments and suggestions.

References

Bourdoncle, F.: Efficient chaotic iteration strategies with widenings. In: FMPA, LNCS, Springer.
Cormen, T.H., Leiserson, C.E., Rivest, R.L.: Introduction to Algorithms. MIT Press.
Cousot, P.: Méthodes itératives de construction et d'approximation de points fixes d'opérateurs monotones sur un treillis, analyse sémantique des programmes. Thesis, Université scientifique et médicale de Grenoble, France.
Cousot, P., Cousot, R.: Static determination of dynamic properties of programs. In: Proc. Int. Symposium on Programming. Dunod, Paris, France.
Cousot, P., Cousot, R.: Systematic design of program analysis frameworks. In: ACM POPL. ACM Press.
Cousot, P., Cousot, R.: Abstract interpretation and application to logic programs. Journal of Logic Programming.
Cousot, P., Cousot, R.: Comparing the Galois connection and widening/narrowing approaches to abstract interpretation (invited paper). In: PLILP, LNCS, August.
Cousot, P., Halbwachs, N.: Automatic discovery of linear restraints among variables of a program. In: ACM POPL. ACM Press.
Lamport, L.: A new solution of Dijkstra's concurrent programming problem. Communications of the ACM, August.
Larsen, K.G., Larsson, F., Pettersson, P.: Efficient verification of real-time systems: compact data structure and state-space reduction. In: IEEE RTSS. IEEE Press, December.
Larsen, K.G., Weise, C., Pearson, J.: Clock difference diagrams. Nordic Journal of Computing, October.
Miné, A.: Representation of two-variable difference or sum constraint sets and application to automatic program analysis. Master's thesis, Paris, France.
Lichtenberg, J., Andersen, H.R., Hulgaard, H.: Difference decision diagrams. In: CSL, LNCS, September.
Yovine, S.: Model checking timed automata. In: Embedded Systems, LNCS, October.
dec hodge numbers divisors threefold hypersurfaces andreas braun cody long liam mcallister michael stillman benjamin sungb mathematical institute university oxford oxford department physics northeastern university boston usa department physics cornell university ithaca usa department mathematics cornell university ithaca usa mcallister mike prove formula hodge numbers divisors threefold hypersurfaces toric varieties euclidean branes wrapping divisors affect vacuum structure compactifications type iib string theory determining nonperturbative couplings due euclidean branes divisor requires counting fermion zero modes depend hodge numbers suppose smooth threefold hypersurface toric variety let restriction divisor give formula terms combinatorial data moreover construct complex describe efficient algorithm makes possible first time computation sheaf cohomology divisors large illustration compute hodge numbers class divisors threefold results step toward systematic computation euclidean brane superpotentials hypersurfaces december contents introduction notation preliminaries polytopes toric varieties picard group ravioli complex puff complex interpretation example contractible graphs example conclusions complexes distributive lattices ideals complexes ideals sequences corresponding divisors curves background spectral sequences hypercohomology spectral sequence strata computation divisor preliminaries relating toric data computation divisor spectral sequence computation hodge hypercohomology spectral sequence cohomology stratification proof generalization numbers stratification toric hypersurfaces hodge numbers toric varieties stratification toric varieties divisors polytopes resolution singularities computing numbers strata hypersurfaces reflexive polytopes topology subvarieties threefolds hodge numbers toric divisors introduction compactifications type iib string theory orientifolds threefolds fourfolds provide important classes effective theories supersymmetry vacuum 
The structure of such theories depends on the potential of the moduli that parameterize the sizes of holomorphic submanifolds of the manifold. In particular, in lieu of a potential for these moduli, positive vacuum energy in four dimensions induces an instability of the overall volume of the compactification; realistic particle physics and cosmology therefore require the computation of the moduli potential. In principle, a minimum of the moduli potential could be created by competition among purely perturbative corrections to the potential. However, present constructions of metastable vacua require contributions to the superpotential for the moduli. By the nonrenormalization theorem such terms are necessarily nonperturbative, resulting from Euclidean branes wrapping cycles, in fact divisors, of the compact space. An important goal is therefore to determine which divisors of a threefold or fourfold support nonperturbative superpotential terms from Euclidean branes. Witten has shown that a Euclidean brane wrapping a smooth effective vertical divisor D of a smooth fourfold gives a nonvanishing contribution to the superpotential whenever D is rigid, meaning that its Hodge numbers obey h^i(D, O_D) = (1, 0, 0, 0), a condition we abbreviate as rigidity. Rigidity corresponds to the absence of massless bosonic deformations, and implies that the only zero modes of the Dirac operator are the two universal goldstino modes that result because the supersymmetries are broken. In more general circumstances, when D is either singular or fluxes are included, or when D is a divisor of a threefold, the conditions for a nonperturbative superpotential are more subtle, but the Hodge numbers h^i(D, O_D) remain essential data. An alternative source of a superpotential is strong gauge dynamics in four dimensions arising from branes wrapping divisors of the compact space; the geometric requirements on the cycles closely parallel the Euclidean brane case, and we refer to the latter throughout this work.

For these reasons we aim to compute the Hodge numbers of effective divisors of Calabi-Yau manifolds, in the comparatively manageable case of smooth hypersurfaces in toric varieties. For many years there have existed computational algorithms and implementations for computing the sheaf cohomology of coherent sheaves on toric varieties, as well as faster implementations for computing the sheaf cohomology of line bundles on toric varieties. Unfortunately, these implementations fail to finish for examples of even modest size. A key open problem in computational algebraic geometry is thus to find algorithms that work for effective divisors, and more generally coherent sheaves, on threefold hypersurfaces arising from the database of reflexive polytopes, as well as on fourfold hypersurfaces in toric varieties. We do not arrive at a fully general answer here. However, for an important special case, a square-free divisor on a smooth threefold hypersurface in a toric variety, we establish a formula for the Hodge numbers in terms of combinatorial data. The formula is given in Theorem 1, our main result. The theorem allows one to read off the Hodge numbers of many divisors by inspection, and it is moreover straightforward to turn the formula into an algorithm that computes the Hodge numbers of any square-free divisor arising in the database. The principal tools in our proof are the stratification of the hypersurface, the hypercohomology spectral sequence of a Mayer-Vietoris complex, and a correspondence we establish between divisors of the hypersurface and CW complexes constructed from a triangulation of the associated reflexive polytope.

Let us briefly summarize our results. Stratification, the decomposition of the toric variety into tori, leads to extremely simple expressions for the Hodge numbers of particular subvarieties of the hypersurface. Among these subvarieties are the prime toric divisors D_i, each of which corresponds to a lattice point of the reflexive polytope. We review how the Hodge numbers h^i(O_{D_i}) of the prime toric divisors, and of intersections of prime toric divisors, are fully specified by elementary combinatorial data: a simplicial complex corresponding to the triangulation, together with the number of lattice points interior to each face. A square-free divisor D is a union of a collection of distinct prime toric divisors. In order to compute h^i(O_D) one must appropriately combine the data characterizing the constituent prime toric divisors. To achieve this, we establish in Appendix A that the Mayer-Vietoris sheaf sequence associated to D is exact, and we examine the corresponding hypercohomology spectral sequence. Formally, these methods are already sufficient to derive an expression for h^i(O_D), but we find it valuable to carry out the computation by expressing the result in terms of a particular CW complex, the puff complex, that encodes the relevant data. Its construction amounts to attaching cells in a manner determined by the number of lattice points in the relative interiors of dual faces. The divisor D naturally determines a subcomplex, and via the hypercohomology spectral sequence we are able to relate the sheaf cohomology of O_D to the cellular homology of this subcomplex; in particular we show that h^i(D, O_D) equals the dimension of the i-th homology of the subcomplex, proving Theorem 1.

A subtlety in applying our results to compute Euclidean brane superpotentials is that a nontrivial sum of prime toric divisors that is rigid is necessarily reducible, and involves normal crossing singularities where its components intersect. The normal crossing singularities present no obstacle to defining and computing h^i(O_D), but they complicate the connection to the number of fermion zero modes, as new zero modes can appear at the intersection loci. Thus, while the Hodge numbers we compute mark a significant step toward computing the superpotential, they do not provide the final answer; systematically counting the fermion zero modes associated with normal crossing singularities is an important problem for the future.

The organization of this paper is as follows. We first set notation and recall elementary properties of hypersurfaces in toric varieties. Given a divisor D of a threefold hypersurface, we then define a corresponding CW complex, constructed so that its homology encodes the sheaf cohomology of O_D, and we prove our main result, Theorem 1, which asserts this correspondence. We illustrate our findings in an example threefold and then conclude. Appendix A defines the Mayer-Vietoris complexes and proves their relevant properties; Appendix B reviews key results on stratification; Appendix C directly computes h^0(O_D) by counting lattice points in dual faces, in the special case of a divisor restricted to a single face.

Notation and preliminaries: polytopes and toric varieties. In this paper we consider threefold hypersurfaces in simplicial toric varieties, as studied by Batyrev (see also Appendix B for a review). The threefolds are constructed from pairs of four-dimensional lattice polytopes (Δ, Δ°) obeying ⟨Δ, Δ°⟩ ≥ -1; such a pair is called reflexive. In each dimension there are a finite number of reflexive pairs up to equivalence, and in four dimensions these have been enumerated. Given a reflexive polytope Δ° we choose a fine, star, regular triangulation T. Since every four-dimensional simplex of a star triangulation contains the origin, the unique interior point of a reflexive polytope, the triangulation determines a fan, and hence a corresponding simplicial toric variety V. Here "fine" means that the triangulation uses all the lattice points of the polytope, "star" means that the triangulation is a star around the origin, and the regularity condition ensures that V is projective. A generic linear combination of the monomials of the Cox ring that correspond to lattice points of Δ then defines a smooth Calabi-Yau threefold hypersurface X. The toric variety V in general has pointlike orbifold singularities, but a generic X does not intersect these points and is smooth.

We denote by F(Δ°) the set of faces of Δ°, and by F_k(Δ°) the set of faces of dimension k, so that F_0 and F_1 are the sets of vertices and edges, respectively. To each face f there corresponds a unique dual face f° defined by ⟨f, f°⟩ = -1. Given a face f, we denote by ℓ*(f) the number of lattice points in the relative interior of f, and we define the genus of f as g(f) := ℓ*(f°), the number of interior lattice points of the dual face. For a simplex s of the triangulation we define the corresponding minimal face minface(s) as the smallest face of Δ° containing the lattice points of s, and we write dim(s) for dim minface(s).

The Picard group. To each nonzero lattice point p of Δ° is associated a ray of the fan, and correspondingly a homogeneous coordinate of the toric variety V; to a given point p we may hence associate a toric divisor. This notion can be extended to the triangulation, associating to each simplex s a subvariety of V; we let the associated subvariety of X be the intersection of the hypersurface with the corresponding toric subvariety. The intersection is nonzero only if the simplex s is contained in the boundary of Δ°; it is therefore useful to define the restricted triangulation T', which omits simplices that pass through the origin. Simplices interior to facets correspond to varieties that do not intersect a generic hypersurface. With a slight abuse of language we refer to T' as "the triangulation".

The Hodge numbers of the subvarieties of X associated to simplices are given by rather simple formulae. For the divisor D_p associated to a lattice point p we find (see Appendix B): if p is a vertex of Δ°, then D_p is irreducible with h^i(O_{D_p}) = (1, 0, g(p)); if p is interior to an edge e, then D_p is irreducible with h^i(O_{D_p}) = (1, g(e), 0); and if p is interior to a 2-face f, then D_p is a disjoint union of 1 + g(f) rigid components, so h^i(O_{D_p}) = (1 + g(f), 0, 0). In particular, divisors associated to points interior to 2-faces can be reducible. A 1-simplex of T' connecting a pair of lattice points p_i, p_j corresponds to the intersection curve C_ij = D_i ∩ D_j. The Hodge numbers of such a curve are as follows: a 1-simplex interior to an edge e gives an irreducible curve of genus g(e), while a 1-simplex interior to a 2-face f gives a union of 1 + g(f) disjoint rational curves. A 2-simplex of T' containing three lattice points corresponds to the intersection of the three corresponding divisors, which consists of points. These results easily generalize to arbitrary dimension (see Appendix B); moreover, in Appendix B we give similar, albeit slightly more complicated, triangulation-dependent formulae for the remaining Hodge numbers of the divisors.

It has been shown that the Hodge numbers of the hypersurface X itself obey simple combinatorial relations as well; in particular, the rank of the Picard group satisfies Batyrev's formula h^{1,1}(X) = ℓ(Δ°) - 5 - Σ_{f ∈ F_3(Δ°)} ℓ*(f) + Σ_{f ∈ F_2(Δ°)} ℓ*(f) ℓ*(f°). We identify divisors of X with their classes; the toric divisors obey four linear relations and generate the Picard group. The divisors associated to vertices and to points interior to edges are irreducible; however, a divisor associated to a lattice point interior to a 2-face f has 1 + g(f) irreducible connected components. We denote the irreducible divisors, which generate the Picard group, collectively after a reindexing as D_1, ..., D_k, and refer to them as prime toric divisors, even though they descend from prime toric divisors of V only when X is favorable. Each prime toric divisor is associated to a lattice point, with h^i(O_{D_i}) as above. To a subset I of the prime toric divisors is associated a possibly reducible divisor, defined by D = Σ_{i ∈ I} D_i; we call such a divisor square-free. The main purpose of this work is to analyze such divisors.

The ravioli complex. Every simplex s of T' contained in the relative interior of a face of Δ° gives rise to a closed subvariety of X of complex dimension 2 - dim s. Most simplices correspond to irreducible subvarieties; however, a simplex whose minimal face is a 2-face f corresponds to a subvariety with 1 + g(f) connected irreducible components. We denote these components by a sheet index, with the intersection structure determined by the triangulation once an ordering of the sheets over each face is chosen; in the special case that minface(s) has genus zero there is a single sheet.
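The case analysis above for the Hodge numbers of prime toric divisors is entirely mechanical, and can be captured in a few lines. The following Python sketch is not from the paper: the function name and argument conventions are hypothetical, and it simply transcribes the three-case table stated above (vertex, edge interior, 2-face interior).

```python
def hodge_prime_toric(dim_min_face, dual_genus):
    """(h^0, h^1, h^2) of O_D for the divisor(s) of a lattice point p.

    dim_min_face: dimension (0, 1, or 2) of the smallest face of the
    reflexive polytope containing p; dual_genus: number of lattice
    points interior to the dual face, i.e. the genus g.
    """
    if dim_min_face == 0:       # vertex: irreducible, h^2 = g
        return (1, 0, dual_genus)
    if dim_min_face == 1:       # edge interior: irreducible, h^1 = g
        return (1, dual_genus, 0)
    if dim_min_face == 2:       # 2-face interior: 1 + g rigid pieces
        return (1 + dual_genus, 0, 0)
    raise ValueError("p must lie on a face of dimension 0, 1, or 2")
```

For instance, a vertex whose dual face has two interior lattice points gives (1, 0, 2), i.e. an irreducible divisor carrying two holomorphic 2-forms.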
We now define a CW complex G, called the ravioli complex, that accounts for this intersection structure. Recall that simplices of T' correspond to subvarieties that are possibly disconnected or reducible; the cells of G correspond to the connected, irreducible components of these subvarieties. The ravioli complex is defined so that its set of k-cells consists of the irreducible connected components of the intersections of the elements of the corresponding dimension. When every 2-face has genus zero, G is just the simplicial complex of T'; in the general case, each simplex interior to a 2-face f is replaced by 1 + g(f) disjoint copies, with the boundaries of the disjoint copies identified, so that in general G is a CW complex rather than a simplicial complex. The complex G is not necessarily uniquely specified, but the homology and cohomology it contains are, and these are readily obtained. The fact that G is not always a simplicial complex presents no difficulties for computation or visualization, and the origin of the name is clear from the figures. (The term is not standard in topology, and the symbol appearing in the name should not be confused with the polar dual of a polytope; see Appendix A for background on CW complexes.)

We associate to a divisor D = Σ_{i ∈ I} D_i a subcomplex as follows. The points of I, the cells corresponding to the divisors D_i, and the cells of the pairs and triples of divisors appearing in D may in general form a connected complex; together with the set of connecting cells they define a unique subcomplex G_D. An example is shown in the bottom image of Figure 1, where the red points correspond to the divisors of D. Notice that even in the favorable case, the Betti numbers of G_D are not necessarily equal to the Hodge numbers h^i(O_D).

Figure 1: a simplicial complex defined by a triangulation, the corresponding ravioli complex, and the subcomplex associated to a divisor. The upper figure shows two adjacent 2-faces, separated by a thick line, with their triangulations. The middle figure shows the associated ravioli complex, in a case where the genus of the face on the left is zero and the genus of the face on the right is one, so that there are two sheets over the right face. The lower figure shows the union of four irreducible divisors, associated to the points colored red.

In the ravioli complex, the subvarieties encoded by cells are as follows: 0-cells encode prime toric divisors; 1-cells correspond to intersections of pairs of divisors, which are irreducible curves; and 2-cells correspond to triple intersections of divisors. Note that a 2-cell always corresponds to a single point.

The puff complex. Let us briefly recapitulate: the sheaf cohomology of the prime toric divisors and of their intersections is fully specified once one knows the simplicial complex determined by the triangulation, together with the genera of the faces. The number g(f) records the extent to which the subvarieties corresponding to simplices contained in f are reducible. Promoting the simplicial complex to the ravioli complex encoded the information about reducibility directly in the complex.
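As a computational aside, not taken from the text: in the simplicial case (all genera zero, so the ravioli complex equals the simplicial complex of the triangulation), the subcomplex attached to a divisor is just the full subcomplex on the chosen lattice points. A minimal Python sketch, with hypothetical names and data layout (simplices as tuples of vertex labels):

```python
def full_subcomplex(simplices, chosen):
    """Cells of the complex supported entirely on the chosen points.

    simplices: iterable of tuples of vertex labels (all faces listed);
    chosen: the lattice points selected to form the divisor.
    """
    chosen = set(chosen)
    return [s for s in simplices if set(s) <= chosen]

# a triangle with all of its faces listed
triangle = [(0,), (1,), (2,), (0, 1), (1, 2), (0, 2), (0, 1, 2)]
```

Choosing the points {0, 1} keeps the two vertices and the edge between them, which is the combinatorial shadow of the sum of two divisors meeting in a curve.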
Heuristically, G can be viewed as encoding the h^0 data of the subvarieties; the next step is to account for the remaining cohomological data. In close analogy with the promotion of the simplicial complex to G, we define a complex that encodes this data. The heuristic is made precise in the following definition.

Definition (puff complex). The puff complex P is the CW complex constructed from the ravioli complex G as follows. For each vertex v and each edge e there are natural inclusions of the corresponding cells into G; the latter inclusion induces a cellular structure on the attached interior. These induce inclusions into spaces B_v and B_e, each defined by a pushout diagram, and P is in turn defined by the pushout of the disjoint unions of the B_v and the B_e over G. The property of the pushout relevant here is that the attaching map of each new cell identifies its boundary with the corresponding subcomplex of G.

Figure 2: layers of the puff complex over a single 2-face. The lower row shows the three layers of a triangulated face; the top figure gives an example, with the genera of the various faces indicated.

The subcomplex P_D associated to a divisor D is the image of G_D under the corresponding morphism. In words, we construct P from G as follows: to each vertex v we attach a bouquet of g(v) two-spheres; to each point interior to an edge e, a bouquet of g(e) circles; and over each connected component of the complex restricted to the interior of an edge, bouquets of cylinders, glued together along common points. In particular, the cylinders over a complete edge are pinched at the two vertices bounding it, forming a collection of voids.

We find it useful to divide the puff complex into layers.

Definition. The j-th layer of the puff complex is the subset resulting from replacing the bouquets at points interior to edges by their j-th circles, and replacing the interior cylinders by their j-th constituents, in the sense of the previous definition. Figure 2 contains a sketch of the different layers of the puff complex in a simple example.

The homology of P is readily obtained. Distinct connected components can be examined separately, so we may take P to be connected without loss of generality. Contributions to H_1 come from the circle bouquets over edges and from cylinders whose connected component is strictly interior to an edge; contributions to H_2 come from the bouquets of pinched cylinders over a complete edge, from the sphere bouquets at vertices, and from voids in the top layer. We will prove that these classes of contributions are in correspondence with the classes of contributions to the hypercohomology spectral sequence that computes the cohomology of O_D. In other words, the homology of the complex encodes the cohomology; we will see that the correspondence has both computational and heuristic utility.

The spectral sequence and the computation of the Hodge numbers. In this section we prove our principal result.

Theorem 1. Let D = Σ_{i ∈ I} D_i be a square-free divisor, and denote by P and P_D the puff complex and the associated subcomplex determined by T' and D, respectively. Then the Hodge numbers are given by h^i(D, O_D) = dim H_i(P_D).

In the rest of the section we prove the theorem. The method is based on examining a hypercohomology spectral sequence: we use the stratification to identify the relevant page of the sequence, and then use a somewhat indirect argument to show that the remaining differentials on that page are zero.

The hypercohomology spectral sequence. To start, let D_i be the prime toric divisors defined above; given a square-free divisor D = Σ_{i ∈ I} D_i we wish to compute the cohomology of O_D. The divisors, indexed by the set I, are dimensionally transverse in the smooth variety X, so by a proposition proved in Appendix A the generalized Mayer-Vietoris sequence

0 → O_D → ⊕_i O_{D_i} → ⊕_{i<j} O_{D_{ij}} → ⊕_{i<j<k} O_{D_{ijk}} → 0

is an exact sequence of sheaves, where D_{ij} and D_{ijk} denote the double and triple intersections. The hypercohomology spectral sequence of the complex

A• := [ ⊕_i O_{D_i} → ⊕_{i<j} O_{D_{ij}} → ⊕_{i<j<k} O_{D_{ijk}} ]

allows us to compute the cohomology of O_D. This complex of sheaves gives rise to a hypercohomology spectral sequence with differentials denoted d_r. The first page of the spectral sequence, indexed by (p, q), has entries given by the sheaf cohomology H^q of the p-th term; the second page consists of the kernels and cokernels of the induced maps. The only possibly nonzero differential on the second page is d_2 : E_2^{0,1} → E_2^{2,0}; the third and final page replaces E_2^{0,1} by ker d_2 and E_2^{2,0} by coker d_2, after which every differential is zero. The hypercohomology spectral sequence converges to the cohomology of O_D, meaning that

h^0(O_D) = dim E_2^{0,0},
h^1(O_D) = dim E_2^{1,0} + dim ker d_2,
h^2(O_D) = dim coker d_2 + dim E_2^{1,1} + dim E_2^{0,2}.

In the special case that d_2 is the zero map these expressions simplify, with dim ker d_2 = dim E_2^{0,1} and dim coker d_2 = dim E_2^{2,0}. When the dependence on the divisor is understood we suppress it; when the divisor must be specified we write d_2(D), E_2^{p,q}(D), etc.

Cohomology from stratification. We now use the stratification to compute the cohomology groups organized by the spectral sequence.

Proposition 1. The q = 0 row of the second page of the hypercohomology spectral sequence is the cohomology of the ravioli subcomplex:

dim E_2^{0,0} = dim H^0(G_D), dim E_2^{1,0} = dim H^1(G_D), dim E_2^{2,0} = dim H^2(G_D).

Note that this can be written succinctly in the form dim E_2^{p,0} = dim H^p(G_D), even though the complexes computing the two sides differ term by term.

Proof. We identify the q = 0 row of the first page of the spectral sequence with the cochain complex of G_D, as follows. Take the set of k-cells of G_D. The 0-cells are the components of the divisors D_i. The 1-cells are the connected components of the curves D_i ∩ D_j; a connected component cannot occur in two different intersections, for otherwise the component would be contained in an intersection of three or four divisors, which cannot happen since the divisors are dimensionally transverse. Finally, the 2-cells correspond to the points of triple intersections of three divisors, and a point can appear in only one such intersection. Notice that G_D is a delta complex, whose cochain complex reads

⊕ C(components of the D_i) → ⊕ C(curve components) → ⊕ C(points),

with the maps as follows: a connected component of D_i ∩ D_j receives contributions from the components of D_i and D_j containing it, with a relative sign, and a point of a triple intersection receives contributions from the curve components through it, again with signs. Now consider the complex in the q = 0 row of the first page of the spectral sequence: taking H^0 of each stratum, we see that it has the same terms, and under this identification it is easy to see that the maps are identical to the morphisms canonically induced by the generalized Mayer-Vietoris complex.

The following proposition generalizes this to the rows q = 1, 2 of the second page.

Proposition 2. dim E_2^{0,2} equals the sum of the genera of the vertices of G_D, and the q = 1 row of the second page is computed by the relative cohomology of the pairs associated to the edges of Δ° meeting G_D.

Proof. We have already established the case q = 0. For q = 2, note that the stratification gives h^2(O_{D_i}) = 0 unless D_i corresponds to a vertex of positive genus; summing over the vertices gives dim E_2^{0,2} = Σ g(v), proving the proposition in this case. The case q = 1 is established by noting that the q = 1 row of the first page of the spectral sequence, by stratification, breaks into a direct sum of complexes, summed over the edges of Δ° intersecting the complex; the complex attached to each edge is simply the relative cochain complex of the pair formed by the restriction of G_D to the edge and its boundary. Therefore the cohomology is a direct sum over edges of relative cohomology groups, which by definition equals the corresponding cellular cohomology of the layers of P_D.

Corollary 1. Given a divisor D, the dimensions dim E_2^{p,q} are computed by the homology of P_D; in particular, if d_2(D) = 0, then

h^0(O_D) = dim H_0(P_D), h^1(O_D) = dim H_1(P_D), h^2(O_D) = dim H_2(P_D).

In the rest of this section we prove the following lemma, which together with Corollary 1 proves Theorem 1.

Lemma 1. For every square-free divisor D, the map d_2(D) is the zero map.

To prove this result we first split D into divisors with disjoint support. Consider the subgraph of G_D supported on points interior to edges of Δ°, and suppose this graph has components contained in the strict interiors of edges. Let B_i be the sum of the divisors corresponding to the lattice points of the i-th component, and let A be the sum of the rest of the divisors, so that D = A + Σ_i B_i.

Lemma 2. The map d_2(A) is the zero map, and dim E_2^{0,1}(A) = 0.

Proof. The map is zero because its domain has dimension zero: dim E_2^{0,1}(A) = 0 by construction, since A contains no points interior to edges.

Lemma 3. dim H^1(O_{B_i}) is determined by the genus of the edge containing the i-th component.

Proof. A given component consists of divisors associated to lattice points interior to a single edge e of genus g(e), each with h^1(O) = g(e); the intersections, which are rational curves, are accounted for by the Mayer-Vietoris complex.

Lemma 4. Fix an index i and let C be a curve whose irreducible components are rational, any two of which intersect in at most one point; then the components of C form no nontrivial loops.

Proof. The collection of rational curves in the complex corresponds to a collection of 1-cells, each with one end at a point of B_i and the other end at a vertex of the edge containing B_i; two such curves intersect only if they share a vertex. Notice that it is not possible for them to intersect in a way that forms a nontrivial loop: a loop would require a closed chain of three or more connected components; however, since B_i lies in the strict interior of an edge, no such loop can be formed.

Lemma 5. dim E_2^{0,1}(D) = Σ_i dim E_2^{0,1}(B_i).

Proof. We prove this by induction, considering the partial sums A + B_1 + ... + B_k and showing that the dimension is additive. The statement is trivial for k = 0 by Lemma 2. For the inductive step we use the long exact sequence associated to the short exact sequence relating the structure sheaves of the partial sum, the previous partial sum, and B_k; combined with Lemmas 3 and 4, one obtains the claim.

Lemma 6. The image of d_2(D) in E_2^{2,0} is zero.

Proof. There are four cases in which an element of E_2^{2,0} can appear: G_D contains a vertex of positive genus; G_D contains an entire edge of positive genus; G_D contains at least two full sheets over a 2-face; and, finally, G_D contains a collection of full sheets that contain a void. Since by definition the domain of d_2 is built from lattice points interior to edges, and a class mapping onto any of these cases would require the edge containing those points to contain its vertices, which it does not, the map must vanish in each case.

Proof of Theorem 1. The lemmas imply that d_2(D) = 0, so that dim E_∞^{p,q} = dim E_2^{p,q}; by Propositions 1 and 2 and Corollary 1 this gives h^i(O_D) = dim H_i(P_D), proving the theorem.

Generalization. Some aspects of the computation generalize immediately to higher dimension.

Definition. Let V be a simplicial toric variety of dimension n, let X be a smooth hypersurface in V, and let D be a square-free divisor of X. The construction of the puff complex and of its subcomplex generalizes, with P and P_D determined by T' and D, respectively.
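Although the complexes of the paper are CW complexes, the homology dimensions entering Theorem 1 are ordinary Betti numbers over Q, computable from ranks of boundary matrices. The following standalone Python sketch, not from the paper and with hypothetical names, does this for an ordinary simplicial complex; the cellular case works the same way once boundary matrices are supplied.

```python
from fractions import Fraction

def _rank(mat):
    """Rank of a matrix over Q, by Gaussian elimination."""
    if not mat or not mat[0]:
        return 0
    m = [[Fraction(x) for x in row] for row in mat]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def betti(simplices):
    """Rational Betti numbers b_0, b_1, ... of a simplicial complex.

    simplices: all faces, given as tuples of comparable vertex labels.
    """
    by_dim = {}
    for s in simplices:
        by_dim.setdefault(len(s) - 1, []).append(tuple(sorted(s)))
    dims = sorted(by_dim)
    ranks = {0: 0}
    for d in dims:
        if d == 0:
            continue
        idx = {s: j for j, s in enumerate(by_dim[d - 1])}
        mat = [[0] * len(by_dim[d]) for _ in by_dim[d - 1]]
        for j, s in enumerate(by_dim[d]):
            for k in range(len(s)):          # boundary: drop k-th vertex
                mat[idx[s[:k] + s[k + 1:]]][j] = (-1) ** k
        ranks[d] = _rank(mat)
    return [len(by_dim[d]) - ranks[d] - ranks.get(d + 1, 0) for d in dims]
```

On the hollow triangle this returns [1, 1] (one component, one cycle), and filling in the 2-cell kills the cycle, mirroring how attaching cells in the puff complex changes the homology.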
Note that these constructions are immediate, and the results of stratification for the subvarieties apply. Next, the Mayer-Vietoris sequence generalizing the threefold case contains n + 1 nonzero terms, and the resulting hypercohomology spectral sequence converges to the cohomology of O_D. It is easy to see that the q = 0 row of page one of the hypercohomology spectral sequence has cohomology equal to that of the ravioli subcomplex; moreover, the analogue of Proposition 2 holds in arbitrary dimension. However, the diagonal maps generalizing d_2, of which there is now one on every page r ≥ 2, must be shown to be identically zero, as we proved in the threefold case; examining the diagonal maps directly, in the manner done above, would be somewhat involved. We defer a proper analysis of these maps and state the following conjecture.

Conjecture 1. Let D be a square-free divisor in the sense of the generalized definition. Then h^i(D, O_D) = dim H_i(P_D), except possibly for divisors obeying a rather restrictive condition.

Interpretation and example. We now briefly discuss the interpretation of our result and illustrate the utility of the formula with an example.

Contractible complexes. The formula of Theorem 1 depends only on certain topological properties of the complex P_D. If two complexes that correspond to divisors are related to one another by deformations that preserve these topological properties, the divisors have identical Hodge numbers. This leads to a useful tool for enumerating certain divisors, as we now explain. Given a triangulation and its associated simplicial complex, a divisor determines a unique subcomplex, as well as the corresponding puff complex. The number of distinct square-free divisors of a given hypersurface is exponential in the number of prime toric divisors, so the task of working out the Hodge numbers of all possible divisors appears formidable when this number is large. Fortunately, as we now explain, the theorem implies that the divisors fall into equivalence classes. Suppose we begin with a set of lattice points defining a divisor and its complex, and let us add or remove one lattice point; this operation uniquely defines a new divisor and a new complex. If the homology entering the formula is unchanged, we call the operation of changing the divisor by adding or removing a lattice point in this specified way a single contraction, and we define a contraction as an arbitrary composition of single contractions. Contraction is an equivalence relation on divisors, and all members of a contraction equivalence class have the same Hodge numbers. However, two divisors with the same Hodge numbers do not necessarily belong to the same contraction equivalence class. As an example, consider the complex of Figure 3 and the associated divisor, whose points are marked; the formula gives the Hodge numbers of the class, and we can perform a contraction, for example by first deleting one interior point and then deleting another. A more involved example is presented below.

Figure 3: an example of a contraction. The marked points correspond to divisors; a point interior to the face is first removed, and then a second point is removed.

Example. As a demonstration of the utility of the theorem, we calculate the Hodge numbers of nontrivial divisors of the hypersurface in the toric variety corresponding to a fine, star, regular triangulation of the largest polytope in the database. The polytope Δ° in this example is the convex hull of the columns of a fixed matrix of vertices.

Table 1: the dimension, the number of interior lattice points, and the genus of each face of Δ°.

The genera of the vertices, edges, and 2-faces are as given in Table 1; the nonzero lattice points not interior to facets give rise to toric divisors that intersect the hypersurface. The face structure is simple: the vertices, indexed by the corresponding columns, span the edges and triangles of the triangulation, with the genera of the faces given in the table. Consider the largest triangle, with vertices as labeled; the face and its lattice points are shown in Figure 4, along with its sub-faces.

Figure 4: the largest 2-face of the largest reflexive polytope, with a banded complex shown in red.

The Hodge numbers of the corresponding divisors are easily read off. Except at the vertex of positive genus, where the puff complex restricted to the divisor is simply the restricted complex with a single cell bouquet attached at the vertex, if we choose, excluding that point, a set of points whose corresponding complex is connected and without cycles, the corresponding divisor is rigid. The divisor corresponding to the entire face has Hodge numbers determined by the genera in the table, as does the divisor corresponding to a line of points connecting two vertices. For a nontrivial example, we can form simplicial complexes with cycles: the complex defined by taking the boundary points of the face and none of the interior points defines a divisor with nonvanishing h^1(O_D). In Figure 5 we show a choice of a more complicated subcomplex containing cycles; the theorem allows us to easily read off the Hodge numbers of the corresponding divisor. The triangulation shown is for illustrative purposes, as the triangulation actually obtained as a regular triangulation is difficult to render.

Figure 5: a facet defined by its vertices, with several complexes drawn in red. Points indicate lattice points included in the complex; red lines are paths, which may include additional lattice points; red triangles indicate entire 2-faces included in the complex. Note that the point corresponding to one vertex has been scaled for visualization purposes, and that the complex includes the entire boundary of the facet. By the theorem, the Hodge numbers of the corresponding divisors O_{D_a}, O_{D_b}, O_{D_c}, O_{D_d} can be read off directly.

As an example of contraction, one can consider complexes restricted to a single 2-face, as depicted in Figure 6, which again shows a facet defined by its vertices, with several complexes drawn in red; points indicate lattice points included in the complex, and red lines are paths that may include additional lattice points. Red triangles indicate entire 2-faces included in the complex; note that the point corresponding to one vertex has been scaled for visualization purposes, and that the complex includes the entire boundary of the facet. From the theorem we immediately read off the Hodge numbers of the corresponding divisors O_{D_a}, O_{D_b}, O_{D_c}, O_{D_d}.

Conclusions. In this work we computed the Hodge numbers h^i(D, O_D) of square-free divisors D of Calabi-Yau threefold hypersurfaces in toric varieties. Given as data the simplicial complex corresponding to a triangulation of the reflexive polytope Δ°, we constructed a CW complex, the puff complex P, that simultaneously encodes data of the dual polytope Δ; the construction involves attaching certain cells in a manner determined by the genera of the faces. The specification of a divisor D determines a subcomplex P_D, as well as an exact Mayer-Vietoris sheaf sequence, and by examining the corresponding hypercohomology spectral sequence we proved that h^i(D, O_D) = dim H_i(P_D); the left side is manifestly the dimension of a sheaf cohomology group, while the right side is the dimension of a cellular homology group. Our results are a step forward in the study of divisors of Calabi-Yau hypersurfaces: the theorem permits extremely efficient computation of the Hodge numbers of square-free divisors of threefolds, even at large Picard rank, and the conjecture extends our methods to fourfolds, enlarging the range of divisors with computable Hodge numbers.

The ultimate goal of this line of work is to determine which effective divisors of a hypersurface support Euclidean brane superpotential terms. The theorem represents significant progress toward this goal, but further advances are necessary to give a completely general answer. First, although the Hodge numbers provide essential information about the number of fermion zero modes when the Euclidean brane is smooth, there can be additional zero modes associated with singular loci. While smooth rigid divisors are prime toric divisors, a nontrivial square-free divisor that is rigid necessarily involves normal crossing singularities where its components intersect one another. One can therefore ask how to count the fermion zero modes of a Euclidean brane on a divisor with normal crossings; in some cases normal crossings have been shown to yield new fermion zero modes, so that rigidity is not a sufficient condition for a superpotential term. Moreover, there exist nontrivial smooth divisors whose Hodge numbers do not satisfy the rigidity condition, but for which a suitably magnetized Euclidean brane wrapping the divisor, with worldvolume flux lifting the zero modes we have counted, can lead to a superpotential term even though the divisor is not rigid. Finally, Donagi and Wijnholt have proposed that in general the fermion zero modes are counted by logarithmic cohomology; verifying and applying this idea is a natural task for the future. There are many possible extensions of this work: the effects of worldvolume flux could plausibly be incorporated along similar lines, while including the effects of bulk fluxes appears more challenging, and a further step would be to compute the Hodge numbers of effective divisors that are not square-free. Advances in these directions would allow a truly systematic computation of the nonperturbative superpotential for the moduli of Calabi-Yau compactifications on hypersurfaces.

Acknowledgments. We are grateful to Yuri Berest, Ralph Blumenhagen, Thomas Grimm, Jim Halverson, Tristan, John Stout, Wati Taylor, and Timo Weigand for discussions, and we thank John Stout for creating a figure. We thank the organizers of String Phenomenology at Virginia Tech for providing a stimulating environment for discussing this work, and the theory groups at MIT, Northeastern, and Stanford for their hospitality while portions of this work were carried out. This work was supported in part by an ERC grant (HiggsBndl) and by NSF grants, including an NSF RTG grant.

Appendix A: Mayer-Vietoris complexes. In this section we establish the exactness of the generalized Mayer-Vietoris sequence and give related background on spectral sequences. Given two closed subvarieties (or subschemes) Y_1, Y_2 of a variety (or scheme), the Mayer-Vietoris complex

0 → O_{Y_1 ∪ Y_2} → O_{Y_1} ⊕ O_{Y_2} → O_{Y_1 ∩ Y_2} → 0

is always an exact sequence. One can view this sequence in terms of ideals in a ring, and there are related exact sequences generalizing it beyond the case of two subvarieties, both for ideals and for their quotients. At this level of generality the resulting sequence is not often exact; we prove exactness in the several cases of interest in this paper. We start with the case of ideals. Suppose I_1, ..., I_n are ideals in a ring R. Define the ideal Mayer-Vietoris sequence as the cochain complex whose p-th term is the direct sum of the sums I_{i_0} + ... + I_{i_p} over index sets i_0 < ... < i_p, with the differential defined by alternating sums of the natural maps. One checks immediately that the composition of successive differentials vanishes, so the sequence is in fact a complex; the kernel of the first map is the intersection of the ideals, so the complex extends the familiar two-ideal sequence. The quotient complex is the cochain complex defined in a completely analogous manner, with p-th term the direct sum of the quotients R/(I_{i_0} + ... + I_{i_p}) and the differential defined by the natural quotient maps; one checks that it too is a complex. Exactness of the ideal sequence, coupled with exactness of the middle complex, implies that the quotient complex is exact precisely when the ideal complex is exact. Finally, given closed subvarieties or subschemes Y_i of a variety or scheme, we define the sheaf Mayer-Vietoris sequence in parallel, taking structure sheaves and defining the maps in the same manner as for quotients of ideals; the resulting complex has the form

0 → O_{∪ Y_i} → ⊕_i O_{Y_i} → ⊕_{i<j} O_{Y_i ∩ Y_j} → ...

Distributive lattices of ideals and Mayer-Vietoris complexes of ideals. The relationship between distributive lattices of ideals and the exactness of Mayer-Vietoris ideal sequences is explained in the work of Maszczyk, whose results we now summarize. These results show that several sets of ideals of interest form distributive lattices, so that their Mayer-Vietoris sequences are exact; this is the key technical fact that will allow us to show that certain complexes of sheaves are exact.

Fix a commutative ring R and ideals I_1, ..., I_n. This set of ideals generates a lattice of ideals: the join of two ideals is their sum, the meet of two ideals is their intersection, and the lattice is the smallest set of ideals that contains the given ones and is closed under the two operations. The lattice of ideals so generated is called distributive if for every three ideals a, b, c in the lattice one has a ∩ (b + c) = (a ∩ b) + (a ∩ c); equivalently, the lattice is distributive if for every three ideals in the lattice one has a + (b ∩ c) = (a + b) ∩ (a + c). The importance of this notion comes from the following useful characterization due to Maszczyk.

Theorem (Maszczyk). For a set of ideals I_1, ..., I_n in a ring R, the following are equivalent: the lattice of ideals generated by the I_i is distributive; the Mayer-Vietoris ideal sequence is exact (and hence the quotient complex is exact as well).

The following is a typical distributive lattice of ideals.

Example. Let R be a polynomial ring over a field. The lattice of ideals generated by a set of monomial ideals is a distributive lattice of ideals. A monomial ideal has a unique description as an irredundant intersection of monomial ideals generated by powers of the variables, and this intersection is uniquely defined by a set of subsets of the variables; sums and intersections of monomial ideals are again monomial ideals, determined by the same data.

The key example for us is a generalization of this to the case of a regular sequence. Consider the case of a local ring or a graded ring; in these cases any permutation of a regular sequence remains a regular sequence. Given a regular sequence f_1, ..., f_n, define the map sending the variable x_i to f_i. We call an ideal an f-monomial ideal if it is generated by products of the f_i; such an ideal can be written as an intersection of ideals generated by subsets of the f_i, in parallel with the monomial ideals (x_{i_1}, ..., x_{i_k}).

Theorem. Let f_1, ..., f_n be a regular sequence in the maximal ideal of a local ring R (or, in the graded case, in the irrelevant ideal). Then the lattice of ideals generated by (f_1), ..., (f_n) is exactly the set of f-monomial ideals, and this lattice of ideals is distributive.

This follows from the following more precise lemma, which we prove by induction.

Lemma. Let f_1, ..., f_n be a regular sequence as above. Then the following statements hold: (a) colon ideals of f-monomial ideals with respect to the f_i obey the same identities as in the monomial case; (b) sums and intersections of f-monomial ideals are f-monomial ideals; (c) every ideal in the lattice generated by the (f_i) is an f-monomial ideal; and (d) irredundant decompositions behave as for monomial ideals.

Proof. We prove the statements by induction; the case n = 1 is immediate. Suppose the statements are true for any smaller number of elements. For part (a), by the induction hypothesis each ideal appearing in the irredundant intersection either contains the relevant f_k or does not; the intersection of those containing it gives the right-hand side, which is clearly contained in the left-hand side. Conversely, let g be an element of the left-hand side. By the induction hypothesis there exists a decomposition of g; plugging it back into the defining formula, the part already in the right-hand side can be subtracted, obtaining an element whose product with f_k lies in the relevant intersection, and since f_k is a nonzero divisor modulo the remaining ideal, combining the induction hypothesis with part (a) shows that g is contained in the right-hand side.
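To make the distributivity statement concrete, the following standalone Python sketch, not from the paper and with hypothetical names, represents monomial ideals by the exponent tuples of their generators and checks the distributive law a ∩ (b + c) = (a ∩ b) + (a ∩ c) on an example.

```python
def divides(a, b):
    """Does the monomial with exponent tuple a divide the one with b?"""
    return all(x <= y for x, y in zip(a, b))

def minimal_gens(gens):
    """Unique minimal generating set of a monomial ideal."""
    gens = set(gens)
    return frozenset(g for g in gens
                     if not any(h != g and divides(h, g) for h in gens))

def ideal_sum(I, J):
    """I + J: union of the generators, then minimalize."""
    return minimal_gens(set(I) | set(J))

def ideal_intersection(I, J):
    """I intersect J: generated by the pairwise lcms of the generators."""
    return minimal_gens({tuple(max(x, y) for x, y in zip(f, g))
                         for f in I for g in J})

# the ideals (x), (y), (z) in k[x, y, z], as sets of exponent tuples
I, J, K = {(1, 0, 0)}, {(0, 1, 0)}, {(0, 0, 1)}
```

Since the minimal generating set of a monomial ideal is unique, equality of the returned frozensets is equality of ideals, so the distributive law can be asserted directly; here (x) ∩ ((y) + (z)) and (xy) + (xz) both minimalize to {xy, xz}.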
The remaining statements of the lemma follow by similar manipulations: in each case the inductive hypothesis gives a decomposition into f-monomial ideals, and putting the decompositions together gives the required decomposition; parts (c) and (d) then follow immediately from the inductive hypothesis and part (b).

Proof of the Theorem. The set of f-monomial ideals is plainly closed under addition of ideals, and by the Lemma it is also closed under intersection; since the lattice of ideals generated by the (f_i) contains this set, the lattice is exactly the set of f-monomial ideals. Finally, by the Lemma, the lattice is distributive.

Mayer-Vietoris sequences corresponding to divisors and curves. Recall that a divisor is called prime if it is irreducible of codimension one.

Proposition A.1. For a simplicial toric variety V and prime torus-invariant Weil divisors D_1, ..., D_n, the sheaf Mayer-Vietoris sequence is an exact sequence.

Proof. Let S be the Cox ring; each D_i corresponds to a variable x_i of S. Consider the ideal Mayer-Vietoris sequence of the ideals (x_i): it is exact because monomial ideals generate a distributive lattice, so the quotient complex is also exact. The sheafification of the quotient complex is the sheaf sequence, and since the sequence is graded it remains exact under sheafification, which is precisely the claim.

Proposition A.2. Let X be a smooth variety and D_1, ..., D_n effective divisors such that the intersection of any subset is either empty or of the expected codimension. Then the sheaf Mayer-Vietoris sequence is an exact sequence.

Proof. We prove exactness locally. Let x be a point, and suppose, discarding the divisors that do not pass through x, that each remaining divisor passes through x. Localizing the sheaf complex results in the quotient complex of the ideals (f_i) in the local ring at x, where each Cartier divisor D_i is defined locally by f_i. Note that by hypothesis the elements f_i form a regular sequence in the maximal ideal; therefore the Theorem above shows that the lattice they generate is distributive, which by Maszczyk's theorem implies that the ideal sequence is exact. This implies that the quotient complex is also exact, which implies that the original complex of sheaves is exact at x, proving the proposition.

Notice that the proof can be generalized to the case of an equidimensional, not necessarily smooth, variety with Cartier divisors whose intersections have the stated properties. There also exists a corresponding exact sequence in the case of curves, obtained simply by using the corresponding normalization, without necessitating the technology above.

Lemma A.3. Suppose X is a smooth projective variety, and assume C_1, ..., C_n are smooth irreducible curves such that the corresponding closed subscheme C is a nodal reducible curve with components C_i. Then the Mayer-Vietoris sequence of the C_i is an exact sequence.

Proof. By hypothesis, the normalization of the scheme C is the disjoint union of its smooth irreducible components. Thus the normalization induces a short exact sequence 0 → O_C → ⊕_i O_{C_i} → Q → 0, where the quotient sheaf Q is a torsion sheaf with support precisely on the nodes. Considering the sequence locally at the nodes, we thus obtain the exact sequence 0 → O_C → ⊕_i O_{C_i} → ⊕_j O_{p_j} → 0, where the natural closed immersions of the points p_j correspond to the nodes. The higher direct images of closed immersions vanish; hence we obtain the exact sequence on X, yielding exactness.

Background on spectral sequences. We highlight the essential aspects of hypercohomology spectral sequences needed to follow the computation. We begin by describing the basic usage of the two spectral sequences corresponding to a double complex; instead of going into detail about filtrations, we assume that the entries of the second and subsequent pages of the spectral sequence are vector spaces, and we describe only first-quadrant spectral sequences, which is all we need for hypercohomology.

Start with a bounded double complex of vector spaces: a collection of vector spaces K^{p,q}, vanishing outside a box, with horizontal differentials d_h and vertical differentials d_v satisfying the compatibility condition that they anticommute. More generally one could allow modules, or objects of an abelian category, in which case the maps would be morphisms in that category. There are two spectral sequences corresponding to the double complex; we describe the first, and the second is easily obtained by exchanging the roles of rows and columns. A spectral sequence is a set of pages: the r-th page is an array of vector spaces E_r^{p,q} together with maps d_r : E_r^{p,q} → E_r^{p+r, q-r+1}. The zeroth page starts with E_0^{p,q} = K^{p,q} at each spot, with map the vertical differential, and the differential at each step satisfies d_r composed with d_r equals zero. The first page is obtained from the zeroth page by setting E_1^{p,q} = ker(d_v)/im(d_v); its differential is simply the horizontal map induced by the horizontal differential. In general, given the r-th page of the spectral sequence, the next page is the array E_{r+1}^{p,q} = ker(d_r)/im(d_r). Iterating this procedure, the spectral sequence eventually converges: for a first-quadrant spectral sequence the terms of each page stabilize, and the spectral sequence converges to the cohomology of the total complex Tot(K). The total complex is the complex with term Tot^n = ⊕_{p+q=n} K^{p,q} and with the natural differential obtained by combining d_h and d_v with a sign.

The hypercohomology spectral sequence. We now explain the utility of the spectral sequence just discussed for computing sheaf cohomology. Let us fix a smooth complex projective variety X and closed subvarieties D_1, ..., D_n, and assume that the intersection of any subset is either empty or of the expected codimension, so that by Proposition A.2 the corresponding sheaf Mayer-Vietoris sequence is an exact sequence. Let B• denote the complex with the single term O_D placed in degree zero, and let A• denote the corresponding complex with terms ⊕ O_{D_i}, ⊕ O_{D_ij}, ⊕ O_{D_ijk}, beginning in degree zero. The two complexes are hence objects of the bounded derived category, and are isomorphic there. The natural global sections functor from quasicoherent sheaves to vector spaces, derived on the right and restricted to the full subcategory of bounded complexes of quasicoherent sheaves with coherent cohomology, yields the induced derived functor. Given a complex of coherent sheaves, we compute the hypercohomology groups as its higher derived functors, by taking cohomology. Namely, in order to compute the hypercohomology groups of a complex, we take a quasi-isomorphic complex of injective objects and compute the cohomology of the complex of its global sections; such a complex is often constructed as the total complex of a resolution, whose existence is proved by applying the horseshoe lemma. In particular, the hypercohomology is independent of the resolution. However, in practice injective resolutions are usually hard to find explicitly, and one often resorts to taking acyclic resolutions. We wish to compute the hypercohomology of the complex built from the O_{D_i}; in this general situation, given a scheme and a given complex, we have the following computational tool.

Lemma. There exists a spectral sequence whose first page is given by the sheaf cohomology of the terms of the complex, converging to its hypercohomology.

Appendix B: Stratification of toric hypersurfaces and Hodge numbers of strata. In this appendix we review a number of results on the stratification of toric varieties and the associated stratification of hypersurfaces. In particular, we review the algorithm that can be used to conveniently read off the Hodge numbers of toric strata. Although the techniques are known in principle, a succinct exposition, especially in the physics literature, is lacking. We refer the reader to the standard introductions to toric geometry and to the original reference for the computation of Hodge numbers of toric strata and hypersurfaces; the literature contains a number of applications close to physics.

Toric varieties and stratification. By definition, a toric variety is characterized by containing an open dense algebraic torus whose action extends to the whole variety. One can think of a toric variety as a partial compactification in which various algebraic tori are added; these can be thought of as limits, and the original torus action extends naturally to the whole variety. This structure is summarized by a combinatorial object called a fan. A fan is a collection of strongly convex rational polyhedral cones such that every face of every cone counts as a cone of the fan, and two cones intersect along a common face. Strong convexity demands that no subspace of the ambient vector space, except the origin, is contained in any cone. One can think of a cone as spanned by a finite number of rays, whose primitive generators are elements of a lattice. Finally, we mostly make the assumption that every cone is simplicial, i.e. spanned by linearly independent rays; in this case the associated toric variety has at worst orbifold singularities and is Q-factorial. Let us label the primitive generators of the rays by an index running over the rays.

The toric variety associated with a fan can be described, in analogy with complex projective space, by associating a homogeneous coordinate to each ray of the fan and forming a quotient. The data here are encoded by the fan as follows. First, the exceptional set, given by the Stanley-Reisner ideal, corresponds to collections of rays that do not share a common cone: whenever this is the case, the excluded set contains the subspace on which the associated collection of coordinates vanishes simultaneously. The abelian group by which one quotients is given by the kernel of the homomorphism sending the standard basis vectors to the ray generators in the lattice. Note that this implies that whenever there is a linear relation among the ray generators, there is an associated subgroup acting on the coordinates. In the cases of interest the group is completely described by relations like these; in general, however, it may also contain discrete factors. Note that this discussion implies that the complex dimension of the toric variety equals the real dimension of the lattice. In the following we also use the notation V(Σ) to denote the toric variety determined by the fan Σ.

The coordinates so defined parameterize an open dense algebraic torus, giving the toric variety its name; the strata that are added in turn make it a nontrivial toric variety, encoded by the fan as follows. First, we associate the open dense torus itself to the unique zero-dimensional cone, the origin. A cone is naturally associated with the homogeneous coordinates corresponding to its generating rays. Choosing a basis of lattice vectors contained in the dual cone, which sits in the dual lattice, one may take the limit in which the coordinates of the generating rays of the cone vanish; the limit set lands in an algebraic torus of dimension equal to the codimension of the cone. The definition of a fan ensures that the strata are consistently sewn together. Denoting the set of cones by Σ, we hence stratify the toric variety according to the data of the fan: in terms of homogeneous coordinates, the stratum of a cone is described by setting to zero exactly the coordinates of its generating rays, with the remaining coordinates nonvanishing as appropriate.

Figure: the fan of P^2 contains the zero-dimensional cone, three rays, and three two-dimensional cones; setting coordinates to zero cone by cone gives the strata.

Let us discuss two examples. The fan corresponding to P^1 lives in R and is composed of three cones: the origin, the ray generated by +1, and the ray generated by -1. We recover the standard presentation: the open dense C* is described by a single coordinate, and the strata corresponding to the two one-dimensional cones are simply the points 0 and infinity, respectively. Hence we recover the description of the Riemann sphere as C* with two points added. The fan corresponding to P^2, shown in the figure, gives the standard presentation of P^2, and from the fan one can directly read off the stratification into an open dense (C*)^2, three copies of C*, and three points.

The stratification data becomes important because in the bulk of the paper we are interested not so much in the geometry of toric varieties per se, but rather in the geometry of algebraic subvarieties. In the simplest setting, an algebraic subvariety is given as the vanishing locus of a section of a line bundle. For such an algebraic subvariety one easily obtains a stratification as well: the toric strata meet the subvariety transversely, and in this case we define a stratum of the subvariety, for every cone, as the intersection with the corresponding toric stratum.
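The bookkeeping of strata from cone counts is simple enough to automate. The following Python sketch is an aside, not from the text, with hypothetical names: given the number of cones of each dimension, it lists the torus orbits by complex dimension and computes the Euler characteristic, using the fact that a positive-dimensional torus has vanishing Euler characteristic.

```python
def strata_counts(cone_counts):
    """Torus orbits of each complex dimension for a complete fan.

    cone_counts[k] = number of k-dimensional cones, k = 0..n;
    the orbit of a k-cone is an algebraic torus of dimension n - k.
    """
    n = len(cone_counts) - 1
    return {n - k: cone_counts[k] for k in range(n + 1)}

def euler_characteristic(cone_counts):
    # chi of a positive-dimensional torus vanishes, so only the
    # point strata, i.e. the maximal cones, contribute.
    return cone_counts[-1]
```

For the fan of P^2, with one zero-cone, three rays, and three 2-cones, this reproduces the stratification into one (C*)^2, three C*'s, and three points, with Euler characteristic 3.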
Let us illustrate this in a simple example: a hypersurface of degree d in P^2. Consider the vanishing locus of a homogeneous polynomial of degree d in the homogeneous coordinates, and call the resulting curve C. By adjunction one finds that the hypersurface is a Riemann surface of genus g = (d - 1)(d - 2)/2. Let us see how it is stratified using the fan of P^2. First consider the lowest-dimensional strata, which correspond to the maximal cones: each of the three two-dimensional cones of the fan corresponds to a point, defined by setting two of the three homogeneous coordinates to zero. For a sufficiently generic polynomial these points never lie on the curve, so such strata contribute nothing, as indeed they are supposed to. For each of the three one-dimensional cones of the fan, the stratum is obtained by setting one of the three homogeneous coordinates to zero while forbidding the remaining two from vanishing; intersecting with the curve, we find that the intersection consists of d points for every such cone. Finally, the complex one-dimensional stratum is the Riemann surface of genus g with these 3d points excised. We now show how to reproduce these numbers combinatorially, using the Newton polytope.

Toric varieties, divisors, and polytopes. In order to describe how to determine the geometry of the strata combinatorially, we need to present the situation of interest from a slightly different perspective. Note first that the data defining the topology of the pair consists of the toric variety, equivalently its fan, and a line bundle. We write the divisor class in terms of the toric divisors corresponding to the rays, D = Σ a_i D_i. The group of holomorphic sections is given by a polytope, known as the Newton polytope, defined explicitly by the inequalities ⟨m, v_i⟩ ≥ -a_i; we may use its lattice points, which correspond to monomials, as a basis of the group of sections. This provides a convenient way to find the zeroth cohomology group: in fact, for any divisor of a line bundle, the number of lattice points of the polytope counts the global holomorphic sections. Interestingly, the polytope determines the line bundle only up to a blowdown of the toric variety. To a polytope we can associate its normal fan, which gives rise to a toric variety along with a divisor; this works as follows. To every face of the polytope we associate the cone of points whose pairing with the polytope is minimized on that face, with points in the interior corresponding to the zero cone; the dual cones so defined form a complete fan, called the normal fan, with faces of dimension k associated to cones of codimension k. In particular, the cones of highest dimension are associated with the vertices. The toric variety of the polytope then comes with a line bundle, i.e. a Cartier divisor, determined via a strongly convex support function: on each cone of maximal dimension the function is described using the dual vertex m by setting φ(v) = ⟨m, v⟩, and this also determines the function on cones of lower dimension. The divisor is determined by the numbers a_i = -φ(v_i), and we recover the basis of sections of the line bundle via the relations above, which can likewise be used to associate a linear support function to any divisor.
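As a quick check of the combinatorics, and not part of the original text: the genus of the degree-d plane curve equals the number of interior lattice points of its Newton polygon, the d-fold dilation of the standard triangle. A brute-force Python count (the helper name is hypothetical) confirms the adjunction formula.

```python
def interior_lattice_points(d):
    """Interior lattice points of the Newton polygon of a degree-d
    plane curve: the triangle with vertices (0,0), (d,0), (0,d)."""
    return sum(1 for a in range(1, d) for b in range(1, d) if a + b < d)
```

For a quartic, d = 4, this counts the three points (1,1), (1,2), (2,1), matching g = (4 - 1)(4 - 2)/2 = 3.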
support function divisor line bundle associated ample strongly convex convex different cone maximal dimension toric variety projective iff fan normal fan lattice polytope let come back example hypersurface determined may write terms toric divisors shown figure cones generated vectors newton polytope corresponding vertices integral points correspond monomials homogeneity degree using particular resolution singularities correspondence faces cones may write stratification terms faces instead cones faces dimension correspond cones dimension dimension strata note counts face corresponds cone discuss appendix determine topology strata appearing newton polytope ignored far however normal fan cases contrary simple example give rise smooth toric variety fact fan even need hence interested refinement fan order resolve hypersurface fortunately process results minor modifications stratification slight abuse notation use letter also resolved hypersurface refinement stratification associated faces becomes exceptional set refinement cone associated face every cone cone normal fan associated face corresponding stratum computing numbers strata order compute hodge numbers toric hypersurfaces generally toric strata hypersurfaces need introduce piece machinery strata appearing stratification naturally carry mixed hodge structure simple terms means numbers nonzero even see proper introduction data packed numbers also call numbers following convenient number reasons first case pure hodge structure agree sign usual hodge numbers secondly behave way topological euler characteristic unions fan simplicial cones generated rays fans simplicial give rise toric varieties singularities general orbifold singularities particular every weil divisor varieties products spaces hence knowing numbers sufficient compute hodge numbers toric hypersurface information supplied form algorithm work reviewing algorithm let first discuss toric varieties illustrate method smooth toric variety hence see direct 
consequence of this formula, combined with the stratification of a toric variety read off from its fan, the standard formula for the nontrivial Hodge numbers of a smooth toric variety: letting c_k denote the number of cones of dimension k, the e-numbers are alternating sums of the c_k. For a smooth hypersurface, the strata obey the following relations, shown by Danilov and Khovanskii, which allow us to compute the e-numbers. The first relation involves l*(F), which counts the number of lattice points in the relative interior of a face F; the sum runs over faces of the appropriate dimension contained in a given face (note that such a sum has one term per face). The remaining numbers satisfy a sum rule involving a function of l*(kF), where kF denotes the polytope found by scaling the face F by a factor k; for faces of low dimension there is a simple formula, and finally the higher Hodge numbers vanish. By subsequent application of these formulae, one can derive combinatorial formulae for the e-numbers of strata of arbitrarily high dimension. It is convenient to use l(F) to denote the total number of lattice points of F. Let us derive the e-numbers of the lowest-dimensional strata first: the zero-dimensional strata consist of points, hence we immediately find their e-numbers; furthermore, the one-dimensional strata can be written similarly, and we find expressions in terms of l* of the edges. Let us return to our example of the degree-d hypersurface and the stratification already discussed. The Newton polytope is as given above; the open dense torus gives rise to an open dense subset of the Riemann surface, with the number of points excised and the e-numbers given by the lattice-point counts. Each stratum associated with an edge consists of the appropriate number of points; hence we recover the results derived in the example above.

Hypersurfaces in toric varieties from reflexive polytopes. Let us come back to our prime interest, Calabi–Yau manifolds. Assume the dimension of the vector space containing the fan is n, so that we get a toric variety of dimension n, into which we want to embed a hypersurface of dimension n-1. In order for the hypersurface to be Calabi–Yau, its defining polynomial must be a section of the anticanonical bundle; the corresponding divisor is hence given by the sum of all toric divisors. With the notation above, we hence identify the sections with the set of lattice points of a polytope. A general section, whose zero locus defines the hypersurface, is given by a sum over these lattice points with complex constants. In general, the vertices of the polytope are not lattice points, and it is not guaranteed that a generic section defines a smooth or even irreducible hypersurface. If the vertices are contained in the lattice, we call it a lattice polytope; its polar dual is then defined, and a lattice polytope whose polar dual is also a lattice polytope is called reflexive, the two forming a reflexive pair. A necessary condition for reflexivity is that the origin is the unique interior lattice point of the polytope in question. Repeating the construction of hypersurfaces in toric varieties, Calabi–Yau hypersurfaces are naturally constructed from reflexive pairs of lattice polytopes. Starting from a lattice
polytope, we may construct its normal fan; in the case of a reflexive pair, this is equal to the fan over the faces of the dual polytope. Of course, this fan in general does not define a smooth toric variety; in fact, its cones need not even be simplicial. However, a natural maximal projective crepant partial (MPCP) desingularization can be found as follows. One uses a refinement of the fan which introduces rays generated by the lattice points of the polytope; 'crepant' (preserving the canonical class) holds because only lattice points of the polytope are employed. For a projective toric variety, finding such a refinement is equivalent to finding a fine regular star triangulation: 'fine' means that all lattice points are used, and 'star' means that every simplex contains the origin. Projectivity of the toric variety is equivalent to the fan being the normal fan of a lattice polytope; a toric variety is projective by this construction, but the same is not necessarily true for an arbitrary refinement. Triangulations whose associated fan is the normal fan of a polytope are called regular (also 'projective' or 'coherent' in the literature; see the references for details). Finally, a fine triangulation is in general not sufficient to completely resolve singularities meeting a generic hypersurface beyond low dimension. The reason is that a fine triangulation of a reflexive polytope implies that the corresponding cones over low-dimensional faces lead to no singularities, but from a certain dimension on, singularities persist even for fine triangulations of the reflexive polytope. For fourfolds, in contrast to threefolds, one must consider that faces of the polytope which are not elementary simplices lead to point-like singularities of the ambient toric variety which, even for fine triangulations, do not meet a generic hypersurface; in contrast, cones associated with lower-dimensional faces lead to singularities along curves which may meet a generic fourfold hypersurface. The simplices of the triangulation, hence the cones of the fan, cut out the relevant structure; note that one may start from a triangulation and restrict it to the faces, or simply construct the cones directly. Of course, we can always find resolutions by introducing rays generated by lattice points outside the polytope, but then the pair would no longer be reflexive.

The correspondence between faces of the polytope and faces of its dual defined above persists under the resolution induced by the refinement: each stratum of the hypersurface corresponds to a subvariety, which is changed according to the new simplices introduced. For a dual pair of faces, a simplex of dimension k, i.e. a cone of dimension k+1, corresponds to a subvariety of complementary dimension; hence, a simplex of dimension k contained in the interior of a face corresponds to a subvariety of the form described above. Note that the vertices, whose dual faces have maximal dimension, correspond to varieties which contribute to the stratification.
This persists under the resolution. A simple intersection argument shows that none of the divisors corresponding to points in the interior of a face of maximal dimension meet the hypersurface, and hence none of the strata corresponding to simplices in the interior of a face of maximal dimension intersect a smooth hypersurface. A face of maximal dimension is also called a facet. For a facet, we can find the normal vector, i.e. the dual vertex, which means there is a linear relation among the corresponding lattice points with integer coefficients. Let us assume we have refined by a point in the interior of a facet, with associated divisor D. For D to have nonzero intersection with the hypersurface, the divisors appearing with it in the relation would also have to meet it; but as they necessarily lie in different cones of the fan, the relation implies that the sum of toric divisors coming from these points, restricted to the hypersurface (the zero locus of a section of the sum of all toric divisors), vanishes. Using this argument, we find that D does not meet a generic hypersurface; correspondingly, a refinement introducing such points has no influence on the strata. The strata corresponding to simplices in the interior of a facet can be thought of as open subsets of intersections of divisors at least one of which corresponds to an interior point; hence, none of the simplices in the interior of a face of maximal dimension gives rise to a subvariety meeting the hypersurface, and correspondingly such strata do not appear in the stratification. The fact that simplices contained in faces of maximal dimension do not contribute for hypersurfaces means that we may ignore these faces when constructing the triangulation.

Using the methods explained above, one can derive combinatorial formulas for the Hodge numbers of toric hypersurfaces which do not depend on the specific triangulation chosen. For a hypersurface of dimension n-1 embedded in a toric variety of dimension n, consider the pair of reflexive polytopes of dimension n. The stratification gives the well-known formulas in terms of lattice-point counts l and l* of the pair and their faces. Note that these numbers only make sense for a smooth hypersurface, which is not guaranteed without further investigation for hypersurfaces of high dimension. Although these formulas were derived using the stratification technique, they have a straightforward explanation. In particular, the formula for the number of complex structure deformations counts the number of monomial deformations appearing in the defining equation and subtracts the dimension of the automorphism group; the last term corrects for the fact that not all deformations are realized as polynomial deformations. Similarly, the other formula counts the number of inequivalent divisors which meet the hypersurface, with a correction term taking into account divisors which become reducible. As is apparent from the formulae, exchanging the roles of the two polytopes exchanges the two Hodge numbers: mirror symmetry is realized for toric Calabi–Yau hypersurfaces in this way.

Topology of subvarieties of Calabi–Yau threefolds.
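As an illustration of the lattice-point formulas for the Hodge numbers, here is a hedged sketch for the quintic threefold in P^4 (my choice of example, not taken from the text). For this reflexive pair, the edge/dual-edge correction terms vanish, so they are omitted; all counts are done by brute-force enumeration.

```python
from itertools import product

# Sketch (quintic in P^4): evaluate the lattice-point formulas
#   h21 = l(D) - 5 - sum over facets of l*(facet)
#   h11 = l(D*) - 5 - sum over dual facets of l*(dual facet)
# where the correction terms involving edges vanish for this example.

# Lattice points of the Newton polytope D of the quintic:
# exponent vectors of degree-5 monomials in 5 variables.
D = [p for p in product(range(6), repeat=5) if sum(p) == 5]

def facet_interior(i):
    # Interior points of the facet {a_i = 0}: all other exponents positive.
    return [p for p in D if p[i] == 0 and all(p[j] > 0 for j in range(5) if j != i)]

h21 = len(D) - 5 - sum(len(facet_interior(i)) for i in range(5))

# The dual polytope D* (in centred coordinates) is cut out by <m, y> >= -1
# for the vertices m of the centred Newton polytope; enumerate its points
# by brute force over a small box.
verts_D = [(-1, -1, -1, -1)] + [tuple(4 if j == i else -1 for j in range(4))
                                for i in range(4)]
Dstar = [y for y in product(range(-2, 3), repeat=4)
         if all(sum(m[k] * y[k] for k in range(4)) >= -1 for m in verts_D)]

# The facets of D* have no interior lattice points here, so:
h11 = len(Dstar) - 5
```

Running this recovers the familiar Hodge numbers of the quintic, h11 = 1 and h21 = 101, from pure lattice-point counting.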
In this section, we describe the topology of subvarieties of hypersurfaces in toric varieties obtained by restricting toric subvarieties of the ambient space. For ease of notation, we restrict to the case of threefolds; a similar analysis may be carried out in higher dimensions. As already explained, we need not consider simplices, or strata associated with simplices, in the interiors of facets, as these do not meet a smooth hypersurface. Each simplex of the triangulation corresponds to a cone of the fan and hence to an open stratum. Depending on the location of the simplex, the defining equation of the hypersurface constrains some factors while others lie entirely in the hypersurface; for this reason, the resolution process works for the singular varieties determined above. Every face gives rise to a stratum: for each face, the stratum of the hypersurface under the resolution process described in the appendix yields a factor determined by the simplices contained in the relative interior of the face; every simplex contained in the relative interior of a face contributes, and hence contributes a stratum. Note that this factor is common to all strata originating from simplices contained in the chosen face. For a threefold, the correspondence is: vertices give divisors; strata from the interiors of edges give curves; and strata from the interiors of two-faces give collections of points. Closed subvarieties associated with a simplex are hence found by collecting the simplices attached to it and taking the disjoint union of the associated strata; as the e-numbers are additive, this provides an efficient way to find the Hodge numbers of the associated subvariety. We may neglect simplices contained in the relative interiors of faces of maximal dimension (three in this case), as these contribute no strata of the hypersurface. In the following, we explicitly write the resulting stratifications of the various subvarieties and compute their Hodge numbers. At this point, we adopt a different notation: in the rest of the appendix and in the main text of the paper we focus on threefolds and need to distinguish vertices, edges, and facets, denoted respectively; the dual faces are edges and vertices of the dual polytope, denoted consequently; we hope not to confuse the reader.

Vertices. Let us first consider the divisor associated with a lattice point which is a vertex. The dual face of the vertex contributes an open stratum; furthermore, the vertex is contained in edges whose dual faces, ending on the dual face, contribute as well, with the duals contributing curves, and finally the faces dual to the two-faces contributing points. Hence such divisors contain an irreducible hypersurface as an open dense set, compactified by these strata (note the collections of points). Collecting the strata, we find the stratification. With the stratification at hand, we can start computing Hodge numbers: first,
the first two strata contribute and we find the value of h^{0,0}; the next stratum contributes and we find h^{1,0}; finally, we compute h^{2,0} and find that the contributions can be computed as well, with a contribution for each edge emanating from the vertex; furthermore, the interior points of the dual faces connected to the vertex contribute, and the result is obtained by summing over the simplices in the interior.

Points interior to edges. Let us first consider divisors originating from points in the interiors of edges, whose dual faces are two-dimensional. The two edge-segments containing the point correspond to the neighboring lattice points, and the simplices in the interior of the dual face contribute points times tori, while the remaining faces contribute points. The open dense stratum of such divisors originates from the simplex itself and is simply the product of a curve of genus g with points excised, times an algebraic torus; it gets compactified by the remaining strata. We may think of the latter as open dense subsets of the intersections with the two neighboring divisors along the edge, partially compactified by the remaining points, which sit over the points excised from the open curve. We may hence think of such divisors as follows: they are flat fibrations over a curve of genus g whose fiber degenerates, over a number of points, into a chain determined by the number of lattice points attached to the interior of the dual face, the chain components lying in the neighboring divisors. To see in detail how this works, first note that the points excised are due to the dual face: we find strata corresponding to the simplices in the interior of the dual face, hence over these points of the base the generic fiber degenerates into a chain whose length equals the number of lattice points attached to the interior. A cartoon is shown in the figure. From this analysis of the fibration structure we obtain the expected Hodge numbers, confirmed by direct computation using the stratification explained above: in the stratification, multiplying by the torus accounts for the fiber direction, whereas the points over which the fiber is reducible correspond to chains of components touching the generic fiber over points of the genus-g curve.

Fig.: Fibration structure of the divisor associated with a lattice point in the interior of an edge of the polytope. The base is a genus-g curve; over a number of points, the generic fiber degenerates into chains of components.

The computation becomes: the lowest stratum contributes, and the highest stratum counts the interior points of the faces; none of the other Hodge numbers arise, which fits the fibration structure discussed. Finally, let us compute the remaining Hodge number, which matches the value predicted by the analysis of the fibration; the computation is carried out similarly. One may also analyze the curves which correspond to interior points: their stratification has an open stratum which is a curve of genus g with a number of points excised, the second term being due to the unique dual face attached; every face containing the edge supplies points which compactify the curve. It follows immediately that the genus fits; in fact, the union of
the strata corresponding to the simplices in the interior sits inside the curve along which the two neighboring divisors intersect, the common base of their fibrations.

Points interior to two-faces. Let us first consider a point in the interior of a two-face. An open dense subset of the associated divisor originates from the simplices containing the point; it is compactified by strata contributing points. In the stratification, the simplices in the interior contribute collections of points. The divisors considered here are in general reducible; each irreducible component is a toric variety P_star determined as follows by the stratification. Starting from the triangulation, we may construct the star of the point, i.e. the collection of simplices containing it, and from this the star fan of the toric variety P_star (see figure).

Fig.: On the left-hand side, the neighborhood of a lattice point inside a triangulated two-face; the simplices containing it (colored red) contribute to its star. The star fan is shown on the right-hand side.

It immediately follows that the e-numbers of P_star are easily recovered from the stratification. Similarly, the closed subvariety associated with an interior point is P_star times the closed subvariety associated with the dual face, which consists of points. This implies that if two or three divisors associated with points in the interior of a two-face have nonzero intersection, they intersect in a collection of disjoint points.

An example. Let us consider a slightly nontrivial example to see this machinery at work. Consider the reflexive polytope with the vertices given, together with its dual and the vertices spanning it. The relevant numbers are the interior-point counts of the faces: those of the faces dual to the vertices, of the faces dual to the edges, and finally of the edges dual to the two-faces, together with their numbers of interior points. The Hodge numbers of the corresponding mirror pair of threefolds are quickly found from these numbers; evaluating a single divisor requires a triangulation of the relevant face. The face, with its integral points and bounding edges, as well as its triangulations, is shown in the figure. Let us discuss the topology of the divisors corresponding to the lattice points of this face, which is nontrivial along the five possible triangulations. As remarked, for points sitting inside a face the Hodge numbers can depend on the triangulation; let us choose the triangulation shown in the upper left of the figure. For the divisors corresponding to the vertices, we conclude that the relevant Hodge number vanishes for all three, no matter which triangulation is chosen. Let us compute the remaining Hodge numbers for the triangulation in the upper left of the figure by evaluating the counts: note that, except for the interior points, all relevant faces are simplicial, hence all terms vanish except the ones shown, and none of the relevant faces contain interior points. Furthermore, the number of edges containing each of the three vertices in question is found directly, and the last number depends on the triangulation chosen. Let us now investigate the points in the interior. In all cases, we start with the simplices containing the point; hence we learn how to evaluate the counts for the triangulation under consideration.
Three of the simplices containing the point are contained in the face and contribute; apart from them, there is a single simplex containing it, so we conclude that the divisor is as described in the general discussion. This means we can think of it as a fibration in which, over a single point of the base, the fiber degenerates into a union of three components. For the two points in the interiors of edges: depending on the triangulation chosen, one edge connects to a single vertex inside, whereas the other connects to two; hence the counts differ accordingly. Finally, for the point in the interior of the two-face, the divisor is a toric variety which can be directly read off from the star fan; for the triangulation chosen, it is a Hirzebruch surface. A similar discussion can easily be made for the other triangulations. Consider the flop taking the triangulation in the upper left to the one in the upper right: one of the counts decreases by one, whereas another is increased by one.

Hodge numbers of toric divisors for manifolds of higher dimension. The technique used above can be used to find the topological data of toric divisors restricted to the hypersurface in any dimension. Whereas the Hodge numbers in general depend on the triangulation, one can derive a remarkably simple formula for the holomorphic Hodge numbers h^{i,0} of a smooth divisor associated with a pair of reflexive polytopes: for a lattice point in the relative interior of a face of dimension k, the h^{i,0} of the associated divisor are determined by the face and its dual face. It also holds that such divisors are connected. The formula furthermore allows us to prove the following central result of this section; note that it reduces to the corresponding relations derived for threefolds, where the threefold case covers divisors associated with vertices and divisors associated with points in the interiors of edges. Let us assume we are given a pair of reflexive polytopes and a triangulation giving rise to a smooth projective toric variety, and let D be the toric divisor associated with a lattice point contained in the relative interior of a face of dimension k; we are interested in the Hodge numbers h^{i,0} of this divisor. A toric divisor is composed of the strata associated with the cones which contain its ray; these descend to a subset of the strata of the hypersurface, and summing their e-numbers we find the Hodge numbers. First note that two cases are trivial: in the first case, the point gives rise to no divisor at all; in the second case, the face and dual face enjoy a stratification of a form in which the divisors are disconnected components that are smooth toric varieties and hence have no nontrivial h^{i,0}. Hence, we assume the remaining case in the following. Let us start by writing down the stratification of an arbitrary such divisor. Descending to the toric divisor, given as usual by a pair of dual faces, the divisor inside a face has strata contributed by the neighboring faces containing it; this can be expressed by saying that the faces contribute their toric strata. The individual terms originate from various simplices; more generally, it is enough that
the singularities miss a generic hypersurface: cones of minimal lattice volume sitting inside faces of maximal dimension need no further triangulation. The faces dual to those containing the point are the relevant ones: one particular stratum originates from the point itself, others originate from every simplex in the interior of a face containing it, and finally certain simplices give rise to points in the expression. Our main tool for deriving which strata potentially contribute is hence to evaluate the sum over simplices on one side. A first conclusion can be drawn directly: whenever the dimension is such that none of the strata contribute, one would need to count points on faces of higher dimension, and even for such faces there is a geometric reason for the vanishing: the divisor associated with a point inside a face of dimension k can be thought of as an exceptional divisor originating from a resolution, as discussed above. Correspondingly, the divisor is a fibration over a toric variety of the appropriate dimension which degenerates over various subloci; being an irreducible manifold of that dimension, its highest possible nonzero h^{i,0} is as already established. Indeed, we get a nonzero contribution exactly in this case: the stratum contributes, and we find that the sum over strata on the right-hand side runs over the simplices which contain the point (including the point itself), with sign alternating according to the dimension of the simplex in question; we may neglect the remaining simplices. The simplices involved are arranged to form a polyhedron, found by intersecting the various simplices with a small sphere centered at the point; in the alternating sum, the contributions correspond to the vertices, edges, and faces of this polyhedron, and as the polyhedron is topologically a sphere, we may write the alternating sum as an Euler characteristic. For a smooth compact manifold, the strata contribute to the second term as shown; in this case we use the same idea and proceed to show that, depending on the dimension, a definite number of strata contribute. Starting by writing every term in the sum, we find an alternating sum over the simplices containing the point within a face, weighted by the dual face. To evaluate the various contributions already found, note that the lowest sum simply gives a term relating to the Euler characteristic of a sphere. For higher values, we essentially use a similar argument: in this case, the point sits on the hyperplane defined by the face; furthermore, the face is bounded by faces of dimension greater than or equal to the relevant one, and the set of simplices connecting to the point corresponds to a triangulation of an open subset of a sphere of the appropriate dimension, whose Euler characteristic in even and odd dimensions fixes the sign. Note that points on the sphere correspond in turn to a factor in the sum; these points contribute to the computation of the Euler characteristic, and the contribution of the sum over simplices is always equal to the corresponding Euler characteristic. Using this, we are led to the result. Note that a face containing the point can appear multiple times with alternating signs in the expression; in particular, a single face can appear multiple times within a single term of the sum. Let us consider a
single face and find how often it appears and with which signs. First note that we may equally well phrase the problem in terms of the dual faces: for a given face containing the point, the dual face appears with a multiplying factor. For a fixed face of given dimension, the sum of contributions is proportional to the dual face; hence, it is given by counting the faces containing the point and contained in the fixed face. To compute this quantity, we interpret it as the Euler characteristic of a topological space as follows. Consider a sphere of the appropriate dimension centered at the point and orthogonal to the face. The faces contributing to the sum, except for the face itself, give rise to a decomposition of one closed half-sphere, whose Euler characteristic depends on the dimension: each stratum contributes to the alternating sum according to its dimension, and the sum gives the Euler characteristic. Still neglecting the face itself, each pair of faces contributes two terms which always cancel in the sum, so the sum vanishes for each such pair; hence the sum vanishes except for one term, which contributes. This completes the proof.

Computation of h^0 of a divisor. In this appendix, we give an alternative computation for the special case of divisors defined below: we compute h^0 directly in terms of counting lattice points, arriving at a result which coincides with the theorem above in this subcase. The computation provides an alternative perspective to the spectral sequence.

Preliminaries. We begin by assembling some elementary results on divisors and hypersurfaces in toric varieties. Let X be a simplicial toric variety, let D denote a divisor, and write Y for a hypersurface in X. Proposition: Serre duality gives the stated relations for D effective; let us assume D is effective. Using this, and in addition that for an effective divisor on a toric fourfold a further relation holds, using Serre duality and the long exact sequence in cohomology induced by the Koszul sequence, one immediately finds that the Hodge numbers obey the stated relations. Similarly, we establish the following. Proposition: for the space of sections, the following relations hold. The Koszul sequence reads as stated; it induces a long exact sequence in cohomology, and applying the vanishing results leads to the first equality; the second equality follows, yielding the corollary. In close parallel to this proposition, we show the following. Proposition: the following relations hold. Consider the Koszul sequence as stated; applying the long exact sequence in cohomology induced by it yields, in particular, the corollary.

Relating to toric data. We now state the condition that defines the special case treated in this appendix: a divisor corresponding to a collection of lattice points. Definition: let the points be contained in the set of lattice points of a single face; we call such a divisor single. Notice that a divisor containing several layers corresponds to a 'ravioli' complex and thus also corresponds to a
subcomplex. We examine the simplicial complex associated with such a divisor. Lemma: for a divisor with its associated simplicial complex, the claim follows from the spectral sequence associated with the generalized Koszul sequence given in the proposition of the appendix. Corollary: for a single divisor and the corresponding divisor, the claim follows. For a single divisor, we can relate its sections to the sections of the corresponding divisor. Lemma: let the divisor be single and consider the corresponding divisor; the Koszul sequence, written for a general divisor in terms of effective toric divisors, shows the claim. By Serre duality, to establish the equation we need to show a vanishing; consider the Koszul sequence, or equivalently show the statement using the long exact sequence induced by it. The lemma follows upon using the previous results; thus it is proved. Corollary: for a single divisor and the corresponding divisor, the analogous statement holds.

Computation of h^0 for a single divisor. Thus equipped, we can calculate h^0 for an arbitrary single divisor; we first establish how to compute the sections. Proposition: let X be a toric variety with corresponding fan and toric divisors, and define the polyhedra P_D as usual; the proof is as given in the references. For a single divisor, we have the following. Lemma: let the divisor be single with corresponding divisor, and let the vertices, complete edges, and complete faces included be counted among the lattice points. Proof: sections are computed by counting suitable lattice points. Consider a single divisor specified by a set of points, and label the points of the set. First note that the divisor is effective by assumption, and that the origin corresponds to a global section counted among the lattice points. To count the additional sections, we include the points of the set one by one, checking how the number of sections changes; in other words, since by assumption the points are contained in a single face, the last sum has only one term. We find it useful to write the count in a form that anticipates the result for a completely general single divisor: the divisor corresponding to a single lattice point equals a toric divisor, and the number of lattice points of its polyhedron equals the number of lattice points of the corresponding polytope. On the other hand, the number of lattice points is found as follows: the points obeying the definition are the lattice points of the face dual to the point, and we need to count the points which also satisfy the remaining conditions. There are three types of points to include in the set, in this order: vertices, points in the interiors of edges, and points in the interiors of faces. First consider the case of a vertex: we need to solve the equation that defines the facet; however, a point on the boundary of the dual face as defined violates the condition, and such points are not in the interior of the dual face; therefore the count is as stated. Next, let the point correspond to a point internal to an edge: the condition is violated unless the entire edge, including the vertices bounding it, is included; since this implies the claim, the divisor corresponding to a complete
edge, with its vertices, contributes in a similar manner. We find that including points internal to the set contributes for every point of the face included. Corollary: from the lemma, we deduce the corollary for a single divisor.
Learning Rational Wavelet Transform in Lifting Framework

Naushad Ansari, Student Member, IEEE, and Anubha Gupta, Senior Member, IEEE

Abstract—Transform learning is being extensively applied in several applications because of its ability to adapt to a class of signals of interest. Often, the transform is learned using a large amount of training data, while only limited data may be available in many applications. Motivated by this, we propose wavelet transform learning in the lifting framework for a given signal. There are significant contributions of this work: the existing theory of the lifting framework for dyadic wavelets is extended to generic rational wavelet design, with the dyadic wavelet as a special case; the proposed work allows one to learn a rational wavelet transform from a given signal and does not require large training data, since the design is matched to the given signal. The proposed methodology is called rational wavelet transform learning in the lifting framework. The proposed method inherits the advantages of lifting: the learned rational wavelet transform is always invertible, the method is modular, and the corresponding system can also incorporate nonlinear filters, if required. This may enhance the use of the RWT in applications, which has been restricted so far. The learned transforms are observed to perform better compared to standard wavelet transforms in applications such as compressed-sensing-based signal reconstruction.

Index Terms—Transform learning, rational wavelet, lifting framework, wavelet.

I. INTRODUCTION

Transform learning is currently an active research area and has been explored in several applications including denoising, compressed sensing of magnetic resonance images (MRI), etc. Transform learning has the advantage that it adapts to the class of signals of interest and is often observed to perform better than existing sparsifying transforms such as total variation, the discrete cosine transform (DCT), and the discrete wavelet transform (DWT) in the said applications. In general, transform learning is posed as an optimization problem satisfying constraints specific to the application; transform-domain sparsity of signals is a widely used constraint, along with additional constraints on the transform learned, say, minimization of the Frobenius norm of the transform. The requirement of jointly learning the transform basis and the transform-domain signal under these constraints renders the optimization problem without a closed-form solution; thus, in general, such problems are solved using greedy algorithms. The large number of variables to be learned, i.e., the transform as well as the transform
coefficients along with the solution, makes it computationally expensive.

Naushad Ansari and Anubha Gupta are with SBILab, Deptt. of ECE, IIIT-Delhi, India (e-mails: naushada, anubha). Naushad Ansari was financially supported by CSIR (Council of Scientific and Industrial Research), Govt. of India, in this research work.

Recently, deep learning and convolutional neural network (CNN) based approaches have been gaining momentum and have been used in several applications. In general, these CNN-based methods require a large amount of training data for learning. Hence, in applications where only single snapshots of signals, such as speech, music, or electrocardiogram (ECG) signals, are available, the large amount of training data required for learning the transform is absent, and one uses existing transforms that are signal independent. This motivates us to look for a strategy to learn a transform in such applications. Among the existing transforms, although the Fourier transform and the DCT find use in many applications, the DWT provides an efficient representation for a variety of signals. This efficient signal representation stems from the fact that the DWT tends to capture signal information in a few significant coefficients. Owing to this advantage, wavelets have been applied successfully in many applications including compression, denoising, biomedical imaging, texture analysis, pattern recognition, etc. In addition, wavelet analysis provides the option to choose among existing bases or to design a new basis. This motivates one to learn a basis from the given signal of interest that may perform better than a fixed basis in a particular application. Since the translates of the associated wavelet filters form a basis of the functional space of square-summable sequences, wavelet transform learning implies learning the wavelet filter coefficients; this reduces the number of parameters required for the learned wavelet compared to traditional transform learning. The requirement of learning fewer coefficients also allows one to learn the basis from a short single snapshot of the signal or from small training data, which motivates us to explore wavelet transform learning with small data. In this work, we also show that a closed-form solution exists for learning the wavelet transform, leading to a fast implementation of the proposed method without the need to look for a greedy solution. Note that most applications of the DWT use the dyadic wavelet transform, i.e., the wavelet transform implemented via a two-channel
filterbank with downsampling by two in each branch. In the frequency domain, this process is equivalent to the decomposition of the signal spectrum into two uniform frequency bands. The M-band wavelet transform introduces flexibility in analysis as it decomposes a signal into M uniform frequency bands. However, some applications, such as speech and audio signal processing, may require non-uniform frequency band decomposition, where the rational wavelet transform (RWT) can prove helpful. In applications requiring such partitioning of the signal spectrum, the decimation factors of the corresponding rational filterbank (RFB) can be different in each subband and are rational numbers. The RWT has been used in several applications. For example, the RWT has been applied to phonetic classification; it has been used in synthesizing a multispectral image by merging a SPOT panchromatic image with a Landsat Thematic Mapper multispectral image; wavelet-shrinkage-based denoising has been presented using a signal-independent rational filterbank design; the RWT has been used for extracting features of images in click-fraud detection; and the rational orthogonal wavelet transform has been used to design an optimum receiver based on a broadband Doppler compensation structure for broadband passive sonar detection. A number of RWT designs have been proposed in the literature, including FIR (finite impulse response) orthonormal rational filterbank designs, overcomplete FIR RWT designs, IIR (infinite impulse response) rational filterbank designs, biorthogonal FIR rational filterbank designs, frequency-response-masking-based designs of rational FIR filterbanks, and complex rational filterbank designs. However, so far, RWTs have been designed and used in applications to meet certain fixed requirements in the frequency domain instead of learning the transform from the given signal of interest; for example, the above designs are all signal independent. Hence, the concept of transform learning from a given signal has not been used so far in learning rational wavelets. Lifting has been shown to be a simple yet powerful tool for custom wavelet design. Apart from custom wavelet learning, lifting provides several advantages: wavelets are designed in the spatial domain; the designed wavelet transform is always invertible; the design is modular; and the design is DSP (digital signal processing) hardware friendly from the implementation viewpoint. However, the framework developed so far has been used for custom dyadic and M-band wavelets and has not been used to learn the rational wavelet
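A rational subband branch can be realized by combining integer resamplers. A hedged sketch (the identity filter, the branch ordering, and the (p, q) = (2, 3) choice are illustrative assumptions, not taken from the paper): upsample by p, filter, then downsample by q, so that the branch carries roughly a p/q fraction of the input rate.

```python
# Sketch of a rationally decimated analysis branch: upsample by p, filter,
# downsample by q.  The filter h is a placeholder supplied by the caller.

def upsample(x, p):
    y = [0.0] * (len(x) * p)
    y[::p] = x                      # insert p - 1 zeros between samples
    return y

def downsample(x, q):
    return x[::q]                   # keep every q-th sample

def convolve(x, h):
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def rational_branch(x, h, p, q):
    """One analysis branch of a rational filterbank (rate changed by p/q)."""
    return downsample(convolve(upsample(x, p), h), q)
```

For example, with (p, q) = (2, 3), a length-12 input yields 8 output samples, i.e., a 2/3 rate change.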
transform, to the best of our knowledge. Moreover, the existing architecture of the lifting framework cannot be extended directly to the rational filterbank structure owing to the different rates in the subband branches, as discussed later. Note that the lifting framework can help in learning rational wavelets from given signals in a simple and modular fashion that is also easy to implement in hardware; this may also lead to enhanced use of rational wavelet transforms in applications, similar to dyadic wavelets, which has been restricted so far. Motivated by the success of transform learning in applications, the flexibility of the rational wavelet transform with respect to signal spectral splitting, and the advantages of lifting in learning custom-designed wavelets, we propose to learn the rational wavelet transform from a given signal using the lifting framework. The salient contributions of this work are as follows: the theory of lifting is extended from dyadic wavelets to rational wavelets, with the dyadic wavelet as a special case, and the concept of rate converters is introduced in the predict and update stages to handle variable subband sample rates; a theory is proposed to learn the rational wavelet transform from a given signal; rational wavelets with any decimation ratio can be designed, and FIR analysis and synthesis filters are learned that can be easily implemented in hardware; a closed-form solution is presented for learning the rational wavelet, so no greedy solution is required, making it computationally efficient; the proposed transform can be learned from a short snapshot of a single signal and hence extends the use of transform learning, which otherwise requires large training data, to small data snapshots; and the utility is demonstrated in the application of compressed-sensing-based reconstruction, where the learned transforms are observed to perform better than existing dyadic wavelet transforms.

The paper is organized as follows. Section II briefly describes the theory of lifting for the corresponding dyadic wavelet system and the theory of rational wavelets. Section III presents the proposed theory of learning, with learned examples. Section IV presents simulation results on the application of compressive-sensing-based signal reconstruction. Conclusions are presented in Section V. Notations: scalars are represented by lower-case letters; vectors and matrices are represented by bold lower-case and bold upper-case letters, respectively.

II. BRIEF BACKGROUND

In this section, we provide brief reviews of the theory of lifting for the dyadic wavelet
structure, the rational wavelet system, and the polyphase decomposition theory required for the explanation of the proposed work.

A. Theory of Lifting for the Dyadic Wavelet

The general dyadic wavelet structure is shown in the figure, with analysis lowpass and highpass filters and synthesis lowpass and highpass filters, respectively. The lifting technique consists of either factoring existing wavelet filters into a finite sequence of smaller filtering steps or constructing new customized wavelets from existing wavelets. The general lifting structure has three stages: split, predict, and update (see figure). The split stage splits the input signal into two disjoint sets of even- and odd-indexed samples (see figure); the original input signal can be recovered fully by interlacing the even- and odd-indexed sample streams. The wavelet transform associated with the corresponding filterbank is also called the lazy wavelet transform; it is obtained from the standard dyadic structure by an appropriate choice of the analysis and synthesis filters. In the predict stage, one of the two disjoint sets of samples is predicted from the other set. For example, the odd samples can be predicted from the neighboring even samples using a predictor filter; the predicted samples are subtracted from the actual samples to calculate the prediction error. This step is equivalent to applying a highpass filter on the input signal. This stage modifies the analysis highpass and synthesis lowpass filters, without altering the other two filters, according to the following relations, which split the input signal's frequency bands.

In general, the i-th analysis branch of a rational structure is as shown in the figure, with an analysis filter, an upsampling factor, and a downsampling factor; the downsampling ratio of the branch is equal to their ratio. At the synthesis end, the order of the downsampler and upsampler is reversed. (Fig.: The i-th branch of a rationally decimated analysis filterbank.)

In the update stage, the other disjoint sample set is updated using the predicted signal obtained in the previous step via filtering, as shown. In general, this step is equivalent to applying a lowpass filter on the signal, and it modifies the analysis lowpass and synthesis highpass filters according to the following relations. One of the major advantages of the lifting structure is that every stage, predict or update, is invertible; hence, perfect reconstruction is guaranteed at every step. Also, the lifting structure is modular, and filters can be incorporated with ease. (Fig.: Two-channel
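The split/predict/update stages and their step-by-step invertibility can be sketched with the simplest possible (Haar-like) predictor and updater; these are my illustrative choices, not the learned filters of the paper.

```python
# Minimal lifting sketch: split -> predict -> update, and the exact inverse
# obtained by undoing the steps in reverse order.  Predictor: each odd sample
# is predicted by its even neighbor; updater: half the detail is fed back.

def lift_forward(x):
    even, odd = x[0::2], x[1::2]                        # split
    detail = [o - e for o, e in zip(odd, even)]         # predict (highpass)
    approx = [e + d / 2 for e, d in zip(even, detail)]  # update (lowpass)
    return approx, detail

def lift_inverse(approx, detail):
    even = [a - d / 2 for a, d in zip(approx, detail)]  # undo update
    odd = [d + e for d, e in zip(detail, even)]         # undo predict
    x = [0.0] * (2 * len(even))
    x[0::2], x[1::2] = even, odd                        # merge (undo split)
    return x
```

Because each stage only adds or subtracts a filtered copy of the other stream, undoing the stages in reverse order recovers the input exactly, regardless of the predictor and updater chosen; this is the perfect-reconstruction property emphasized above.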
biorthogonal wavelet system fig wavelet structure even split combine samples two input streams rational filterbank said critically sampled following relation satisfied fig general rational wavelet structure fig stages lifting split predict update note throughout paper consider equal especially general given wavelet system shown converted equivalent rational wavelet structure downsampling ratios two branches note relatively prime also critically sampled rational wavelet system example analysis filters synthesis filters shown combined using following equations rational wavelet equivalent structure wavelet system integral downsampling ratio shown fig whereas rational wavelet system rational ratios allows decomposition corresponding analysis synthesis lowpass filters equivalent rational wavelet structure similarly rest filters sides combined using following equations polyphase representation perfect reconstruction polyphase representation filters helpful filterbank analysis design consider critically sampled filterbank shown analysis filter written using polyphase representation synthesis filter written using polyphase representation iii roposed earning ethod ignal atched ational wavelet section present proposed method learning rational wavelet system using lifting framework first propose extension dyadic lazy wavelet transform lazy wavelet find equivalent rational lazy filterbank structures used proposed work rational lazy wavelet system explained earlier lazy wavelet system divides input signal two disjoint signals similarly analysis side lazy wavelet system divides input signal disjoint sets data samples given synthesis end disjoint sample sets combined interlaced reconstruct signal output mband lazy wavelet designed following choice analysis synthesis filters filterbank equivalently drawn using polyphase matrices shown fig wavelet structure order obtain corresponding rational lazy wavelet system dilation factor band lazy wavelet use obtain lowpass analysis 
synthesis filters similarly use obtain corresponding highpass analysis synthesis filters rational lazy wavelet relation polyphase matrices constant constant delay corresponding higpass filters analysis synthesis ends yields condition perfect reconstruction stated filters form rational lazy wavelet system equivalent lazy wavelet transform learn signalmatched rational wavelet system start rational lazy wavelet provides initial filters filters updated according signal characteristics obtain rational wavelet system analysis highpass synthesis lowpass filters updated predict stage whereas analysis lowpass block block kth block block fig analysis side rational lazy wavelet block consists samples input signal divided synthesis highpass filters updated update stage stages described following subsections predict stage discussed section lazy wavelet system divides input signal two disjoint sample sets wherein one set required predicted using set samples conventional lifting framework integer downsampling ratio branches refer fig output sample rate equal however rational wavelet system output sample rate two branches unequal hence predict polynomial branch conventional dyadic design used example note rational wavelet system higher lower rate branch samples used prediction downsampled upsampled factor equal rate lower higher predicted branch samples defined rate predicting branch samples rate predicted branch samples propose predict lower branch samples help upper branch samples input signal divided two disjoint sets label outputs respectively length input signal without loss generality assumed multiple thus given input signal divided blocks size subband output rational lazy wavelet system first samples every block move upper branch block next samples move lower branch block words rate upper branch output samples per block rate lower branch output samples per block shows blocks outputs explicitly motivates introduce concept rate converter equals output sample rate upper 
predicting branch lower predicted branch enable predict branch design words output upper branch upsampled downsampled match rate lower branch output noted downsampler upsampler connected consecutively since upsampler introduces spectral images frequency domain generally preceded filter also known interpolator filter hand downsampler stretches frequency spectrum signal followed filter known filter conditions imply rate converter required downsampling factor polynomial incorporated acts filter upsampler time filter downsampler structure polynomial placement polynomial shown fig call complete branch rate converter polynomial predict rate converter definition predict rate converter combination polynomial preceded upsampler followed downsampler shown predict rate converter new fig illustration predict rate converter predict rate converter choice lead appropriate repetition drop samples total number samples block contains equal number samples next present three subsections structure predict stage filter learn given signal update filters corresponding rational filterbank using learned structure predict stage filter outputs two analysis filterbank branches equal lower branch samples predicted upper branch samples help predict stage filter filter introduced polynomial shown good prediction current sample input signal contained predicted past future samples block preceding samples block future samples block evident thus structure predict polynomial chosen appropriately presented theorem composite predict branch new block size rate hence predicts providing block prediction error given fig predict rate converter theorem predict stage filter ensures every sample block branch predicted block samples proof discussed section passing input signal lazy filterbank analysis side obtain approximate detail coefficients respectively form blocks shown block approximate detail signals given prediction first upper branch signal passed upsampler polynomial leads signal shown contains every 
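The block-wise routing that underlies the rational lazy wavelet (from every block of p + q input samples, the first p go to the upper branch and the remaining q to the lower branch) can be sketched directly. The names `p` and `q` follow the decimation ratios in the text; the assumption that the input length is a multiple of p + q mirrors the paper's setup:

```python
import numpy as np

def rational_lazy_split(x, p, q):
    """Rational lazy wavelet, analysis side: route the first p samples
    of every (p + q)-sample block to the upper branch and the last q
    samples to the lower branch. len(x) is assumed to be a multiple
    of p + q, as in the paper."""
    blocks = np.asarray(x).reshape(-1, p + q)
    return blocks[:, :p].ravel(), blocks[:, p:].ravel()

def rational_lazy_merge(upper, lower, p, q):
    """Synthesis side: interlace the two branch outputs block by block,
    recovering the input exactly (the lazy wavelet is perfectly
    reconstructing by construction)."""
    n = upper.size // p
    blocks = np.empty((n, p + q), dtype=np.asarray(upper).dtype)
    blocks[:, :p] = np.reshape(upper, (n, p))
    blocks[:, p:] = np.reshape(lower, (n, q))
    return blocks.ravel()
```

For example, with p = 3 and q = 2 the upper branch carries 3 samples per block and the lower branch 2, matching the per-block output rates described above.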
element repeated number times mathematically block signal given denotes floor function predict filter defined product two polynomials applied signal intuitively first polynomial position block block since every block contains number elements second polynomial chooses two consecutive samples every sample repeated times explained earlier thus second term help choosing either elements block future samples choosing one element block one block mathematically seen passing predict filter block signal obtained signal passed downsampler resulting block signal dnew noted first element block sample predicted elements blocks respectively also past future samples last element block predicted elements block iii clear elements block predicted block elements fact elements block denotes ceil function predicted using past future samples block elements rest elements predicted elements block proves name modified rate converter branch incorporating polynomial shown composite predict branch note choice polynomial provides best possible generic solution prediction using nearest neighbors different values example first polynomial omitted one may note samples predicted far away past samples far away future samples depending second polynomial thus although chosen many ways choose use polynomial provided theorem predict filter easily extended obtain filter even example given choose four consecutive samples immediate neighboring blocks general even length filter given relation gnew estimation predict stage filter given signal order estimate predict stage filter consider prediction error dnew shown dnew minimize using least squares criterion yielding solution min dnew length difference signal dnew column vector elements polynomial equation solved learn predict stage filter update rfb filters using learned dyadic wavelet design using lifting discussed section predict polynomial used update analysis highpass synthesis lowpass filters using however require derive similar equations rational 
filterbank present work let look lemma helpful defining equations lemma structure containing filter followed downsampler preceded upsampler fig replaced equivalent filter fig given proof refer section proof refer since relatively prime corresponding downsampler upsampler swapped simplify structure using noble identities simplification obtain structure two downsampler upsampler swap obtain structure part red dotted rectangle figure replaced equivalent filter using given wqm considering structure signals dnew written dnew relation equivalent applying new filter gnew lower branch rational wavelet system given gnew exp wqm wqm proves fig filter structure equivalent filter next present provides structure updated analysis highpass filter gnew using rfb theorem analysis highpass filter rational filterbank updated using predict polynomial used composite predict branch via following next synthesis lowpass filter updated flnew follows first analysis rfb containing gnew converted equivalent analysis filterbank structure shown using lowpass filter transformed upper filters band analysis filters highpass filter gnew rational wavelet transforms lower filters band analysis filterbank next polyphase matrix rnew obtained using using rnew obtain updated synthesis filters uniformly decimated filterbank synthesis filters lower filters unchanged upper swapping new new new new new new fig illustration proof structures equivalent filters updated predict branch using filters obtain flnew update stage adding signal shown new update rate converter subsection present structure update branch used lifting structure rfb present estimation update branch polynomial given signal theorem update corresponding filters rfb structure update branch predict stage used upper branch samples predict lower branch samples update stage update upper branch samples using lower branch samples since output sample rate two branches unequal require downsample lower branch samples factor given fig illustration update 
rate converter explained earlier upsampler required followed filter downsampler preceded filter use filter accomplishes placement shown similar predict stage call update branch composite update branch includes update rate converter update polynomial summer new composite update branch fig update stage definition update rate converter combination polynomial preceded upsampler followed downsampler shown similar structure predict stage filter structure update stage filter also chosen carefully elements upper branch samples updated nearest neighbors present theorem structure update stage filter ensures theorem update stage filter ensures every sample block branch updated block samples proof passing signal lazy wavelet blocks output signal formed described section block approximation detail coefficients given superscript denoted block update first detail coefficients passed upsampler polynomial leads signal contain every element repeated times mathematically block signal given update filter applied signal unlike predict stage require advancement block requires updated past future samples block contained blocks fig hence requires one polynomial chooses two consecutive samples every sample repeated times explained earlier passing update stage filter obtain signal downsampled factor resulting block given signal helps update signal upper branch note first term updated elements blocks respectively also past future samples last term updated elements block detail signal iii clear elements block updated block elements proves similar predict stage polynomial choice polynomial provides best possible generic solution update using nearest neighbors different values thus although chosen many ways choose use polynomial provided theorem update filter easily extended obtain even length filter example given choose four consecutive samples immediate neighboring blocks general even length filter defined following relation length update stage filter assumed even learning update stage filter 
given signal order estimate update stage filter consider updated signal anew shown anew passing approximate coefficients upsampler obtain multiple anew otherwise signal passed synthesis lowpass filter followed downsampler shown obtain xrl lfl xrl reconstructed signal lowpass branch synthesis side length input signal lfl length synthesis lowpass filter assuming input signals rich low frequency energy input signal moves lowpass branch hence signal xrl assumed close approximation input signal correspondingly following optimization problem solved learn update stage filter min xrl column vector coefficients polynomial equation observed signal xrl written terms update stage filter thus equation solved using criterion learn update stage filter update rfb filters using estimated similar predict stage discussed earlier propose equation update analysis lowpass filter using update stage polynomial rational filterbank theorem analysis lowpass filter rational filterbank updated using update polynomial used composite update branch via following relation gnew wqm new new new swapping glnew new new fig illustration proof structures equivalent proof refer following similar procedure theorem update structure equivalently converted filter given wqm considering structure signals anew written anew signal equivalent passing signal equivalent low pass filter gnew rfb given gnew wqm proves theorem similar predict stage synthesis highpass filter fhnew learned using updated analysis lowpass filter gnew polyphase matrices equations although presented specific case work decimation ratio proposed method general presents rational wavelet design theory generalized rational factors note resultant filters rational filterbank learned given signal updated based predict update stage filters learned signal using respectively also proposed method solution directly solved using least squares criterion design examples present examples corresponding rational filterbank learned proposed method presents 
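Both the predict-stage fit (minimising the prediction error d_new) and the update-stage fit (minimising the gap between the input and the lowpass reconstruction x_rl) are linear least-squares problems with closed-form solutions, which is why no greedy search is needed. The sketch below shows such a fit generically; the plain sliding-window regression matrix and the function names are illustrative, not the paper's exact formulation:

```python
import numpy as np

def learn_filter_lstsq(pred_samples, target, length):
    """Least-squares estimate of a length-`length` lifting filter.

    Builds a causal sliding-window regression matrix A from the
    predicting branch's (already rate-converted) samples and solves
    min_t ||target - A t||_2 in closed form. This mirrors the
    min ||d_new|| style criteria in the text, but the exact matrix
    construction here is a generic illustration.
    """
    n = len(target)
    A = np.zeros((n, length))
    for i in range(n):
        for j in range(length):
            k = i - j
            if 0 <= k < len(pred_samples):
                A[i, j] = pred_samples[k]
    t, *_ = np.linalg.lstsq(A, target, rcond=None)
    return t
```

When the target really is a filtered version of the predicting branch, the fit recovers the filter exactly; otherwise it returns the minimum-error coefficients in one linear solve.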
parameters used learning following sampling rates two branches first row table provides values second fifth row presents lazy wavelet corresponding rational filterbank structure learning initialized sixth eighth row represents polynomial respectively seventh ninth row represents structure predict update polynomial respectively parameters structure learn matched given signals interest consider four signals different types ecg signal speech signal two music signals named signals shown learn sampling rates two branches matched ecg speech signals respectively since learn lifting framework always satisfies condition learned wavelet system achieves nmse normalized mean square error order presents coefficients predict polynomial update polynomial synthesis filters learned proposed method shows frequency response filters associated learned pplication ompressed ensing based reconstruction section explore performance learned applications compressed sensing magnitude magitude magnitude magnitude sample sample music sample music ecg signal fig signals used experiments sample speech signal table illustration learned based theory developed sampling rate two branches lazy lazy lazy lazy lpf lpf hpf lpf ecg hpf lpf speech fig frequency response synthesis filters presented hpf ecg hpf speech table coefficients predict polynomial update polynomial synthesis filters learned different sampling rates sampling frequency signal khz number samples signal used experiments sampling rate signal filter coefficients two branches polynomial ecg speech based reconstruction signals compressed sensing problem aims recover full signal small number linear measurement mathematically problem modeled original signal size compressively measured size measurement matrix size full signal reconstructed compressive measurements solving optimization problem signal sparsity transform domain prior wavelets extensively used sparsifying transforms regularized linear least square solved signal reconstruction min 
subject represents wavelet transform wavelet transform problem known basis pursuit used solver solve problem full signal reconstructed presents reconstruction performance different sampling rates four signals shown reconstruction performance measured via psnr peak signal noise ratio given max psnr reconstructed signal measurement matrix gaussian sampling ratio varied difference sampling threelevel wavelet transform decomposition used experiments results averaged independent trials following sampling rate considered represented respectively original signal available compressed sensing application proposed method requires signal learning matched rational wavelet thus propose sample data fully sampling ratio learn matched wavelet next apply learned matched wavelet reconstruction rest data sampled lower compressive sensing ratio one may also use another approach learn matched wavelet application however far limited special case dyadic matched wavelet explored extension application rational wavelet transform learning future work reconstruction performance compared standard orthogonal daubechies wavelets standard wavelets labeled respectively performance also compared overcomplete rational wavelets designed fair comparison consider sampling rate low frequency branch overcomplete rational wavelets used proposed use plete rational wavelets sampling rates represented table iii observed performs best among existing used also overcomplete rational wavelet performs better existing wavelets ecg signal sampling ratios less perform comparable inferior performance existing wavelets music speech signals hand proposed perform better improvement upto psnr comparison existing music signals particularly performs better signal sampling ratios performs better signal sampling ratio beyond performs better ecg signal existing wavelet performs better wavelets higher sampling ratios sampling ratio overcomplete rational wavelet outperforms existing proposed wavelets sampling ratio performs 
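The PSNR figure of merit used to score the reconstructions can be computed as below. The peak is taken here as the maximum absolute value of the reference signal, which is one common convention; the paper's exact normalisation may differ in detail:

```python
import numpy as np

def psnr(x, x_hat):
    """Peak signal-to-noise ratio in dB between a reference signal x
    and its reconstruction x_hat. The peak is max(|x|); mse == 0
    (perfect reconstruction) maps to +inf."""
    x = np.asarray(x, dtype=float)
    x_hat = np.asarray(x_hat, dtype=float)
    mse = np.mean((x - x_hat) ** 2)
    if mse == 0:
        return np.inf
    return 10.0 * np.log10(np.max(np.abs(x)) ** 2 / mse)
```

Higher values indicate a reconstruction closer to the original, which is why the comparisons in the tables report PSNR averaged over independent trials.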
better existing well overcomplete rational wavelets improvement upto existing wavelets upto overcomplete rational wavelets similarly case speech signal performs better wavelets sampling ratio sampling ratio outperforms existing well overcomplete rational improvement upto note higher sampling ratios psnr reconstructed ecg speech signals high around respectively different wavelets hence reconstructed signal appears almost similar original signal quality reconstructed signal deteriorates decreasing sampling ratios proposed performs best much improvement overall performance rational matched wavelets superior lower sampling ratios compressive sensing application paper explored problem choosing optimal sampling rate learned wavelet particular signal particular application remains open problem explored future onclusion theory learning rational wavelet transform lifting framework namely method presented existing theory lifting framework extended dyadic rational wavelets critically sampled rational matched wavelet filterbank designed general rational sampling ratios concept rate converters introduced handle variable data rate subbands learned rational filterbank inherits advantages lifting framework learned analysis synthesis filters fir easily implementable hardware thus making rwt easily usable applications closed form solution presented learning rational wavelet thus greedy solution required making computationally efficient proposed transform learned short snapshot single signal hence extends use transform learning requirement large training data small data snapshots proof concept learned mrwtl applied reconstruction signals observed perform better compared existing wavelets although learned performs better application known apriori sampling ratios two branches rational filterbank optimal given signal particular application leave open problem future work table iii performance rational filterbank reconstruction signals rational wavelet learned data samples results 
averaged independent trials signal ecg speech sampling ratio psnr eferences ravishankar bresler learning sparsifying transforms ieee transactions signal processing vol wen ravishankar bresler video denoising online sparsifying transform learning image processing icip ieee international conference ieee ravishankar bresler efficient blind compressed sensing using sparsifying transforms convergence guarantees application magnetic resonance imaging siam journal imaging sciences vol xie chen image denoising inpainting deep neural networks advances neural information processing systems kulkarni lohit turaga kerviche ashok reconnet reconstruction images compressively sensed measurements proceedings ieee conference computer vision pattern recognition mallat wavelet tour signal processing academic press chang vetterli adaptive wavelet thresholding image denoising compression ieee transactions image processing vol gagie navarro puglisi new algorithms wavelet trees applications information retrieval theoretical computer science vol guido slaets almeida pereira new technique construct wavelet transform matching specified signal applications digital real time spike overlap pattern recognition digital signal processing vol najarian splinter biomedical signal image processing crc press depeursinge van ville texture learning using steerable riesz wavelets ieee transactions image processing vol blu new design algorithm orthonormal rational filter banks orthonormal rational wavelets signal processing ieee transactions vol choueiter glass implementation rational wavelets filter design phonetic classification audio speech language processing ieee transactions vol design fir filterbanks rational sampling factors using frm technique circuits systems iscas ieee international symposium ieee vetterli perfect reconstruction filter banks rational sampling factors signal processing ieee transactions vol blanc blu ranchin wald aloisi using iterated rational filter banks within arsis concept 
producing landsat multispectral images international journal remote sensing vol baussard nicolier truchetet rational multiresolution analysis fast wavelet transform application wavelet shrinkage denoising signal processing vol auscher wavelet bases rational dilation factor wavelets applications ziebarth greiner heizmann optimized sizeadaptive feature extraction based rational wavelet filters signal processing conference eusipco proceedings european ieee chertov malchykov pavlov wavelets detection attacks signals electronic systems icses international conference ieee white optimum receiver design broadband doppler compensation channels rational orthogonal wavelet signaling signal processing ieee transactions vol broadband passive sonar detection using rational orthogonal wavelet filter banks intelligent sensors sensor networks information processing issnip seventh international conference ieee bayram selesnick design orthonormal overcomplete wavelet transforms based rational sampling factors optics east international society optics photonics overcomplete discrete wavelet transforms rational dilation factors signal processing ieee transactions vol design overcomplete wavelet transforms signal processing ieee transactions vol nguyen design critically sampled rational rate filter banks multiple regularity orders associated discrete wavelet transforms signal processing ieee transactions vol rational discrete wavelet transform multiple regularity orders application experiments signal processing vol white complex rational orthogonal wavelet application communications signal processing letters ieee vol sweldens lifting scheme construction biorthogonal wavelets applied computational harmonic analysis vol dong shi directional lifting scheme image compression circuits systems iscas ieee international symposium ieee blackburn geometric lifting image processing icip ieee international conference ieee kale gerek lifting wavelet design block wavelet transform inversion acoustics 
speech signal processing icassp ieee international conference ieee sweldens wavelet families increasing order arbitrary dimensions image processing ieee transactions vol vaidyanathan multirate systems filter banks pearson education india ansari gupta rational wavelet design given signal ieee international conference digital signal processing dsp ieee romberg tao robust uncertainty principles exact signal reconstruction highly incomplete frequency information ieee transactions information theory vol donoho compressed sensing ieee transactions information theory vol chen donoho saunders atomic decomposition basis pursuit siam review vol ansari gupta joint framework signal reconstruction using matched wavelet estimated compressively sensed data data compression conference dcc ieee image reconstruction using matched wavelet estimated data sensed compressively using partial canonical identity matrix ieee transactions image processing
link prediction using shortest distances andrei lebedev jooyoung lee victor rivera manuel mazzara apr innnopolis university russia abstract paper apply efficient shortest distance routing algorithm link prediction problem test efficacy compare results base line methods well shortest path results show using distances similarity measure outperforms classical similarity measures jaccard keywords graph databases shortest paths link prediction graph matching similarity introduction connected world emphasis relationships isolated pieces information relational databases may compute relationships query time would result computationally expensive solution graph databases store connections first class citizens allowing access persistent connections almost one fundamental topological feature context graph theory graph databases implications artificial intelligence web communities computation distance vertices many efficient methods searching shortest path proposed hand distance query handling methods well developed spread many advantages traditional shortest path reveal much information simple shortest path extract distances graph databases efficient indexing algorithm needed pruned landmark labeling scheme presented utilize algorithm obtain distances develop similarity metric based predict links graphs figure example demonstrate superiority using distance simple distance metric measure relationship two vertices table summarizes connections represented figure based distance alone conclude similarity shortest distance black nodes graph however clear connected tightly greater number shortest paths framework described paper based previous work called pruned landmark labeling evaluate test proposed algorithm extensively prove efficacy algorithm apply algorithm link prediction problem compare existing solutions fig example representing connection pairs vertices using shortest distance table distances distances pairs black vertices figure vertex pair distance distance related work 
section introduce comparable methods computing shortest paths link prediction utilize distances predict links social graphs shortest paths one attempts find shortest paths presented achieves log complexity also covers programming problems including knapsack problem sequence alignment maximum inscribed polygons genealogical relationship discovery adopt algorithm presented discover shortest paths since achieves six orders magnitude faster computation given large graphs millions vertices tens millions edges link prediction link prediction social network well known problem extensively studied links predicted using either semantic topological information given network main idea link prediction problem measure similarity two vertices yet linked measured similarity high enough future link predicted attempts infer new interactions among members social network likely occur near future authors develop approaches link prediction based measures analyzing proximity nodes network prediction methods given graph set vertices set edges let respectively also assume vertices represented unique integers enable comparison two vertices furthermore let denote pab set paths dith shortest path pst following introduce structure querying indexing algorithm distance label set triplets vertex path length number paths length loop label sequence integers index sets distance labels loop labels ordering vertices order decreasing degrees number paths number paths vertex length exceeding using loop label query smallest elements compute multiset follows referring original work measured performance two ways index construction speed final index size implementation considers unweighted undirected graphs achieved reduction index size compared proposed method first implemented algorithm presented compute shortest paths two vertices use sum shortest paths similarity measure predict future links naive approach shows better results compared commonly used link prediction methods ksp equation shows similarity 
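The label-based distance query described above (labels as triplets of landmark vertex, path length, and path count, with the query taking minima over shared entries) reduces, for plain distances, to the standard 2-hop lookup sketched below. The dict-based labels are a simplification of the sorted-array index used in pruned landmark labeling, and the path counts of the k-shortest-path variant are omitted:

```python
def query_distance(label_s, label_t):
    """2-hop distance query: each label maps landmark -> distance to
    that landmark, and d(s, t) = min over shared landmarks v of
    label_s[v] + label_t[v]. Returns inf if no landmark is shared.
    Plain dicts are used here for illustration only."""
    best = float("inf")
    for v, ds in label_s.items():
        dt = label_t.get(v)
        if dt is not None:
            best = min(best, ds + dt)
    return best
```

Because every shortest path is guaranteed (by the pruned labeling construction) to pass through some landmark present in both labels, this minimum equals the exact shortest distance, while the query cost depends only on label sizes rather than on the whole graph.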
measure based distances ksp list shortest paths vertices experimental results setup experiments networks treated undirected unweighted graphs without multiple edges testing purposes randomly sample edges prediction evaluation sampling prediction evaluation tasks performed times dataset use auroc area roc curve evaluation metric used five different datasets ease comparisons results performance proposed method summarized table best performance dataset emphasized see shortest distance consistently performs better others except condmat difference negligible intuitively one might think bigger better predictor results suggests small enough even small value predict future links effectively experiments demonstrate distances capture structural similarity vertices better commonly used measures namely common neighbors jaccard preferential attachment table performance evaluation predictions datasets statistics graph described grqc hepth condmat jaccard adamic preferential conclusions future work paper defined new similarity metric two users social networks based shortest distances also found experiments simple metric outperforms common metrics also small suffices accurately predict future links since distances capture important topological properties vertices plan apply metric gene regulation networks discover unknown relationships among genes difficult infer using methods furthermore graph database growing technology days cases shortest path implementations already core example natural therefore investigate results context development order identify possible improvements performance gaps references https akiba hayashi nori iwata yoshida efficient shortestpath distance queries large networks pruned landmark labeling http akiba iwata yoshida fast exact distance queries large networks pruned landmark labeling proceedings acm sigmod international conference management data sigmod acm new york usa http angles gutierrez survey graph database models acm comput surv feb http cohen 
halperin kaplan zwick reachability distance queries via labels siam journal computing http eppstein finding shortest paths apr http goldberg harrelson computing shortest path search meets graph theory proceedings sixteenth annual acmsiam symposium discrete algorithms soda society industrial applied mathematics philadelphia usa http lee estimating degrees neighboring nodes online social networks international conference principles practice systems springer international publishing kleinberg problem social networks journal american society information science technology http chen jensen skovsgaard aware behavioral topic modeling microblog posts ieee data eng bull http liu yang jensen integrating preferences spatial location queries proceedings international conference scientific statistical database management ssdbm acm new york usa http robinson webber eifrem graph databases reilly media
article published ieee smc magazine vol human smart machine brain computer interface lee wang naoyuki kubota lin shinya kitaoka wang abstract machine learning become popular approach cybernetics systems always considered important research computational intelligence area nevertheless comes smart machines methodologies need consider systems cybernetics well include human loop purpose article follows integrate open source facebook research fair darkforest program facebook item response theory irt new open learning system namely ddf learning system integrate ddf robot namely robotic ddf system invite professional players attend activity play games site smart machine research team apply technology education playing games enhance children concentration learning mathematics languages topics detected brainwaves robot able speak words much point students assist teachers classroom future introduction coulom freelance developer programs said online games usually played faster pace favors computer humans still expected strong correlation performance serious tournament games hence held special event human smart machine colearning ieee smc http banff canada still games site playing internet purpose activity ieee smc follows integrate open source facebook research fair darkforest program facebook usa item response theory irt nutn taiwan new open learning system namely dynamic ddf dynamic darkforest learning system integrate ddf fujisoft robot led kubota tmu japan namely robotic ddf system invite professional players attend activity play games site smart machine chou taiwan chou taiwan chang taiwan invited play games deepzengo japan lin taiwan daisuke horie japan shuji takemura japan played games dynamic darkforest ddf taiwan embedded fair darkforest open engine addition research collaborative team national chiao tung university nctu national university tainan nutn university california san diego ucsd national center computing nchc jointly integrated brain computer interface bci 
current called system also firstly demonstrated attract scholars brain machine interaction bmi area ieee smc conference join smc society past held events world owing maturity deep learning technologies computer hardware google combined together monte carlo tree beat many top professional players without handicaps year first year hold human smart machines ieee smc however carried events humans playing computer programs almost decade fig shows past held events human computer competitions https funded ieee cis ieee smc taiwanese government nutn taiwanese association artificial intelligence taai handicaps human computer game stones however power computer programs increased handicap top professional players iii system special event ieee smc combined theory deep learning technology bci demonstrate playing brainwave technology developed long time however applying play world first case ieee conference world latest mobile wireless eeg system fully utilized innovation developed system wireless system developed research team brain research center nctu designed extract player brainwaves play compete ddf system directly fig shows competitive learning mode predictive learning mode scenario human smart machine ieee smc including invited players computer programs robot palro developed system fig shows system diagram lin played ddf without using hand robot palro reported next moves suggested ddf demonstrated ieee smc article published ieee smc magazine vol ieee smc canada figure past held events world competitive learning bci taiwan japan taiwan predictive learning mindo palro japan taiwan figure scenario human smart machine ieee smc including invited players computer programs robot palro developed system adopted visual evoked potential ssvep technology collect brain signals visual cortex channels performed signal processing cloud server five pilot players tested developed system reach accuracy classification task held special event demonstration system expressive breakthrough 
human brain interaction figure system diagram lin right played ddf without using hand robot palro reported next moves suggested ddf artificial intelligence humans wore wireless eeg headset instructed gaze coded visual stimulus screen system continuously decoded eeg send move command game server player wearing wireless eeg headset technician must check impedance scalp eeg electrodes first collecting good quality eeg signals article published ieee smc magazine vol fig shows impedance map channels kohm fig shows player eeg signals collected channels successfully perceiving visual stimulus controlling arrow ddf communicates bci via websocket protocol decoded eegs sent nchc server taiwan fair darkforest engine control stone move humans need use hands play anymore even play robot together robot able predict report next suggested moves reference playing next move epoch times calgary also reported special event topic world first playing using brainwaves banff https chou chou chang deepzengo runs node nvidia titan pascal ram storage node nvidia titan ram storage respectively deepzengo games three invited professional players figs games chou white deepzengo black resign well chang white deepzengo black resign respectively time series amplitude time amplitude amplitude time figure impedance check eeg channels player eeg signals extracted channels controlling arrow system game results one top computer programs deepzengo japan used play invited professional players including chou chou chang seven games without handicaps mode learning mode humans machines order make deepzengo different strength plays figure chou white deepzengo black black game resignation chang white deepzengo black black game chou talked experience event first experience play games deepzengo public obviously disadvantage layout three games rise challenges professional players domain knowledge game layout learned taught hundreds years joined human computer competitions hosted nutn academic organizations since 
past advised computer programs however advised chang said indisputable fact article published ieee smc magazine vol computer programs like alphago deepzengo completely defeat professional players yet personally experienced strength artificial intelligence played deepzengo special event ieee smc past layout situation judgment two difficult problems solve computer programs however machine much better humans mention ability handle life death overturns domain knowledge humans learned taught sense humans given different picture world simulated rich variety models led another lin invited play ddf brainwaves cooperated robot palro predictive learning mode take game lin palro black ddf white example fig shows curves predictive winning rate numbers simulations fig shows inferred results developed dynamic assessment agent represent black obvious advantage black possible advantage uncertain situation white possible advantage white obvious advantage respectively event lin said first time attend international conference indeed amazing experience wearing mindo play real sensors headgear measured brainwaves continually sparkling points monitor caused eyes little bit discomfort dampen interest biometric gadget enthusiasm playing using brainwaves developed technology applied wide variety fields example physically challenged use express inner thoughts glad great opportunity wear biometric headgear play hope see development https figure predictive winning rate numbers simulations curves inferred results dynamic assessment agent game lin palro black ddf white black game figure special event venue general chair anup basu foreground left program chair irene cheng left general chair yoping huang left program chair shinya kitaoka background right figure lee naoyuki kubota takenori obo background left right lin foreground playing ddf robot palro reported suggested next move lin article published ieee smc magazine vol conclusion smart machine one main themes ieee smc study players competing 
learning smart machine sure attract attention worldwide scholars smc conferences games played special event amateur players games ddf small computational resource deepzengo games invited professional players therefore deepzengo could coach professional players ddf small machine could student amateur players long properly adaptively adjust computational resources intelligent robot figs show selected photos special event future research team apply technology education playing games enhance children concentration learning mathematics languages topics detected brainwaves robot able speak words much point students assist teachers class future acknowledgement event ieee smc great success would like express heartfelt thanks everyone offered help joined well watched games demonstration would also like sincerely thank ieee smc society ieee smc president dimitar filev ieee smc past president philip chen organizing committee ieee smc especially general chair anup basu authors would like thank ministry science technology taiwan ieee smc outreach project financial support finally would like thank three professional players chou chang chou yang taiwanese government nchc taiwan osaka prefecture university opu japan supporting special event lwko received degree electrical engineering national chiao tung university hsinchu taiwan currently associate professor institute bioinformatics systems biology national chiao tung university taiwan ieee transactions neural networks learning systems naoyuki kubota kubota received degree nagoya university nagoya japan currently professor department system design tokyo metropolitan university tokyo japan lin grader taiwan amateur player ever since grade cooperating oase nutn taiwan carry research smart machine learning language shinya kitaoka one deepzengo team members currently engineer dwango media village dwango japan wang received degree computer science engineering university california san diego ucsd jolla currently staff research associate 
iii swartz center computational neuroscience ucsd usa received degrees electrical engineering purdue university west lafayette usa currently chair professor department electrical engineering national taiwan university science technology taiwan currently serves ieee transactions cybernetics ieee transactions fuzzy systems ieee access subject editor electrical engineering journal chinese institute engineers international journal fuzzy systems references authors lee leecs received degree computer science information engineering national cheng kung university tainan taiwan currently professor department computer science information engineering national university tainan awarded certificate appreciation outstanding contributions development ieee standard ieee standard fuzzy markup language emergent technologies technical committee ettc chair ieee computational intelligence society cis serves ieee tciaig wang received degree electrical engineering yuan university currently researcher ontology application software engineering oase laboratory department computer science information engineering national university tainan nutn taiwan gibney google secretly tested bot updated version google deepmind alphago program revealed mystery online player nature vol tian zhu better computer player neural network prediction international conference learning representations iclr san juan puerto rico may https lee wang yang hung lin shuo kubota dynamic assessment agent cooperative system game international journal uncertainty fuzziness systems vol silver huang maddison guez sifre van den driessche schrittwieser antonoglou panneershelvam lanctot dieleman grewe nham kalchbrenner sutskever lillicrap leach kavukcuoglu graepel hassabis mastering game deep neural networks tree search nature vol silver schrittwieser simonyan antonoglou huang guez hubert baker lai bolton chen lillicrap hui sifre van den driessche graepel hassabis mastering game without human knowledge nature vol lee wang yen wei 
chou chou wang yang human computer review prospect ieee computational intelligence magazine vol article published ieee smc magazine vol lee wang chaslot hoock rimmel teytaud tsai hsu hong computational intelligence mogo revealed taiwan computer tournaments ieee transactions computational intelligence games vol mar lee wang teytaud yen adaptive linguistic assessment system semantic analysis human performance evaluation game ieee transactions fuzzy systems vol apr lin liu cao wang huang king chen chuang braincomputer interfaces ieee systems man cybernetics magazine vol thomas vinod toward biometric systems ieee systems man cybernetics magazine vol
Indistinguishability and Energy Sensitivity of Asymptotically Gaussian Compressed Encryption

Nam Yul Yu, Member, IEEE

Abstract: The principle of compressed sensing (CS) can be applied to a cryptosystem, providing a notion of security. It is known that a CS-based cryptosystem is perfectly secure if it employs a random Gaussian sensing matrix that is updated at each encryption and if each plaintext has constant energy. In this paper, we propose a new CS-based cryptosystem that employs a secret bipolar keystream and a public unitary matrix, which is suitable for practical implementation because the keystream can be generated and renewed in a fast and efficient manner. We demonstrate that the sensing matrix is asymptotically Gaussian for sufficiently large plaintext length, which guarantees reliable decryption for a legitimate recipient. By means of probability metrics, we also show that the new cryptosystem has indistinguishability against an adversary, as long as the keystream is updated at each encryption and each plaintext has constant energy. Finally, we investigate how sensitive the security of the new cryptosystem is to energy variation across plaintexts.

Index terms: compressed sensing, encryption, Hellinger distance, indistinguishability, linear feedback shift register (LFSR), probability metrics, self-shrinking generators, total variation distance.

I. Introduction

In compressed sensing (CS), one can recover a sparse signal from a number of measurements conventionally believed to be incomplete. With n entries, a sparse signal is linearly measured by an m x n sensing matrix. In CS theory, if the sensing matrix obeys the restricted isometry property (RIP), stable and robust reconstruction from the incomplete measurements is guaranteed, and the reconstruction can be accomplished by solving an l1-minimization problem with convex optimization or greedy algorithms. With efficient measurement and stable reconstruction, the CS technique has been of interest in a variety of research fields, e.g., communications, sensor networks, image processing, radar, etc.

The CS principle can be applied to a cryptosystem for information security. Such a cryptosystem encrypts a plaintext through the CS measurement process, where the sensing matrix is kept secret; the ciphertext can then be decrypted through a CS reconstruction process by a legitimate recipient with knowledge of the sensing matrix. Rachlin and Baron proved that a CS-based cryptosystem cannot be perfectly secure, but might be computationally secure. Orsdemir et al. showed that it is computationally secure against a key-search technique via an algebraic approach. Bianchi et al.

The author is with the School of Electrical Engineering and Computer Science (EECS), Gwangju Institute of Science and Technology (GIST), Korea (nyyu).
analyzed security cryptosystem employing random gaussian sensing matrix updated encryption precisely showed cryptosystem sensing random gaussian matrix perfectly secure long plaintext constant energy similar analysis made cryptosystem circulant sensing matrix efficient processes wireless channel characteristics could exploited wireless security cryptosystems technique also applied database systems random noise intentionally added measurements differential privacy practice variety cryptosystems concerning security multimedia imaging smart grid data suggested paper propose new cryptosystem employs secret bipolar keystream public unitary matrix suitable practical implementation generating renewing keystream encryption fast efficient manner keystream generator based linear feedback shift register lfsr plays crucial role efficient implementation demonstrate entries sensing matrix asymptotically gaussian distributed plaintext length sufficiently large sensing matrix obvious new cryptosystem named asymptotically gaussian sensing cryptosystem theoretically guarantees stable robust decryption legitimate recipient security analysis study indistinguishability cryptosystem total variation distance probability distributions ciphertexts conditioned pair plaintexts examined security measure indistinguishability upper lower bounds distance developed hellinger distance probability metrics examine success probability adversary distinguish pair potential plaintexts given ciphertext proving success probability kind attack random guess demonstrate cryptosystem indistinguishability long plaintext constant energy therefore cryptosystem normalization step encryption equalizing plaintext energy computationally secure finally investigate much security agots cryptosystem sensitive energy variation plaintexts worth studying energy sensitivity since one might need assign unequal energy plaintexts presence noise depending reliability demands consequence develop sufficient conditions minimum 
energy ratio plaintext length maximum power ratio respectively achieve asymptotic indistinguishability cryptosystem unequal plaintext energy since analysis relies gaussianity sensing matrix results energy sensitivity also applicable gaussian sensing cryptosystem paper organized follows section propose new cryptosystem employing secret bipolar keystream sensing matrix turns asymptotically gaussian also discuss efficient keystream generation cryptosystem section iii introduces indistinguishability along probability metrics total variation hellinger distances security analysis section studies indistinguishability energy sensitivity new cryptosystem presence noise section presents numerical results demonstrate security new cryptosystem finally concluding remarks given section notations matrix vector represented boldface upper lower case letter denote transpose determinant matrix respectively entry matrix kth row tth column also denotes kth row vector tth column vector diag diagonal matrix whose diagonal entries vector identity matrix denoted dimension determined context denotes transform dct matrix ddt vector denoted context clear denotes vector gaussian random vector mean covariance finally denotes average random vector random matrix ystem odel authors presented gaussian sensing cryptosystem random gaussian sensing matrix used encryption renewed next sense showed plaintext constant energy cryptosystem perfectly secure implies indistinguishability discussed next section practice generating gaussian entries encryption may require high complexity large memory encryption decryption efficient implementation section proposes new cryptosystem sensing matrix employs bipolar keystream asymptotically gaussian sensing matrices definition let public unitary matrix uut element magnitude let secret matrix assume element takes independently uniformly random new cryptosystem sensing matrix theoretically element taken random bernoulli distribution practice however consider keystream 
generator stream ciphers generate fast efficient manner employing efficient keystream generator allows construct update encryption low complexity small memory since keystream stream cipher designed nice pseudorandomness properties balance large period low autocorrelation large linear complexity assume entry keystream takes independently uniformly random facilitates reliability security analysis new cryptosystem theorem definition elements follow gaussian distribution asymptotically sufficiently large proof row represented diag kth row vector one row vector length respectively row row matrix absolute magnitude entries also column matrix maximum absolute magnitude entries structure theorem shows elements asymptotically gaussian sufficiently large completes proof asymptotic gaussianity theorem also holds elements generated efficient keystream generator assumption one takes independently uniformly random assumption validated numerical results section keystream generation secret matrix definition employ keystream generator based linear feedback shift register lfsr generate elements fast efficient manner example introduce generator ssg definition assume lfsr generates binary operation generator outputs discards obtain bipolar keystream arranged elements ssg keystream generation requires simple structure lfsr along operator moreover ssg keystream possesses nice pseudorandomness properties balance large period large linear complexity meier staffelbach showed ssg keystream balanced period least linear complexity least respectively although ssg keystream generator considered paper keystream generator also applied new cryptosystem element obtained keystream generator initial seed state generator essentially key new cryptosystem therefore key kept secret sender legitimate recipient structure keystream generator publicly known cryptosystem definition new cryptosystem encrypts plaintext producing ciphertext sux updated encryption presence noise legitimate recipient adversary 
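The self-shrinking generator (SSG) sketched above can be illustrated in a few lines. This is a minimal sketch, not the paper's implementation: the LFSR tap positions and seed below are illustrative assumptions, and the generator is run with a generous raw-bit margin since shrinking discards roughly three quarters of the LFSR output.

```python
def lfsr_bits(taps, state, nbits):
    # Fibonacci LFSR over GF(2): output the last stage, feed the XOR of the
    # tapped stages back into the first stage
    state = list(state)
    out = []
    for _ in range(nbits):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]
    return out

def self_shrinking(bits):
    # Self-shrinking generator: read bits in pairs (b0, b1);
    # output b1 iff b0 == 1, otherwise discard the pair
    return [bits[i + 1] for i in range(0, len(bits) - 1, 2) if bits[i] == 1]

def bipolar_keystream(taps, seed, length):
    # Map the SSG output {0,1} -> {-1,+1}; on average only ~1/4 of the raw
    # LFSR bits survive shrinking, so generate with a generous margin
    raw = lfsr_bits(taps, seed, 8 * length)
    ks = [1 if b else -1 for b in self_shrinking(raw)]
    return ks[:length]

taps = [15, 13, 12, 10]  # illustrative tap set for a 16-bit register
seed = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]
ks = bipolar_keystream(taps, seed, 256)
print(len(ks), ks[:8])
```

Only the seed (the initial LFSR state) plays the role of the secret key; the generator structure itself is public, matching the setup described above.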
noisy ciphertext asymptotically gaussian updated encryption new cryptosystem called asymptotically gaussian sensing cryptosystem throughout paper table summarizes cryptosystem proposed paper reliability stability cryptosystem legitimate recipient straightforward rip result random gaussian matrix fact gaussian sufficiently large proposition legitimate recipient sufficiently large cryptosystem theoretically guarantees stable robust decryption bounded errors plaintext long log iii ecurity easure section introduces security measure indistinguishability cryptosystem examine indistinguishability also discuss probability metrics total variation hellinger distances total variation hellinger distances paper make use total variation distance evaluate performance adversary indistinguishability experiment table experiment let dtv distance probability distributions readily checked probability adversary successfully distinguish plaintexts kind test bounded dtv dtv therefore dtv zero probability success random guess leads indistinguishability since computing dtv directly difficult may employ alternative distance metric bound distance particular hellinger distance denoted useful giving upper lower bounds distance dtv moreover ciphertext conditioned jointly gaussian random vector zero mean covariance matrix hellinger distance multivariate gaussian distributions given formal definitions properties hellinger distances readers referred throughout paper use examine success probability indistinguishability experiment cryptosystem taking gaussian distributed ciphertexts account ecurity nalysis indistinguishability assume cryptosystem produces ciphertext encrypting one two possible plaintexts length cryptosystem said indistinguishability adversary determine polynomial time two plaintexts corresponds ciphertext probability significantly better random guess words cryptosystem indistinguishability adversary unable learn partial information plaintext polynomial time given ciphertext table 
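The asymptotic Gaussianity underpinning the reliability argument above can be checked empirically. A minimal sketch, assuming the sensing matrix has the form Phi = (1/sqrt(n)) R U with R an m x n secret bipolar matrix renewed at each encryption and U the n x n orthonormal DCT matrix (the DCT and the concrete sizes are illustrative choices, not the paper's fixed parameters):

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II matrix (rows orthonormal: U @ U.T == I)
    k = np.arange(n).reshape(-1, 1)
    t = np.arange(n).reshape(1, -1)
    U = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * t + 1) * k / (2 * n))
    U[0, :] = 1.0 / np.sqrt(n)
    return U

def sensing_matrix(m, n, rng):
    # Phi = (1/sqrt(n)) R U: R is the secret bipolar keystream matrix,
    # U the public unitary matrix (here the DCT, an illustrative choice)
    U = dct_matrix(n)
    R = rng.choice([-1.0, 1.0], size=(m, n))
    return (R @ U) / np.sqrt(n)

rng = np.random.default_rng(0)
n, m = 1024, 256
Phi = sensing_matrix(m, n, rng)
# Each entry is (1/sqrt(n)) * <r_k, u_t> with ||u_t|| = 1, so by the central
# limit theorem the entries are approximately N(0, 1/n) for large n
print(Phi.mean(), Phi.std())
```

Checking that the empirical standard deviation of the entries is close to 1/sqrt(n) (here about 0.031) gives a quick numerical confirmation of the asymptotic Gaussianity claimed for the sensing matrix.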
describes indistinguishability experiment presence eavesdropper used investigate indistinguishability cryptosystem paper general arbitrary orthonormal basis simplicity assume paper section show cryptosystem indistinguishable long plaintext constant energy moreover study much security agots cryptosystem sensitive energy variation plaintexts indistinguishability recall indistinguishability experiment table given plaintext uxh following lemma derives covariance matrix conditioned exploiting independency uniformity entries lemma cryptosystem covariance matrix conditioned given rrt table ymmetric key ryptosystem public secret keystream generation encryption decryption unitary matrix structure keystream generator initial seed keystream generator initial seed keystream generator creates bipolar keystream length secret matrix constructed arranging keystream updated encryption new keystream plaintext ciphertext produced given noisy ciphertext plaintext reconstructed recovery algorithm knowledge table ndistinguishability xperiment based ryptosystem adversary creates pair plaintexts length submits cryptosystem cryptosystem encrypts plaintext randomly selecting gives noisy ciphertext back adversary given ciphertext adversary carries polynomial time test figure corresponding plaintext adversary passes experiment fails otherwise step step step decision obvious uxh proof let respectively also let kth lth column vectors respectively since elements independent suxh nnt stl stl entries take independently uniformly random thus yields completes proof lemma note derivation covariance matrices rely asymptotic gaussianity instead covariance matrices results obtained exploiting independency uniformity elements using covariance matrices lemma develop upper lower bounds distance cryptosystem main contribution paper theorem cryptosystem assume plaintext length sufficiently large asymptotically gaussian theorem indistinguishability experiment let dtv distance probability distributions 
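For zero-mean Gaussian ciphertext distributions, the Hellinger distance mentioned above has a closed form in the covariance matrices, and it sandwiches the total variation distance via H^2 <= d_TV <= H * sqrt(2 - H^2). A minimal sketch, assuming scalar conditional covariances of the form (E_h/n + sigma^2) I_m as derived for this cryptosystem; the concrete m, n, noise level, and energies are illustrative:

```python
import numpy as np

def hellinger_sq_gauss(S1, S2):
    # Squared Hellinger distance between N(0, S1) and N(0, S2):
    # H^2 = 1 - det(S1)^{1/4} det(S2)^{1/4} / det((S1 + S2)/2)^{1/2}
    _, ld1 = np.linalg.slogdet(S1)
    _, ld2 = np.linalg.slogdet(S2)
    _, ldm = np.linalg.slogdet(0.5 * (S1 + S2))
    return 1.0 - np.exp(0.25 * ld1 + 0.25 * ld2 - 0.5 * ldm)

def tv_bounds(h2):
    # Sandwich for total variation: H^2 <= d_TV <= H * sqrt(2 - H^2)
    h = np.sqrt(h2)
    return h2, h * np.sqrt(2.0 - h2)

m, n, sigma2 = 64, 1024, 0.01
S_a = (1.0 / n + sigma2) * np.eye(m)   # plaintext energy 1
S_b = (1.0 / n + sigma2) * np.eye(m)   # equal energy -> identical law
S_c = (4.0 / n + sigma2) * np.eye(m)   # plaintext energy 4

lo_eq, hi_eq = tv_bounds(hellinger_sq_gauss(S_a, S_b))
lo_ne, hi_ne = tv_bounds(hellinger_sq_gauss(S_a, S_c))
print(lo_eq, hi_eq)   # zero: the adversary is reduced to a random guess
print(lo_ne, hi_ne)   # strictly positive: energy variation leaks
```

With equal plaintext energies the conditional covariances coincide, both bounds collapse to zero, and the distinguishing advantage vanishes; with unequal energies the bounds are strictly positive, previewing the energy sensitivity analyzed below.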
ciphertexts conditioned pair plaintexts cryptosystem let xmin xmax plaintexts minimum maximum possible energies respectively max max minimum energy ratio pnrmax maximum power ratio respectively cryptosystem lower upper bounds dtv given dtv low dtv respectively pnrmax pnrmax proof indistinguishability experiment table let consider pair plaintexts pnrh covariance matrices lemma pnrh obviously yields lower upper bounds form without loss generality may assume yields lower upper bounds turn monotonically decreasing redefine xmin xmax obtain bounds completes proof general definition energy ratio covering noisy cases called effective energy ratio paper security analysis assume legitimate recipient adversary energy ratio pnrmax cryptosystem theorem shows indistinguishability agots cryptosystem depends ciphertext length minimum energy ratio maximum ratio pnrmax irrespective plaintext length sparsity particular indistinguishability guaranteed cryptosystem regardless pnrmax corollary plaintext constant energy cryptosystem indistinguishability since success probability indistinguishability experiment thanks dtv dtv low dtv cryptosystem corollary ensures adversary learn partial information plaintext given ciphertext long plaintext constant energy also case cryptosystem achieve indistinguishability therefore normalization step equalizing plaintext energy implicitly required encryption cryptosystem table since also offers practical benefit efficient keystream generation cryptosystem promising option information security guaranteeing indistinguishability reliability efficiency framework energy sensitivity theorem implies indistinguishability agots cryptosystem sensitive minimum energy ratio figure sketches upper lower bounds pnrmax noiseless cryptosystem indicates distance increases gets away particular gets larger distance approaches quickly decreases behavior distance suggests far less adversary may distance dtv dtv low dtv low dtv dtv low fig upper lower bounds distance 
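The covariance formula behind these bounds can be checked by Monte Carlo simulation. A sketch under the assumption Phi = (1/sqrt(n)) R U with a fresh bipolar R per encryption and a DCT chosen as the public unitary matrix for illustration: the empirical ciphertext covariance should approach (||x||^2 / n + sigma^2) I_m.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, trials, sigma = 256, 16, 4000, 0.05

# Public unitary matrix (orthonormal DCT-II, an illustrative choice)
k = np.arange(n).reshape(-1, 1)
t = np.arange(n).reshape(1, -1)
U = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * t + 1) * k / (2 * n))
U[0, :] = 1.0 / np.sqrt(n)

# Sparse plaintext with 8 nonzero Gaussian coefficients
x = np.zeros(n)
x[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)
Ux = U @ x

# Encrypt many times, renewing the bipolar keystream matrix R each time
ys = np.empty((trials, m))
for i in range(trials):
    R = rng.choice([-1.0, 1.0], size=(m, n))
    ys[i] = (R @ Ux) / np.sqrt(n) + sigma * rng.standard_normal(m)

C = np.cov(ys.T, bias=True)                    # empirical ciphertext covariance
target = (x @ x / n + sigma ** 2) * np.eye(m)  # (E_x/n + sigma^2) I_m
print(np.abs(C - target).max())
```

The maximum entrywise deviation shrinks with the number of trials, which is consistent with the claim that the covariance is diagonal and determined only by the plaintext energy and the noise variance.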
noiseless cryptosystem upper bound distance noiseless pnr max pnr pnr max max fig upper bounds distance noisy cryptosystem able detect correct plaintext indistinguishability experiment significantly high probability success implies cryptosystem may indistinguishable addition figure shows upper bounds various pnrmax noisy cryptosystem figure bounds sensitive pnrmax noiseless case figure moreover bound smaller less pnrmax implies adversary may difficulty distinguishing plaintexts low pnrmax due low distance result appears security cryptosystem would compression ratio achievable region theory log log fig minimum energy ratio required asymptotic indistinguishability cryptosystem pnrmax sensitive energy ratio higher pnrmax summary cryptosystem may able achieve indistinguishability unless plaintext constant energy follows study much security cryptosystem sensitive energy variation plaintexts worth studying energy sensitivity since one might need assign unequal energy plaintext presence noise depending reliability demands theorems present sufficient conditions minimum energy ratio plaintext length maximum ratio pnrmax respectively guarantee asymptotic indistinguishability cryptosystem theorem pnrp max given let min respectively minimum energy ratio satisfies min min max success probability indistinguishability experiment vanishes plaintext length increases words cryptosystem asymptotically indistinguishable sufficiently large long given pnrmax proof given dtv inequality turns holds min equivalently consequently sufficient condition met success probability indistinguishability experiment completes proof theorem minimum energy ratio required asymptotic indistinguishability cryptosystem figure displays cryptosystem pnrmax figure sketched various lognn fig compression ratios asymptotic indistinguishability reliability cryptosystem pnrmax figure reveals minimum energy ratio required asymptotic indistinguishability approaches ciphertext length increases particular cryptosystem 
allows larger energy variation plaintexts asymptotic indistinguishability achieved lower rate theorem pnrmax given recall let log equality holds ciphertext length satisfies log mmax implies cryptosystem asymptotically indistinguishable sufficiently large long mmax given pnrmax proof given pnrmax proof similar theorem dtv figure depicts maximum compression ratio mmax cryptosystem asympn totically indistinguishable pnrmax also sketch minimum pression ratio log reliable random gaussian sensing compare requirements asymptotic indistinguishability reliability note plaintexts constant energy indistinguishability achieved compression ratio meanwhile compression ratio cryptosystem must asymptotic indistinguishability particular cryptosystem may valid least theory corresponding since indistinguishability compatible reliability thus figure shows theoretical ratio noiseless recovery min min min upper bound pnrmax min fig upper bounds pnrmax asymptotic indistinguishability cryptosystem cryptosystem pnrmax achieve reliability security plaintexts nonzero entries compression ratios achievable shaded region also shows cryptosystem theoretically achievable region reliability indistinguishability guaranteed simultaneously note pnrmax since upper bound monotonically decreasing implies upper bound distance lower noisy case pnrmax noiseless case pnrmax ultimately points presence noise improves security cryptosystem lowering success probability adversary indistinguishability experiment moreover one increase reducing pnrmax given indicates agots cryptosystem secure less pnrmax given theorem presents largest possible pnrmax guarantee asymptotic indistinguishability cryptosystem proof straightforward min theorem cryptosystem assume minimum energy ratio given min given min minimum effective energy ratio defined theorem asymptotic indistinguishability achieved sufficiently large pnrmax min min note min cryptosystem asymptotically indistinguishable regardless pnrmax due min figure displays 
upper bounds pnrmax theorem various min clear pnrmax sufficiently high min asymptotic indistinguishability achieved theorem figure points need increase reducing pnrmax upper bound given achieve asymptotic indistinguishability cryptosystem however appears largest possible pnrmax relatively low reliable decryption cryptosystem therefore important issue keep energy variation plaintexts low possible conclusion turned security agots cryptosystem highly sensitive energy ratio plaintexts indistinguishability achieved plaintexts equal constant energy therefore cryptosystem indistinguishable nonasymptotically essential plaintext normalized encryption constant energy analyzing energy sensitivity presented sufficient conditions theorems asymptotic indistinguishability cryptosystem unequal plaintext energy however found even asymptotic indistinguishability achieved plaintexts low energy variation pnrmax analysis technique utilizes result theorem based gaussianity sensing matrix energy sensitivity paper also valid cryptosystem never discussed umerical esults section presents numerical results demonstrate indistinguishability energy sensitivity cryptosystem numerical experiments plaintext nonzero entries positions chosen uniformly random coefficients taken gaussian distribution encryption discrete cosine transform dct matrix element secret matrix taken bipolar keystream obtained generator ssg lfsr comparison test whose elements taken random bernoulli distribution assume ciphertext available adversary legitimate recipient pnr decryption cosamp recovery algorithm employed legitimate recipient decrypt ciphertext knowledge meanwhile assume adversary attempt kind detection polynomial time pass indistinguishability experiment distinguishing pair plaintexts given ciphertext figure displays plots entries total matrices cryptosystem figure entry taken random bernoulli distribution taking independently uniformly random figure bipolar ssg keystream since linear slope appears entries matrix 
ssg keystream quantiles quantiles random bernoulli matrix standard normal quantiles fig plots entries total matrices standard normal quantiles cryptosystem matrix ssg keystream random bernoulli matrix column index column index row index row index fig covariance matrices rrt cryptosystem pnr given total matrices tested average follow normal distribution cases figure gives numerical evidence agots cryptosystem asymptotically gaussian sufficiently large even generated pseudorandom fashion ssg figure illustrates covariance matrices rrt cryptosystem pnr experiment total matrices tested average given figure dark areas indicate entries covariance matrix small magnitudes less whereas white cells represent diagonal components significant values determined plaintext energy noise variance figure numerically confirms covariance analysis lemma valid cryptosystem whether random bernoulli matrix matrix ssg keystream figure displays upper lower bounds theorem distance cryptosystem pnrmax experiment computed bounds using covariance matrices obtained testing total matrices entry taken random bernoulli distribution ssg keystream cases figure shows bounds experiment well matched theoretical results theorem summary figures validate assumption independency uniformity elements ssg keystream numerical experiments figure displays success probabilities ciphertext length cryptosystem pnrmax adversary sketches upper bounds success probability indistinguishability experiment obtained upper bound theorem comparison also sketch empirical success probabilities legitimate recipient tested total plaintexts nonzero entries energy uniformly distributed encryption entry success probability distance upper bound experiment random bernoulli lower bound experiment random bernoulli upper bound experiment ssg lower bound experiment ssg upper bound theorem lower bound theorem legitimate recipient legitimate recipient legitimate recipient adversary theorem adversary theorem adversary theorem legitimate 
recipient legitimate recipient legitimate recipient adversary theorem adversary theorem adversary theorem success probability fig upper lower bounds distance cryptosystem pnrmax given total matrices tested fig success probabilities cryptosystem pnrmax adversary upper bounds success probability indistinguishability experiment sketched secret matrix ssg keystream observed decryption performance similar random bernoulli distribution decryption achieves declared success decrypted plaintext figure shows legitimate recipient enjoys reliable stable decryption sufficiently large meanwhile upper bounds success probability adversary indicate detection test successful indistinguishability experiment probability bounds particular adversary learn information pnrmax fig success probabilities pnrmax cryptosystem adversary upper bounds success probability indistinguishability experiment sketched plaintext success probability higher leads indistinguishability however energy variation occurs plaintexts figure reveals adversary may able distinguish plaintexts experiment probability higher also shows success probability adversary becomes significant minimum energy ratio decreases plaintext length increases figure depicts success probabilities pnrmax cryptosystem simulation environment identical figure seen figure decryption performance legitimate recipient improves pnrmax however detection performance adversary saturated high pnrmax highest possible success probability determined minimum energy ratio figure also shows pnrmax low highest possible success probability adversary close implies cryptosystem indistinguishable sufficiently low pnrmax regardless energy variation case however legitimate recipient also fails decryption due high noise level onclusions paper proposed new cryptosystem named cryptosystem employing secret bipolar keystream public unitary matrix efficient implementation practice demonstrated elements sensing matrix asymptotically gaussian sufficiently large plaintext 
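The indistinguishability experiment whose success probabilities are plotted here can be mimicked with a small Monte Carlo run. The sketch below is an assumption: the adversary is modeled as a simple energy-threshold detector on ||y||^2 (one plausible polynomial-time test, not necessarily the one bounded in the theorems), and ciphertext entries are drawn from the approximate Gaussian model N(0, E/n + sigma^2).

```python
import math, random

random.seed(7)
n = 128                      # ciphertext length
sigma = 0.05                 # channel-noise standard deviation

def ciphertext_energy(E):
    """||y||^2 for a ciphertext whose entries are ~ N(0, E/n + sigma^2)."""
    s = math.sqrt(E / n + sigma * sigma)
    return sum(random.gauss(0.0, s) ** 2 for _ in range(n))

def adversary_success(E1, E2, trials=400):
    """Distinguishing experiment: guess plaintext 2 iff the ciphertext
    energy exceeds the midpoint of the two expected energies."""
    thresh = n * ((E1 / n + sigma**2) + (E2 / n + sigma**2)) / 2.0
    wins = 0
    for _ in range(trials):
        truth = random.randint(1, 2)
        e = ciphertext_energy(E1 if truth == 1 else E2)
        wins += ((2 if e > thresh else 1) == truth)
    return wins / trials

p_equal   = adversary_success(1.0, 1.0)   # equal energies: a coin flip
p_unequal = adversary_success(1.0, 2.0)   # energy ratio 0.5: detectable
print(p_equal, p_unequal)
```

This reproduces the section's qualitative finding: with equal plaintext energies the detector does no better than guessing, while a low minimum energy ratio lets even this naive test succeed with probability well above 1/2.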
length guarantees stable robust decryption legitimate recipient means total variation hellinger distances showed cryptosystem indistinguishability adversary long plaintext constant energy therefore essential agots cryptosystem normalization step encryption equalizing plaintext energy guarantees computational security kind polynomial time attack adversary finally found indistinguishability cryptosystem highly sensitive energy variation plaintexts support agots cryptosystem unequal plaintext energy developed sufficient conditions minimum energy ratio plaintext length maximum power ratio respectively asymptotic indistinguishability results energy sensitivity directly applicable cryptosystem eferences donoho compressed sensing ieee trans inf theory vol apr candes romberg tao robust uncertainty principles exact signal reconstruction highly incomplete frequency information ieee trans inf theory vol candes tao signal recovery random projections universal encoding strategies ieee trans inf theory vol eldar kutyniok compressed sensing theory applications cambridge university press tropp laska duarte romberg baraniuk beyond nyquist efficient sampling sparse bandlimited signals ieee trans inf theory vol mishali eldar theory practice sampling sparse wideband analog signals ieee select top sig vol haupt bajwa raz nowak toeplitz compressed sensing matrices applications sparse channel estimation ieee trans inf theory vol duarte sarvotham baron wakin baraniuk distributed compressed sensing jointly sparse signals asilomar conf signals systems computers pacific grove usa haupt bajwa rabbat nowak compressed sensing networked data ieee sig process vol mar caione brunelli benini compressive sensing optimization signal ensembles wsns ieee trans industrial informatics vol duarte davenport takhar laska sun kelly baraniuk imaging via compressive sampling ieee sig process vol mar marcia harmany willet compressive coded aperture imaging proc symp elec imag comp imag san jose lustig donoho 
pauly rapid imaging compressed sensing randomly trajectories proc ann meeting ismrm seattle goginneni nehorai target estimation using sparse modeling distributed mimo radar ieee trans signal vol rachlin baron secrecy compressed sensing measurements proc annu allerton conf orsdemir altun sharma bocko security robustness encryption via compressed sensing proc ieee military commun conf milcom bianchi bioglio magli security random linear measurements proc ieee int conf acoust speech signal process icassp may bianchi magli analysis security compressed sensing circulant matrices proc ieee workshop inf forens security wifs bianchi bioglio magli analysis random projections privacy preserving compressed sensing ieee trans inf forens security vol reeves goela milosavljevic gastpar compressed sensing channel proc ieee inf theory workshop itw agrawal vishwanath secrecy using compressive sensing proc ieee inf theory workshop itw zhang winslett yang compressive mechanism utilizing sparse representation differentical privacy proc annu acm workshop privacy electron soc wpes dautov tsouri establishing secure measurement matrix compressed sensing using wireless physical layer security proc int conf comput netw cambareri mangia pareschi rovatti setti low complexity multiclass encryption compressed sensing ieee trans signal vol may george pattathil secure lfsr based random measurement matrix compressive sensing sens vol zhang zhou chen zhang wong embedding cryptographic features compressive sensing neurocomputing vol mao lai qui compressed meter reading secure load report smart grid proc ieee smartgridcomm gao zhang liang shen joint encryption compressed sensing smart grid data transmission proc ieee globecom commun inf syst security zhang zhang zhou liu chen review compressive sensing information security field ieee access special section green communications networking wireless vol katz lindell introduction modern cryptography chapman gibbs choosing bounding probability metrics 
international statistical review vol cam asymptotic methods statistical decision theory springerverlag new york golomb gong signal design good correlation wireless communication cryptography radar cambridge university press gan nguyen tran fast efficient compressive sensing using structurally random matrices ieee trans signal vol meier staffelbach generator advances lecture notes computer science lncs vol rudelson vershynin sparse reconstruction fourier gaussian measurements comm pure appl vol dasgupta asymptotic theory statistics probability springer media llc guntuboyina saha schiebinger sharp inequalities ieee trans inf theory vol kailath divergence bhattacharyya distance measures signal selection ieee trans commun vol ferrie note metric properties divergence measures gaussian case jmlr asian conference machine learning vol foucart rauhut mathematical introduction compressive sensing springer media new york needell tropp cosamp iterative signal recovery incomplete inaccurate samples appl comput harmon vol
| 7 |
integrated design optimization physical dynamics energy efficient buildings passivity approach jan takeshi hatanaka xuan zhang wenbo shi minghui zhu paper address energy management heating ventilation hvac systems buildings present novel combined optimization control approach first formulate thermal dynamics associated optimization problem optimization dynamics designed based standard algorithm strict passivity proved design local controller prove physical dynamics controller ensured based passivity results interconnect optimization physical dynamics prove convergence room temperatures optimal ones defined unmeasurable disturbances finally demonstrate present algorithms simulation ntroduction stimulated strong needs reducing energy consumption buildings smart building energy management algorithms developed industry academia particular half current consumption known occupied heating ventilation hvac systems great deal works devoted hvac optimization control paper address issue based novel approach combining optimization physical dynamics interplays optimization physical dynamics actively studied field model predictive control mpc also applied building hvac control mpc approach regards optimization process static map physical states optimal inputs another approach integrating optimization physical dynamics presented mainly motivated power grid control solution process optimization viewed dynamical system combination optimization physical dynamics regarded interconnection dynamical systems benefits approach relative mpc follows first approach allows one avoid complicated modeling prediction factors hard know advance mpc needs models predict future system evolutions second since entire system dynamical system stability performance analyzed based unifying dynamical system theory line works shiltz addresses smart grid control interconnects dynamic optimization process locally controlled grid dynamics entire process hatanaka school engineering tokyo institute technology 
tokyo japan hatanaka zhang shi electrical engineering applied mathematics school engineering applied sciences harvard oxford cambridge usa zhu department electrical engineering pennsylvania state university park usa demonstrated simulation authors incorporate grid dynamics optimization process identifying physical dynamics subprocess seeking optimal solution scheme eliminate structural constraints required presented zhang instead assuming measurements disturbances similar approach also taken power grid control stegink paper address integrated design optimization physical dynamics hvac control based passivity regard optimization process dynamical system similarly interconnections dynamic hvac optimization building dynamics partially studied temperature data zones derivatives physics side fed back dynamic optimization process recover disturbance terms however data always available practical systems thus present architecture relying temperature data subgroup zones hvac systems contents paper follows thermal dynamic model unmeasurable disturbances associated optimization problem first presented formulate optimization dynamics based primaldual gradient algorithm designed dynamics proved strictly passive transformed disturbance estimate estimated optimal room temperature next design controller actual room temperature tracks given reference produces disturbance estimate physical dynamics proved reference disturbance estimate two results interconnect optimization physical dynamics prove convergence actual room temperature optimal solution defined unmeasurable actual disturbance finally presented algorithm demonstrated simulation roblem ettings preliminary section introduce concept passivity consider system representation state input output passivity defined definition system said passive exists positive function called storage function holds inputs initial states case static system passive system also said output feedback passive index replaced side changed real vector 
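The passivity definition above — existence of a nonnegative storage function V with V(x(t)) - V(x(0)) bounded by the integral of u·y — can be checked numerically on a toy system. The sketch below verifies the inequality for the first-order system xdot = -x + u with output y = x and storage V = x^2/2 (a standard passive example, not one of the paper's building systems); the input signal is arbitrary.

```python
import math

# Numerically check  V(x(T)) - V(x(0)) <= \int_0^T u(t) y(t) dt
# for xdot = -x + u, y = x, V = x^2/2 (forward-Euler integration).
x = 2.0
dt, T = 0.001, 10.0
supplied = 0.0                      # running integral of u*y
V0 = 0.5 * x * x
t = 0.0
while t < T:
    u = math.sin(1.3 * t) + 0.5     # arbitrary bounded input
    y = x
    supplied += u * y * dt
    x += dt * (-x + u)
    t += dt
VT = 0.5 * x * x
dissipated = supplied - (VT - V0)   # equals \int x^2 dt up to Euler error
print(VT - V0, supplied, dissipated)
```

The stored-energy increase never exceeds the supplied energy; the slack (`dissipated`) is the dissipation term that strict passivity arguments, like those used for the optimization dynamics below, exploit.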
whose elements next linearize model around equilibrium describe errors equilibrium states inputs diagonal matrix whose diagonal elements element equilibrium input using variable transformations rewritten system said called impact coefficient bwq remark matrix positive definite control engineering point view system state control input disturbances suppose measurable well meanwhile general hard measure heat gain system description paper consider building multiple zones zones divided two groups first group consists zones equipped vav variable air volume hvac systems whose thermal dynamics assumed modeled circuit model optimization problem regarding system formulate optimizaci tis tion problem solved follows rij temperature zone mass flow rate zone ambient temperature tis air temperature supplied zone treated constant throughout paper heat gain zone external sources like occupants thermal capacitance thermal resistance rij thermal resistance zone specific heat air second group composed spaces walls windows whose dynamics modeled rij rooms use categorized group without loss generality assume belong first group belong second remark system parameters rij identified using toolbox collective dynamics described collections respectively matrices diagonal matrices diagonal elements respectively matrix describes weighted graph laplacian elements block diagonal matrix diagonal elements equal tis min subject bzu bdq variables correspond zone temperature mass flow rate transformation parameters components disturbances respectively variables coupled describes stationary equation throughout paper assume following assumption assumption problem satisfies following properties convex gradient locally lipschitz every element constraint function convex gradient locally lipschitz iii exists first term evaluates human comfort collection comfortable temperatures occupants zone might determined directly occupants way current systems computed using human comfort metrics like pmv predicted mean 
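The RC (resistance-capacitance) zone model described above, C_i dT_i/dt = sum_j (T_j - T_i)/R_ij + (T_amb - T_i)/R_i0 + c_p m_i (T_s - T_i) + q_i, can be simulated directly. The sketch below integrates a two-zone network with forward Euler; all parameter values are illustrative placeholders, not identified from building data as the paper's remark suggests doing with the BRCM toolbox.

```python
# Two-zone RC thermal network, constant mass-flow inputs, forward Euler.
C    = [2.0, 3.0]          # zone thermal capacitances
R12  = 4.0                 # zone-to-zone thermal resistance
R0   = [8.0, 10.0]         # zone-to-ambient resistances
cp   = 1.0                 # specific heat of supply air
Ts   = 16.0                # supply-air temperature
Tamb = 30.0                # ambient temperature
q    = [0.5, 0.8]          # unmeasured internal heat gains
mdot = [0.6, 0.4]          # mass-flow-rate inputs (held constant)

def rhs(T):
    d0 = ((T[1]-T[0])/R12 + (Tamb-T[0])/R0[0] + cp*mdot[0]*(Ts-T[0]) + q[0]) / C[0]
    d1 = ((T[0]-T[1])/R12 + (Tamb-T[1])/R0[1] + cp*mdot[1]*(Ts-T[1]) + q[1]) / C[1]
    return d0, d1

T = [26.0, 27.0]
dt = 0.01
for _ in range(20000):               # integrate to t = 200 (many time constants)
    d0, d1 = rhs(T)
    T = [T[0] + dt*d0, T[1] + dt*d1]

res = rhs(T)                         # both derivatives vanish at equilibrium
print(T, res)
```

The cooled equilibrium sits between the supply-air and ambient temperatures, which is the operating point around which the text linearizes the bilinear m_i(T_s - T_i) term.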
vote quadratic function error commonly employed mpc papers note common put weights element give priority zone done appropriately scaling element function take function introduced reduce power consumption papers simply take linear quadratic function control efforts function assumption trivially satisfied also mentioned power consumption supply fans approximated cube sum mass flow rates also satisfies assumption simple model consumption cooling coil given product mass flow rate also belongs intended class constraint function reflects hardware constraints upper bound power consumption example constraints reduced form objective paper design controller ensure convergence actual room temperature optimal room temperature solution without direct measurements optimization dynamics subsection present dynamics solve optimization problem eliminate using problem rewritten min bzu bdq abh easy confirm positive definiteness cost function strongly convex case unique optimal solution denoted satisfies following kkt conditions represents hadamard product since essentially equivalent solution also provides solution problem precisely define bdq pair solution sequel also use notation easy confirm gradient algorithm one however hard obtain since measurable thus need estimate measurements physical quantities motivates interconnect physical dynamics optimization dynamics taking account issues present iii ptimization dynamics subject fig block diagram optimization dynamics passive lemma given would easy solve however practical applications desired updated real time according changes disturbances regard convenient take dynamic solution process optimization since trivially allows one update parameters real time particular employ simplicity skip dependence subsequent results easily extended case depends estimates respectively notation real vectors dimension provides vector whose element denoted given otherwise element respectively note different gradient algorithm term replaced measurement 
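The primal-dual gradient dynamics used for the optimization side can be sketched on a one-dimensional toy version of the reduced problem: minimize a comfort term plus an effort term subject to the steady-state constraint. This is the standard saddle-point (primal-dual gradient) flow applied to an illustrative problem, not the paper's full multi-zone formulation; here the disturbance q is assumed known, which is the idealization the paper later removes.

```python
# Toy reduced problem:  min 0.5*(z - zc)^2 + 0.5*alpha*u^2
#                       s.t. -a*z + u + q = 0   (steady-state equation)
# Saddle-point dynamics on L = f(z,u) + lam*(-a*z + u + q).
zc, alpha, a, q = 22.0, 0.1, 1.0, 3.0   # comfort temp, effort weight, plant, disturbance

z, u, lam = 15.0, 0.0, 0.0
dt = 0.005
for _ in range(40000):                   # integrate to t = 200
    dz   = -((z - zc) - a * lam)         # -dL/dz
    du   = -(alpha * u + lam)            # -dL/du
    dlam = -a * z + u + q                # +dL/dlam
    z, u, lam = z + dt*dz, u + dt*du, lam + dt*dlam

# analytic optimum from substituting u = a*z - q into the cost
z_opt = (zc + alpha * a * q) / (1.0 + alpha * a * a)
print(z, z_opt, -a * z + u + q)
```

The flow settles exactly at the KKT point, and because it is a dynamical system the time-varying-disturbance case amounts to feeding it an updated q, which is the property the havens of the passivity analysis rely on.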
replaced estimate whose production mentioned later system illustrated fig passivity analysis optimization dynamics hereafter analyze passivity optimization process assuming constant case holds practice disturbance namely ambient temperature following results applied practical case approximated piecewise constant signal fully expected since usually varies slowly assumption define output following lemma lemma consider system assumption passive proof see appendix another benefit using dynamic solution provides distributed solution present results extended global problem although exceeds scope paper please refer details issue let next transform output comparing definition signal regarded estimate also define prove following lemma lemma consider system assumption output feedback passive index proof easy see following inequality fig block diagram physical dynamics local controller lemma substituting yields integrating time completes proof section design physical dynamics prove passivity design model presented modify control architecture order interconnect optimization dynamics lemma steady states given follows controller design subsection design controller determine input tracks reference signal assume following assumption assumption matrix positive definite property always hold positive definite matrices expected true many practical cases since diagonal elements tend inant application note assumption holds full actuation case inspired fact many existing systems employ controllers logics local controller design following controller adding reference disturbance feedforward terms identity matrix feedforward gain selected equation rewritten bwq exists assumption system following lemma hysical dynamics remark steady state variable equal passivity analysis physical dynamics subsection analyze passivity system assuming constant case time varying treated end next section choose output prove passivity follows lemma consider system assumption system passive proof see appendix 
remark lemma holds regardless value extract term define output following lemma consider system assumption system impact coefficient minimal eigenvalue proof take substituting yields completes proof nterconnection ptimization hysical dynamics let interconnect optimization dynamics physical dynamics remark stationary value equivalent also holds inspired facts interconnect systems via negative feedback following main result paper theorem suppose assumptions hold interconnection via ensures proof define combining yields means belong class since positive definite state variables belong bounded also means bounded hence bounded thus invoking barbalat lemma prove means completes proof lyapunov stability desirable equilibrium tuple also proved proof emphasized optimal solution dependent unmeasurable disturbance nevertheless convergence solution guaranteed owing feedback path physics optimization results obtained assuming constant likely valid since ambient temperature general slowly varying however heat gain may contain high frequency components address issue decompose signal components others also assume belongs extended space following corollary holds proved following proof procedure passivity theorem using fact side upper bounded corollary suppose assumptions hold interconnected system finite gain fig hierarchical control architecture give remarks present architecture figs oriented theoretical analysis implementation need follow information processing figures indeed interconnected system equivalently transformed fig let operations shaded dark gray executed controller controller implemented decentralized fashion similarly existing systems fig controller physical dynamics biproper hence problem algebraic loops occur however matter practice since information transmissions highand processes usually suffer possibly small delays although controller contains algebraic loop easily confirmed loop solved direct calculations algebraic constraint transfer function disturbance 
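The interconnection result — room temperatures converging to the optimum defined by an *unmeasured* disturbance — can be sketched end to end on a scalar plant. The construction below is illustrative and deliberately simplified: a first-order plant with unknown constant disturbance q, a proportional tracking controller with optimizer feedforward, a textbook reduced-order disturbance observer standing in for the paper's disturbance-estimate path, and the primal-dual optimizer fed the estimate q_hat instead of q.

```python
zc, alpha, a, q_true = 22.0, 0.1, 1.0, 3.0   # comfort temp, weights, true disturbance
k, ell = 5.0, 2.0                            # feedback and observer gains

xp = 15.0                     # plant state (room temperature)
z, u_ff, lam = 15.0, 0.0, 0.0 # optimization-dynamics state
xi = -ell * xp                # observer state, chosen so q_hat(0) = 0

dt = 0.005
for _ in range(60000):        # integrate to t = 300
    q_hat = xi + ell * xp
    u = u_ff + k * (z - xp)                  # local controller: feedforward + tracking
    dxp = -a * xp + u + q_true               # physics, with the unmeasured q
    dxi = -ell * q_hat + ell * a * xp - ell * u   # observer uses only xp and u
    dz   = -((z - zc) - a * lam)             # optimizer, with q replaced by q_hat
    duff = -(alpha * u_ff + lam)
    dlam = -a * z + u_ff + q_hat
    xp, xi = xp + dt * dxp, xi + dt * dxi
    z, u_ff, lam = z + dt * dz, u_ff + dt * duff, lam + dt * dlam

q_hat = xi + ell * xp
z_opt = (zc + alpha * a * q_true) / (1.0 + alpha * a * a)   # optimum w.r.t. true q
print(xp, z, z_opt, q_hat)
```

The observer error obeys d(q_hat)/dt = -ell*(q_hat - q), so the estimate converges to the true disturbance, the optimizer's equilibrium becomes the true optimum, and the tracking loop drives the room temperature there — the same cascade logic as the theorem's interconnection, though without its passivity-based generality.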
estimate roughly speaking almost complementary transfer function hence low frequency components provided physical dynamics regarded estimate component cutoff frequency disturbance principle tuned system designed cutoff also automatically decided however always drawback least qualitatively actually even optimal solutions reflecting much faster disturbance variations provided physical states respond variations faster bandwidth consequence internal model control constant disturbances actual disturbance correctly estimated control architecture based similar concept presented section vii stegink however clear problem meet structural constraints assumed hence architecture directly applied problem zhang present another kind interconnection physical optimization dynamics based feedforward disturbance computed state measurements derivatives imulation section demonstrate presented control architecture simulation purpose build building modeling software sketchup trimble contains three rooms zones including walls ceilings windows building model installed energyplus order simulate evolution zone temperatures using acquired data identify model parameters via brcm toolbox next specify optimization problem elements set take constraints also selected zui element collecting constraints define function however since turns directly using response speed problem penalizing constraint violation algorithm instead take constraint essentially change optimization problem simulation take feedback gains tuned peak gain smaller criterion confirmed assumption satisfied following simulation use disturbance data shown fig compare results ideal case disturbance directly measurable case feedback path physical dynamics optimization needed hence take cascade heat ambient temp time min fig time min ambient temperature left external heats right heat heat room temp time min time min time min reference temp fed back optimization dynamics differences present scheme listed follows approach requires 
measurements state variables states include temperatures windows walls technological feasibility may problematic least increases system cost hand approach needs usually measurable since sensor measure computed using difference approximation provides approximation errors meanwhile present approach need approximation difference approximation errors together sensor noises high frequency components disturbances directly sent optimization dynamics may cause fluctuations output internal variables optimization process unless carefully designed sense noise reduction adding filter computed disturbance might eliminate undesirable factors however filter designed independently stability entire system presence uncertainties system model since case system becomes feedback system filter included loop meanwhile noises automatically rejected physical dynamics algorithm time min fig time responses estimated heat gain top figures estimated optimal room temperatures actual room temperatures figures solid curves show responses delivered proposed method dotted ones light colors disturbance feedforward scheme top figures dotted lines coincide actual heat gain connection optimization physics noted hard implement practice trajectories estimated heat gains estimated optimal room temperatures temperatures illustrated solid curves fig dotted lines light colors show trajectories using disturbance feedforward scheme see top left figures presented algorithm almost correctly estimate disturbance also observed right figure high frequency signal components filtered case algorithm trajectories figure sometimes get far since constraint gets active periods due high ambient temperature heat gains see bottom figures response present method constraint violations slower disturbance feedforward scheme high frequency components disturbance filtered physical dynamics let next show high sensitivity disturbance feedforward cause another problem small noises added heat gain every whose absolute value upper 
bounded run two algorithms resulting responses illustrated fig observed figure heat heat signed optimization dynamics local physical control system proved system properties related passivity interconnected optimization physical dynamics proved convergence room temperatures optimal ones finally demonstrated present algorithms simulation room temp reference temp time min time min eferences time min time min room temp reference temp fig time responses presence noises heat gain every line meaning fig time min time min fig time responses reference left room temperatures right scaling factor noise filtered present algorithm effects appear estimates accordingly trajectories estimated optimal actual room temperatures almost fig meanwhile disturbance feedforward approach suffers significant effects noise trajectories get smaller namely fluctuations caused undesirable undershoots although trajectories actual temperatures get smooth behavior reference desirable engineering point view adding filter reducing gain optimization dynamics would eliminate oscillations spoils advantage namely response speed noted disturbance feedforward implemented using recovery technique filter designed independently system stability stated section response speed present algorithm fig still problematic accelerated tuning scaling factor results shown fig observed almost speed disturbance feedforward fig achieved present algorithm simulation system practical settings left future work paper vii onclusion paper presented novel combined optimization control algorithm hvac control buildings afram theory applications hvac control systems review model predictive control mpc building environment vol oldewurtel parisio jones gyalistras gwerder stauch lehmann morari use model predictive control weather forecasts energy efficient building climate control energy buildings vol aswani master taneja culler tomlin reducing transient steady state electricity consumption hvac using control proc ieee vol kelman daly 
borrelli predictive control energy efficient buildings thermal storage modeling stimulation experiments ieee control syst vol matusko borrelli stochastic model predictive control building hvac systems complexity conservatism ieee trans control systems technology vol goyal ingley barooah control algorithms based occupancy information energy efficient buildings proc american control shiltz cvetkovic annaswamy integrated dynamic market mechanism markets frequency regulation ieee trans sust vol zhao topcu low design stability primary frequency control power systems ieee trans automatic control vol mallada zhao low optimal control frequency regulation smart grids proc annual allerton conf communication control computing zhang papachristodoulou distributed optimal control using proc ieee conf decision control stegink persis van shaft unifying approach optimal frequency market regulation power grids downloadable https zhang shi yan malkawi decentralized distributed temperature control via hvac systems energy efficient buildings automatica submitted cherukuri mallada asymptotic convergence constrained dynamics system control letters vol sturzenegger gyalistras semeraro morari smith brcm matlab toolbox model generation model predictive building control proc american control hatanaka chopra ishizaki distributed optimization communication delays using consensus estimator ieee trans automatic control submitted downloadable https van der schaft passivity techniques nonlinear control edn communications control engineering series springer london hatanaka chopra fujita spong control estimation networked robotics communications control engineering series simaan modularized design cooperative control operation networked heterogeneous systems automatica vol hong gao tracking control consensus active leader variable topology automatica vol hatanaka zhang shi zhu physics integrated hvac optimization multiple buildings robustness time delays arxiv downloadable https wen mishra mukherjee 
tantisujjatham minakais building temperature control adaptive feedforward proc ieee conf decision control privara siroky ferkl cigler model predictive control building heating system first experience energy buildings vol morosan bourdais dumur buisson building temperature regulation using distributed model predictive control energy buildings vol yuan perez ventilation temperature control vav system using model predictive strategy energy buildings vol maasoumya razmara shahbakhti vincentelli handling model uncertainty model predictive control energy efficient buildings energy buildings vol boyd vandenberghe convex optimization cambridge university press dept energy energyplus https ppendix roof emma lemma consider system assumption passive proof define energy function following procedure notation represents upper dini derivative integrating time completes proof next consider replace external input consider system lemma suppose assumption system passive proof subtracting yields define time derivative along trajectories given convexity holds completes proof system given interconnecting via easy confirm ready prove lemma define combining completes proof ppendix roof emma lemma assumption system stabilizable detectable proof define hold stable completes proof using lemma next prove following result lemma consider system assumption system passive proof first formulate error system take definite matrix define calculation schur complement assumption equivalent following riccati inequality following lemma since follows positive solution shown exist lemma define energy function solution time derivative along trajectories given completes proof ready prove lemma replace time derivative define along trajectories given define completes proof
| 3 |
language support reliable memory regions nov saurabh hukerikar christian engelmann computer science mathematics division oak ridge national laboratory oak ridge usa email hukerikarsr engelmann abstract path exascale computational capabilities highperformance computing hpc systems challenged inadequacy present software technologies adapt rapid evolution architectures supercomputing systems constraints power driven system designs include increasingly heterogeneous architectures diverse memory technologies interfaces future systems also expected experience increased rate errors applications longer able assume correct behavior underlying machine enable scientific community succeed scaling applications harness capabilities exascale systems need software strategies enable explicit management resilience errors system addition locality reference complex memory hierarchies future hpc systems prior work introduced concept explicitly reliable memory regions called havens memory management using havens supports reliability management approach memory allocations havens enable creation robust memory regions whose resilient behavior guaranteed protection schemes paper propose language support havens type annotations make structure program havens explicit convenient hpc programmers use describe extended memory management model implemented demonstrate use annotations affect resiliency conjugate gradient solver application work sponsored department energy office advanced scientific computing research manuscript authored llc contract department energy united states government retains publisher accepting article publication acknowledges united states government retains irrevocable license publish reproduce published form manuscript allow others united states government purposes department energy provide public access results federally sponsored research accordance doe public access plan http introduction computing hpc community sights set computers remain several challenges designing 
systems preparing application software harness parallelism due constraints power emerging hpc system architectures employ radically different node system architectures future architectures emphasize increasing parallelism addition scaling number nodes system order drive performance meeting constraints power technology trends suggest present memory technologies architectures yield much lower memory capacity bandwidth per flop compute performance therefore emerging memory architectures complex denser memory hierarchies utilize diverse memory technologies management resilience occurrence frequent faults errors system also identified critical challenge hpc applications algorithms need adapt evolving architectures also increasingly unreliable challenges led suggestions existing approaches programming models must change complement existing approaches demands massive concurrency emergence high fault rates require programming model features also support management resilience data locality order achieve high performance recent efforts hpc community focused improvements scalability numerical libraries implementations message passing interface mpi libraries useful future machines however also need develop new abstractions methods support fault resilience prior work proposed approach memory management using havens havens offer explicit method affecting resilience context memory management decisions memory management allocated object placed havens guarantee specified level robustness program objects contained memory region objects contained havens may freed individually instead entire deallocated leading deletion contained objects protected mechanism different havens program may protected using different resilience schemes use havens provides structure resiliency management program memory grouping related objects based objects individual need robustness performance overhead resilience mechanism approach memory management enables hpc applications write disciplines enhance 
resilience features arbitrary types memory traditional systems designed statically assign program objects memory regions based compiler analysis order eliminate need runtime garbage collection contrast primary goal havens provide scheme creating regions within memory various resilience features initial design defined interfaces creation use havens implemented library interface paper develop language support order make havens clearer convenient use hpc application programs supporting many language constructs possible paper makes following contributions make realistic proposal adding language support havens mainstream hpc languages develop type annotations enable static encoding decisions program object allocation deallocation robust regions also provide opportunities optimize robustness performance overhead protecting program objects investigate affecting resilience individual program objects using static annotations affects fault coverage performance application execution havens reliable memory regions havens designed support memory management runtime memory partitioned robust regions called havens program objects allocated various object deallocation policies may defined default free objects deleting entire pool memory therefore havens enable association lifetime reliable memory regions memory region protected predefined robustness scheme provides error detection correction objects robustness scheme used intended agnostic algorithm features structure data objects placed havens concept havens maintains clear separation memory management policies mechanism provides error resilience different havens used application may protected using different schemes parity hashing replication may carry different level performance overhead therefore havens enable program memory logically partitioned distinct regions possess specific level error resilience performance overhead perspective hpc application program havens enable applications exert control resilience properties 
individual program objects since different havens may varying guarantees resilience performance overhead object placement havens may driven criticality object program correctness associated overhead havens used create logical grouping objects require similar resilience characteristics havens also enable improvements locality dynamically allocated objects placement aggregation various objects based application pattern use furthermore havens permit hpc applications balance locality program objects resilience needs example runtime system may dynamically map onto specific hardware units memory hierarchy effort improve locality program objects mapping may also guided availability error memory unit cooperates protection scheme using havens memory management basic operations developing concept havens defined interface hpc programs effectively use reliable memory regions application codes abstract interface based notion manager provides set basic operations must implemented fully support use havens operations summarized create request creation application returns handle memory region memory allocated choice error protection scheme specified creation operation alloc application requests specified block memory within using interface operation results allocation memory initialization state related protection scheme delete interface indicates intent delete object within memory released destroyed read write interfaces read update program objects contained operations performed interfaces rather directly objects enable manager maintain updated state robustness mechanism destroy interface requests destroyed results memory blocks allocated region deallocated upon completion operation operation permitted memory available reuse state related robustness scheme maintained manager also destroyed relax robust interfaces enable error protection scheme applied turned based needs application program execution library interface implementation havens library similar one heap divided pages new 
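The basic operations listed above (create, alloc, delete, read/write, destroy, relax/robust) can be sketched compactly. The following is a minimal Python model for illustration only: the paper's actual interface is a C library, and every name here is hypothetical.

```python
class HavenManager:
    """Toy model of a haven manager: each haven is a robust region with a
    protection scheme fixed at creation; objects live inside it, and all
    accesses go through the manager so the scheme's state stays current."""

    def __init__(self):
        self._havens = {}
        self._next_handle = 0

    def create(self, scheme="parity"):
        # 'create': request a new region and get back a handle to it.
        h = self._next_handle
        self._next_handle += 1
        self._havens[h] = {"scheme": scheme, "objects": {}, "protected": True}
        return h

    def alloc(self, h, name, nwords):
        # 'alloc': reserve a block of the given size inside the haven.
        self._havens[h]["objects"][name] = [0] * nwords

    def write(self, h, name, i, value):
        self._havens[h]["objects"][name][i] = value

    def read(self, h, name, i):
        return self._havens[h]["objects"][name][i]

    def delete(self, h, name):
        # 'delete': the object's storage becomes reusable, but memory is
        # only truly released when the whole haven is destroyed.
        del self._havens[h]["objects"][name]

    def relax(self, h):
        self._havens[h]["protected"] = False   # turn protection off

    def robust(self, h):
        self._havens[h]["protected"] = True    # turn protection back on

    def destroy(self, h):
        # 'destroy': deallocate every block in the region at once.
        del self._havens[h]
```

The key design point mirrored here is that deallocation is region-granular: `delete` only marks an object, while `destroy` releases the whole region, which is what lets the protection scheme keep a single consolidated state per haven.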
creation aligned page boundary library maintains linked list pages provide library api functions primitives enable basic operations alloc new implement abstraction allocation objects associated region implementation interfaces require changes representation pointers pointers may reference havens access individual objects havens since library implementation differentiate pointer types conversions two kinds pointers potentially unsafe may lead incorrect behavior support allocation deallocation therefore deallocation illegal operation release enables expression end object life however destroy operation must invoked release memory achieved concatenating page list global list free pages protection schemes havens initial implementation havens memory regions guaranteed behavior comprehensive protection based lightweight parity scheme scheme requires manager maintain pair signatures memory region word length error correction additional word length signature error detection detection signature contains one parity bit per word memory region memory allocated region initialized correction signature retains xor words written memory region apply xor operation every word updated memory region correction signature silent data corruptions errors detected checking detection signature parity violations detection signature also enables location corrupted memory word identified value corrupted memory location may recovered using signatures xor two signatures equals xor uncorrupted locations therefore corrupted value memory region recovered performing xor operation remaining words xor signatures recovered value overwrites corrupted value detection signature recomputed protection adaptation erasure code using scheme multibit corruptions may recovered unlike ecc offers single bit error correction double bit error detection scheme maintains limited state detection correction capabilities therefore carries little space overhead comparison schemes ecc checksums additionally operations 
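The pair-of-signatures scheme described above can be modeled in a few lines. This is an illustrative Python sketch (the original implementation is a C library): the detection signature holds one parity bit per word, the correction signature holds the XOR of all words in the region, and a flagged word is recovered by XOR-ing the correction signature with every other word.

```python
class ParityRegion:
    def __init__(self, nwords):
        self.words = [0] * nwords
        self.parity = [0] * nwords      # detection signature: parity per word
        self.corr = 0                   # correction signature: XOR of all words

    def write(self, i, value):
        self.corr ^= self.words[i] ^ value   # incremental XOR update
        self.words[i] = value
        self.parity[i] = bin(value).count("1") & 1

    def scrub(self):
        """Detect a corrupted word via the parity bits (catches any flip of
        an odd number of bits) and recover it: the XOR of the correction
        signature with every *other* word equals the uncorrupted value.
        Returns the repaired index, or None if the region is clean."""
        for i, w in enumerate(self.words):
            if bin(w).count("1") & 1 != self.parity[i]:
                good = self.corr
                for j, v in enumerate(self.words):
                    if j != i:
                        good ^= v
                self.words[i] = good
                self.parity[i] = bin(good).count("1") & 1
                return i
        return None
```

Note how this matches the space claim in the text: regardless of region size, the scheme keeps only one parity bit per word plus a single word-length correction signature, yet it can rebuild an entire corrupted word, not just a single bit.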
transparent application detection constant time operation recovery operation based size type system goals havens express intended relationships locality resilience requirements various program objects use havens brings structure memory management grouping related program objects based resiliency locality needs initial prototype implementation havens contains library interfaces primitive operations language support havens aims make programming hpc applications havens straightforward productive making programs using havens clearer easier write understand design language support seeks address following seemingly conflicting goals explicit hpc programmers control program objects allocated explicitly define robustness characteristic lifetime convenience minimal set explicit language annotations support many idioms possible order facilitate use havensbased memory management existing hpc application codes well development new algorithms safety language annotations must prevent dereferences space leaks scalability havens must support various object types performance overhead resilience scheme scales well even large number objects language support enables hpc programmers statically encode memory management decisions various program objects making structure havens resilience features explicit number runtime checks modifications structure resilience scheme reduced type annotations havens model memory management heap divided regions containing number program objects therefore havens abstract entities represent aggregation program objects pointers havens refer abstract entities heap whose resilience scheme defined upon creation provides protection program objects contained within definition pointer type provides statically enforceable way specifying resilience scheme type size information encapsulated objects inside type statically ensures programs using model memory management dont permit dangling references ptr new type handles havens declaration ptr typed pointer leads 
creation declaration allocate memory pointer object declared subsequently deleted shown listing deletehaven listing type annotations havens ptr smart pointer object contains pointer reference also maintains bookkeeping information objects resident including sizes reference count information enables library optimize resilience scheme protects example protection scheme protected using pair parity signatures availability count sizes objects inside enables statically creating protected pair signatures define deletehaven operator provides static mechanism reclaim memory allocated objects inside also discards bookkeeping information state maintained resilience scheme signatures provide parity protection library implementation havens permits unsafe operations since may deleted even program contains accessible pointers objects introduction ptr type also address issue safety deletehaven operator encountered safety delete operation guaranteed checking reference counts included ptr typed pointer object delete operation succeeds ptr contains null object pointers operation results releasing storage space along program objects contained ptr typed pointer object contains count active object references delete operation fails subtyping annotations subtype annotation used constrain membership object specific object type annotated region expression explicitly specifies values type belong region expression always bound type declaration object declare new pointer declare member int delete memory deletehaven listing subtype annotations havens type ptr defines subtype variables guarantees allocation qualified object within type annotation enables local variables global variables programs associated membership annotated variable also guarantees variable protection offered specified resilience scheme declaration single integer variable inside written shown listing type ptr annotation defines subtype pointer objects inclusion ptr specifies membership object referenced annotated variable 
declaration array inside allocation memory array written shown listing declare new pointer declare vector pointer member double vector memory vector size vector sizeof double set vector pointer null without fails vector null delete release memory vector deletehaven listing declaration array object within membership relationship variables havens expressed subtyping annotations also enables programmers imply locality reference program objects associated restrictions use type annotations object pointers programmers need differentiate traditional pointers pointers specify membership conversion two kinds pointers potentially unsafe may lead incorrect program behavior therefore define null enables traditional pointers viewed pointers objects inside null region compiler guarantees safe assignments pointer variables static analysis runtime checks defining lifetimes language support also define notion lifetimes havens basic idea define scope computation valid define reference lifetime shown listing syntax enables creation dynamic havens whose lifetime execution statement statement may compound statement program objects allocated within guaranteed error protection default resilience scheme explicit definition lifetimes havens enables programs scope specific regions computation must executed high reliability listing defining lifetime scope havens example vector addition example listing shows skeleton vector addition code objective protect operand vectors example omits details vector initialization addition routines declaration ptr pointer variable identifier creates upon creation parity signatures initialized memory allocated create vectors declare vectors members double sizeof double double sizeof double declare vector pointer member null double null malloc sizeof double vector set vector null without fails null null free deletehaven listing example resilient vector addition using havens language support declaration array pointers makes relationship operand vectors explicit 
ensures allocation vectors inside alloc allocation requests made library initializes resilience scheme allocates vectors size elements array pointer result vector traditional pointer declared double establishes membership null vector addition function returns operand vector pointers set null deletehaven operator able release memory associated includes vectors resilience models using havens variety fault tolerance abft strategies extensively studied past decades many techniques designed take advantage unique features application algorithm data structures techniques also able leverage fact different aspects application state different resilience requirements needs vary execution application however key barrier broader adoption resilience techniques development hpc applications lack sufficient programming model support since use features requires significant programming effort explore three generalized resilience models may developed using havens whose construction facilitated languagebased annotations models intended serve guidelines hpc application programmers develop new algorithms well adapt existing application codes incorporate resilience capabilities selective reliability based insight different variables hpc program exhibit different vulnerabilities errors havens provide specific regions program memory comprehensive error protection model hpc programmers use havens mechanisms explicitly declare specific data compute regions reliable default reliability underlying system specialized reliability various protection schemes provide correction capabilities havens guarantee different levels resiliency also based placement havens physical memory schemes may complement capabilities havens provide simplified abstractions design resilience strategies seek complement requirements different program objects various hardware protection schemes available phased reliability vulnerability various program objects computations errors varies program execution havens may also used 
partition applications distinct phases computations since various resilience schemes incur overheads application performance protection features specific data regions compute phases may enabled disabled order performance overhead resilience experimental results apply static annotations hpc application must identify program objects must allocated havens annotate declarations type qualifiers experiments evaluate use memory management using type qualifiers conjugate gradient code including type subtype qualifiers various application objects use iterative algorithm validate correctness fig performance overheads havens static annotations outcome solver solution produced using direct solver compare evaluation results previous implementation required insertion raw library interfaces one important advantages using static annotations number lines code changed reduced significantly compared changes required insertion library calls application code improves code readability algorithm solves system linear equations algorithm allocates matrix vector solution vector additionally conjugate vectors residual vector referenced iteration algorithm program objects application demonstrate different sensitivities errors errors operand matrix vector fundamentally changes linear system solved errors structures even solver converges solution may significantly different correct solution preconditioner matrix demonstrates lower sensitivity errors vectors features algorithm form basis strategic placement objects havens since allocation sensitive data structures havens provides substantially higher resilient behavior terms completion rates algorithm reasonable overheads performance naive placement strategy present detailed sensitivity analysis evaluate performance benefits gained use static annotations various objects code perform two sets experiments allocate one structure using static annotations remaining structures allocated using standard memory allocation interfaces strategically 
annotate data structures allocate structures havens specific combinations evaluate following combinations allocation static state matrix vector preconditioner havens dynamic state solution vectors allocated using standard memory allocation functions allocation matrix vector havens iii dynamic state provided fault coverage using havens compare strategies allocations havens provide complete coverage experimental runs allocate structure using havens performance overhead using havens terms time solution solver selection program objects allocation havens shown figure annotation program variables allocated havens provides higher fault coverage results higher overhead time solution application variables allocated using raw library interfaces program object protected pair signatures provides monolithic protection entire objects qualified static annotations application code compiler library better understanding size structure program objects therefore larger program objects notably operand matrix preconditioner matrix split protected multiple pairs parity signatures split protection transparent application programmer application still accesses matrix elements single data structure use multiple signatures improves overhead objects observed overhead static annotations program objects lower allocation set objects operand matrix occupies dominant part solver memory occupying active address space whereas solution vector conjugate vectors residual vector preconditioner matrix account remaining space therefore annotation matrix individually results lower overhead monolithic parity protection using library interfaces improvement performance smaller data objects statically annotated within version using library interfaces objects related work much research devoted studies algorithms memory management based either automatic garbage collection explicit schemes concept regions implemented storage systems allowed objects allocated specific zones zone permits different allocation policy 
deallocation performed basis vmalloc library provides programmers interface allocate memory define policies allocation systems arenas enable writing memory allocators achieve performance creating heap memory allocation disciplines suited application needs implementations vmalloc place burden determining policy allocation objects regions programmer schemes used profiling identify allocations place allocations regions several early implementations systems unsafe deletion regions often left dangling pointers subsequently accessible safety concerns addressed reference counting schemes regions dynamic heap memory management static analysis regions provide alternative garbage collection methods approach assignment program objects regions statically directed compiler effort provide predictable lower memory space approach refined relaxing restriction region lifetimes must lexical language support regions available many declarative programming languages prolog cyclone language designed syntactically close provides support regions explicit typing system rust programming language also provides support regions recent efforts seek provide programming model support reliability containment domains offer programming constructs impose transactional semantics specific computations previous work havens provided method memory allocations rolex offers extensions support various resilience semantics application data computations global view resilience gvr supports reliability application data providing interface applications maintain versionbased snapshots application data support fault tolerance explicit memory malloc failable interface used application allocate memory heap callback functions used handle error recovery memory block conclusion resilience among major concerns next generation hpc systems rapid evolution hpc architectures emergence increasingly complex memory hierarchies applications running future hpc systems must manage locality maintain reliability data havens provide 
explicit approach hpc applications manage resilience locality programs paper focused developing language support havens emphasis providing structure memory management type annotations programmer expresses intended relationships locality resilience requirements various objects application program type annotations enable resilience requirements program objects encoded within heap idioms static typing discipline application codes written also guarantees safety memory operations preventing dereferences space leaks structured management facilitated language support provides mechanisms development effective resilience models hpc applications references shalf dosanjh morrison exascale computing technology challenges proceedings international conference high performance computing computational science vecpar kogge bergman borkar campbell carlson dallya denneau franzon harrod hill hiller karp keckler klein lucas richards scarpelli scott snavely sterling williams yelick exascale computing study technology challenges achieving exascale systems technical report darpa september debardeleben laros daly scott engelmann harrod highend computing resilience analysis issues facing hec community pathforward research development whitepaper december amarasinghe hall lethin pingali quinlan sarkar shalf lucas yelick balaji diniz koniges snir sachs yelick exascale programming challenges report workshop exascale programming challenges technical report department energy office science office advanced scientific computing research ascr july hukerikar engelmann havens explicit reliable memory regions hpc applications ieee high performance extreme computing conference hpec september tofte talpin implementation typed using stack regions proceedings acm symposium principles programming languages popl new york usa acm ross aed free storage package communications acm august vmalloc general efficient memory allocator software practice experience hanson fast allocation deallocation memory based 
object lifetimes softwre practices experience january barrett zorn using lifetime predictors improve memory allocation performance proceedings acm sigplan conference programming language design implementation pldi new york usa gay aiken memory management explicit regions proceedings acm sigplan conference programming language design implementation pldi new york usa acm aiken levien better static memory management improving analysis languages proceedings acm sigplan conference programming language design implementation pldi tofte birkedal elsman hallenberg olesen sestoft bertelsen programming regions kit technical report technical report university copenhagen denmark april makholm memory manager prolog proceedings international symposium memory management ismm new york usa acm grossman morrisett jim hicks wang cheney regionbased memory management cyclone proceedings acm sigplan conference programming language design implementation pldi new york usa acm rust programming language http chung lee sullivan ryoo kim yoon kaplan erez containment domains scalable efficient flexible resilience scheme exascale systems proceedings international conference high performance computing networking storage analysis hukerikar lucas rolex language extensions systems journal supercomputing chien balaji beckman dun fang fujita iskra rubenstein zheng schreiber hammond dinan laguna richards dubey van straalen hoemmen heroux teranishi siegel versioned distributed arrays resilience scientific applications global view resilience procedia computer science bridges hoemmen ferreira heroux soltero brightwell cooperative dram fault recovery proceedings international conference parallel processing volume springerverlag
| 6 |
Finite State Machine Synthesis for Evolutionary Hardware
Andrey Bereza, Maksim Lyashov, Luis Blanco
Dept. of Information Systems and Radio Engineering, State Technical University. Email: anbirch, raubtierxxx

Abstract. This article considers the application of genetic algorithms to finite state machine synthesis. The resulting genetic finite state machine synthesis algorithm allows the creation of machines with fewer states within a shorter time, which makes it possible to use the algorithm in autonomous systems on reconfigurable platforms.

Introduction. Nowadays evolutionary algorithms are applied to the design of digital and analog devices, a trend called evolutionary electronics. The use of hardware platforms with reconfigurable elements allows systems to be rebuilt in the process of operation, which is called evolutionary hardware. Evolutionary hardware is a new type of hardware that relies on various probabilistic algorithms (genetic algorithms, evolutionary programming) to design reconfigurable parts and to dynamically rebuild combinational and sequential logic circuits. To dynamically rebuild a digital logic circuit it is necessary to be able to synthesize the circuit at the gate level; hence the task of digital logic circuit synthesis arises. Current methods of finite state machine synthesis almost always exploit the specifics of a particular problem, which makes it impossible to reuse a state machine generation technique on a different kind of problem; hence the quest for a universal state machine synthesis method applicable to a wide range of problems. An application of finite state machine synthesis is shown in earlier work; however, those algorithms are applied to state machine programming, where a program is described by finite state machines, and they do not allow usage in autonomous hardware systems on reconfigurable platforms. This research is supported by a Russian Foundation for Basic Research grant.

The problem of state machine evolutionary synthesis. The problem of state machine evolutionary synthesis is defined by the set of synthesized solutions (genotypes), the objective function, and the genetic operators. The genotype of a synthesized solution is defined by the number of finite state machine states and the number of inputs. The objective function is defined by an expression combining the number of states and the number of iterations with weight coefficients for the particular criteria; the task is to minimize the
objective function genetic algorithm finite state machine synthesis schematic diagram proposed hardware oriented genetic algorithm finite state machine synthesis designed creation reconfigurable platform shown figure first step user sets requirements finite state machine designed numbers states triggers also amount generations mutation crossing probabilities must set organize process evolutionary synthesis according algorithm initial set solutions generated evaluated transition correction algorithm also executed evaluation process initial set solutions population solution meets requirements gets saved algorithm ends terminal criteria reaching maximum number generations solution primary genetic operators fsmga mutation crossing chromosome coding bit string applies constrains operator types deleted population since crossing mutation operators chromosomes inserted population since proposed designed function autonomous chromosomes coded bit strings let consider chromosome coding specific example finite machine combinatory logic built logical elements replaced ram schematic diagram figure finite machine transition table converted first able replace combinatory circuit ram transition graph finite machine given figure describes behavior control device amount triggers required represent four states equal start determine chromosome structure generate initial set solutions evaluate initial set solutions define counter generation count found yes select chromosome pair apply crossing get descendants triggers combinatory logic input mutate descendants evaluate descendants include population result output selection end output triggers ram figure schematic diagram hardware oriented genetic algorithm finite state machine synthesis mutation operator random depend chromosome fitness gene residing chromosome result mutation randomly changes either output value state machine state number selected randomly picked transition crossing operator randomly exchanges genetic information two 
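The loop just outlined — generate, evaluate, select, cross, mutate — can be sketched compactly. The following Python toy (hypothetical; the paper's algorithm is hardware-oriented and bit-string coded) evolves a small Mealy machine: the chromosome is a flat gene list holding, for every (state, input) pair, the next state and the output, mirroring the transition-table coding described above. For concreteness the fitness here counts correct outputs on a tiny bit-parity task rather than the paper's weighted states-plus-iterations objective.

```python
import random

def eval_fsm(genome, n_states, cases):
    # genome: for each (state, input) pair, two genes: next state and output.
    score = 0
    for bits, want in cases:
        s, out = 0, 0
        for b in bits:
            g = 2 * (2 * s + b)          # index of the (state, input) pair
            s, out = genome[g], genome[g + 1]
        score += (out == want)           # judge the final output
    return score

def evolve(n_states, cases, pop_size=40, gens=80, p_mut=0.3, seed=1):
    rng = random.Random(seed)
    glen = 2 * 2 * n_states              # binary input, 2 genes per pair

    def random_genome():
        return [rng.randrange(n_states) if i % 2 == 0 else rng.randrange(2)
                for i in range(glen)]

    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda g: -eval_fsm(g, n_states, cases))  # best first
        pop = pop[:pop_size // 2]                              # truncation
        while len(pop) < pop_size:
            a, b = rng.sample(pop[:10], 2)
            cut = rng.randrange(1, glen)                       # one-point cross
            child = a[:cut] + b[cut:]
            if rng.random() < p_mut:                           # point mutation:
                j = rng.randrange(glen)                        # flip a next-state
                child[j] = (rng.randrange(n_states) if j % 2 == 0
                            else rng.randrange(2))             # or output gene
            pop.append(child)
    return max(pop, key=lambda g: eval_fsm(g, n_states, cases))
```

The sorted-truncation step stands in for the bubble-sort-based selection mentioned in the text; since the best chromosome always survives the cut, the best fitness seen is monotone non-decreasing across generations.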
solutions existing genetic information preserved solution quality largely dependent crossing operator type selection proposed finite state machine synthesis algorithm crossing operators applied simplest hardware implementation experimental studies shown crossing preferable selection operation algorithm based bubble sort algorithm since hardware implementation takes least resources among sorting algorithms population sorted descending order chromosomes higher objective function value moved top population chromosomes worst objective function value input output figure memory usage state machine figure state machine sample state machine number ram address inputs equal sum amount triggers number inputs ram output number accordingly equal sum amount triggers number outputs finite machine sample ram contain address inputs outputs required memory size bits truth table ram equal one state machine schematic diagram state machine figure made replacement combinatory logics ram shown figure input ram output figure schematic diagram state machine experimental studies developed algorithm tested two different problems santa trail problem smart ant autopilot construction simplified helicopter model problem santa trail problem area cooperative usage finite state machines ant surface torus size cells food placed cells figure marked black located along broken line cells broken line cells food marked gray white cells belong broken line contain food altogether field contains food cells figure santa trail field ant starting location marked start ant occupies one cell looks one four directions left right ant able determine food directly front one game turn ant able make one three actions step forward eating food destination turn left turn right food eaten ant refill ant always alive food vital broken line random strictly fixed ant able walk cell field game moves long move ant performs one three actions turns amount eaten food calculated result game goal design ant eat much possible food 
within turns desirable one ways describe behavior ant mealy machine one input variable tells food front ant set output actions consisting three described schematic diagram machine given figure forward food front finite state machine left ant right figure schematic diagram smart ant finite state machine hard heuristically build machine solving problem instance heuristically built mealy machine five states solve problem finite state machine describes ant eats food cells within moves takes moves eat food experimental studies santa trail problem conducted population size crossing probability mutation probability comparison results proposed finite machine synthesis fmsga versus results heuristic algorithm proposed work shown table table experimental results amount algorithm states heuristic fmsga moves synthesis time implied results fmsga able find machine states solves problem moves also takes times less amount time synthesize machine existing analogs consider second problem autopilot construction simplified helicopter model autopilot created simplified helicopter model moves flat surface one move helicopter model either rotate certain predefined angle change velocity autopilot task drive helicopter markers within limited time best autopilot one manages visit highest number markers two autopilots reach amount markers one closest next marker last moment flight wins autopilot input variable receives sight sector number figure current target position relative helicopter given angle helicopter movement direction direction next marker figure helicopter always flies middle current sector sectors static relative helicopter figure figure helicopter output data autopilot model finite machine discrete input output actions machine state indirectly maps helicopter current position speed history state transitions experimental studies conducted sector sizes parameter set tests conducted experimental results shown table result column shows amount markers visited autopilot designed 
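The ant's interaction with the trail is easy to simulate. Below is a hedged Python sketch (not the authors' code) of a Mealy-machine ant on a toroidal grid: the single input says whether food lies directly ahead, and the three outputs are step forward (eating any food reached), turn left, and turn right, exactly as described above. The toy trail and one-state machine in the test are illustrative only, not the Santa Fe trail itself.

```python
DIRS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # east, south, west, north

def run_ant(machine, food, size, max_moves):
    """machine[state][sees_food] -> (action, next_state); returns food eaten."""
    food = set(food)
    r = c = d = state = eaten = 0          # start in a corner, facing east
    for _ in range(max_moves):
        ahead = ((r + DIRS[d][0]) % size,  # torus: coordinates wrap around
                 (c + DIRS[d][1]) % size)
        sees = 1 if ahead in food else 0   # the machine's single input
        action, state = machine[state][sees]
        if action == 'F':                  # step forward, eat if food there
            r, c = ahead
            if (r, c) in food:
                food.discard((r, c))
                eaten += 1
        elif action == 'L':
            d = (d - 1) % 4
        else:                              # 'R'
            d = (d + 1) % 4
    return eaten
```

Plugging an evolved transition table into `machine` turns this into the fitness evaluation of the trail problem: the score after the move budget is simply the amount of food eaten.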
fmsga work finite machine states able drive helicopter first markers within given time table experimental results helicopter autopilot design fmsga result number sectors worst average best conclusion shown given experimental results developed fmsga allows machine synthesis within shorter time less number states proves developed fmsga effectively used autonomous systems reconfigurable platforms references pauline haddow andy tyrrell challenges evolvable hardware past present path promising future genetic programming evolvable machines september volume issue bereza kureichik evolutionary methods synthesis circuit decisions designing data telecommunication systems journal computer systems sciences international zebulum pacheco vellasco evolutionary electronics automatic design electronic circuits systems genetic algorithms usa crc press llc garrison introduction evolvable hardware practical guide designing systems ieee press series computational intelligence lobanov genetic algorithms usage finite machine synthesis tech science cand dissertation petersburg tsarev shalyto finite machine design minimum number states santa trail problem xth soft computing conference collection reports petersburg electrotechnical university
On the Atomic Structure of Puiseux Monoids

Felix Gotti

Abstract. In this paper, we study the atomic structure of the family of Puiseux monoids, i.e., additive submonoids of the nonnegative rational numbers. Puiseux monoids are a natural generalization of numerical semigroups, which have been actively studied since the mid-nineteenth century. Unlike numerical semigroups, the family of Puiseux monoids contains non-finitely generated representatives. Even more interesting is the fact that many Puiseux monoids are not even atomic. We delve into both of these situations, describing, in particular, a vast collection of commutative cancellative monoids containing no atoms. On the other hand, we find several characterization criteria that force Puiseux monoids to be atomic. Finally, we classify the atomic subfamily of strongly bounded Puiseux monoids over a finite set of primes.

1. Introduction

A Puiseux monoid is an additive submonoid of the nonnegative rational numbers. The family of Puiseux monoids is a natural generalization of the one comprising numerical semigroups. In this paper, we explore the atomic structure of the former family, which is far more complex than the atomic structure of numerical semigroups. However, the controlled atomic behavior of numerical semigroups will guide our initial approach to Puiseux monoids.

Numerical semigroups are atomic monoids that have been systematically studied since the … century; see the monograph … of Rosales …. In algebraic geometry, Noetherian local domains whose integral closures are finitely generated modules and discrete valuation rings pop up often, associated to valuations which in turn yield numerical semigroups; many properties of the previously mentioned domains are fully characterized in terms of their valuation numerical semigroups (for details, see …).

Understanding the atomicity of Puiseux monoids sets the groundwork for the future exploration of the arithmetic properties and factorization invariants of their atomic subfamilies. Once we obtain good insight into the algebraic properties of Puiseux monoids, we might expect to use this family of commutative monoids to understand certain behaviors of Puiseux domains (see … Sec. …), subdomains of the ring of power series with rational exponents; this would mirror the way numerical semigroups have been used to understand many attractive properties of subdomains of power series with natural exponents (see …).

Date: March ….

Although significant effort has been put into exploring the arithmetic invariants of many families of atomic monoids (see, for instance, …), little work has gone into the attempt to classify them. In this paper, we find an entirely
new family atomic monoids hidden inside realm puiseux monoids increasing current spectrum atomic monoids isomorphism therefore contributing classification aforementioned family addition puiseux monoids provide source examples atomic monoids new arsenal examples might help test several existence conjectures concerning commutative semigroups factorization theory family puiseux monoids contains vast collection representatives monoids containing elements factorizations irreducibles even surprising theorem identifies subfamily antimatter representatives puiseux monoids possessing irreducible elements contrast various subfamilies puiseux monoids whose members atomic even isomorphic numerical semigroups devote paper introduce study fascinating atomic structure puiseux monoids section establish terminology using throughout paper section pointing puiseux monoids naturally appear commutative ring theory introduce members targeted family illustrating much puiseux monoids differ numerical semigroups terms atomic configuration highlighted wildness atomic structure family investigated show atomic members precisely containing minimal set generators theorem present two sufficient conditions atomicity proposition theorem section introduce subfamily strongly bounded puiseux monoids presenting simultaneously atomic antimatter subfamilies failing strongly bounded last section study atomic configuration strongly bounded puiseux monoids present sufficient condition strongly bounded puiseux monoids antimatter theorem conclude dedicate second part last section classification atomic subfamily strongly bounded puiseux monoids finite set primes defined section preliminary begin presenting terminology related atomicity commutative cancellative monoids briefly mention basic properties numerical semigroups objects generalize work goal section formally introduce elementary concepts results commutative semigroups factorization theory rather fix notation establish atomic structure puiseux monoids 
nomenclature use later extensive background information commutative semigroups factorization theory refer readers monographs grillet geroldinger respectively use symbols denote sets positive integers integers respectively moreover real number denote set simply similar intention use notations often write instead denote unique gcd respectively call sets numerator denominator set respectively unless otherwise specified word monoid paper means commutative cancellative monoid let monoid every monoid assumed commutative unless state otherwise always use additive notation particular denotes operation denotes identity element invertible elements monoid called units set units denoted monoid said reduced generated subset write hsi monoid finitely generated hsi finite set brief precise exposition finitely generated commutative monoids readers might find useful element irreducible atom implies either unit set atoms denoted monoid atomic every element expressed sum atoms briefly comment general properties numerical semigroups numerical semigroup submonoid additive monoid finite every numerical semigroup unique minimal set generators happens finite minimally generated gcd consequently every numerical semigroup atomic finitely many atoms family numerical semigroups intensely studied three decades entry point realm numerical semigroups readers might consider valuable resource let minimally generated numerical semigroup frobenius number denoted greatest natural number contained smallest integer diophantine equation solution need later following result taken gives upper bound frobenius number terms minimal set generators theorem let minimally generated numerical semigroup gotti numerical semigroups canonical set generators also exhibit convenient atomic structure next section explore desirable properties behave general setting puiseux monoids atomic characterization puiseux monoids start section brief discussion puiseux monoids show naturally commutative ring theory explain 
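The Frobenius number just introduced — the greatest natural number not contained in the semigroup — is easy to compute by brute force for small generating sets. The sketch below is my own illustration, not part of the paper; the function name and the generous search limit are choices of mine, and the generators are assumed to be coprime as a set.

```python
from math import gcd
from functools import reduce

def frobenius_number(gens):
    """Largest natural number NOT representable as a nonnegative integer
    combination of gens.  Assumes gcd(gens) == 1 and min(gens) >= 2."""
    assert reduce(gcd, gens) == 1 and min(gens) >= 2
    # Generous search limit: by Schur's classical bound the Frobenius number
    # lies below (min - 1) * (max - 1), hence below min * max.
    limit = min(gens) * max(gens)
    reachable = [False] * (limit + 1)
    reachable[0] = True
    for m in range(1, limit + 1):
        reachable[m] = any(m >= g and reachable[m - g] for g in gens)
    return max(m for m in range(limit + 1) if not reachable[m])
```

For instance, `frobenius_number([6, 9, 20])` returns 43, the classical answer for the numerical semigroup generated by 6, 9 and 20.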
connection field puiseux series justifying choice name puiseux move study atomicity family puiseux monoids find atomic characterization two sufficient conditions atomicity let field valuation map val satisfying following three axioms val val val val val min val val example take field laurent series formal variable consider map vall defined vall min val difficult verify function vall valuation standard result explained many introductory textbook algebra algebraic closure field laurent series formal variable denoted called field puiseux series first studied puiseux write field puiseux series field laurent series formal variable nonzero elements formal power series form complex numbers denominator natural number integers also function valp mapping valuation let subring second axiom definition valuation image valp closed addition since valp follows atomic structure puiseux monoids valp additive submonoid particular puiseux monoids arise naturally commutative ring theory images valp subdomains positive valuations given connection monoids investigated paper named puiseux honoring french mathematician victor puiseux proceed study atomic structure puiseux monoids family puiseux monoids natural generalization numerical semigroups following proposition whose proof follows immediately characterizes puiseux monoids isomorphic numerical semigroups proposition puiseux monoid isomorphic numerical semigroup finitely generated let puiseux monoid set rational numbers generating instead hsi sometimes write omitting brackets description say minimally generated proper subset generates clear contrast numerical semigroups puiseux monoids neither atomic finitely generated see examples addition unlike numerical semigroups every puiseux monoid atomic fact nontrivial puiseux monoids containing atoms hand atomic puiseux monoids infinitely many atoms following examples illustrate facts mentioned example fix prime number let puiseux monoid generated set although finitely generated set atoms 
empty sum copies every positive integer example let set comprising prime numbers consider puiseux monoid shall check atom every prime suppose natural number necessarily distinct primes setting obtain therefore divides thus means since generated atoms atomic finally follows albeit atomic isomorphic numerical semigroup confirm note contains infinitely many atoms example let enumeration odd prime numbers let puiseux monoid generated set gotti easy check way example atom hand follows immediately since empty one verify written sum atoms suppose way contradiction exist positive integer coefficients satisfying multiplying one gets implies divides since odd get contradiction hence written sum atoms consequently monoid infinitely many atoms like numerical semigroups puiseux monoids reduced therefore set atoms puiseux monoid contained every set generators mentioned numerical semigroup unique minimal set generators namely set atoms theorem shows unique minimal set generators characterizes family atomic puiseux monoids theorem puiseux monoid following conditions equivalent contains minimal set generators contains unique minimal set generators atomic proof first show conditions equivalent since follows immediately suffices prove implies suppose two minimal sets generators take arbitrary fact leads existence also generates sini sij result sij minimality implies therefore one gets using similar argument check hence minimal set generators exists must unique atomic structure puiseux monoids prove equivalent first assume condition holds let minimal set generators let show every element atom suppose way contradiction atom since strictly less written sum elements result contradicting minimality therefore must contained set generators thus atomic condition finally check implies assume atomic reduced atom written sum positive elements hence minimal set generators contrast numerical semigroups puiseux monoids containing minimal sets generators atomic still every minimal set generators 
nevertheless might generate example shows fact fail finitely generated still finitely many atoms example sheds light upon situation let prime nonzero integer define exponent maximal power dividing set addition set follows immediately map called valuation actual valuation particular satisfies third condition definition valuation given min definition let set primes puiseux monoid puiseux monoid every finite say finite puiseux monoid puiseux monoid said finite exists finite set primes finite simplify notation contains one prime write puiseux monoid instead puiseux monoid see valuation maps play important role describing atomic configuration puiseux monoids example proposition check finite puiseux monoid atomic sequence valuations generators bounded remainder section devoted finding characterization criteria atomicity puiseux monoids proposition theorem identify two atomic subfamilies puiseux monoids proposition puiseux monoid following conditions equivalent finite bounded every prime finite hri implies bounded every prime gotti denominator set bounded hri denominator set bounded one conditions holds atomic proof condition trivially implies condition assume condition let finite set primes finite one bounded therefore finite taking product elements one hence finite holds condition implies condition trivially finally assume condition since bounded consequence finitely many primes divide elements boundedness also implies bounded every prime condition check condition implies atomic suppose finite let subset rationals hri set min function take defined isomorphism arbitrary written every inequality one min mci min since follows isomorphic numerical semigroup atomic hence also atomic expected one four conditions proposition fails namely bounded might atomic illustrated example besides allowed proposition would hold see next example example let infinite set primes define puiseux monoid every observe sum copies since contained set generators fact every natural implies empty 
therefore atomic section give special name puiseux monoids whose set atoms empty atomic structure puiseux monoids theorem let puiseux monoid limit point atomic proof assume way contradiction atomic let nonempty subset comprising elements written sum atoms exist positive elements either must contained otherwise would belong let suppose without loss generality exist either assume one obtains continuing fashion build two sequences elements decreasing every since decreasing sequence positive terms converges cauchy sequence implies sequence converges zero contradicts fact limit point thus atomic establishes theorem converse theorem hold next example illustrates failure converse also indicates proposition theorem enough fully characterize family atomic puiseux monoids example let enumeration odd prime numbers define sequence positive integers follows take arbitrary chosen take inequalities hold define puiseux monoid verify limit point atomic way defined sequence ensures every result sequence converges hence limit point additionally suppose coefficients get written every summand side divisible contradicts side divisible therefore atom since generated atoms atomic gotti clearly puiseux monoid theorem limit point every submonoid must also atomic result theorem however general true every submonoid atomic monoid atomic next example illustrates observation example let sequence comprising odd prime numbers strictly increasing order consider puiseux monoid hsi since odd prime divides exactly one element set follows hence atomic hand element sum copies atom every thus monoid contains atoms immediately implies atomic therefore submonoid atomic monoid fails atomic proposition theorem give rise large families atomic monoids theorem applies particular puiseux monoid generated eventually increasing sequence many factorization invariants numerical semigroups investigated last two decades example set lengths elasticity delta set degree actively studied terms minimal sets generators see 
references therein studying factorization invariants atomic puiseux monoids provided theorem would contribute significantly understanding algebraic combinatorial structure strongly bounded puiseux monoids section restrict attention atomic structure puiseux monoids generated subset satisfying bounded precise say subset rational numbers strongly bounded numerator set bounded definition puiseux monoid bounded strongly bounded generated bounded strongly bounded set rational numbers although remainder paper focus studying subfamily strongly bounded puiseux monoids means subfamily containing atomic representatives following proposition explains way constructing family atomic puiseux monoids whose members strongly bounded atomic structure puiseux monoids proposition exist infinitely many atomic puiseux monoids strongly bounded proof let prime suppose sequence recurrently defined follows choose suppose selected take gcd consider puiseux monoid hsi let check every since smallest element atom inequality valuation every element monoid least consequence fact smallest element deduce atom hence therefore every generating set contains follows strongly bounded yet monoid atomic generated atoms note also prime found puiseux monoid atomic strongly bounded thus infinitely many atomic puiseux monoids failing strongly bounded fact way constructed puiseux monoid infer prime infinitely many atomic puiseux monoids strongly bounded found subfamily atomic puiseux monoids strongly bounded contrast natural ask whether family strongly bounded puiseux monoids comprises puiseux monoids containing atoms postpone answer question prove proposition let introduce terminology monoids containing atoms integral domain called antimatter domain contains irreducible elements antimatter domains studied coykendall however relevant investigation carried concerning monoids containing atoms definition let monoid empty say antimatter monoid point general concepts antimatter atomic monoids independent abelian 
groups atomic antimatter additive monoid atomic antimatter also additive monoid antimatter however atomic finally set polynomials gotti endowed standard multiplication polynomials atomic proved addition since every prime seen constant polynomial atom one finds antimatter indicated independence antimatter atomic monoids general setting notice particular case puiseux monoids nontrivial atomic monoid automatically fails antimatter generally actually true every nontrivial reduced monoid point know infinitely many atomic puiseux monoids failing strongly bounded sake completeness also construct proposition infinite subfamily antimatter puiseux monoids whose members fail strongly bounded know every generating set puiseux monoid contains particular atomic every generating set contains generating subset consisting atoms namely suggests question whether every generating set bounded strongly bounded puiseux monoid contains bounded strongly bounded generating subset show reduce generating set bounded puiseux monoid bounded generating subset proposition bounded puiseux monoid every generating set contains bounded generating subset proof let set generators take bounded subset rational numbers hbi define divides since upper bound fact bounded implies also bounded bounded subset verify generating set enough check hsi take arbitrary since generates exist generated exist rini rini consequently rij notice every element rij divides thus equality forces hsi therefore hri hsi hence bounded subset generating proposition naturally suggests question whether every generating set strongly bounded puiseux monoid contains strongly bounded generating subset unlike parallel statement boundedness desirable claim hold strongly bounded puiseux monoids atomic structure puiseux monoids example let odd prime let consider following two sets rational numbers verify generate puiseux monoid suffices notice let puiseux monoid generated sets since strongly bounded hand every strongly bounded subset must 
contain finitely many elements sequences numerators increase infinite addition antimatter generated subset generating must contain infinitely many elements hence conclude contain strongly bounded subset generating let resume search family antimatter puiseux monoids failing strongly bounded accomplish goal make use proposition proposition exist infinitely many antimatter puiseux monoids strongly bounded proof since every strongly bounded puiseux monoid also bounded enough find family failing bounded let odd prime consider sets let collection infinite subsets odd prime numbers let take puiseux monoid generated set claim antimatter bounded let verify first antimatter notice every means additionally therefore implies contain atoms since hxi follows immediately antimatter show bounded suppose way contradiction bounded proposition set must contain bounded subset generating observe every prime set bounded gotti therefore exists natural empty prime prime greater divide contradicts fact generates hence puiseux monoid bounded since distinct elements one gets infinite family antimatter puiseux monoids bounded shown example every generating set strongly bounded puiseux monoid reduced strongly bounded subset generating however strongly bounded also atomic set generators certainly reduced strongly bounded generating set record observation proposition whose proof follows straightforwardly fact set atoms reduced monoid must contained every generating set proposition let strongly bounded puiseux monoid atomic every generating set contains strongly bounded generating subset atomic structure strongly bounded puiseux monoids section restrict attention atomic structure subfamily puiseux monoids happen strongly bounded first find condition members subfamily antimatter presenting theorem first main result move focus classification atomic subfamily strongly bounded puiseux monoids finite set primes stated second main result theorem let introduce definitions say sequence integers 
stabilizes positive integer exists divides every spectrum sequence denoted spec set primes stabilizes lemma let sequence positive integers upper bound spectrum empty exist gcd ank proof fix since sequence bounded finitely many primes dividing least one terms let set comprising primes empty take two distinct integers greater case every upper bound one gcd gcd assume therefore constant atomic structure puiseux monoids sequence whose terms ones thus empty let fact implies spectrum empty exists divide similarly exists divide general one chosen ani exists satisfying following described procedure finitely many times obtain ani follows immediately gcd ank next theorem gives sufficient condition puiseux monoid antimatter theorem let strongly bounded subset rationals generating divides sequence unbounded spectrum empty antimatter proof every let denote respectively let upper bound sequence fix arbitrary positive integer show contained lemma exist gcd ank large enough satisfy using unboundedness hand gcd ank bnk makes sense bni divides bnk since gcd ank gcd ank bnk every one gcd bnk bnk ank let frobenius number numerical semigroup hbnk bnk ank taking bnk anj max bnk bni ani using theorem one see ank bnk anj ank anj bnk bnj bnk bnk last inequality follows bnk exist bnk bnk anj accordingly anj therefore every since every none elements atom moreover generator written sum copies hence checked none generators atom conclude antimatter gotti additional information proof theorem list following corollary corollary puiseux monoid satisfying conditions theorem let verify theorem sharp meaning none hypotheses redundant first one drops condition strongly bounded might antimatter see proposition example illustrates condition every also required numerical semigroups evidence sequence necessarily unbounded finally family strongly bounded puiseux monoids constructed proposition shows emptiness spectrum also needs imposed guarantee antimatter proposition exists generated strongly bounded 
puiseux monoid exactly atoms proof take prime numbers satisfying consider puiseux monoid check suppose since written suitable set coefficients finitely many zero valuations side least zero lefthand side equation must integer result every implies thus hand generator form atom sum copies hence proposition tells infinitely many puiseux monoids varying choice fixed finite number atoms finitely generated therefore fact along example example atomic structure puiseux monoids gives evidence complexity atomic structure puiseux monoids conclude discussion atomic structure puiseux monoids classification atomic subfamily strongly bounded puiseux monoids finite set primes first let introduce terminology spectrum natural denoted spec set prime divisors addition given finite set primes support respect denoted suppp set indices next lemmas used proof theorem lemma let finite set primes let puiseux monoid sequence every strictly increasing antimatter proof fix sequence positive integers strictly increasing take element let gcd set min since strictly increasing set finite exists every therefore obtain greatest natural number dividing every follows exponents positive thus sum one copy whence find atom since taken arbitrarily conclude empty lemma let finite set primes let sequence spec every exists satisfying following property exist subsequence strictly increasing proof let set subsets indices contain subsequence strictly increasing must exist satisfying every inequality holds least index take max suppose natural number subset done suppose way contradiction gotti every means inequality hold contradicting fact lemma let puiseux monoid generated set suppose also nonempty subset next set inclusion holds hsi proof empty follows trivially assume empty take atom therefore exists hsi hsi hsi one gets hsi hsi since taken arbitrarily inclusion holds desired position prove last main result theorem let strongly bounded finite puiseux monoid atomic isomorphic numerical semigroup proof let set 
primes finite first prove finitely many atoms proceed induction cardinality isomorphic numerical semigroup therefore finite suppose positive integer statement theorem true shall prove every strongly bounded puiseux monoid finitely many atoms set addition let two sequences natural numbers bounded gcd every let hsi show finitely many atoms distribute generators finitely many submonoids apply lemma since sequence bounded maximum namely set suppp let set pairs empty using lemma obtain hsj atomic structure puiseux monoids let puiseux monoid generated inclusion done show contains finitely many atoms every pair fix arbitrary pair prove finite since strongly bounded finite puiseux monoid strictly contained finite induction hypothesis remains check contains finitely many atoms finite proposition monoid isomorphic numerical semigroup case finite assume finite let subsequence sequence contains subsequence strictly increasing lemma implies antimatter therefore contains atoms notice isomorphic via assume subsequence exist using lemma find satisfying exists subsequence strictly increasing note must proper subset set thus one isomorphic via multiplication meaning suffices check finitely many atoms prove finitely many atoms consider proper generating set results reducing fraction lowest terms every follows divides implies existence subsequence strictly increasing lemma since contains subsequence strictly increasing every least index suppp proper subset every set suppp let collection subsets empty notice every element proper subset lemma hsi since set indices strictly contained induction hypothesis get hsi contains finitely many atoms every therefore follows point proved every strongly bounded puiseux monoid finite set primes finitely many atoms suppose atomic theorem gotti minimal set generators must hence finitely generated proposition isomorphic numerical semigroup conversely suppose isomorphic numerical semigroup since every numerical semigroup atomic must atomic completes 
proof example used evidence theorem hold require finite besides strongly boundedness puiseux monoid superfluous one see proposition guarantees existence atomic puiseux monoid prime infinitely many atoms fails strongly bounded acknowledgments working paper author supported berkeley chancellor fellowship author grateful scott chapman valuable feedback encouragement marly gotti dedicated final review chris neill helpful discussions early drafts paper also author thanks anonymous referee suggesting recommendations improved paper references amos chapman hine paixao sets lengths characterize numerical monoids integers paper barron neill pelayo set elasticities numerical monoids appear semigroup forum barucci dobbs fontana maximality properties numerical semigroups applications analytically irreducible local domains vol memoirs amer math brauer problem partitions amer math chapman tale two monoids friendly introduction nonunique factorizations math mag chapman corrales miller miller patel catenary tame degrees numerical monoid eventually periodic aust math soc chapman gotti pelayo delta sets realizable subsets krull monoids cyclic class groups colloq math chapman hoyer kaplan delta sets numerical monoids eventually periodic aequationes math chapman steinberg elasticity generalized arithmetical congruence monoids results math coykendall dobbs mullins integral domains atoms comm algebra eisenbud commutative algebra view toward algebraic geometry grad texts vol new york rosales finitely generated commutative monoids nova science publishers new york atomic structure puiseux monoids rosales numerical semigroups developments mathematics vol new york geroldinger factorizations algebraic combinatorial analytic theory pure applied mathematics vol chapman boca raton grillet commutative semigroups advances mathematics vol kluwer academic publishers boston omidali catenary tame degree numerical monoids generated generalized arithmetic sequences forum math puiseux recherches sur les 
fonctions jour math schmid differences sets lengths krull monoids finite class group nombres bordeaux mathematics department berkeley berkeley address felixgotti
String Cadences

Amihood Amir*, Alberto Apostolico, Travis Gagie, and Gad M. Landau (Oct …)

Amihood Amir: Department of Computer Science, Bar-Ilan University, Israel, and Department of Computer Science, Johns Hopkins University, Baltimore. Alberto Apostolico: School of Computational Science and Engineering, College of Computing, Georgia Institute of Technology, Klaus Advanced Computing Building, … Ferst Drive, Atlanta (email: axa). Travis Gagie: School of Computer Science and Telecommunications, Diego Portales University, Santiago, Chile. Gad M. Landau: Department of Computer Science, University of Haifa, Mount Carmel, Haifa, Israel, and Department of Computer Science and Engineering, NYU Polytechnic School of Engineering, … MetroTech Center, Brooklyn (email: landau).

Abstract. We say a string has a cadence if a certain character is repeated at regular intervals, possibly with intervening occurrences of that character. We call the cadence anchored if the first interval must have the same length as the others. We give a sub-quadratic algorithm for determining whether a string has a cadence consisting of at least three occurrences of a character, and a nearly linear algorithm for finding all anchored cadences.

1. Introduction

Finding interesting patterns in strings is an important problem in many fields, with an extensive literature; see … and references therein. One might therefore expect all the obvious kinds of patterns to have been considered already. However, as far as we are aware, no one has previously investigated how best to determine whether a string contains a certain character repeated at regular intervals, possibly with intervening occurrences of that character. We initiate the study of this natural problem and introduce the following notions.

Definition 1. A pair (i, d) of natural numbers is a cadence of a string S[1..n] if S_j = S_i for every j ≡ i (mod d) with 1 ≤ j ≤ n.

Definition 2. A natural number i is an anchored cadence of S if (i, i) is a cadence of S, i.e., if S_j = S_i for every j ≡ 0 (mod i) with 1 ≤ j ≤ n.

* Partly supported by ISF grant ….
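Under one natural reading of the two (garbled) definitions above — (i, d) is a cadence when every position j ≡ i (mod d) carries the same character as position i, and i is anchored when (i, i) is a cadence — both checks are a few lines. This sketch is my own illustration, not code from the paper.

```python
def is_cadence(s, i, d):
    """(i, d) is a cadence of s[1..n] if s_j = s_i for every j ≡ i (mod d),
    1 <= j <= n (positions are 1-based)."""
    n = len(s)
    first = i % d or d   # smallest position in [1, n] congruent to i mod d
    return all(s[j - 1] == s[i - 1] for j in range(first, n + 1, d))

def is_anchored_cadence(s, i):
    """i is an anchored cadence of s if (i, i) is a cadence,
    i.e. s_j = s_i for every positive multiple j of i with j <= n."""
    return is_cadence(s, i, i)
```

On the example string alabaralaalabarda (n = 17), this reports 7 as an anchored cadence, along with every i > n/2, which is trivially anchored because i is then its only multiple in range.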
efficiently approximate cadences whether exists subquadraticspace data structure given endpoints substring quickly reports cadences substring detecting string character positions leftmost rightmost occurrences satisfy easily check log time section show string binary also check log time follows still time string alphabet time general specifically show convert string set integer weights three weights sum check log time via transform see exercise since reduction essentially reversible suspect improving log bound challenging without loss generality assume interested detecting detect symmetrically let definition therefore average element element element follows alphabet create binary string length character alphabet marking occurrences character apply reduction one log total time arbitrary alphabet perform partitioning detect string character using time proportional min log number occurrences takes total time theorem determine whether string length log time alphabet constant size use log time see reduction essentially reversible suppose want determine whether set integer weights without loss generality need check whether exist two positive weights one negative weight sum case two negative weights one positive weight sum symmetric positive weight set negative weight set remaining set otherwise two positive weights one negative weight average case finding anchored cadences check whether anchored cadence comparing takes time since log obviously find anchored cadences log time section use following lemma improve bound log log lemma natural number anchored cadence string prime either anchored cadence proof let smallest multiple greater either anchored cadence must exist would anchored cadence see must prime assume prime factor anchored cadence choice meaning anchored cadence contradicting choice start computing sorted set primes primes takes log log time build boolean array set true check whether true prime increasing order set true otherwise set false lemma anchored cadence 
eventually set to true if and only if d is an anchored cadence. To account for the cost, note that each cell m is checked once for each of its distinct prime factors p, from the cell d = m/p; since a prime p divides at most n/p numbers up to n, and the sum of 1/p over the primes p <= n is O(log log n), we see that we use O(n log log n) time in total. Notice also that as soon as one check for a cell fails we can set it to false immediately, without checking the remaining primes; therefore a cell is checked for all of its prime factors only when every check succeeds. Moreover, if the string is chosen uniformly at random over a non-unary alphabet, then in the expected case we check only a constant number of primes per cell before finding a mismatch, so on average we use O(n) total time.

Theorem. We can find all anchored cadences of a string of length n, and in particular the smallest one, in O(n log log n) time; if the string is chosen uniformly at random over a non-unary alphabet, we use O(n) time on average.

Acknowledgments. Many thanks to the organizers and participants of the AxA workshop.

References
[1] Alberto Apostolico. Pattern discovery and the algorithmics of surprise. In Artificial Intelligence and Heuristic Methods in Bioinformatics, NATO Advanced Science Institutes Series.
[2] Eric Bach and Jeffrey Shallit. Algorithmic Number Theory: Efficient Algorithms. MIT Press.
[3] Thomas Cormen, Charles Leiserson, Ronald Rivest, and Cliff Stein. Introduction to Algorithms. MIT Press.
[4] Paul Pritchard. Improved incremental prime number sieves. In Proceedings of the First International Symposium on Algorithmic Number Theory (ANTS), Lecture Notes in Computer Science. Springer.
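The two detection procedures described above can be sketched in Python. Both functions are illustrative reconstructions: the function names and the O(n^2) pair enumeration are mine (the text's bounds rely on an FFT-based transform and on a prime sieve, respectively), and the anchored-cadence check is the direct O(n log n) baseline, not the O(n log log n) sieve refinement.

```python
def has_partial_3_cadence(s, c):
    """Does character c occur at three equally spaced positions
    i < j < k of s (so j is the average of i and k)?  Simple O(n^2)
    pair enumeration; the text obtains O(n log n) by convolving the
    0/1 occurrence vector with itself via an FFT."""
    occ = [i for i, ch in enumerate(s) if ch == c]
    positions = set(occ)
    for i in occ:
        for k in occ:
            # the middle position must be the integer average of i and k
            if k > i and (i + k) % 2 == 0 and (i + k) // 2 in positions:
                return True
    return False

def anchored_cadences(s):
    """All d (1-indexed) such that s[d] = s[2d] = s[3d] = ... up to n.
    Direct check: O(n/d) per candidate d, O(n log n) total by the
    harmonic sum."""
    n = len(s)
    result = []
    for d in range(1, n + 1):
        ch = s[d - 1]  # character at 1-indexed position d
        if all(s[k * d - 1] == ch for k in range(2, n // d + 1)):
            result.append(d)
    return result
```

For example, in "abab" the candidates d = 2, 3, 4 are anchored (the multiples of each carry a single character), while d = 1 is not, since the string alternates.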
| 8 |
Stress-constrained continuum topology optimization: a new approach based on elasto-plasticity

Oded Amir
Faculty of Civil and Environmental Engineering, Technion - Israel Institute of Technology
odedamir

August

Abstract. A new approach for generating stress-constrained topological designs of continua is presented. The main novelty is the use of elasto-plastic modeling: optimizing the design so that it exhibits a purely elastic response is achieved by imposing a single global constraint on the total sum of equivalent plastic strains. Providing accurate control over local stress violations, this single constraint essentially replaces the large number of local stress constraints, or their approximate aggregation, common in other approaches in the literature. A classical rate-independent plasticity model is utilized, for which an analytical adjoint sensitivity analysis is derived and verified. Several examples demonstrate the capability of the computational procedure to generate designs that challenge results from the literature in terms of the obtained stress-constrained layouts. A full elasto-plastic analysis of the optimized designs shows that, prior to initial yielding, the designs can sustain significantly higher loads than minimum-compliance topological layouts, with only a minor compromise in stiffness.

Keywords: topology optimization, stress constraints, elasto-plasticity.

Introduction

Topology optimization of continua is a computational method aimed at optimizing the distribution of one or several materials within a given design domain. The purpose is typically to achieve a minimum-weight structural design under constraints on displacements, or to minimize compliance (i.e., maximize stiffness) using a given amount of available material; for extensive reviews see, for example, Eschenauer and Olhoff and, more recently, Sigmund and Maute and Deaton and Grandhi. One of the most challenging aspects of developing computational topology optimization procedures is the consideration of stress constraints. From an engineering standpoint, limiting the stresses in the optimized design is a fundamental requirement: components are typically designed to remain in the elastic regime throughout their service life, meaning the yield stress should not be exceeded. In this article, a new approach for satisfying stress constraints is proposed. The central idea is to model nonlinear, inelastic material behavior and, via optimization, to drive the design towards a purely elastic response. The incorporation of stress constraints imposes several challenges. First and foremost is the local nature of stress constraints: applications of topology optimization typically involve
an objective functional of a global nature and global constraints that control volume, weight, displacements or compliance. Stress constraints are considerably more difficult to incorporate, and the corresponding optimization problem is harder to tackle. In principle, stress constraints should be imposed at every material point of the design domain, meaning that the number of constraints is comparable to the number of design variables and is related to the resolution of the underlying finite element mesh. Solution times are therefore expected to be significantly longer than for standard topology optimization problems, and the vulnerability of numerical algorithms to arriving at local minima is aggravated.

A review of the existing literature highlights two dominating strategies for formulating and solving the optimization problem. In the first, local stress constraints are considered in the problem formulation, whereas in the actual solution only a subset of active constraints is included; in the second, the local stress constraints are aggregated into a single global constraint, or into a few such constraints. The former strategy was implemented in one of the earliest publications on stress-constrained topology optimization of continua by Duysinx and co-workers: throughout the optimization process, roughly one third of the local constraints were considered in the sensitivity analysis and optimization, and in the final optimization steps roughly one third of the local constraints were actually active. Local constraints were also considered by Bruggi and Venini, and similar tendencies were reported later in a detailed study of various problem formulations with local stress constraints by Bruggi and Duysinx. The efficiency of imposing local constraints in large problems is therefore questionable. A similar problem formulation with a different numerical treatment was presented by Pereira and co-workers: the stress constraints were considered through an augmented Lagrangian technique utilized for solving the optimization problem, facilitating a reduction in the computational effort invested in the sensitivity analysis. The results appear promising and exhibit layouts that circumvent regions of potentially high stress concentrations; on the other hand, the authors report computational times several times higher than for standard minimum-compliance formulations. The augmented Lagrangian approach was followed also by Fancello, who presented layouts that avoid stress concentrations; the author reported difficulties regarding the numerical implementation, and the number of function evaluations indicates that the procedure
may not be suitable for large-scale applications.

The second widely adopted strategy for dealing with the large number of stress constraints involves various forms of constraint aggregation, collecting the constraints into a global stress function. In an early study, Yang and Chen examined the use of aggregation functions of the Kreisselmeier-Steinhauser and p-norm types, referring also to Park; these problem formulations aim at minimizing either the global stress measure or a weighted combination of global stress and compliance, subject to a volume constraint. A further step in the direction of utilizing global stress measures was proposed by Duysinx and Sigmund, where two global stress functions were suggested, namely the p-norm and the p-mean. It was shown that for a given p the maximum local stress is bounded by these measures; however, due to the oscillatory behavior of the optimization process, the value of p is limited, preventing a large enough p from identifying the actual peak stress. Several recent studies demonstrate that constraint aggregation can in fact lead to satisfactory results, in particular for the classical L-bracket case. Le and co-workers provide an extensive critical review and propose regional stress measures, where the local stress constraints are grouped into interlacing regions according to stress level; a similar approach of block aggregation was suggested as well. Their regional stress measures are based on normalization with respect to the actual maximum stress, proposed in order to improve the approximation of the maximum stress during optimization; an optimized layout that avoids a sharp re-entrant corner was generated, demonstrating the potential of the approach. On the other hand, the numerical implementation suffers from several drawbacks: first, the normalization changes discontinuously between optimization cycles; second, the regional constraints also change every optimization cycle according to the sorting of the local stresses, thus introducing inconsistency into the optimization process; and third, it was shown that increasing the number of regions does not always improve the optimized design, as one might expect due to the tighter control over local stresses.

Level-set methods combined with topological derivatives have also succeeded in generating designs free of stress concentrations. Allaire and Jouve minimize an integral measure of a penalized stress; the approach was shown to provide smooth designs without sharp corners, though without direct control over the actual stress or the compliance. Another set of positive results was presented by Amstutz and Novotny, where the large number of stress constraints is replaced by an external penalty functional that mimics the constraints. The resulting layouts avoid stress concentrations at corners, though slight violations of the stress constraints
occur in the final designs with respect to the target stress. The main drawbacks of this approach are its reliance on penalization parameters and the possible lack of direct control over local stresses; the significance of the latter depends on the degree of localization of high stresses, which may differ considerably from one problem to another. An interesting novel approach was proposed recently by Verbart and co-workers: instead of imposing a large number of constraints, the material is penalized wherever the stress exceeds the allowable stress. Numerical results demonstrate the potential of this efficient approach; however, when penalization is utilized, it is hard to satisfy the admissible stress criterion accurately. Furthermore, the penalized material law is somewhat artificial, and it may be difficult to generalize the method.

Another major difficulty in stress-constrained computational topology optimization is the singularity problem, originally demonstrated in the context of truss topological design. It was shown that the optimal topology might correspond to a singular point of the design space, therefore making it difficult, and in some cases impossible, to arrive at the true optimum with numerical search algorithms (Sved and Ginos; Kirsch; Cheng and Jiang). Singular points are encountered in cases where the removal of a certain truss bar, or of a material point in the continuum case, results in a feasible design space with a better optimum, due to the removal of the corresponding constraint. In this article we do not target the difficulties related to the singularity phenomenon; in fact, an appropriate relaxation scheme is an essential ingredient of the suggested computational approach. Possibly the most widespread remedy for the singularity problem is the epsilon-relaxation of Cheng and Guo, where the actual stress constraints are relaxed such that the resulting feasible domain does not possess degenerate branches. A similar relaxation scheme involving smooth envelope functions was suggested in the context of local buckling constraints by Rozvany and co-workers. This approach was first integrated into continuum topology optimization by Duysinx and by Duysinx and Sigmund, who implemented a continuation scheme gradually reducing the relaxation parameter, hence approaching the actual constraints; the approach was successfully applied to various test cases in later studies, see for example Pereira and co-workers and Fancello. An alternative relaxation avoiding the singularity phenomenon was introduced by Bruggi: in the SIMP rule, the penalization power of the yield stress is chosen lower than the penalization power of the stiffness. It should be noted that separate penalization exponents for stiffness and yield stress
were suggested much earlier for the extension of topology optimization to elasto-plastic structures by Maute and co-workers. In practice, this relaxation appears to provide results similar to epsilon-relaxation, and both are highly dependent on the continuation scheme; it was shown that the sequence of solutions of the relaxed problem may not converge to the global optimum (Stolpe and Svanberg). Despite these shortcomings, it seems necessary to apply some form of continuous relaxation in order to arrive at practical structural designs that satisfy stress requirements.

The central idea of the approach proposed in this article is to optimize an inelastic structural response, for the particular purpose of satisfying stress constraints. To date, applications of topology optimization that considered inelastic response were not concerned with objectives such as the one pursued herein. Material nonlinearities in topology optimization were initially considered by Yuge and Kikuchi, where layout optimization of frame structures undergoing plastic deformation was presented, based on homogenization of a porous material. Swan and Kosaka suggested a framework for topology optimization of structures with material nonlinearity based on the Voigt and Reuss mixing rules. The SIMP (Solid Isotropic Material with Penalization) interpolation scheme, originally proposed for linear elastic material, was extended to elasto-plasticity by Maute and co-workers. Although several articles on the subject have been published in the last two decades, topology optimization involving elasto-plasticity is still not well established. One difficulty lies in obtaining accurate design sensitivities: in some cases several derivative terms were neglected (Maute and co-workers; Schwarz and co-workers), apparently with minor effect on the outcome of the optimization, but in general such terms are not negligible. Moreover, when comparing analytical design sensitivities to finite difference calculations, errors of significant order were observed (Swan and Kosaka; Yoon and Kim). More recent studies incorporated analytical adjoint sensitivity analysis of rate-independent elasto-plasticity based on the framework of Michaleris and co-workers; accurate sensitivities were reported for problems involving reinforced concrete design (Bogomolny and Amir) and effective energy management under dynamic loading (Nakshatrala and Tortorelli). Another analytical sensitivity analysis scheme for topology optimization of elasto-plastic structures was recently presented by Kato and co-workers, where highly accurate derivatives were obtained; however, the formulation is limited to cases where the load is applied at nodes whose displacements are controlled.
Adjoint sensitivity analysis was applied also to topology optimization with viscoelastic materials (James and Waisman), with viscoplastic materials in a multiscale approach (Fritzen and co-workers), and with continuum damage models (Amir and Sigmund; Amir; James and Waisman). The latter study in fact targeted a goal similar to the one pursued here, namely avoiding failure by imposing stress-based criteria, but it is based on a different constitutive model and on a different problem formulation that involved constraint aggregation.

As mentioned at the beginning of the introduction, the proposed approach relies on modeling inelastic behavior and driving the design towards a purely elastic response. This is achieved by constraining the total sum of equivalent plastic strains: a single global constraint, added to a standard stiffness-volume problem formulation, inherently provides accurate control over local stress violations. Consequently, the stress limits are implicitly satisfied without imposing a large number of local constraints. The corresponding computational procedure alleviates one of the major obstacles in stress-constrained topology optimization: the need to solve a nonlinear optimization problem with a large number of design variables and an equally large number of constraints.

The remainder of the article is organized as follows. In the next section we briefly review the elasto-plastic material model and the nonlinear finite element analysis. The topology optimization problem formulation and the design parametrization are introduced subsequently, followed by the derivation and verification of the adjoint sensitivity analysis. Several examples then demonstrate the applicability of the proposed approach, and finally a discussion of the results and of necessary future investigations is given.

Elasto-plastic model and finite element analysis

In this section we briefly review the elasto-plastic material model and the subsequent nonlinear finite element analysis. The purpose is to provide the necessary background for the optimization problem formulation, which involves state variables related to the material model, as well as for the adjoint sensitivity analysis.

Classical rate-independent plasticity. The derivation of the governing equations herein follows standard textbooks, e.g., Simo and Hughes and Zienkiewicz and Taylor. The model is essentially composed of the following assumptions and rules: elastic stress-strain relationships; a yield condition defining the elastic domain; a flow rule; a hardening law; complementarity conditions; and a consistency condition. First, we assume that the total strain tensor
is split into elastic and plastic parts, respectively. Furthermore, we relate the stress tensor to the elastic strains using the elastic constitutive tensor. The yield criterion is a function of the stress state and of internal variables that are related to the plastic strains through hardening parameters; the elastic domain is defined as the interior of the yield criterion, and the yield surface is defined by the stress states for which the criterion vanishes. Plastic flow, considered irreversible, is governed by the evolution of the plastic strains and of the internal variables, with functions defining the direction of plastic flow and of hardening, and with a material parameter typically called the consistency parameter or plastic multiplier. Together with the yield criterion, the plastic multiplier must satisfy the complementarity conditions, as well as the consistency requirement. The consistency requirement means that during plastic loading the stress state must remain on the yield surface, meaning the rate of the yield function is zero.

The most widely accepted model for plasticity in metals, usually known as J2 flow theory, is based on the von Mises yield criterion. Von Mises relates the yielding of the material to the deviatoric stresses, as measured by the second deviatoric stress invariant J2. The model hereby presented is a particular case of rate-independent plasticity whose yield criterion is the von Mises criterion. The yield function can be expressed with the expression usually named the von Mises stress, or equivalent stress, and the yield stress in uniaxial tension, which depends on a single internal parameter according to the isotropic hardening function; kinematic hardening is not considered in the current work. A popular choice for the hardening rule is a linear function of the internal parameter, added to the initial yield stress and scaled by a scalar hardening modulus, usually a small fraction of the order of Young's modulus. An associative flow rule is assumed, meaning the plastic strains flow in the direction normal to the yield surface. Finally, the internal variable governing hardening is the equivalent plastic strain, evolving according to the norm of the plastic strain rate scaled by a factor of sqrt(2/3); this factor is introduced so that in the particular case involving uniaxial plastic deformation the obvious relation to the uniaxial plastic strain is obtained.

Finite element implementation. In finite element analysis, the process of rate-independent plasticity is conveniently represented as a flow evolving in pseudo-time, where each time step corresponds to an increment of load or displacement. In the current work, a standard scheme of displacement control is employed. For the purpose of sensitivity analysis and optimal design, the finite element equations are cast in the framework of transient coupled nonlinear systems suggested by
Michaleris and co-workers. In the coupled approach, at every time increment of the transient analysis we determine the unknowns that satisfy the residual equations R = 0 and H = 0, where the displacements vector and the load factor constitute the global unknowns and the internal elasto-plastic variables constitute the local unknowns corresponding to the current time; R = 0 is satisfied at the global level, whereas H = 0 is satisfied at each Gauss integration point. The transient coupled nonlinear system of equations is uncoupled by treating the global response as a function of the local response and vice versa. When solving the residual equations at a given increment, the responses at the previous converged increment are known; the independent (global) response is found in an iterative procedure at the global level, where at each iterative step the dependent (local) response is found in an inner iterative loop. The mutually dependent responses are corrected until both sets of equations are satisfied with sufficient accuracy, and the procedure is repeated for all increments. Neglecting body forces, R is defined in the current study as the difference between external and internal forces: the external force depends explicitly on a constant reference load vector, with entries only at the loaded degrees of freedom, multiplied by the load factor, while the internal force is assembled in the standard manner of finite element procedures for the particular material model used in this study.

The local nonlinear constitutive problem is solved with an implicit backward-Euler scheme. A central feature of the scheme is the introduction of a trial elastic state: given the incremental displacement field, it is first assumed that no plastic flow occurs from the current time to the next time step, meaning the incremental elastic strains equal the incremental total strains. It can be shown that the situation is governed by conditions identified using the trial elastic state (Simo and Hughes): if no plastic increment occurs, the trial state is the solution; if a plastic increment occurs, the new state variables are found by solving the nonlinear equation system resulting from the time discretization of the governing equations. For the particular elasto-plastic model used in the current study, this local system H = 0 is defined as a collection of four incremental residuals resulting from the time linearization of the governing constitutive equations: the associative flow rule; the evolution of the isotropic hardening parameter; the relation of the stresses to the elastic strains; and the yield criterion in squared form, with the second deviatoric stress invariant evaluated using the updated stresses. It is worth noting that the local nonlinear problem can be solved efficiently as a single scalar equation; such an algorithm can be derived, for example, following Simo and Taylor for plane stress situations.
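As an illustration of the trial-state logic described above, the following is a minimal one-dimensional return-mapping step for linear isotropic hardening, a scalar analogue of the J2 model; the function name and the parameter values are illustrative assumptions, not the paper's:

```python
def return_map_1d(eps, eps_p, alpha, E=200e3, H=2e3, sy0=250.0):
    """One backward-Euler return-mapping step for 1D rate-independent
    plasticity with linear isotropic hardening sigma_y = sy0 + H*alpha.
    Inputs: total strain eps, plastic strain eps_p, hardening variable
    alpha.  Returns the updated (sigma, eps_p, alpha)."""
    sig_tr = E * (eps - eps_p)              # elastic trial stress
    f_tr = abs(sig_tr) - (sy0 + H * alpha)  # trial yield function
    if f_tr <= 0.0:
        # elastic step: the trial state is the final state
        return sig_tr, eps_p, alpha
    # plastic step: the consistency parameter has a closed form in 1D
    dgamma = f_tr / (E + H)
    sign = 1.0 if sig_tr > 0 else -1.0
    sigma = sig_tr - E * dgamma * sign      # return to the expanded yield surface
    return sigma, eps_p + dgamma * sign, alpha + dgamma
```

A convenient correctness check is that in the plastic branch the returned stress sits exactly on the expanded yield surface, sigma = sy0 + H*alpha.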
Nevertheless, for the purpose of sensitivity analysis, we find it convenient to use the full residual representation suggested by Michaleris and co-workers.

Topology optimization approach

The central goal of the proposed formulation is to generate optimized structural layouts that can sustain a certain load at a prescribed range of displacements without exceeding allowable stress limitations. In the majority of studies thus far, the design problem was formulated as the optimization of an elastic structure aimed at minimizing either compliance or volume, with stress limitations imposed as constraints, either locally at each material point or in a global, aggregated manner. In the suggested formulation we approach the design goal in a completely different way. Essentially, we seek the best balance between three quantities: the weight of the structure, coinciding with its volume for single-material layouts; its load-carrying capacity, represented by the product of loads and displacements at the final equilibrium point; and the overall sum of plastic strains, representing the violation of the allowable stress limits. In the numerical experiments, two variants of the optimization problem are examined, arising from different assignments of these quantities as the objective and as constraints; it is shown that both variants lead to satisfactory results. In the remainder of this section one quantity is considered as the objective; the other variants are derived in a similar manner.

Problem formulation. For the purpose of optimizing the topological layout of a continuum, we follow the material distribution approach of Bendsoe and Kikuchi, together with the SIMP interpolation scheme and its extension to multiple phases, usually known as modified SIMP (Sigmund and Torquato). This implies that the design variables are densities of discrete material points, assigned to the centroid of each finite element in the design domain, varying from zero (void) to one (solid). The optimization problem is stated as follows: minimize the negative of the terminal load-displacement product, subject to a volume constraint, a constraint on the sum of the equivalent plastic strains over all Gauss points, and the residual equations R = 0 and H = 0 for all increments. The objective is thus to maximize the load-displacement product for a given prescribed displacement, i.e., to maximize the capacity for a given magnitude of deformation; this quantity is evaluated using the terminal values of the load factor and displacements. The volume constraint ensures that only a certain prescribed volume is utilized for the design, where the volume is measured according to the physical material density of each finite element; the physical densities are related to the mathematical variables via widely used filtering and projection techniques, presented explicitly in the next section. The second constraint ensures that the overall spatial sum of plastic strains does not exceed a certain small threshold, in theory zero. The sum of plastic strains is evaluated based on the quantity of equivalent plastic strain measured at each Gauss
point of the finite element mesh at the terminal equilibrium point. Finally, the nonlinear residuals R and H are defined as in the previous section, according to the respective elasto-plastic model.

Design parametrization. The correspondence between the mathematical optimization variables and the densities entering the nonlinear finite element analysis is as follows. First, a standard density filter is applied (Bruns and Tortorelli; Bourdin), with a simple linear weighting function, to obtain the filtered densities. The purpose of applying the density filter is to overcome the difficulty of artificial checkerboard patterns, as well as to introduce a length scale into the design, thus avoiding results with thin features that are difficult to manufacture. Then, a smooth Heaviside projection function (following Guest and co-workers) is utilized in order to push the design towards a distinct solid-void layout. This yields the physical density distribution in the form

rho_e = [tanh(beta * eta) + tanh(beta * (x_e - eta))] / [tanh(beta * eta) + tanh(beta * (1 - eta))],

where x_e is the filtered density, eta is a threshold value and beta is a parameter determining the sharpness of the smooth projection function. In the current study we use eta = 0.5, meaning a filtered density above the threshold is projected towards one and a value below it towards zero. beta is usually set to a low initial value and increased gradually as the optimization progresses. Heaviside projections are typically introduced in order to achieve crisp solid-void layouts, necessary in many design problems due to manufacturing requirements. For the cases addressed in this article it is not absolutely necessary to utilize projections, which increase the degree of nonlinearity and may cause difficulties in convergence. Nevertheless, it is useful to apply the Heaviside projection, even with rather mild beta values, in order to minimize the material transition regions, i.e., regions where the density is between zero and one, also known as gray regions in topology optimization. The elasto-plastic material law in such regions is artificial, due to the choice of the penalization scheme explained below; therefore the true stress in the actual manufactured structure may differ from the stress computed within the optimization, which motivates the minimization of gray transition regions.

(Figure: normalized stress-strain curves for various densities with separate penalty exponents for stiffness, left, and yield stress, right; for densities smaller than one, the yield strain is relatively delayed.)

Material interpolation. The constitutive model corresponding to J2 flow theory involves three material parameters: Young's modulus, the hardening fraction and the initial yield stress. As mentioned, the extension of the SIMP approach to interpolating these parameters was originally presented by Maute and co-workers.
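The filter-then-project chain described above can be sketched as follows; this is a 1D illustration where the linear weights and the tanh-based projection follow the formulas of this section, while the function names and the sample beta are mine:

```python
import math

def density_filter_1d(x, radius):
    """Linear-weight density filter on a 1D row of element densities:
    each filtered value is a normalized weighted average of the
    neighbours within `radius` elements, with weight (radius - distance)."""
    n = len(x)
    out = []
    for i in range(n):
        num = den = 0.0
        for j in range(max(0, i - radius + 1), min(n, i + radius)):
            w = radius - abs(i - j)
            num += w * x[j]
            den += w
        out.append(num / den)
    return out

def heaviside_projection(xf, beta, eta=0.5):
    """Smooth Heaviside projection pushing filtered densities towards
    a 0/1 layout; eta = 0.5 is the threshold used in the text and
    beta controls the sharpness of the projection."""
    a = math.tanh(beta * eta)
    return [(a + math.tanh(beta * (v - eta))) / (a + math.tanh(beta * (1.0 - eta)))
            for v in xf]
```

With eta = 0.5 the projection maps 0 to 0, 0.5 to 0.5 and 1 to 1 for any beta, and increasing beta steepens the transition, which is why a gradual continuation on beta keeps the problem tractable.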
For evaluating the tangent stiffness matrix and the internal forces vector, Young's modulus is interpolated for each finite element as

E(rho_e) = E_min + rho_e^p (E_max - E_min),

where E_min and E_max are, in general, the values of Young's modulus of the two candidate materials distributed in the design domain. For the case of distributing a single material and void, E_min is set several orders of magnitude smaller than E_max. Finally, p > 1 is a penalization factor, required to drive the design toward a solid-void layout. The initial yield stress is penalized similarly,

sigma_y0(rho_e) = sigma_min + rho_e^q (sigma_max - sigma_min),

where sigma_min and sigma_max are the initial yield stresses of the two candidate materials corresponding to E_min and E_max, respectively. From a physical point of view, the penalization factor q should equal p so that the yield strain does not depend on the density; however, in many cases it is necessary to set q < p in order to avoid numerical difficulties arising when low-density elements reach the yield limit. The physical consequence is that for intermediate densities the yield strain is artificially higher than for the full material, as shown in the figure. Separate exponents in elasto-plastic topology optimization were already introduced by Maute and co-workers, and the approach was used also for stress constraints, namely by Bruggi and in similar approaches (Cheng and Guo). In this study the hardening fraction is kept independent of the design variables, as the stiffness is already penalized via the interpolation of Young's modulus. Furthermore, as the essence of the design problem is to find designs that do not yield, in short an elastic response, in most cases it is not necessary to consider an accurate elasto-plastic response for intermediate material densities, and for solid material a constant value requires no interpolation.

Sensitivity analysis

Considering the optimization problem above, the derivatives of the volume constraint are straightforward, whereas the objective and the plastic strain constraint involve state variables; therefore, an adjoint sensitivity analysis procedure is necessary. As mentioned earlier, the design sensitivities are computed following the framework for transient nonlinear coupled problems described by Michaleris and co-workers. In the following we focus on computing the derivatives of a general functional with respect to the physical densities, whereas the derivatives with respect to the mathematical design variables are computed by the chain rule through the filter and projection. The adjoint procedure begins by forming the augmented functional, where for clarity the dependency on state and design variables is omitted; each increment contributes a term with an adjoint vector, not to be confused with the scalar used in the time discretization as the plastic multiplier, and one adjoint vector is global whereas the other is local. In principle, the functional is a function of the state variables throughout all time steps, in addition to its dependency on the design variables; for the particular functionals considered here, only terminal values enter the relations utilized in the particular implementation of the adjoint procedure.
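The two interpolation rules above, with separate exponents p and q, can be sketched as follows; all numeric values are illustrative assumptions, not the paper's:

```python
def interpolate_properties(rho, E_min=1e-6, E_max=1.0,
                           sy_min=1e-6, sy_max=1.0, p=3.0, q=2.5):
    """Modified-SIMP interpolation of Young's modulus and the initial
    yield stress with separate penalization exponents, q < p, so that
    low-density elements do not reach the yield limit.  Returns the
    interpolated pair (E, sigma_y0) for a physical density rho in [0, 1]."""
    E = E_min + rho ** p * (E_max - E_min)
    sy = sy_min + rho ** q * (sy_max - sy_min)
    return E, sy
```

Because q < p, the ratio sigma_y0/E (the yield strain) is larger for intermediate densities than for solid material, which is exactly the artificial delay of yielding discussed above.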
For each functional, the purpose of the adjoint procedure is to eliminate all terms involving derivatives of the state variables with respect to the design variables, which cannot be computed explicitly. It can be seen that the explicit dependency upon the design variables is contained in the residual terms, yielding the expression for the explicit sensitivity with respect to the physical density of each element. The adjoint vectors are computed, at each increment and Gauss integration point, in a backwards-incremental procedure, required due to the path dependency of the elasto-plastic response. The backwards procedure consists of a collection of equation systems resulting from the requirement that all implicit derivatives with respect to the design variables vanish. Complete details regarding the derivation of the adjoint procedure can be found in the work of Michaleris and co-workers, whereas specific implementations are described by Amir, by Bogomolny and Amir, and by Nakshatrala and Tortorelli, among the implementations with nonlinear material models mentioned in the introduction.

The adjoint procedure begins at the terminal increment, where the coupled adjoint system is solved with the tangent stiffness matrix corresponding to the converged terminal state (Michaleris and co-workers); the local adjoint vector is then determined at the Gauss point level. Proceeding incrementally backwards in time, at each increment the coupled adjoint equations are solved to determine the global adjoint vector, followed by the solution of the local adjoint vector at the Gauss point level, after which the contribution of the increment to the design sensitivities is computed. The procedure then continues with the previous increment, and is repeated until all contributions are collected to obtain the required design sensitivities.

The partial derivatives of the objective and constraints, and of the global residuals R and local residuals H, with respect to the state variables are required for implementing the adjoint procedure. The derivatives of R are easily obtained, whereas the derivatives of H are related to the particular elasto-plastic model and the choice of internal variables. For the model considered herein, based on classical J2 flow theory, they are derived by explicit differentiation; an example of such derivatives is given by Amir. Finally, the partial derivatives of each functional considered in the problem formulation are derived explicitly. It should be noted that, when implementing the adjoint procedure, the derivatives of the local residuals must maintain consistency with respect to the analysis. In essence, four situations are possible for a certain sequence of increments: a continuous elastic response; a transition from elastic to plastic; a continuous plastic response; and a transition to elastic unloading. The actual situation encountered affects the computation of the derivatives of the respective residuals. In general,
the derivatives of the local residuals are matrices of varying sizes, depending on the situation, which is determined exclusively by the elastic trial state. The final component required for performing the sensitivity analysis is the derivative of the residual with respect to the physical material density: combining the interpolation rules with the residual definitions, the explicit derivative is obtained from the terms in which the elastic constitutive tensor appears, evaluated with Young's modulus equal to the interpolated value.

(Figure: problem setup for the L-bracket example; the load is distributed over several of the top nodes in order to avoid artificial stress concentrations at the loading point.)

Examples

In this section we present several numerical examples that demonstrate the applicability of the proposed approach for solving stress-constrained topology optimization problems, considering different variants of the optimization problem. All optimization problems are solved with the Method of Moving Asymptotes, MMA (Svanberg); the specific parameters required for reproducing the results are given within the text.

L-bracket design. In the first example we consider the classical L-bracket, a case often used for evaluating new procedures for topology optimization with stress constraints; see the figure for the problem setup. The particular case of this domain and point load position was thoroughly examined by Verbart and co-workers. A similar problem, often appearing in articles on stress constraints, is the case where the point load is applied at a different position; in the author's opinion, the latter is somewhat easier to deal with, as the optimized layout for compliance exhibits a wider angle at the re-entrant corner (see the figures). Therefore, the former case is at the center of the following examination; in order to fairly evaluate the proposed approach, the latter case is presented subsequently for the sake of completeness.

As a reference design without stress limitation, maximization of the terminal load-displacement product subject to a volume constraint is first performed for a given prescribed displacement. This is necessary in order to identify the stiffest design achievable without any limitation on the stresses. The load is distributed over several of the top nodes in order to avoid artificial stress concentrations at the loading point; assuming the adjacent nodes have almost identical vertical displacements, it is sufficient to measure the objective based on the single degree of freedom whose displacement is prescribed, instead of measuring the complete load-displacement product. This somewhat simplifies the computational implementation of the adjoint equations, though the derivation is in general applicable to any loading situation.

(Table: material parameters, E_min, E_max, sigma_min and sigma_max, used in all examples.)
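Before moving on to the examples, the adjoint elimination idea of the preceding section can be sanity-checked against a finite difference on a scalar toy problem; the residual, the objective and all values here are illustrative assumptions, chosen only to show the mechanics of the method on a single implicit equation:

```python
def solve_state(x):
    """Solve the scalar residual R(u, x) = u**3 + x*u - 1 = 0 for u by
    Newton's method.  A toy stand-in for the equilibrium equations,
    with x playing the role of a design variable."""
    u = 1.0
    for _ in range(50):
        u -= (u**3 + x*u - 1.0) / (3.0*u**2 + x)
    return u

def objective_and_sensitivity(x):
    """Objective g(u) = u**2.  The adjoint equation (dR/du) * lam = -dg/du
    gives dg/dx = lam * dR/dx without ever differentiating u(x) directly,
    which is the same elimination idea the adjoint procedure applies
    increment by increment to the path-dependent problem."""
    u = solve_state(x)
    dR_du = 3.0*u**2 + x
    dR_dx = u
    lam = -(2.0*u) / dR_du          # adjoint variable
    return u**2, lam * dR_dx        # g and dg/dx

# adjoint sensitivity versus a forward finite difference
x0 = 0.7
g0, dg = objective_and_sensitivity(x0)
h = 1e-6
g_fd = (objective_and_sensitivity(x0 + h)[0] - g0) / h
```

The two sensitivities agree to within the finite-difference truncation error, which is the same kind of verification reported for the full elasto-plastic adjoint.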
In the reference formulation, the objective includes the product of the force and the displacement at the prescribed degree of freedom, and the plastic strain constraint is omitted. The resulting layout and its performance coincide with those obtained by a minimum-compliance topology optimization procedure, as expected; when strain hardening or multiple materials with distinct yield stresses and hardening behaviors are considered, the optimized layouts may differ from minimum-compliance layouts, see for example Kato and co-workers. The model is discretized with a mesh consisting of square elements; the available volume is set as a fraction of the total volume of the domain, a filter radius is prescribed, and the position and magnitude of the prescribed displacement are set. Automatic displacement incrementation is applied, with the increment size adapted based on the convergence of the iterations in the previous increment. The material parameters are given in the table and are essentially constant throughout all examples.

According to numerical experiments, a continuation scheme involving the penalty exponents and the Heaviside sharpness parameter yields the best results, where these parameters are increased gradually throughout the optimization process. The penalty exponents are set to initial values and increased every several design cycles; the projection parameter beta is initialized at a low value and multiplied by a constant factor every several design cycles, with an upper limit set in order to avoid highly nonlinear projection functions. In the call to MMA, the derivatives of the compliance objective are multiplied by a scaling factor in order to obtain good scaling and, consequently, fast convergence. It is known that the performance of MMA is affected by scaling, and the scaling parameter is chosen according to the values of the actual case so that the magnitudes are of an appropriate order; according to the author's experience, a badly scaled problem exhibits slow convergence, which in some cases leads the overall optimization process to inferior local minima. An external move limit on the MMA update is enforced in all examples presented in this section. The optimization is terminated after a fixed number of design cycles; a stopping criterion imposed by requiring a maximum change in element density below a small tolerance was never achieved.

The optimized topology, the corresponding von Mises stress distribution and the distribution of equivalent plastic strains are presented in the figures, and the sum of equivalent plastic strains and the volume are presented in the second column of the table of results. A significant stress concentration can be seen in the vicinity of the re-entrant corner.

Constraining plastic straining. The exact same problem setup is used for demonstrating the capability of the proposed approach to capture local stress concentrations. We now consider
seeking an optimized topology while adding a single global constraint on the total sum of equivalent plastic strains at the final equilibrium state; it is shown that the stress distribution is improved and that stress concentrations are avoided. The material parameters remain the same, as do the continuation scheme and the scaling and move limit of MMA. A small threshold is set for the constraint on the sum of the equivalent plastic strains over all Gauss points.

(Table: results of the L-bracket topology optimization with the load at the top right corner, for a given volume and prescribed displacement; the columns correspond to maximizing the load-displacement product with a volume constraint, maximizing it with volume and plastic strain constraints, and minimizing volume with plastic strain constraints.)

Constraining the sum of equivalent plastic strains leads to nearly zero plastic strains; compromising on a small positive threshold rather than zero provides slack, improving convergence towards a design with almost zero plastic strains. The optimized topology is presented in the figure, and the sum of equivalent plastic strains and the volume are given in the third column of the table; the stress distribution in terms of von Mises stresses and the distribution of equivalent plastic strains are also shown. Examining the optimized layout, it can be seen that the proposed approach can indeed generate designs that circumvent stress concentrations. The modification of the design, compared to the one optimized without the plastic strain constraint, is quite clear: a rounding of the re-entrant corner, and a stiffening of the bars meeting at the corner in order to compensate for the reduced stiffness of the rounded corner. The result is slightly different from those achieved in the previous studies referenced above; it appears closest to the layout obtained without stress considerations, and it resembles the result achieved by Bruggi and Venini with an approach based on local stress constraints, as well as a result achieved at a much higher volume fraction (see the referenced article). It can be argued that the primary design change is a shape modification that may as well be generated by a shape optimization procedure following a topology optimization without stress constraints. Nevertheless, it can be seen that topological changes are indeed introduced, for example the stiffening bars orthogonal to the main bars. This means that performing topology optimization without stress considerations, followed by shape optimization with stress considerations, may not be sufficient for finding the best layout in terms of topology, shape and stress distribution. It is evident that adding the constraint on plastic strains leads to a more uniform distribution of the extreme stresses; hence, violation of the stress constraints is implicitly avoided
without actually imposing local stress constraints at each material point. Nevertheless, a slight violation of the global constraint on plastic strains is present, as minor plastic straining occurs in the first element near the corner. The violation can be attributed to several factors: plastic strains in the immediate vicinity of the yield point, causing difficulties in satisfying the constraint precisely; the inherent approximation due to the use of a sequential convex programming method for solving the problem; and the quality of convergence of the optimization algorithm.

Minimizing volume. Another possibility for achieving the design goal is interchanging the roles of the volume and the load-displacement product: this corresponds to minimizing the volume of the optimized design while requiring a certain load-carrying capacity at the given prescribed displacement, with the plastic strains constrained as before. In order to obtain a good comparison to the result of the maximization under volume and plastic strain constraints, the initial volume fraction is set to the same fraction of the domain. All parameters used in the solution of the previous case retain their values, except for the continuation of the penalization exponents: for effectively constraining the load-displacement product, it is necessary to begin the process with penalty exponents larger than one in the interpolation rules; therefore, the initial values are chosen accordingly and kept constant for the first iterations, after which the same continuation scheme as in the previous cases is applied. The optimized topology is presented in the figure, and the sum of equivalent plastic strains and the volume are presented in the fourth column of the table; the stress distribution in terms of von Mises stresses and the distribution of equivalent plastic strains are also shown.

(Figure: topology optimization of the L-bracket with load and prescribed displacement at the top right corner. Left: maximizing the load-displacement product with a volume constraint; middle: maximizing it with constraints on volume and on the total sum of equivalent plastic strains; right: minimizing volume with constraints on the load-displacement product and on the total sum of equivalent plastic strains. Top: optimized layouts; middle: von Mises stress distributions; bottom: equivalent plastic strains in the vicinity of the corner, note the two orders of magnitude difference in scale compared to the unconstrained design.)

Performance of the optimized designs. For examining the actual benefit of the proposed formulation, it is interesting to examine the elasto-plastic response of the three designs obtained in the numerical experiments. The responses are directly compared to the result corresponding to a particular layout from the referenced article whose setting is closest to the current problem definition. For a simple comparison, the layouts obtained in the current study are post-processed by rounding intermediate design variables with a threshold value; this hardly affects the layout and the volume, thanks to
The Heaviside projection also facilitates an accurate analysis of the post-processed designs. One visible difference is that a thin bar of the minimum volume design is partially deleted and hence does not contribute to the transfer of forces. For the comparative case, an image of the layout obtained in the referenced work is imported and processed so as to obtain a design as similar as possible to the original: it is adapted to the mesh resolution of the current study and the projection scheme is applied as described, with a threshold achieving roughly the same volume fraction as the other designs. The four layouts are presented in the figure; for convenience, they are tagged according to their formulations: maximum stiffness with a volume constraint; maximum stiffness with volume and plastic strain constraints; minimum volume with plastic strain constraints; and the design by the reference approach. The volume fractions of the four designs are computed for reference.

The solution parameters are as in the optimization runs, except that the prescribed displacement is increased in order to guarantee that all designs enter the plastic regime when the actual response is compared. The displacement is applied within equal increments; furthermore, for the solid parts the true elastoplastic law is used, whereas the void material effectively does not yield. The load-displacement curves at the prescribed degree of freedom are presented in the figure. It can be seen that the responses of the designs obtained with the constraint on plastic strains exhibit a significant delay of initial yield: the magnitude of the load at initial yield is increased considerably compared to the reference design. This comes at a certain compromise in stiffness, meaning that the displacement level for a given load is slightly larger. As expected, the layout optimized for stiffness alone is the stiffest; the constrained optimization nevertheless also provides a response superior to that obtained by the reference approach. These results demonstrate the capability of the proposed approach to provide topological designs that account for stress constraints at an early design stage: the layouts obtained provide a significant delay of initial yield alongside a minor compromise in stiffness, which coincides with the common design goal of finding a stiff design that does not fail prematurely under load.

As mentioned, many previous studies on stress-constrained topology optimization treated the L-bracket problem with a point load at the middle of the right side edge; it is therefore interesting to apply the proposed approach also to this setup, and a direct comparison of the final response to a published result is sought in this case as well. For this purpose, the layout obtained using the approach of Amstutz and Novotny is considered, in particular the design at the center of the corresponding figure of the referenced article, with matching volume fraction. The current optimization runs are modified accordingly.
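The smoothed Heaviside projection referred to above is not specified in detail here; a common choice is the tanh-based form sketched below (an assumption rather than necessarily the exact scheme used in the study), where `beta` is sharpened by continuation and `eta` is the threshold used both during optimization and for the 0/1 post-processing:

```python
import math

def heaviside_projection(rho, beta, eta=0.5):
    """Smoothed Heaviside projection of a filtered density rho in [0, 1].
    As beta grows, the projection approaches a 0/1 step at rho = eta,
    which is why thresholding the final design hardly changes it."""
    num = math.tanh(beta * eta) + math.tanh(beta * (rho - eta))
    den = math.tanh(beta * eta) + math.tanh(beta * (1.0 - eta))
    return num / den
```

For `eta = 0.5` the projection maps 0 to 0, 1 to 1 and 0.5 to 0.5, and it is monotone in `rho`, so thresholding at `eta` after optimization is nearly volume-preserving for well-converged designs.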
The parameters remain as before, except that the load is applied at the middle of the right side instead of the top corner; a local distribution of the point load over a few nodes is sufficient in this case for avoiding an artificial stress concentration at the loading point. Comparing the responses of the optimized layouts, only minor changes are observed compared to the results of the optimization with the corner load. Figure: the layout based on the imported image. Comparing the responses (figure: response of the three optimized designs loaded at the middle of the right side), the designs optimized with the constraint on plastic strains exhibit an initial yield that is delayed in terms of forces compared to the stiffest design without stress considerations, and the performance is improved in comparison to the reference approach in terms of both stiffness and strength for a given volume.

Table: results of topology optimization with the load at the mid point of the right side (columns: sum of equivalent plastic strains, volume, figures; rows: maximum stiffness with volume constraint; maximum stiffness with volume and plastic strain constraints; minimum volume with plastic strain constraints). Constraining the sum of equivalent plastic strains leads to nearly zero plastic strains without significantly compromising stiffness; the minimum volume procedure reaches a lower volume fraction but exhibits higher plastic strains.

The results of the three formulations are presented in the table, and the observed trends are as in the first case: for the same volume fraction, nearly zero plastic strains are achieved at a modest compromise in stiffness, while the minimum volume procedure achieves a far lower volume fraction but also exhibits slightly higher plastic strains. The optimized layouts are presented in the figure. It can be seen that rather subtle shape and topological changes are introduced in order to avoid stress concentrations, which facilitates a relatively minor reduction of stiffness compared to the stiffest design. The generated layouts resemble those achieved by the approaches of Allaire and Jouve, Amstutz and Novotny, and James and Waisman. For examining the response, the post-processing described above is repeated and the volume fractions of the designs are computed; one design corresponds to our interpretation of the layout of Amstutz and Novotny. The load-displacement curves at the prescribed degree of freedom, obtained with displacement incrementation, are presented in the figure: the responses of the designs obtained with the constraint on plastic strains exhibit a significant delay of initial yield, and quite remarkably, the magnitude of the load at initial yield is increased substantially.

Figure: topology optimization with the load applied by a prescribed displacement at the middle of the right side; the presented layouts are obtained by the projection described in the text. Examination of the response: maximizing stiffness with a volume
constraint; maximizing stiffness with constraints on the volume and on the total sum of equivalent plastic strains; minimizing volume with constraints on the total sum of equivalent plastic strains; and the layout based on the image imported from Amstutz and Novotny. Comparing the responses (figure: response of the three optimized designs loaded at the middle of the right side), the designs optimized with the constraint on plastic strains exhibit an initial yield that is delayed in terms of forces compared to the stiffest designs without stress considerations. The attained stiffness is only slightly lower than that of the Amstutz and Novotny design, while the initial yield is postponed considerably compared to the reference design, and the yielding of the other designs is postponed as well. The comparison to the layout taken from Amstutz and Novotny thus also highlights the capability of the proposed approach to generate designs of high quality.

Second example. The second example demonstrates topological design under a horizontal load, see the figure with the problem setup. In this problem, two regions of stress concentration are expected, as the load path must pass via the corners. The model is discretized with a mesh consisting of square elements; the available volume of material is set as a fraction of the total volume; the filter radius and the prescribed displacement are set as before, automatic displacement incrementation is applied, and the material parameters are as in the previous examples. The continuation scheme is slightly modified in order to examine the capability of beginning the optimization with penalization already applied, in contrast to the initial values used in the previous example. For the minimum volume case, the penalty exponents are held constant for the first design cycles in order to enable feasibility of the compliance constraint; otherwise the continuation scheme and the MMA parameters are identical to the previous example. We examine the same three optimization problems: maximizing stiffness with a volume constraint; maximizing stiffness with constraints on the volume and on the total sum of equivalent plastic strains; and minimizing volume with constraints on the total sum of equivalent plastic strains. The optimized topologies, the von Mises stress distributions and the distributions of equivalent plastic strains are presented in the figure, and the sums of equivalent plastic strains and the volumes are given in the table. It is clearly seen that the suggested approach generates designs that circumvent the stress concentrations in the vicinity of the two corners, with the two latter solutions providing two different ways of doing so.

Figure: problem setup for topology optimization, with the load distributed over several top nodes in order to avoid artificial stress concentrations at the loading point.
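The continuation of the penalty exponents described above can be sketched as a simple schedule. The specific numbers (`hold`, step size, bounds) below are illustrative assumptions, not values from the study:

```python
def penalty_continuation(n_iter, p_init=1.0, p_max=3.0, p_step=0.25, hold=20):
    """Hold the penalization exponent constant for the first `hold` design
    cycles (keeping the compliance constraint feasible), then raise it
    gradually until p_max is reached."""
    schedule = []
    for it in range(n_iter):
        if it < hold:
            schedule.append(p_init)
        else:
            schedule.append(min(p_max, p_init + (it - hold + 1) * p_step))
    return schedule
```

Holding the exponent low at first keeps intermediate densities cheap in the stiffness interpolation, so a feasible design can be found before the 0/1 penalization is enforced.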
Table: results of topology optimization with the load at the top right corner (columns: sum of equivalent plastic strains, volume, figures; rows: maximum stiffness with volume constraint; maximum stiffness with volume and plastic strain constraints; minimum volume with plastic strain constraints). Constraining the sum of equivalent plastic strains leads to nearly zero plastic strains, compromising either stiffness or weight only moderately while keeping the plastic strains at a low level.

It is interesting to see that the maximization of stiffness subject to the constraint on plastic strains suggests an alternative load path: unlike in the reference design, considerable forces are transferred via a vertical bar along the right side edge, enabling a reduction of the stresses near the corners. This force transfer appears also in the minimum volume design, to a lesser extent. For both designs, the compliance is compromised only modestly in comparison to the reference solution, which exhibits two significant stress concentrations. The minimum volume procedure appears to deliver slightly better results here, as it provides comparable compliance while using less of the design domain. Finally, a comparison of the initial yield levels of the optimized designs reveals a clear increase in the applied force prior to yielding compared to the optimized design achieved without stress considerations. This demonstrates the capability of the proposed approach to deal with multiple stress concentrations without added complexity.

Conclusion. A new approach for achieving stress-constrained topological designs of continua was presented. The main novelty is the use of material nonlinearity, in the form of classical elastoplastic material models, in order to avoid imposing a large number of local stress constraints: the stress constraints are implicitly satisfied by imposing a single global constraint on the spatial sum of equivalent plastic strains.

Figure: topology optimization with the load applied by a prescribed displacement at the top right corner. Left: maximizing stiffness with a volume constraint; middle: maximizing stiffness with constraints on the volume and on the total sum of equivalent plastic strains; right: minimizing volume with constraints on the total sum of equivalent plastic strains. Top: optimized layouts; middle: von Mises stress distributions; bottom: equivalent plastic strains in the vicinity of the corners (note the two orders of magnitude difference in scale).

Incorporating this constraint into formulations maximizing the stiffness for a given prescribed displacement, subject to a volume constraint, leads to optimized designs
with practically no stress violations. A critical comparison of the proposed approach to existing techniques, in terms of accuracy and efficiency, is hereby presented. It is made in view of applications of topology optimization that aim at finding the best compromise between three competing quantities: volume, compliance and stress. The results presented in this article demonstrate the capability of the formulation to attain layouts that perform well in terms of two of these quantities, and the optimized designs compared favorably with results obtained by constraint aggregation or by an external penalty. This highlights one advantage of the current formulation: the constraints are captured accurately without actually imposing a large number of constraints on local stress values. This comes with an evident computational price tag for the improvement in the design quantities, as discussed in the following. Nevertheless, the results achieved in this study provide another, so far unexplored, view on stress-constrained topology optimization and may motivate and fertilize the development of efficient approximate techniques.

Computational cost. In its present form, the proposed formulation is not as efficient as existing approaches. The results reported in this paper were achieved with a constant number of design iterations, enabling continuation of the penalty parameters and of the sharpness of the smoothed Heaviside. Each design iteration requires a full nonlinear finite element analysis, which typically uses many solutions of linear equation systems, depending on the automatic incrementation and the convergence of the iterations. The compliance and plastic strain functionals require an adjoint procedure that per design iteration also typically uses many solutions of linear equation systems, again depending on the automatic incrementation. Procedures based on linear elasticity with a certain constraint aggregation technique require only a few solutions of linear equation systems per design iteration, depending on the specific aggregation scheme. Therefore, the current implementation of the approach is expected to be slower than existing approaches; this includes techniques with an external penalty, such as that of Amstutz and Novotny, and to some extent also formulations that introduce local constraints, such as that of Bruggi and Venini. Nevertheless, the relatively good designs encourage further exploration of the approach, with a focus on reducing the computational burden. It may be possible to utilize simpler material models that may not be suitable for capturing the full response accurately but may suffice for the purpose of achieving a good design. Another path to be explored is the reduction of
the number of displacement increments and iterations, exploiting the fact that in many design cycles the response is either close to linear or changes little, so that the computational cost of a single design cycle can be reduced to a level similar to that of a linear analysis. Another option is to utilize reanalysis techniques; in fact, this is what motivated the investigation of this formulation in the first place (Amir). These possibilities will be investigated thoroughly in future work.

Accuracy considerations. In many publications on stress-constrained topology optimization, the local constraints are aggregated into a single global approximation of the maximum stress. This often requires specific tuning of parameters to ensure that the actual local stresses do not exceed the allowable values. This highlights another advantage of the proposed approach, which requires only a material model of the type already incorporated in standard FEA packages. In the examples it was observed that the optimized designs involve topological as well as shape changes. This means that simply optimizing the shape of an optimized topology generated without stress considerations may not suffice for achieving the best possible design. Furthermore, incorporating stress considerations already in the topological design phase can eliminate the need to take an optimized topology, create a CAD model, generate a new mesh and optimize the shape, a process that can be time consuming in an industrial context.

A particular potential disadvantage of the proposed formulation lies in the non-smooth evolution of the plastic strains. According to the adopted formulation and material model, the yield stress limit represents a point that does not affect the smoothness of the stiffness functional: the product of forces and displacements (or stresses and strains) retains its smoothness when passing the yield point. The plastic strains, however, are strictly zero up to this point and begin to increase instantaneously when passing it. As seen in the numerical experiments, this does not appear to hamper convergence towards designs that do not violate the stress limit, despite the fact that in the optimized designs several material points are indeed close to the yield limit. A differentiable approximation was introduced in several numerical experiments in order to examine the effect on the quality of the attained solutions; the results showed no improvement in comparison to the original implementation. This important issue will be examined thoroughly in continuation research.

Acknowledgements. This research was supported by the Israel Science Foundation.

Appendix. In this appendix we present a numerical verification of the adjoint sensitivity analysis procedure. Implementing the procedure is a somewhat cumbersome task,
and we believe that the verification will prove useful to readers implementing similar procedures. Furthermore, accurate and efficient sensitivity analysis of elastoplastic response is still a rather open issue, as discussed in the recent publication by Kato et al. In the following, the results of the adjoint computations are compared to numerical derivatives computed by forward finite differences. We consider a small problem: a symmetric clamped beam whose symmetric half is modeled with a finite element mesh of square elements, with a downwards vertical displacement prescribed at the top right corner. Two separate loading situations are considered, see the figure: a point load at the top right corner and a distributed load over the right edge. The first case is easier to implement, as the equations for the global adjoint vectors (Eqs. …) take a simple form when the force is applied only at the prescribed degree of freedom. The second case, however, is much more useful, especially for the particular application considered in this article, where it is necessary to distribute the applied load over several adjacent nodes because the numerical solution for a point load inherently includes stress concentrations.

The material and optimization parameters are given in the table; the density of four elements is set to an intermediate value, the prescribed displacement is applied within equal increments, convergence of an increment is assumed when the relative norm of the residual forces is sufficiently small, and a small perturbation value is used for the finite difference check. We compare the design sensitivities of two quantities that are critical in the context of the current application: the end-compliance at the prescribed degree of freedom, gec (the superscript denoting the prescribed degree of freedom), and the sum of the equivalent plastic strains over all Gauss points of the whole domain at the final equilibrium state. Comparisons of the derivatives computed by the adjoint procedure to those obtained by finite differences are presented in the tables for the point load and for the distributed load, respectively.

Figure: problem setup for the verification of the adjoint sensitivity analysis; left: point load; right: distributed load. Table: material and optimization parameters used for the validation of the adjoint sensitivity analysis. Table: verification of the sensitivity analysis for elements across the domain, point load (columns: element, derivative of gec, relative error, derivative of the plastic strain sum, relative error). Table: verification of the sensitivity analysis for elements across the domain, distributed load (same columns). It can be seen that the design sensitivities are practically identical, thus verifying the derivation and the implementation of the adjoint procedure.
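The verification pattern of the appendix, comparing adjoint sensitivities against forward finite differences via relative errors, can be reproduced on any functional. The sketch below uses a simple quadratic stand-in for the elastoplastic functionals; the function, the perturbation and the tolerance are illustrative assumptions:

```python
def max_relative_error(f, grad, x, h=1e-6):
    """Compare an analytic gradient (stand-in for the adjoint result)
    against forward finite differences, as in the verification tables."""
    g = grad(x)
    errs = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += h
        fd = (f(xp) - f(x)) / h  # forward finite difference
        errs.append(abs(fd - g[i]) / max(abs(g[i]), 1e-30))
    return max(errs)

# Stand-in functional f(x) = sum(x_i^2) with exact gradient 2x.
f = lambda x: sum(v * v for v in x)
grad = lambda x: [2.0 * v for v in x]
err = max_relative_error(f, grad, [0.3, 1.2, -0.7])
```

A correctly derived adjoint gives relative errors on the order of the truncation error of the forward difference, i.e. "practically identical" derivatives.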
The nonlinear responses of the two test cases are presented in the figure in terms of load-displacement curves at the prescribed degree of freedom and of the equivalent plastic strains. From the tables it can be seen that even elements in the elastic regime contribute to the sensitivity of the sum of plastic strains, and that the two sensitivities can have opposite signs: the addition of material can either increase or decrease the overall plastic strain, whereas it always has a stiffening effect with respect to compliance. Finally, the analysis and the sensitivity analysis were repeated with finer displacement increments, and practically identical results were obtained for the nonlinear responses as well as for the design sensitivities.

Figure: nonlinear responses of the test cases used for the verification of the sensitivity analysis; load factor versus prescribed displacement for the point load and for the distributed load, and equivalent plastic strains for the point load and for the distributed load.

References

Allaire, Jouve: Minimum stress optimal design with the level set method. Engineering Analysis with Boundary Elements.
Amir: Efficient reanalysis procedures in structural topology optimization. PhD thesis, Technical University of Denmark.
Amir: A topology optimization procedure for reinforced concrete structures. Computers & Structures.
Amir, Sigmund: Reinforcement layout design for concrete structures based on continuum damage and truss topology optimization. Structural and Multidisciplinary Optimization.
Amstutz, Novotny: Topological optimization of structures subject to von Mises stress constraints. Structural and Multidisciplinary Optimization.
Bendsøe: Optimal shape design as a material distribution problem. Structural Optimization.
Bendsøe, Kikuchi: Generating optimal topologies in structural design using a homogenization method. Computer Methods in Applied Mechanics and Engineering.
Bendsøe, Sigmund: Topology Optimization: Theory, Methods and Applications. Springer, Berlin.
Bogomolny, Amir: Conceptual design of reinforced concrete structures using topology optimization with elastoplastic material modeling. International Journal for Numerical Methods in Engineering.
Bourdin: Filters in topology optimization. International Journal for Numerical Methods in Engineering.
Bruggi: On an alternative approach to stress constraints relaxation in topology optimization. Structural and Multidisciplinary Optimization.
Bruggi, Duysinx: Topology optimization for minimum weight with compliance and stress constraints. Structural and Multidisciplinary Optimization.
Bruggi, Venini: A mixed FEM approach to stress-constrained topology optimization. International Journal for Numerical Methods in Engineering.
Bruns, Tortorelli: Topology optimization of non-linear elastic structures and compliant mechanisms. Computer Methods in Applied Mechanics and Engineering.
Cheng, Guo: Epsilon-relaxed approach in structural topology optimization. Structural Optimization.
Cheng, Jiang: Study on topology optimization with stress constraints. Engineering Optimization.
Deaton, Grandhi: A survey of structural and multidisciplinary continuum topology optimization: post 2000. Structural and Multidisciplinary Optimization.
Duysinx, Bendsøe: Topology optimization of continuum structures with local stress constraints. International Journal for Numerical Methods in Engineering.
Duysinx, Sigmund: New developments in handling stress constraints in optimal material distribution. In: Proceedings of the Symposium on Multidisciplinary Analysis and Optimization, AIAA, Saint Louis, Missouri, AIAA paper.
Fancello: Topology optimization for minimum mass design considering local failure constraints and contact boundary conditions. Structural and Multidisciplinary Optimization.
Fritzen, Xia, Leuschner, Breitkopf: Topology optimization of multiscale elastoviscoplastic structures. International Journal for Numerical Methods in Engineering.
Guest, Prévost, Belytschko: Achieving minimum length scale in topology optimization using nodal design variables and projection functions. International Journal for Numerical Methods in Engineering.
James, Waisman: Failure mitigation in optimal topology design using a coupled nonlinear continuum damage model. Computer Methods in Applied Mechanics and Engineering.
James, Waisman: Topology optimization of viscoelastic structures using a time-dependent adjoint method. Computer Methods in Applied Mechanics and Engineering.
Kato, Hoshiba, Takase, Terada, Kyoya: Analytical sensitivity in topology optimization for elastoplastic composites. Structural and Multidisciplinary Optimization.
Kirsch: On singular topologies in optimum structural design. Structural Optimization.
Le, Norato, Bruns, Ha, Tortorelli: Stress-based topology optimization for continua. Structural and Multidisciplinary Optimization.
Maute, Schwarz, Ramm: Adaptive topology optimization of elastoplastic structures. Structural Optimization.
Michaleris, Tortorelli, Vidal: Tangent operators and design sensitivity formulations for transient non-linear coupled problems with applications to elastoplasticity. International Journal for Numerical Methods in Engineering.
Nakshatrala, Tortorelli: Topology optimization for effective energy propagation in elastoplastic material systems. Computer Methods in Applied Mechanics and Engineering.
París, Navarrina, Colominas, Casteleiro: Block aggregation of stress constraints in topology optimization of structures. In: Hernández, Brebbia (eds.) Computer Aided Optimum Design of Structures.
París, Navarrina, Colominas, Casteleiro: Block aggregation of stress constraints in topology optimization of structures. Advances in Engineering Software.
Park: Extensions of optimal layout design using the homogenization method. PhD thesis, University of Michigan, Ann Arbor.
Pereira, Fancello, Barcellos: Topology optimization of continuum structures with material failure constraints. Structural and Multidisciplinary Optimization.
Rozvany: Difficulties in truss topology optimization with stress, local buckling and system stability constraints. Structural Optimization.
Schwarz, Maute, Ramm: Topology and shape optimization for elastoplastic structural response. Computer Methods in Applied Mechanics and Engineering.
Sigmund, Maute: Topology optimization approaches. Structural and Multidisciplinary Optimization.
Sigmund, Torquato: Design of materials with extreme thermal expansion using a three-phase topology optimization method. Journal of the Mechanics and Physics of Solids.
Simo, Taylor: A return mapping algorithm for plane stress elastoplasticity. International Journal for Numerical Methods in Engineering.
Simo, Hughes: Computational Inelasticity. Springer Science & Business Media.
Stolpe, Svanberg: On the trajectories of the epsilon-relaxation approach for stress-constrained truss topology optimization. Structural and Multidisciplinary Optimization.
Svanberg: The method of moving asymptotes, a new method for structural optimization. International Journal for Numerical Methods in Engineering.
Sved, Ginos: Structural optimization under multiple loading. International Journal of Mechanical Sciences.
Swan, Kosaka: Voigt-Reuss topology optimization for structures with nonlinear material behaviors. International Journal for Numerical Methods in Engineering.
Verbart, Langelaar, van Keulen: A new approach to stress-based topology optimization: internal stress penalization. In: World Congress on Structural and Multidisciplinary Optimization, Orlando, Florida, USA.
von Mises: Mechanik der plastischen Formänderung von Kristallen. Zeitschrift für Angewandte Mathematik und Mechanik.
Xu, Cai, Cheng: Volume preserving nonlinear density filter based on Heaviside functions. Structural and Multidisciplinary Optimization.
Yang, Chen: Stress-based topology optimization. Structural Optimization.
Yoon, Kim: Topology optimization of continuum structures with element connectivity parameterization. International Journal for Numerical Methods in Engineering.
Yuge, Kikuchi: Optimization of a frame structure subjected to a plastic deformation. Structural Optimization.
Zienkiewicz, Taylor: The Finite Element Method, Vol. 2: Solid Mechanics.
Invariant forms on irreducible modules of simple algebraic groups

Mikko Korhonen

Abstract. Let G be a simple linear algebraic group over an algebraically closed field K of characteristic p > 0, and let V be an irreducible rational G-module with highest weight λ. When V is self-dual, a basic question to ask is whether V has a non-degenerate G-invariant alternating bilinear form or a non-degenerate G-invariant quadratic form. If p ≠ 2, the answer is well known and easily described in terms of λ. In the case where p = 2, we know that a self-dual V always has a non-degenerate G-invariant alternating bilinear form; however, determining whether V has a non-degenerate G-invariant quadratic form is a classical problem that still remains open. We solve the problem in the case where G is of classical type and λ is a fundamental highest weight, and in the case where G is of type A_l for certain λ; we also give a solution in specific cases when G is of exceptional type. As an application of our results, we refine Seitz's description of the maximal subgroups of simple algebraic groups of classical type. One consequence is the following result: if X < Y < SL(V) are simple algebraic groups with V irreducible both as an X-module and as a Y-module, then one of the following holds: (i) …; (ii) …; (iii) neither of the two modules has a non-degenerate invariant quadratic form.

École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland. The author was supported by a grant of the Swiss National Science Foundation.

Contents: Introduction. Invariant forms on irreducible modules. Fundamental representations of type C_l: representation theory; construction; computation of the quadratic form. Fundamental irreducibles of the other classical types. Representations of type A_l: representation theory; construction; computation of the quadratic form. Simple groups of exceptional type. Applications of this work: connection with representations of the symmetric group; reduction of the problem; application to maximal subgroups of classical groups; fundamental irreducible representations; fixed point spaces of unipotent elements.

1. Introduction

Let V be a vector space over an algebraically closed field K of characteristic p > 0. A fundamental problem in the study of simple linear algebraic groups is the determination of their maximal closed connected subgroups. For simple groups of classical type, Seitz has shown that, apart from a known list of examples, these are given by images of irreducible rational representations of simple algebraic groups. Given an irreducible representation ρ: G → SL(V), one should still determine which classical groups contain the image ρ(G). In many cases the answer is known (see Section 2, Lemmas …); in particular, when p = 2 we know that for a nontrivial self-dual irreducible representation the image is contained in Sp(V), while whether the image is contained in SO(V) is more subtle (see Section 2, Lemma …).
What is currently still missing is a method for determining, in characteristic two, exactly when the image is contained in SO(V). This problem is the main subject of the present paper; we can state it equivalently as follows.

Problem. Assume p = 2 and let V = L(λ) be irreducible with highest weight λ. When does V have a nonzero G-invariant quadratic form?

This is a nontrivial open problem, and in the literature on the subject currently only partial results are known. The main result of this paper is a solution of the problem in the following cases: G of classical type with λ a fundamental dominant weight (Theorem …), and G of type A_l with λ as in Theorem … In the case where G is of exceptional type, we give partial results in Section …: for type G2 we are able to give a complete solution (Propositions … and …), and for the other types we give the answer for specific λ in Table …

In the final section of the paper we give various applications of our results and describe open problems motivated by the Problem. One particular application, given in Subsection …, is a refinement of Seitz's description of the maximal subgroups of simple algebraic groups of classical type. Seitz gives the full list of the irreducible subgroups in question; for the classical groups containing the image of an irreducible representation, a possible proper inclusion of irreducible subgroups in a maximal subgroup must still be considered. In Subsection …, for the list given by Seitz we describe exactly which inclusions occur. In particular, our results have the following consequence.

Theorem. Let X < Y < SL(V) be simple algebraic groups such that V is irreducible as an X-module and as a Y-module. Then one of the following holds: (i) …; (ii) the module … has a non-degenerate invariant quadratic form; (iii) neither of the modules has a non-degenerate invariant quadratic form.

The general approach for the proofs of the main results is as follows. The basic method, used throughout and recorded as Proposition …, allows one to determine whether L(λ) is orthogonal by computing within the Weyl module V(λ). For G of classical type and λ fundamental, we first prove the result in the case of type C_l (Proposition …); the result for the other classical types is then a fairly straightforward consequence, via Theorem … In the case of type C_l, and in the case of type A_l, the proofs of our results are heavily based on various results from the literature on representation theory. We use results on the submodule structure of the Weyl modules, and we also need first cohomology groups computed in the literature (Corollary …). One key ingredient in the proofs is a result of Baranov and Suprunenko, which gives the structure of the restrictions of certain irreducible modules to certain subgroups defined in terms of the natural module.

Notation and terminology. We fix the following notation and terminology, used throughout the whole text. Let K be an algebraically closed field of characteristic p > 0; all groups we consider are linear algebraic groups over K.
By a subgroup we always mean a closed subgroup, and all modules and representations are rational, unless otherwise mentioned. Throughout, G denotes a simply connected simple algebraic group and V a vector space over K. We view G as a group of rational points; at times the group being studied is either a Chevalley group obtained by the usual Chevalley construction (see …) or a classical group with natural module V, which we occasionally denote by its type; the notation G = X_l means a simply connected simple algebraic group of type X_l. We fix a maximal torus T of G, with character group X(T) and set of dominant weights X(T)+ with respect to a system of positive roots; ⟨−, −⟩ denotes the usual pairing between a character and an element of the cocharacter group. The fundamental dominant weights are denoted ω_1, …, ω_l, and we use the standard Bourbaki labeling of the simple roots. Given λ ∈ X(T)+, we write L(λ) for the irreducible G-module of highest weight λ and V(λ) for the Weyl module of highest weight λ, with rad V(λ) its unique maximal submodule. For dominant weights λ and μ we write μ ≤ λ if λ − μ is a sum of positive roots. We say that an irreducible representation L(λ) is p-restricted if λ is a p-restricted weight.

A bilinear form is non-degenerate if its radical is zero. A quadratic form Q on a vector space V has as polarization the bilinear form b_Q defined by b_Q(v, w) = Q(v + w) − Q(v) − Q(w); we say that Q is non-degenerate if the radical of b_Q is zero. We say that a G-module V is symplectic if it has a non-degenerate G-invariant alternating bilinear form, and orthogonal if it has a non-degenerate G-invariant quadratic form. Note that for a G-invariant bilinear form, weight spaces V_μ and V_ν are orthogonal unless μ = −ν; thus to compute such a form it is often enough to work within the zero weight space, and a G-invariant quadratic form can be nonzero on a weight vector only if the weight is zero. Given a morphism of algebraic groups φ: G → G, we can twist representations: for a representation ρ, we denote the corresponding twisted representation by ρ ∘ φ. We denote by F the Frobenius endomorphism induced by the field automorphism x ↦ x^p; see for example … (Lemma …). For G simply connected and a representation with a composition series, we occasionally denote the composition factors by their highest weights.

Acknowledgements. I am grateful to Donna Testerman for suggesting the problem and for many helpful suggestions and comments on earlier versions of this text. I would also like to thank Gary Seitz for providing an argument used in the proof of Lemma …, and the two anonymous referees for helpful comments and improvements.

2. Invariant forms on irreducible modules

Let L(λ) be an irreducible representation of a simple algebraic group G with highest weight λ. Write ⟨λ, Σ_{α > 0} α∨⟩, where the sum runs over the positive roots, α∨ is the coroot corresponding to α, and ⟨−, −⟩ is the usual dual pairing with the cocharacter group. We know that L(λ) is self-dual if and only if −w0 λ = λ, where w0 is the longest element of the Weyl group (Lemma …). Furthermore, when p ≠ 2, a self-dual L(λ) is orthogonal if the value ⟨λ, Σ_{α > 0} α∨⟩ is even. Table: the value of ⟨λ, Σ_{α > 0} α∨⟩ modulo 2 for each root system, in terms of the coefficients of λ; for some types the parity is always even or always odd, while for others it depends on a congruence condition.
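The polarization and the weight argument used throughout can be written out as follows; this is a restatement of the definitions above, not an addition to the results:

```latex
% Polarization of a quadratic form Q on V:
b_Q(v, w) \;=\; Q(v + w) - Q(v) - Q(w).
% If Q is G-invariant and v \in V_\mu is a weight vector, then for t \in T:
Q(v) \;=\; Q(t \cdot v) \;=\; \mu(t)^2 \, Q(v),
% so Q(v) = 0 unless 2\mu = 0; since X(T) is torsion-free, a G-invariant
% quadratic form vanishes on every weight vector of nonzero weight, and
% b_Q pairs V_\mu non-trivially only with V_{-\mu}.
```

This is why computations with invariant forms below can be carried out within the zero weight space.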
A self-dual L(λ) is symplectic if the value is odd (Lemma …). Hence when p ≠ 2, deciding whether a self-dual irreducible module is symplectic or orthogonal is a straightforward computation with roots and weights; the table gives the value modulo 2 for each simple type in terms of the coefficients of λ. In characteristic 2 it turns out that every nontrivial self-dual irreducible module is symplectic, as shown in the following lemma, which can be found in …; we include a proof for convenience.

Lemma. Assume char K = 2 and let V = L(λ) be a nontrivial self-dual irreducible representation of G. Then V is symplectic.

Proof. Since V is self-dual, there exists an isomorphism V → V*, which induces a non-degenerate G-invariant bilinear form b on V. Since the form (v, w) ↦ b(w, v) is also G-invariant and defined by an isomorphism V → V*, by Schur's lemma there exists a scalar c with b(v, w) = c · b(w, v) for all v, w; as b is nonzero, c² = 1, and in characteristic two it follows that b is symmetric. The subspace {v ∈ V : b(v, v) = 0} is then a G-submodule; it is nonzero, since it contains every weight vector of nonzero weight and V is nontrivial, so as V is irreducible it is all of V, and b must be alternating. □

The lemma shows that in characteristic 2 the image of a nontrivial self-dual irreducible representation always lies in Sp(V). The following general result reduces determining whether L(λ) is orthogonal in characteristic two to a computation within the Weyl module.

Proposition. Assume char K = 2, let λ ∈ X(T)+ be nonzero, and suppose that G is not of type … Then: (i) the Weyl module V(λ) has a nonzero G-invariant quadratic form Q, unique up to scalar; (ii) the unique maximal submodule rad V(λ) is equal to the radical of the polarization of Q; (iii) the irreducible module L(λ) has a nonzero invariant quadratic form if and only if Q vanishes on rad V(λ); in the case where Q does not vanish on rad V(λ), there is a submodule of rad V(λ) of codimension one with trivial quotient, so L(λ) can fail to be orthogonal only if rad V(λ) has a trivial composition factor.

Proof. For (i) and (ii) see … (Theorem …) and … (Proposition …); claim (iii) can also be deduced from … (Satz …). The last claim is a consequence of (iii), since if Hom(rad V(λ), K) = 0 then Q must vanish on rad V(λ). □

In the case of type C_l the following result is well known; we include a proof for completeness.

Proposition. Assume char K = 2 and let G be of type C_l. Then the natural module V has no nonzero G-invariant quadratic form.

Proof. For example, the claim follows from a general result on rational maps constant on dense orbits. Indeed, G acts transitively on the nonzero vectors of V, so an invariant quadratic form Q would be constant on V \ {0}; since Q(tv) = t²Q(v) for all scalars t, it follows that Q = 0. □

Lemma. Let φ be … Then V is symplectic (orthogonal) if and only if its twist by φ is. Proof: see Propositions … and … □

Remark. Assume char K = 2. The lemmas above show that an orthogonal irreducible V = L(λ) must be tensor indecomposable, which by Steinberg's tensor product theorem implies that λ is a Frobenius twist of a 2-restricted weight. Therefore, to determine which irreducible representations are orthogonal, it suffices to consider 2-restricted dominant weights λ.

3. Fundamental representations of type C_l

Throughout this section we assume that G is simply connected of type C_l. In this section we determine, in characteristic 2, when the fundamental irreducible representation L(ω_i) has a nonzero invariant quadratic form. The answer is given by the following proposition, which we prove in what follows.

Proposition. Assume char K = 2 and let 1 ≤ i ≤ l. Then L(ω_i) is orthogonal if and only if … ≡ … (mod …).

The following examples are immediate consequences.
Example. Assume char K = 2; then L(ω_…) is orthogonal if and only if … ≡ … (mod …). Example. Assume char K = 2; then L(ω_…) is orthogonal if and only if … ≡ … (mod …). Example. Assume char K = 2; then L(ω_…) is orthogonal; this was also proven in … (Corollary …).

A rough outline of the proof of the proposition is as follows. Using various results from the literature on representation theory, we reduce the claim to the specific i that must be considered. We then study V(ω_i) using the standard realization inside the exterior algebra of the natural module, explicitly describe its nonzero invariant quadratic form, and find a vector of rad V(ω_i) on which the form is evaluated; the proof that L(ω_i) is (or is not) orthogonal is then finished by this computation.

3.1. Representation theory. The composition factors of the Weyl modules V(ω_i) were determined in odd characteristic by Premet and Suprunenko (Theorem …); independently, the composition factors and the submodule structure were found in arbitrary characteristic by Adamovich. Using the results of Adamovich, it is shown in … (Corollary …) that the result of Premet and Suprunenko also holds in characteristic two. To state the result on the composition factors, we first need to make some definitions. For non-negative integers m and n, write m = Σ_j m_j p^j and n = Σ_j n_j p^j for their expansions in base p. We say that m contains n in base p if n_j ≤ m_j for all j. We then define a set of integers j with j ≡ i (mod 2), determined by a containment condition in base p. The main result, which is also valid in characteristic …, can be described as follows; here L(ω_0) is set to be the trivial irreducible module.

Theorem. Let 1 ≤ i ≤ l. The Weyl module V(ω_i) has composition factor L(ω_j), each occurring with multiplicity one, precisely for j in the set just defined.

In view of Proposition … (iii), it is also useful to know when the first cohomology group H¹(G, L(ω_j)) is nonzero; this was determined by Kleshchev and Sheth.

Theorem (Corollary of …). Let 1 ≤ j ≤ l and write the relevant quantities in base p; then H¹(G, L(ω_j)) ≠ 0 if and only if either of the stated base-p conditions holds.

In characteristic 2 the result becomes the following.

Corollary. Assume char K = 2 and let 1 ≤ j ≤ l; then H¹(G, L(ω_j)) ≠ 0 if and only if … ≡ … (mod …).

Throughout this section we consider certain subgroups of G, embedded as follows. Consider V with its alternating form and fix a symplectic basis; the embedding is defined by letting the subgroup act on the span of some of the basis vectors and fix the rest. (Note that there is a typo in the definition in …: one should say that the subgroup fixes every remaining basis vector.) The module structure of the restrictions of the L(ω_i) to such subgroups was determined by Baranov and Suprunenko (Theorem …); we only need to know which composition factors occur in the restriction. In our case the result is the following.

Theorem. Let 1 ≤ i ≤ l and assume … Set ε = 1 if … ≡ … (mod …) and ε = 0 otherwise. Then the formal character of the restriction is given by a sum in which a bracket is zero when …, and ν denotes the valuation, the maximal power of p that divides its argument. Note that … ≡ … (mod …) always holds when char K = 2, so the theorem in particular describes exactly which composition factors occur.

We now give the applications of the above theorems in characteristic two that are needed for the proof of the proposition.

Lemma. Assume char K = 2, let 1 ≤ i ≤ l and suppose that i ≡ … (mod …). Then the following hold: the composition factors of the restriction are of the form L(ω_j) with j as described, and the trivial module occurs among the composition factors only in the stated cases.
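The containment condition in base p used above can be checked digit by digit. The sketch below (the function name is ours) implements it; by Lucas' theorem the condition is equivalent to the binomial coefficient C(m, n) being nonzero mod p:

```python
def contains_base_p(m, n, p=2):
    """True iff every base-p digit of n is at most the corresponding
    digit of m, i.e. m 'contains' n in base p."""
    while n > 0:
        if n % p > m % p:
            return False
        m //= p
        n //= p
    return True

# 5 = 101 in base 2 contains 4 = 100, but not 2 = 010.
```

This is the kind of arithmetic that the base-2 lemmas below manipulate by hand.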
Proof. First, if … ≡ … (mod …), then by the theorem the composition factors occurring are …, and the claim follows since … Thus consider the case … ≡ … (mod …); on the other hand, by the theorem the composition factors occurring are …, and the claim follows. □

Lemma. Let … and suppose that … ≡ … (mod …). Then … does not contain … in base 2.

Proof. If …, there is nothing to prove, so suppose … Replacing …, we see that it is equivalent to prove that … does not contain … in base 2. Suppose that … contains … in base 2, and consider first the case …: since … ≡ … (mod …), the containment in base 2 cannot happen. Next consider …: write …; since … ≡ … (mod …) and … contains … in base 2, it must also contain … in base 2; but in the case … ≡ … (mod …) it does not contain … in base 2, a contradiction. □

Lemma. Let … and suppose that … ≡ … (mod …). Then … contains … in base 2.

Proof. We prove the claim by induction on … For … the claim is immediate, since … Suppose then that … and assume that … contains … in base 2. Therefore … must occur in the binary expansion of …; by assumption it occurs in the binary expansion of …, and note that this also means that … contains … in base 2. Since … ≡ … (mod …), it follows that … occurs in the binary expansion of …; write … Finally, since … contains … in base 2 and … ≡ … (mod …), the claim follows by induction. □

The following corollaries are immediate from the theorem and the lemmas.

Corollary. Assume char K = 2, let 1 ≤ i ≤ l and suppose that i ≡ … (mod …). Then … Proof: by the theorem, L(ω_j) is a composition factor of V(ω_i) if and only if … contains … in base 2, which by the lemma is equivalent to … □

Corollary. Assume char K = 2, let 1 ≤ i ≤ l and suppose that i ≡ … (mod …). Then every nontrivial composition factor is of the stated form. Proof: by the theorem, L(ω_j) is a composition factor if and only if … contains … in base 2; now apply the lemma. □

3.2. Construction. We describe a well known construction of the Weyl modules V(ω_i) using the exterior algebra of the natural module. We consider the group as a Chevalley group constructed from a complex simple Lie algebra of type C_l; for details of the Chevalley group construction, see … Let e_1, f_1, …, e_l, f_l be a basis of a complex vector space V_C, and let ( , ) be the alternating form on V_C defined by (e_i, f_j) = δ_ij and (e_i, e_j) = (f_i, f_j) = 0. Let g be the Lie algebra formed by the linear endomorphisms x of V_C satisfying (xv, w) + (v, xw) = 0 for all v, w; this is the simple Lie algebra of type C_l. Let h be the Cartan subalgebra formed by the diagonal matrices diag(a_1, …, a_l, −a_1, …, −a_l), and define the maps ε_i sending such a diagonal matrix to its i-th diagonal entry; the root system, a system of positive roots and a base can then be written in terms of the ε_i. For each root α, let x_α be the corresponding linear endomorphism of a Chevalley basis of g. Let U_Z be the Kostant Z-form with respect to this Chevalley basis, the subring of the universal enveloping algebra generated by the divided powers of the x_α, and let V_Z be the lattice spanned by the basis vectors; note that ( , ) also defines an alternating form on V_Z. The simply connected Chevalley group G of type C_l over K is induced on V = V_Z ⊗ K and is equal to the group of invertible linear maps preserving the induced form; by abuse of notation, we identify the basis of V_Z with the corresponding basis of V. Note that the Lie algebra acts naturally on exterior powers by x(v_1 ∧ ⋯ ∧ v_i) = Σ_k v_1 ∧ ⋯ ∧ x v_k ∧ ⋯ ∧ v_i; the lattice spanned by wedges of basis vectors is invariant under this action, and it induces an action of G on Λ^i V. One can show that the diagonal matrices in G form a maximal torus T, and a basis of weight vectors of Λ^i V is given by the wedge products of the basis elements e_k and f_k.
vector weight form induces form exterior power det form invariant action since furthermore let eik ejk two basis elements eik ejk otherwise therefore follows form nondegenerate precisely way find basis weight vectors define form note alternating odd symmetric even well known unique submodule isomorphic weyl module shown following lemma following lemma also consequence lemma let let generated equal subspace spanned totally isotropic subspace dim isomorphic weyl module proof since acts transitively set totally isotropic subspaces follows spanned totally isotropic subspace claim dimension follows result proven example theorem theorem odd characteristic since maximal vector weight submodule generated image dim viii must isomorphic follows identify submodule given lemma set note identify denote even basis zero weight space given vectors form yis also description basis zero weight space lemma purposes need convenient set generators given next lemma lemma lemma suppose even say zero weight space thus also spanned vectors form yjs yks lemma suppose even say vector fixed action furthermore point scalar multiple proof see fixed see example shown definition depend symplectic basis chosen claim note first point must weight zero recall zero weight space basis yis group permutations acts clearly action preserves form gives embedding note also acts transitively thus point linear span must scalar multiple preliminary steps done move proving proposition rest section make following assumption assume char let proposition know orthogonal suppose orthogonal proposition iii corollary mod remains determine orthogonal lemma reduce evaluation single vector quadratic form lemma let suppose mod define vector equal fixed point subgroup rad iii orthogonal nonzero quadratic form proof lemma fixed action follows lemma generated isomorphic weyl module corollary module trivial lemma generated since contained generated claim follows different proof one also prove showing kernel certain linear maps 
defined theorem theorem proposition proposition enough show orthogonal respect form since weight orthogonal vector weight therefore suffices show orthogonal vector weight lemma follows show orthogonal vector set contains distinct integers indeed otherwise contradicting fact let number vector written otherwise integer equal number sum thus follows since iii lemma exists nonsplit extension trivial module find extension image weyl module rad submodule rad since composition factor occurs multiplicity one theorem composition factor nontrivial corollary lemma restriction trivial composition factors fixed point follows rad let nonzero quadratic form since polarization rad rad proposition composing square root map defines morphism rad therefore must vanish since trivial composition factors thus scalars vanishes rad hence proposition iii orthogonal computation quadratic form finish proof proposition still compute vector lemma retain notation previous subsection keep assumption char let even say form induces quadratic form use form find nonzero quadratic form lemma proof let giving defines nonzero quadratic form polarization equal proposition possible therefore defines nonzero quadratic form polarization similar construction odd discussed proposition consider zero weight vector form value vector equal since yfk ygk otherwise compute value lemma since terms occurring sum defines thus divisible proof proposition finished following lemma lemma let mod integer divisible mod proof according kummer theorem prime maximal divides number carries occur adding base mod mod adding binary results one carry possibility mod case carries therefore divisible mod equivalent mod fundamental irreducibles classical types bit work use proposition determine classical types fundamental irreducible representations orthogonal section assume char groups type fundamental irreducible representations form odd furthermore fundamental representations minuscule thus proposition representation orthogonal 
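The divisibility argument above appeals to Kummer's theorem: the exact power of a prime p dividing the binomial coefficient C(m+n, m) equals the number of carries when adding m and n in base p. The following sketch is our own illustration of that statement, not code from the text.

```python
from math import comb

def carry_count(m, n, p):
    """Number of carries when adding m and n in base p; by Kummer's
    theorem this is the p-adic valuation of C(m + n, m)."""
    carries = carry = 0
    while m or n or carry:
        total = m % p + n % p + carry
        carry = 1 if total >= p else 0
        carries += carry
        m //= p
        n //= p
    return carries

def valuation(x, p):
    """Exponent of the largest power of p dividing x (for x > 0)."""
    v = 0
    while x % p == 0:
        v += 1
        x //= p
    return v

# Sanity check of Kummer's theorem on a small range.
for m in range(1, 40):
    for n in range(1, 40):
        assert carry_count(m, n, 2) == valuation(comb(m + n, m), 2)
```

In particular, divisibility by 8 corresponds to at least three carries in the base-2 addition.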
type exists exceptional isogeny simply connected groups type theorem irreducible representations induce irreducible representations twisting isogeny fundamental irreducible representations lcl lbl lcl frobenius twist lbl therefore representation lcl orthogonal lbl orthogonal consider type first note natural representation ldl orthogonal since working characteristic two embedding subsystem subgroup generated short root subgroups lcl ldl theorem combining fact lemma see lcl orthogonal ldl orthogonal lemma let simple type consider type subsystem subgroup generated short root subgroups suppose nontrivial irreducible representation irreducible representation orthogonal orthogonal proof seitz orthogonal clear orthogonal well suppose orthogonal since natural module proposition iii exists nonsplit extension hwi furthermore exists nonzero quadratic form claim also nonsplit extension case hwi show invariant contradiction since nonsplit irreducible theorem curtis theorem module also irreducible representation lie since lie ideal lie invariant adjoint action follows lie lie sum trivial module must thus exists surjection quadratic form induces via nonzero quadratic form vanish radical proposition iii representation orthogonal finally representations minuscule representations ldl vdl ldl vdl proposition follows ldl ldl orthogonal selfdual therefore conclude ldl orthogonal note lcl lcl also orthogonal example thus lcl orthogonal ldl orthogonal taking together proposition improved following theorem assume char let simple type suppose orthogonal one following holds type type mod representations type assume simply connected type set section determine characteristic irreducible representation nonzero quadratic form orthogonal enough consider see table case answer methods prove similar found section result following theorem prove follows theorem assume char let orthogonal mod following examples follow easily theorem examples example char orthogonal mod result also proven theorem 
example char orthogonal mod representation theory composition factors submodule structure weyl modules determined adamovich using result baranov suprunenko given theorem description set composition factors similarly theorem define set pairs contains base define trivial irreducible module theorem gives theorem let weyl module composition factor multiplicity set composition factors baranov suprunenko give result terms get formulation theorem result kleshchev sheth corollary first cohomology groups groups type gives following corollary theorem assume char let mod throughout section consider subgroups embedded follows consider basis embedding basis fixes basis vectors baranov suprunenko determined submodule structure restrictions article theorem section purposes enough know composition factors occur restriction state result baranov suprunenko denote lal define main result gives theorem theorem let assume set mod otherwise character given sum brackets zero theorem note char always theorem following applications theorems characteristic two needed later lemma assume char let suppose mod let composition factors restriction form baranov supruneko give result terms lli lal replacing gives formulation theorem trivial composition factors proof lemma nothing prove suppose enough prove follow induction let suppose first mod theorem composition factors occurring therefore claim follows since thus consider case mod mod hand theorem composition factors occurring since claim follows consequence theorem lemmas get following corollaries corollary assume char let suppose mod proof according theorem composition factors contains base replace condition equivalent containing base implies lemma corollary assume char let suppose mod nontrivial composition factor form proof according theorem composition factors contains base setting composition factors contains base lemma proves claim construction describe construction many ways similar vcl described section consider group chevalley group 
constructed complex simple lie algebra type let basis complex vector space let spanned basis let lie algebra formed linear endomorphisms trace zero simple lie algebra type let cartan subalgebra formed diagonal matrices respect basis define maps diagonal matrix diagonal entries root system system positive roots base let linear endomorphism chevalley basis given let kostant respect chevalley basis subring universal enveloping algebra generated lattice define simply connected chevalley group type induced equal let basis dual basis denote spanned identify action given abuse notation identify basis basis let lie algebra acts naturally xvi similarly action furthermore acts lattice identify diagonal matrices form maximal torus basis weight vectors given elements eik basis vector weight natural dual pairing see example induces symmetric form define det det let two basis elements otherwise therefore form precisely way find basis weight vectors define symmetric form find weyl module submodule shown following lemma lemma lemma let let generated isomorphic weyl module proof general fact weyl modules always submodule simple groups classical type particular type follows results proven first lakshmibai theorem general result wang theorem lemma types fact consequence results due donkin types except characteristic two mathieu general case weight occurs multiplicity vector weight generate submodule isomorphic prove lemma note furthermore minuscule weights vector weight claim follows result previous paragraph identify submodule lemma set lemma identify note basis zero weight space given vectors form need following lemma gives set generators zero weight space lemma lemma suppose zero weight space thus also spanned vectors form efk sequences proof give proof somewhat similar lemma given lemma zero weight space generated elements form integers product taken respect fixed ordering positive roots assume choosing suitable ordering therefore assume one following types iii note type commute 
also true types iii writing terms simple roots see equal fact deduce following exists unique exists unique claims clear since occur expression sum simple roots claim follows induction since contributes expression sum simple roots claim follows similarly particular follows claims let sets type iii respectively follows claim claim thus write furthermore choose ordering another consequence indeed expression sum simple roots simple root occurs times hand types iii contribute sum precisely type iii total type total therefore sum contribution equal since equal get implies since type commute may assume denote ewr straightforward computation shows otherwise ejr ejr ewr ejr ejr ejr last equality combine terms makes sense since type iii commute type computing expression ejr see equal sum distinct elements summand equal transformed following way replace ews ejs replace replace eis ejs replace conclude sign statement lemma sequences defined lemma suppose vector eik fixed action furthermore point scalar multiple proof lemma fact fixed exercise linear algebra give proof convenience reader first need introduce notation let matrix entries denote entry ith row jth column indices set minor defined determinant matrix aip denote minor following relation minors matrices similar matrix multiplication rule special case formula proof found proposition formula let matrices subsets sum runs subsets consider matrix respect basis matrix acts exterior power avk denote eik straightforward calculation shows sum runs subsets respect dual basis action matrix atp transpose ofp denote sum runs subsets ready prove fixes vector note sum runs subsets observations see equal sums run subsets formula therefore show unique point scalar note first point must weight zero recall also zero weight space basis eik group permutations acts gives embedding note follows acts det even permutation get embedding alt alternating group well known alt alt acts transitively since thus alt point linear span must scalar 
multiple begin proof theorem rest section make following assumption assume char let suppose orthogonal proposition iii corollary mod remains determine orthogonal main argument following lemma lemma reduces question evaluation invariant quadratic form particular lemma let suppose mod define vector equal fixed point subgroup rad iii orthogonal nonzero quadratic form proof lemma apply lemma lemma corollary lemma apply lemma note otherwise iii lemma iii apply theorem find submodule rad rad composition factor occurs multiplicity one theorem nontrivial composition factors lemma restriction trivial composition factors vector fixed thus rad lemma iii see orthogonal nonzero quadratic form computation quadratic form finish proof theorem still compute vector lemma retain notation assumptions previous subsection let form induces quadratic form use form find nonzero quadratic form lemma proof lemma therefore defines nonzero quadratic form polarization section vector lemma finally applying lemma completes proof theorem simple groups exceptional type section let simple group exceptional type assume char give results orthogonality irreducible representations type give complete answer types results specific representations given table proven end section irreducible representations occurring adjoint representation answers given earlier gow willems section proposition let let irreducible representation orthogonal frobenius twist proof view remark enough consider dominant weight orthogonal proposition remains show orthogonal several ways see example since dim could done direct computation alternatively note composition factors proposition proposition module orthogonal third proof note action regular unipotent single jordan block theorem element exists proposition following lemma useful throughout section show certain representations orthogonal lemma let nontrivial irreducible suppose one following holds dim mod exactly one trivial composition factor dim mod exactly two trivial 
composition factors nontrivial composition factor occuring odd multiplicity orthogonal proof since nontrivial assume lemma holds applying results section lemma find vector irreducible highest weight proposition see example module orthogonal therefore orthogonal trivial composition factors lemma shows composition factor odd multiplicity orthogonal case assumption dim implies example lemma exist dim dim furthermore irreducible module highest weight proposition see example module orthogonal therefore orthogonal trivial composition factors lemma composition factor odd multiplicity orthogonal gmodule proposition let let irreducible representation orthogonal proof let exceptional isogeny given theorem steinberg tensor product theorem isomorphic thus lemmas enough prove claim case dominant weight orthogonal proposition remains show orthogonal let dim lemma enough prove exactly one trivial composition factor occurs odd multiplicity computation magma given furthermore data deduce therefore composition factors orthogonal yes yes yes yes yes yes yes yes yes yes table orthogonality type finish section verifying information given table suppose type orthogonal proposition show next orthogonal one also construct quadratic form explicitly realizing space trace zero elements albert algebra details construction found dim lemma enough prove exactly one trivial composition factor occurs odd multiplicity computation magma shows deduce composition factors thus occur exactly composition factor consider next type assume simply connected weyl module lie algebra orthogonal theorem data therefore orthogonal proposition show orthogonal dim lemma enough prove exactly two trivial composition factors occurs odd multiplicity computation magma see see composition factors therefore exactly two trivial composition factors occurs exactly composition factor type orthogonal finally show orthogonal dim lemma enough prove exactly two trivial composition factors occur odd multiplicity computation magma 
see see composition factors therefore exactly two trivial composition factors occur multiplicity one applications work section describe consequences findings propose questions motivated problem unless otherwise mentioned let simply connected algebraic group assume char connection representations symmetric group denote symmetric group letters describe connection orthogonality certain irreducible irreducible representations done application proposition various results literature result surprising since representation theory symmetric group plays key role representation theory modules example many results applied proof proposition based studying certain associated well known exists embedding see theorem therefore representation orthogonal clear true restriction proceed show converse also true seem priori obvious first following result due gow kleshchev theorem gives structure theorem let restriction irreducible isomorphic irreducible labeled partition gow quill determined irreducible modules orthogonal result following theorem let orthogonal mod case one express result following way corollary let orthogonal mod proof theorem module orthogonal mod never happens therefore always orthogonal desired consider according theorem orthogonal case orthogonal mod equivalent saying mod condition equivalent mod giving claim finally combining theorem proposition corollary gives following result proposition let orthogonal orthogonal reduction problem determine irreducible orthogonal enough consider dominant weight remark groups exceptional type leaves finitely many consider groups classical type reduce question type type follows next two lemmas note lemma identify fundamental dominant weights abuse notation lemma let let type orthogonal except irreducible lbl orthogonal irreducible lcl orthogonal proof let usual exceptional isogeny simply connected groups type theorem lcl lbl lbl lbl last equality follows steinberg tensor product theorem assume note orthogonal orthogonal thus 
follows lemma lcl orthogonal except possibly finally know lcl orthogonal example proves claim type type let usual exceptional isogeny simply connected groups type lbl lcl lcl lcl claim follows type case claim follows lcl lbl claim follows since orthogonal orthogonal lemma let ldl orthogonal ldl orthogonal lcl orthogonal except proof example table see type weight sum roots therefore composition factor thus orthogonal proposition suppose considering pcl subsystem subgroup generated long roots lcl ldl theorem claim follows lemma application maximal subgroups classical groups subsection allow char arbitrary mentioned introduction one motivation problem study maximal closed connected subgroups classical groups let classical simple algebraic group finding maximal closed connected subgroups reduced representation theory simple algebraic groups proceed explain done details see certain collections geometric subgroups defined terms natural module geometry reduction theorem due liebeck seitz theorem implies positivedimensional maximal closed subgroup one following holds belongs connected component simple irreducible particular reduction theorem implies following theorem let subgroup maximal among closed connected subgroups one following holds contained member simple irreducible maximal closed connected subgroups case theorem well understood theorem furthermore maximal closed connected subgroups occurring case theorem also described essentially determined seitz testerman result stated following theorem tells irreducible subgroup maximal theorem let simple algebraic group let irreducible rational closed proper connected subgroup simple irreducible occurs table refine characterization maximal closed connected subgroups given theorem one determine contain theorem example let simple type let simple type embedded usual way situation corresponds entry table however see table theorem situation maximal maximal fact results presented text allow one determine almost occurring table 
whether orthogonal symplectic neither easily done using table list information table deduced follows entry consequence lemma example lemma entry orthogonal example thus also orthogonal entry orthogonal proposition since irreducible entry follows lemma lemma entries irreducible orthogonal proposition entries follow proposition show orthogonal entry consequence lemma orthogonal orthogonal orthogonal bnk bnk orthogonal orthogonal orthogonal orthogonal orthogonal orthogonal orthogonal even orthogonal odd orthogonal orthogonal even orthogonal odd orthogonal orthogonal orthogonal table invariant forms occurring table case remains entry table embeddedp subsystem subgroup longproots situation know general whether orthogonal know except case true orthogonal orthogonal lemma using fact information table deduce following result theorem let simple algebraic group let irreducible closed proper connected subgroup simple irreducible one following holds orthogonal iii neither orthogonal type type natural module fundamental irreducible representations among irreducible orthogonal far ones know sense minimal among irreducible modules make precise follows pose question whether examples found recall longest element weyl group know dominant weight written uniquely sum fundamental dominant weights unique integers similarly exists collection written uniquely simple type listed type odd type even types even rank type odd type currently known examples irreducible modules orthogonal form others problem let suppose nonzero orthogonal answer problem yes results would settle problem almost completely indeed positive answer problem would show irreducible representation must equal frobenius twist results determine orthogonality classical type ones type lal described theorem type ones described theorem unique exception type arising restriction handful still remain exceptional types simple exceptional type irreducibles whose orthogonality decided section follows type type type case natural next 
step towards solving problem determining answer problem methods used paper solve problem certain families rely heavily detailed information structure weyl module known general representations composition factors found using results given however general sort information available characteristic composition factors known relatively cases example type even know dimension characteristic fixed point spaces unipotent elements finish question possible orthogonality criterion irreducible representations let irreducible representation assume lemma orthogonal since connected unipotent element number jordan blocks even proposition words dim even subspace fixed points converse hold problem let irreducible representation orthogonal exist unipotent element dim odd table listed examples without proof nonorthogonal representations answer problem yes answer problem turns yes would interesting criterion irreducible representation orthogonal positive answer would show orthogonality irreducible representation decided properties individual elements type mod mod conjugacy class regular regular regular regular regular regular regular regular dim table irreducible representations examples dim odd unipotent element references references adamovich analogue space primitive forms field positive characteristic vestnik moskov univ ser mat adamovich submodule lattices weyl modules symplectic groups fundamental highest weights vestnik moskov univ ser mat adamovich composition factors submodules weyl modules phd thesis moscow state university russian henning haahr andersen jens carsten jantzen cohomology induced representations algebraic groups math wieb bosma john cannon catherine playoust magma algebra system user language symbolic computational algebra number theory london armand borel properties linear representations chevalley groups seminar algebraic groups related finite groups institute advanced study princeton lecture notes mathematics vol pages springer berlin bourbaki partie les 
structures fondamentales analyse livre chapitre formes formes quadratiques sci ind hermann paris bourbaki fasc xxxviii groupes lie chapitre vii cartan chapitre viii lie scientifiques industrielles hermann paris brouwer composition factors weyl modules fundamental weights symplectic group unpublished http baranov suprunenko branching rules modular fundamental representations symplectic groups bull london math baranov suprunenko modular branching rules diagram representations general linear groups algebra joel broida gill williamson comprehensive introduction linear algebra publishing company advanced book program redwood city bart bruyn subspaces kth exterior power symplectic vector space linear algebra bart bruyn grassmann modules symplectic groups algebra stephen donkin rational representations algebraic groups volume lecture notes mathematics berlin tensor products filtration william fulton joe harris representation theory volume graduate texts mathematics new york first course readings mathematics paul fong decomposition numbers symposia mathematica vol xiii convegno gruppi loro rappresentazioni indam rome pages academic press london roderick gow alexander kleshchev connections representations symmetric group symplectic group characteristic algebra skip garibaldi daniel nakano bilinear quadratic forms rational modules split reductive groups canad rod gow contraction exterior powers characteristic spin module geom dedicata rod gow patrick quill quadratic type certain irreducible modules symmetric group characteristic two algebra roderick gow wolfgang willems methods decide simple modules fields characteristic quadratic type algebra rod gow wolfgang willems quadratic type simple modules fields characteristic two algebra james humphreys introduction lie algebras representation theory new graduate texts mathematics vol jens carsten jantzen darstellungen halbeinfacher algebraischer gruppen und zugeordnete kontravariante formen bonn math jens carsten jantzen 
representations algebraic groups volume mathematical surveys monographs american mathematical society providence second edition peter kleidman martin liebeck subgroup structure finite classical groups volume london mathematical society lecture note series cambridge university press cambridge kleshchev sheth extensions simple modules symmetric algebraic groups algebra kleshchev sheth corrigendum extensions simple modules symmetric algebraic groups algebra algebra lakshmibai musili seshadri geometry bull amer math soc martin liebeck gary seitz reductive subgroups exceptional algebraic groups mem amer math martin liebeck gary seitz subgroup structure classical groups invent martin liebeck gary seitz unipotent nilpotent classes simple algebraic groups lie algebras volume mathematical surveys monographs american mathematical society providence frank small degree representations finite chevalley groups defining characteristic lms comput electronic frank tables weight multiplicities http accessed olivier mathieu filtrations ann sci norm sup george mcninch dimensional criteria semisimplicity representations proc london math soc premet suprunenko weyl modules irreducible representations symplectic group fundamental highest weights comm algebra rimhak ree simple groups defined chevalley trans amer math gary seitz maximal subgroups classical algebraic groups mem amer math robert steinberg lectures chevalley groups yale university new notes prepared john faulkner robert wilson irina suprunenko irreducible representations simple algebraic groups containing matrices big jordan blocks proc london math soc peter sin wolfgang willems quadratic forms reine angew donald taylor geometry classical groups volume sigma series pure mathematics heldermann verlag berlin donna testerman irreducible subgroups exceptional algebraic groups mem amer math jian pan wang sheaf cohomology tensor products weyl modules algebra wolfgang willems metrische der charakteristik math robert wilson finite 
simple groups volume graduate texts mathematics london london
Solving Hard Stable Matching Problems Involving Groups of Similar Agents

Kitty Meeks, School of Computing Science, University of Glasgow, Glasgow, UK
Baharak Rastegari, Department of Computer Science, University of Bristol, Bristol, UK

Abstract. Many important stable matching problems are known to be NP-hard, even when strong restrictions are placed on the input. In this work we consider a setting in which the agents are grouped into types, where the type of an agent determines how his or her preferences compare with those of the other agents. This setting could arise in practice if, for example, agents derive their preferences by considering a small collection of attributes of the other agents. The notion of types could also be useful if we are
interested in a relaxation of stability, in which agents are only willing to form a private arrangement with a partner who is distinctly superior to their current partner with respect to some important characteristic. We show how to solve several important NP-hard stable matching problems from the point of view of parameterised complexity, with the number of types taken as the parameter. Many of our results rely on the fixed-parameter tractability of integer linear programming. We also demonstrate that, by imposing further restrictions, some of these problems become polynomial-time solvable.

Perhaps the most widely studied matching problem is the Stable Marriage problem (SM). An instance of SM involves two disjoint sets of agents, men and women, each with a preference ordering over the individuals of the opposite sex (his or her candidates). A solution to the problem is a matching, that is, a mapping from men to women in which each man is matched to at most one woman and vice versa. A matching is stable if no two agents would both prefer each other to their assigned partners; if such a pair exists, we say it is a blocking pair, and call its members blocking agents. In their seminal work, Gale and Shapley showed that every instance of the Stable Marriage problem admits a stable matching, which can be found in polynomial time, and proposed an algorithm for doing so. Simple extensions of their algorithm can be used to identify stable matchings in domains where agents are permitted to declare some candidates unacceptable (Stable Marriage with Incomplete lists, SMI), where agents are allowed to express indifference between candidates (Stable Marriage with Ties, SMT), or both (Stable Marriage with Ties and Incomplete lists, SMTI).

It is known that an instance of SMTI might admit stable matchings of different sizes. In many practical applications it is important to match as many agents as possible, and thus finding a maximum cardinality stable matching (a stable matching of the largest size amongst all stable matchings) is a crucial issue.

Definition (MAX SMTI). The problem of determining a maximum cardinality stable matching in an instance of SMTI.

Depending on the application, one might be willing to tolerate a small degree of instability if this leads to larger matchings. Two measurements of the degree of instability have been introduced in the literature: the number of blocking pairs and the number of blocking agents.

Definition (MAX SIZE MIN BP SMI, respectively MAX SIZE MIN BA SMI). The problem of finding, in an instance of SMI, a matching of maximum cardinality that, amongst all such matchings, has the minimum number of blocking pairs (respectively, the minimum number of blocking agents).

The Stable Roommates problem (SR) is a non-bipartite generalisation of SM, with extensions allowing incomplete lists and ties in the preference lists defined in the same way. An instance of SR need not admit a stable matching, but there exists a polynomial-time algorithm that finds a stable matching in a given instance or reports that none exists.
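The Gale–Shapley proposal-and-rejection procedure mentioned above can be sketched as follows. This is a standard man-proposing implementation for strict, possibly incomplete preference lists, written for illustration; the data layout (dictionaries of ordered preference lists) is our own choice, not notation from the paper.

```python
from collections import deque

def gale_shapley(men_prefs, women_prefs):
    """Man-proposing Gale-Shapley for SMI (strict, possibly incomplete
    preference lists).  Returns the man-optimal stable matching as a
    dictionary mapping each matched man to his partner."""
    # rank[w][m] = position of m in w's list (lower is better)
    rank = {w: {m: i for i, m in enumerate(lst)}
            for w, lst in women_prefs.items()}
    next_choice = {m: 0 for m in men_prefs}  # next index in m's list
    fiance = {}                              # woman -> man
    free = deque(men_prefs)
    while free:
        m = free.popleft()
        if next_choice[m] >= len(men_prefs[m]):
            continue                         # m has exhausted his list
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if m not in rank.get(w, {}):
            free.append(m)                   # w finds m unacceptable
        elif w not in fiance:
            fiance[w] = m                    # w was free: engage
        elif rank[w][m] < rank[w][fiance[w]]:
            free.append(fiance[w])           # w prefers m: trade up
            fiance[w] = m
        else:
            free.append(m)                   # w rejects m
    return {m: w for w, m in fiance.items()}

men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
women = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
assert gale_shapley(men, women) == {"m1": "w2", "m2": "w1"}
```

In the example, w1 first accepts m1 but trades up when her preferred m2 proposes, after which m1 is matched to w2; the resulting matching admits no blocking pair.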
stable matching in a given instance, or reports that none exists.

Definition (Max SRT). The problem of identifying a maximum cardinality stable matching in an instance of SRT, or reporting that none exists.

Since an instance may not admit any stable matching, it is also of interest to find a matching with the minimum number of blocking pairs.

Definition (Min BP SRT). The problem of identifying a matching with the minimum number of blocking pairs in an instance of SRT.

The Hospitals/Residents problem with Ties (HRT) is a famous extension of SMTI that models many practical applications, including the assignment of junior doctors to hospitals, by allowing the agents on one side of the market to be assigned multiple agents from the other side of the market. Mechanisms similar to those proposed by Gale and Shapley have been used to compute stable assignments of residents to hospitals. The concern of computing a maximum size stable matching extends to instances of HRT.

Definition (Max HRT). The problem of determining a maximum cardinality stable matching in an instance of HRT.

All of the problems defined above are known to be NP-hard; note that where hardness is provided for instances of SMI, the chosen problems are hard even without ties in the preference lists. Likewise, where a hardness result assumes that all agents find all other agents acceptable, the result holds even under this assumption, and in some cases the problem is hard even when all agents rank all other agents in strict order of preference. In fact, the aforementioned problems are hard even when the input is heavily restricted: for example, Max SMTI is NP-hard even if each man's preference list is strictly ordered and each woman's preference list is either strictly ordered or a tie of length two.

Structure of the rest of the paper. We first provide the settings we study, as well as a brief introduction to parameterised complexity and existing results on integer linear programming. We then define formally what we mean by agents having types, discuss related work, and summarise our results. In the subsequent sections we present our results on the complexity of computing a maximum cardinality stable matching in instances of SMTI and HRT, a maximum cardinality stable matching in instances of SRTI, and a maximum cardinality matching with minimum instability, in each case also treating the restricted setting with strict preferences over types. We then provide results for the problem with couples, in which a pair of residents may send preferences over pairs of hospitals, and present a result on the complexity of computing maximum cardinality stable matchings in that setting. We motivate our setting by discussing scenarios in which it may arise in practice, along with a brief description of how to extract types from ordinal preference lists, and conclude by providing directions for future work.

Preliminaries
section introduce main concepts use paper begin provide brief introduction parameterised complexity describing existing results integer programming definitions section provide key stable matching settings study background terminology refer reader let denote set agents bipartite matching setting smti hrt composed two disjoint sets hospital instance hrt associated capacity denotes number posts bipartite matching setting use term candidates refer agents opposite side market agent consideration settings candidates refer agents except one consideration agent subset candidates acceptable ranks order preference preferences orderings need strict possible agent two candidates write equivalently denote agent prefers candidate candidate denote write denote either prefers say weakly prefers instance smti matching pairing men women one paired unacceptable partner man paired one woman woman paired one man srti instance matching pairing agents agent matched one agent additionally acceptable write say matched instance hrt matching pairing hospitals residents agent paired unacceptable candidate resident matched one hospital hospital matched residents use denote agent set agents case hospitals matched write agent unmatched assume every agent prefers matched acceptable candidate remaining unmatched given instance smti srti matching weakly stable pair prefers current partner vice versa given instance hrt matching stable acceptable resident hospital pair prefers either prefers worst assigned resident parameterised complexity paper concerned parameterised complexity computational problems intractable classical sense parameterised complexity provides multivariate framework analysis hard problems problem known expect algorithm depend exponentially aspect input seek restrict combinatorial explosion one parameters problem rather total input size potential provide solution problem parameter question much smaller total input size parameterised problem total input size parameter considered 
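Weak stability as defined above (no pair in which both agents strictly prefer each other to their current situation) can be checked directly from the preference lists. Below is a minimal sketch for SMTI, assuming preference lists are represented as tiers, where each tier is a tie; the helper names and the test instance are illustrative, not taken from the paper.

```python
def strictly_prefers(pref_tiers, a, b):
    """True iff candidate a lies in a strictly earlier tier (tie)
    than b in the given preference list.  An agent strictly prefers
    any acceptable candidate to being unmatched (b is None)."""
    def tier_of(x):
        for i, tier in enumerate(pref_tiers):
            if x in tier:
                return i
        return None  # unacceptable
    ta = tier_of(a)
    if ta is None:
        return False
    if b is None:
        return True
    tb = tier_of(b)
    return tb is None or ta < tb

def is_weakly_stable(matching, men_prefs, women_prefs):
    """Check weak stability of an SMTI matching (a man->woman dict):
    no man-woman pair may exist in which BOTH strictly prefer each
    other to their current partners (or to being unmatched)."""
    partner = dict(matching)
    partner.update({w: m for m, w in matching.items()})
    for m, m_tiers in men_prefs.items():
        for w, w_tiers in women_prefs.items():
            if (strictly_prefers(m_tiers, w, partner.get(m)) and
                    strictly_prefers(w_tiers, m, partner.get(w))):
                return False
    return True
```

Note that ties matter here: an agent indifferent between its partner and an alternative does not participate in a blocking pair under weak stability.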
tractable if it can be solved by an algorithm whose running time is bounded by f(k) · n^O(1) for some computable function f, where n is the total input size and k the parameter. Problems of this kind are said to be fixed-parameter tractable and belong to the complexity class FPT. It should be emphasised that, for a problem in FPT, the exponent of the polynomial must be independent of the parameter value. Problems that satisfy the weaker condition that the running time is polynomial for any constant value of the parameter, where the degree of the polynomial may depend on the parameter, belong to the class XP. For more background on the theory of parameterised complexity we refer the reader to the standard texts.

Complexity of integer programming. Several algorithms we present in this paper make use of an algorithm for Integer Linear Programming. The problem can be formally stated as follows: given a matrix A and two vectors b and c, find an integer vector x that minimizes the scalar product c · x subject to the linear constraints Ax ≤ b, if such a vector exists, and otherwise report that no vector satisfying the constraints exists. Note that we can easily translate problems in which we wish to maximise rather than minimise the objective function into this form, and we can also express constraints based on linear equalities as a combination of linear inequalities; for simplicity of presentation we use these generalisations when expressing problems as instances of Integer Linear Programming. Although Integer Linear Programming in general is NP-hard, one of the celebrated results of parameterised complexity is that the problem belongs to FPT parameterised by the number of variables.

Theorem (based on Lenstra's algorithm for integer linear programming). An Integer Linear Programming instance of size L with p variables can be solved using O(p^{2.5p + o(p)} (L + log M_x) log(M_x M_c)) arithmetic operations and space polynomial in L + log M_x, where M_x is an upper bound on the absolute value any variable can take in a solution, and M_c is the largest absolute value of a coefficient in the vector c.

In one section we will also need to solve instances of Integer Quadratic Programming, a variant of Integer Linear Programming in which the objective function is quadratic. Formally: given an integer k × k matrix Q, an integer m × k matrix A, and an integer vector b, the goal is to find a vector x that minimises x^T Q x subject to the linear constraints Ax ≤ b, or else report that no vector satisfying the constraints exists. Note that we can easily generalise this to deal with maximisation problems and with constraints in the form of linear equalities. Lokshtanov recently gave an FPT algorithm for this problem.

Theorem (Lokshtanov). Integer Quadratic Programming is FPT parameterised by k and the maximum absolute value of any entry of the matrices A and Q.

Our contribution. In this section we describe our approach in detail and provide context for our results. We begin by giving formal definitions of the problems we consider and discussing related work, and finally we summarise our results.
typed approach hardness results study stable matching problems based premise agents may arbitrary preference lists practice however agents preferences likely structured correlated work consider setting agent associated type preferences well perceived agents discuss settings may arise practice extract types ordinal preference lists section first model agents type indistinguishable simplest model assume agents type completely indistinguishable preference lists every agent type acceptable assume types available agents let denote set agents type type preference ordering types candidates need complete strict assume without loss generality type least one type acceptable write equivalently agents type strictly prefer agents type agents type write denote agents type agents types agents type prefer agents type type two assume given every two agents type identical preference lists restricted agents requirements imply agent either agents given type acceptable none acceptable say instance stable matching problem satisfying requirements typed refer standard problems input form typed max smti etc illustrate typed instances example example assume types agents stable marriage setting types correspond men rest types correspond women let preferences types follows preference lists ordered left right decreasing order preference types round brackets tied assume seven men seven women men type men type last two men type assume type type last three women type therefore preferences agents typed model follows generalisation agents type refine preferences way also consider generalisation typed instances agents longer necessarily two agents type however agents type occur consecutively preference lists moreover assume two types essentially means agents either types means two agents type identical preference lists restricted agent type appears preference list type types agents instance stable matching problem slightly weaker requirements say instance refer standard problems input form max smti 
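In the typed model, an agent's preference list over individual agents is obtained from its list over types by replacing each type with the set of agents of that type, which together form a single tie. A small sketch of this expansion, with a hypothetical helper name and instance:

```python
def expand_type_prefs(agent_types, type_pref):
    """Expand a preference list over *types* into a preference list
    over *agents* with ties: agents of the same type form one tie.
    agent_types maps each candidate agent to its type; type_pref is
    a list of types in decreasing order of preference.  Types absent
    from type_pref are unacceptable, so their agents are omitted."""
    return [[a for a, t in agent_types.items() if t == ty]
            for ty in type_pref]
```

In the refined model the tie for each type may be further broken or partially ordered, but agents of the same type still appear consecutively in every preference list.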
etc illustrate two short examples example consider stable marriage setting types example example already meets requirements model many permitted preference profiles setting agents allowed break ties within particular type example following preference list however following preference list allowed men tie requires three men indifferent women either two types example assume stable roommates setting six agents type assume type finds type acceptable let refined preferences within type follows agents preference lists listed related work matching problems stated earlier problems section even highly restricted cases max smti shown variety restricted settings example even man list strictly ordered woman list either strictly ordered tie length even mans preference list derived master list women woman preference list derived master list men contains one tie even smti instance symmetric preferences acceptable man woman pair rank rank rank one plus number candidates prefers smti special case hrt maximum stable matching latter follows directly problem former weak srt problem deciding weather stable matching exists instance srt holds even preference list either strictly ordered contains tie length head therefore follows max srt max size min smi max size min smti hard approximate even agent preference list length solvable agents one side market preference lists length parameterized complexity limited number works addressing tractability stable matching problems study paper marx schlotter study parameterized complexity max smti show problem fpt parameterised total length ties parameterised number ties instance even men strictly ordered preference lists attributes types settings agents partitioned types derive preferences based set attributes assigned candidate considered problems sampling counting stable matchings instances see authors study problem characterizing matchings rationalisable stable matchings agents preferences unobserved focus restricted setting translates assigning 
agent type based several attributes assuming agents type identical identical preferences remark empirical studies marriage typically make assumption bounded agent types considered derive results coalition structure generation problem important issue cooperative games goal partition participants exhaustive disjoint coalitions order maximize social welfare results observe hardness results discussed previous section also hold even settings typed models let type contains one agent models place restrictions preference lists thus deduce typed max smti typed max hrt typed max srt typed max size min smi typed max size min smi part input contrast hardness results able provided positive results parameterised setting number types taken parameter theorem typed max smti typed max hrt typed max srti typed max size min smti typed max size min smti belong fpt parameterised number types note cases fpt result applies general version known results presented sections also consider setting preferences types strict certain known hardness results also carry setting allow typed max size min smi typed max size min smi remain even preferences types strict however additional restriction input allows give algorithms taken part input typed max smti typed max hrt typed max srti theorem typed max smti typed max hrt typed max srti solvable preferences types strict prove results sections efficient algorithms max smti max hrt section present problem typed max smti show extend solve max smti give algorithm special case preferences types strict conclude presenting straightforward reduction max hrt max smti implies use latter solve former typed max smti let typed instance smti let matching may assume without loss generality every agent matched creating many dummy agents type inserted end men women possibly incomplete preference list worstm type least desirable agent agent type matched note worstm would dummy type agent type unmatched matched dummy agent let type denote type given agent claim order 
determine whether stable examine values worstm lemma let typed instance smti matching stable pair worstm worstm proof suppose stable case exists pair agents matched together prefers current partner suppose without loss generality type type know type worstm similarly type worstm conversely suppose stable suppose contradiction worstm worstm agent type matched agent type worstm particular matched partner less desirable agent type similarly agent type matched agent type worstm hence matched partner less desirable agent type thus prefer current partner form blocking pair contradicts assumption stable use observation prove main result section theorem typed max smti fpt parameterised number different types instance proof lemma implies every stable matching must pass test lemma matching passes test stable note possibilities function worst say given function worst feasible type worst either type acceptable type dummy type say matching realises given feasible function worst least desirable partner agent type type worst strategy consider feasible possibilities worst turn first determine using lemma whether matching realising worst stable done candidate function worst time feasible function worst give rise stable matching want determine maximum number agents matched matching realises worst solving suitable instance integer linear programming formulation unordered pair distinct values variable represents number pairs matching consisting one agent type another type recall set agents type following integer linear program maximize subject worst worst constraint ensures every agent involved exactly one pair perhaps dummy agent next two conditions ensure worst indeed type least desirable partner assigned agent type objective function seeks maximise total number pairs involve dummy agents integer linear program constraints constraint constant length variables upper bound absolute value variable take therefore theorem maximisation problem candidate function worst solved time max smti 
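The integer linear program in the proof above, whose constraints are stated in running text, can be written out in one place. The display below is a hedged reconstruction from the surrounding description (the exact index sets and the treatment of dummy types in the original may differ slightly): n_{ij} counts matched pairs consisting of one agent of type i and one of type j, N_i is the set of agents of type i, and worst(i) is the candidate least-desirable partner type for agents of type i.

```latex
\begin{align*}
\text{maximise}\quad & \sum_{\{i,j\}\,:\; i,\,j \text{ not dummy}} n_{ij}\\[2pt]
\text{subject to}\quad
  & \sum_{j} n_{ij} = |N_i| && \text{for every type } i,\\
  & n_{ij} = 0 && \text{whenever } \mathrm{worst}(i) \succ_i j \text{ or } \mathrm{worst}(j) \succ_j i,\\
  & \sum_{j\,:\, j \simeq_i \mathrm{worst}(i)} n_{ij} \ge 1 && \text{for every type } i,\\
  & n_{ij} \in \mathbb{Z}_{\ge 0} && \text{for all types } i, j.
\end{align*}
```

The first constraint makes every agent (possibly matched to a dummy) appear in exactly one pair, the next two force worst(i) to really be the least desirable partner type assigned to some agent of type i, and the objective counts only pairs that involve no dummy agent.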
extend result typed max smti max smti need following result lemma let instance smti suppose matching pair worstm worstm stable matching every contain number pairs consist one agent type another type moreover given compute polynomial time proof given matching let denote number pairs consisting agent type agent type construct stable matching present construction prove stable let decomposition projection onto agent types construction takes place two steps step compute denotes set agents type matched agents type step generate given computed step next elaborate steps executed step start agents available type take candidate types type decreasing order preference ties broken arbitrarily iki denotes type ranked type starting take topmost perspective agent type available agents type put agents become unavailable note preference lists agents type agents type may include ties may possible determine exactly topmost available agents type precise going preference list agents type available agents type may reach tie including available agent need pick number happens arbitrarily pick agents step show generate given computed step computing stable matching amongst agents since agents type preference ordering candidates type problem reduces problem computing stable matching instance smti men identical preferences well women latter problem solvable polynomial time using straightforward greedy procedure construction remains prove weakly stable assume contradiction admits blocking pair types respectively three kinds blocking pairs possible depending compare others types types partners examine show admit none prefers type type prefers type type assumption lemma given pair worstm worstm construction worstm remains unchanged types thus directly follows admit blocking pair existence blocking pair implies structed step weakly stable preferences agents contradiction either without loss generality assume prefers type type since prefers therefore follows construction step since therefore agent 
type including prefers type type contradiction notice considered scenario assume assumption instance agent type hence agents type type therefore blocking pair similar argument holds let instance smti let typed instance smti obtained ignoring preferences within type every agent candidates type follows stability every matching stable also stable lemma implies stable matching exists stable matching cardinality thus order maximum cardinality matching instance smti solve typed problem ignore preferences within type use algorithm provided proof lemma convert solution matching cardinality stable instance theorem max smti fpt parameterised number different types instance max hrt typed instance max hrt reduced instance max smti follows type hospitals let denote set posts hospitals type hence total number posts hospitals type sum capacities hospitals type reduce instance max hrt instance max smti additionally let denote set posts type matched residents type hospital type set residents type matched post type resident type next result straightforwardly follows theorems corollary typed max hrt max hrt fpt parameterised number different types instance strict preferences types preceding sections assumed agents agents two types investigate implication requiring strict preferences types complexity stable matching problems studied show problems refined typed max smti refined typed max hrt solvable agents strict preferences types argument based private communication david manlove theorem preferences types strict typed max smti refinedtyped max smti solvable furthermore stable matchings size proof let typed instance smti assume preferences types strict let instance smti obtained breaking ties arbitrarily consistent way recall agents type identical preferences fact instance smi preferences types strict lemma stable matching exists stable matching size implies size maximum cardinality matching size stable matching however instance smi stable matchings size therefore solve max smti 
enough stable matching done easily polynomial time simple extension additionally stable matchings size let instance smti assume preferences types strict let typed instance smti obtained ignoring preferences within type every agent candidates type proved preceding paragraph stable matchings size thus follows lemma using similar argument preceding paragraph stable matchings also size thus maximum cardinality stable matching found polynomial time breaking ties arbitrarily applying simple extension result combined corollary gives following result corollary preferences types strict typed max hrt refinedtyped max hrt solvable furthermore stable matchings size efficient algorithms max srti notice proofs lemma lemma theorems made use fact smti bipartite matching problem except implicitly assumed type acceptable therefore proofs immediately max srti impose assumption allow type acceptable need make explained next typed max srti let typed instance max srti let matching may assume without loss generality every agent matched using argument section worstm section additionally second worstm type second least desirable agent agent type matched one agent type second worstm case let second worstm note possible second worstm worstm claim order determine whether stable examine values worstm second worstm lemma let typed instance srti matching stable pair worstm worst pair least two agents type second worstm proof proof similar proof lemma slight concerning second case statement current lemma suppose stable case exists pair agents matched together prefers current partner agents types argument lemma assume type know type likes type type type type least well type second worstm without loss generality assume type second worstm since blocking pair type type therefore second worstm conversely suppose stable case proof lemma suppose contradiction exists type corresponding least two agents second worstm two agents type matched agent type worstm agent type second worstm respectively thus 
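The stability test in the SRTI lemma above can be implemented directly once candidate worst and second-worst functions are fixed. A minimal sketch under illustrative assumptions: worst[i] and second_worst[i] give the type of the (second-)least desirable partner assigned to agents of type i, and the toy instance in the test is hypothetical.

```python
def typed_srti_stable(worst, second_worst, type_rank, counts):
    """Lemma test for typed SRTI: a matching realising the candidate
    functions is stable iff (a) no pair of distinct types i, j
    mutually prefer each other to their worst assigned partner types,
    and (b) no type i with at least two agents strictly prefers
    itself to its second-worst assigned partner type.
    type_rank[i][j] = rank of partner type j for agents of type i
    (lower = better); a type absent from the dict is unacceptable."""
    def better(i, j, k):
        # agents of type i strictly prefer partner type j to type k
        if j not in type_rank[i]:
            return False
        return k not in type_rank[i] or type_rank[i][j] < type_rank[i][k]
    for i in type_rank:
        for j in type_rank:
            if i != j and better(i, j, worst[i]) and better(j, i, worst[j]):
                return False        # blocking pair across types i and j
        if counts[i] >= 2 and better(i, i, second_worst[i]):
            return False            # blocking pair within type i
    return True
```

Because the test depends only on types, it runs in time polynomial in the number of types, independent of the number of agents.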
prefer current partner form blocking pair contradicts assumption weakly stable theorem typed max srti fpt parameterised number different types instance proof proof similar proof theorem lemma implies every stable matching must pass test lemma matching possibilities passes test lemma stable note pair functions worst second worst say given pair functions worst second worst feasible type worst either type acceptable type dummy type second worst either type acceptable type dummy type one agent type worst second worst exists least two agents type worst say matching realises functions worst second worst least second least desirable partner agent type types worst second worst respectively strategy consider feasible possibilities worst second worst turn first determine using lemma whether matching realising worst second worst stable done candidate functions worst second worst time note possible none candidates functions passes test lemma instances srti necessarily admit stable matching pair feasible functions worst second worst give rise stable matching want determine maximum number agents matched matching realises worst second worst solving suitable instance integer linear programming formulation exactly one provided proof theorem exception following two sets constraints introduced ensure second worst indeed type second least desirable partner assigned agent type second worst worst worst second worst worst worst second worst integer linear program constraints variables upper bound absolute value variable take therefore theorem maximisation problem candidate function worst solved time max srti using similar argument section aid following lemma employ machinery used solve typed max srti solve max srti lemma let instance srti suppose matching pair worstm worstm pair least two agents type second worst stable matching every contain number pairs consist one agent type another type moreover given compute polynomial time proof thesproof similar lemma following let allow inclusion 
step take topmost available agents type put step generate given create complete stable matching amongst agents using following greedy procedure agents matched take topmost available tie type match agents together number agents topmost tie odd match remaining agent one agents second topmost available tie label matched agents unavailable prove stable additionally need take care one type blocking pair two agents type prefer partners assumption lemma given pair second worstm construction second worstm remains unchanged types thus directly follows admit blocking pair theorem max srti fpt parameterised number different types instance strict preferences types similar argument proof theorem small shows versions also solvable preferences types strict theorem preferences types strict typed max srti refinedtyped max srti solvable furthermore stable matchings exists size proof let typed instance srti assume preferences types strict let instance srti obtained breaking ties arbitrarily consistent way recall agents type identical preferences fact instance sri preferences types strict let max size max size denote size maximum cardinality stable matching respectively zero stable matching exists follows stability every matching stable also stable implying max size max size lemma implies stable matching exists exists stable matching cardinality implying max size max size therefore max size max size order maximum cardinality matching reporting none exits enough solve max sri latter problem solvable furthermore stable matchings admits size hence stable matchings admits also size maximum cardinality matchings minimum instability stated section settings size matching takes priority stability criteria mechanism designers willing tolerate small degree instability leads matching larger size extend methods previous sections prove following results theorem typed max size min smti belongs fpt parameterised number different types given instance theorem typed max size min smti belongs fpt 
parameterised number different types given instance begin considering problem minimising total number blocking pairs rest section assume types types women types types men assume preference lists extended include dummy type least desirable acceptable type agent unmatched say type dummy type step translate blocking pair setting typed instances proposition let agents type type respectively suppose type type blocking pair given observation obtain expression total number blocking pairs comprised agent type agent type lemma number blocking pairs woman type man type given indicator function returns one zero otherwise proof easy verify side side equation equal remainder proof focus side equation proposition know set blocking pairs consisting one agent type another type precisely type type cardinality set thus equal number agents type matched agent type inferior type multiplied bypthe number agents type matched agent type inferior type result follows immediately summing possibilities gives following result lemma total number blocking pairs given prove theorem proof theorem begin computing polynomial time cardinality cmax maximum matching instance strategy formulate max size min smti instance integer quadratic programming goal minimise following objective function subject constraints cmax see side objective function equation equal side equation notice every pair appears twice summation side coming sum coming second sum way around former counts number blocking pairs type type types respectively latter counts number blocking pairs type type type type pairs least one equal dummy type dealt separately case one blocking pair linear constraints enforce every agent involved exactly one pair perhaps dummy agent number pairs involve dummy agents equal maximum possible cardinality matching write objective function form vector entry corresponding equal either depending many following conditions hold thus theorem solve instance max size min smti consider problem minimising number agents 
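The product form used in the blocking-pair counting lemma can be sanity-checked against direct enumeration on a tiny typed instance. The sketch below assumes every agent is matched and that preferences depend only on types (lower rank meaning better); all names and the instance in the test are illustrative.

```python
def count_blocking_direct(match, wtype, mtype, wpref, mpref):
    """Enumerate all woman-man pairs and count those in which both
    agents strictly prefer the other's type to their partner's type.
    match maps each woman to her man; everyone is matched."""
    pm = {m: w for w, m in match.items()}
    total = 0
    for w in wtype:
        for m in mtype:
            w_better = wpref[wtype[w]][mtype[m]] < wpref[wtype[w]][mtype[match[w]]]
            m_better = mpref[mtype[m]][wtype[w]] < mpref[mtype[m]][wtype[pm[m]]]
            total += w_better and m_better
    return total

def count_blocking_by_types(match, wtype, mtype, wpref, mpref):
    """For each pair of types (i, j), multiply the number of type-i
    women matched to a man type they like less than j by the number
    of type-j men matched to a woman type they like less than i,
    then sum over all pairs -- the lemma's product form."""
    pm = {m: w for w, m in match.items()}
    total = 0
    for i in set(wtype.values()):
        for j in set(mtype.values()):
            a = sum(wpref[i][j] < wpref[i][mtype[match[w]]]
                    for w, t in wtype.items() if t == i)
            b = sum(mpref[j][i] < mpref[j][wtype[pm[m]]]
                    for m, t in mtype.items() if t == j)
            total += a * b
    return total
```

The two counts agree because, in a typed instance, whether a pair blocks depends only on the types of the two agents and of their partners, so blocking pairs between types i and j factor into an independent choice of woman and man.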
involved least one blocking pair start characterising conditions agent particular type belong one blocking pairs characterisation follows immediately blocking pair lemma let agent type assume type would dummy type unmatched belongs blocking pair agent type paired agent type would dummy type unmatched using characterisation prove theorem proof theorem matching collection boolean variables type dummy type true matching contains least one pair involving agent type agent type unmatched agents considered matched agents dummy type given matching collection variables vector call matching note possible matching consider possible turn determine minimum number agents involved blocking pairs maximum matching typesignature maximum matching exists minimising set optimal solutions give desired answer describe compute minimum number agents involved blocking pairs maximum matching else report maximum matching exists strategy encode problem instance integer linear programming first constraints usual need ensure every agent involved exactly one pair potentially involving dummy agent proof theorem need enforce number pairs involve dummy agents equal maximum cardinality matching instance moreover need make sure matching indeed equal gives rise following linear constraints cmax finally objective function captures number agents involved least one blocking pair lemma know agent type matched agent type belongs blocking pair exist thus given compute indicator variable takes value agent type matched agent type matching belong blocking pair takes value otherwise clear total number agents involved least one blocking pair matching linear objective function observe theorems easily extended typed instances corollary max size min smti max size min smti belong fpt parameterised number types proof use strategy problems first solve problem corresponding typed instance ignoring preferences within types note number blocking pairs blocking agents achieved problem clearly gives lower bound minimum 
number achieved take account information argue fact always obtain matching increase either quantity take account full preference lists follow method described lemma given number pairs type method allows construct matching exactly pairs involving one agent type one type blocking pair currently matched agent type thus blocking pairs form type type type type precisely blocking pairs occur relaxation typed instance thus indeed obtain solution max size min smti max size min smti applying appropriate algorithm number pairs type relaxation typed instance use method lemma extend matching introduce additional blocking pairs full preference lists taken consideration finally consider extending results matching problems obtain one immediate corollary corollary typed max size min hrt typed max size min hrt belong fpt parameterised total number types generalise case takes slightly care two agents type matched agents type form blocking pair total number blocking pairs results rather otherwise exactly method works thus obtain following corollary corollary typed max size min srti typed max size min srti belong fpt parameterised total number types also fairly straightforward modify iqp ilp proofs theorem solve min srti min srti respectively need remove constraint enforces matching size cmax corollary typed min srti typed min srti belong fpt parameterised total number types related problem min exact given instance integer decides whether admits matching exactly blocking pairs even problem general typed instance solve exact srti move objective function iqp theorem set constraints enforce equal corollary typed exact srti belongs fpt parameterised total number types problem couples job market medical residents underwent change since mid due married couples seeking posts nearby hospitals subsequently central matching systems adapted take account couple preferences otherwise couples would seek arrange matches outside centralized clearinghouse problem couples hrc models settings couples 
send preferences pairs hospitals size given matching instance hrc equal number residents matched matching definition stable hrc problem deciding whether instance hrc admits stable matching definition max hrc problem identifying maximum cardinality stable matching hrc instance reporting none exists stable hrc corollary max hrc result holds even hospital capacity single residents parameterized complexity stable hrc studied also studies max hrc stable hrc problem parametrized number couples result holds even hospital capacity however stable hrc belongs fpt problem parametrized number couples instance hospitals list derived master list residents remainder section provide stability instances hrc extend typed models instances conclude presenting typed max hrc stability hrc instance hrc set residents includes even size subset consisting residents belong couples every resident belongs exactly one couple let denote set ordered pairs form couple single residents hospitals subsets candidates acceptable rank strict order preference every couple joint strict preference ordering acceptable subset ordered hospital pairs stability instance hrc provided literature distinct one another see paper adopt provide given say hospital undersubscribed given instance hrc matching stable admits blocking pair blocking pair one following properties involves single resident hospital prefers undersubscribed prefers worst assigned resident involves couple hospital either prefers undersubscribed prefers resident prefers undersubscribed prefers resident involves couple pair necessarily distinct hospitals prefers either respectively either undersubscribed prefers respectively least one assigned residents prefers least one one assigned residents prefers resident prefers resident distinct typed instances instance hrc type associated single resident hospital preference ordering types candidates similarly earlier instances hrt assume without loss generality pair residents form couple ordered type one 
larger type second one type type case pair types associated couple joint preference ordering ordered pairs hospital types conditions outlined section apply single residents hospitals every two single residents hospitals type identical preference lists single residents hospitals type hospitals residents type notions extend naturally preference lists couples couples type identical preference lists couples hospitals type say hospitals type say typed max hrc adapt method used prove theorems order show typed max hrc fpt parameterised number types instance idea consider number possibilities matching number possibilities bounded function determine possibility whether matching meets conditions stable express maximisation problem associated set stable conditions instance integer linear programming previous sections assume without loss generality least one agent type however sure always single resident given type couple pair types also previous sections assume every agent matched creating many dummy agents type inserted end single resident hospital possibly incomplete preference list insert end couple preference list start providing conditions necessary matching stable given typed instance hrc given matching every agent matched perhaps dummy agent three functions worstm second worstm assignedm function worstm follows resident type least one single agent worstm type least desirable hospital respect preference list single resident type assigned pair resident types least one couple type worstm least desirable pair hospital types respect joint preference list couple types assigned hospital type worstm type least desirable resident respect preference list assigned hospital type function second worstm hospital types total capacity hospitals type least two second worstm type second least desirable resident assigned hospital type boolean function assignedm combination pair resident types pair hospital types assignedm least one couple involving residents type assigned hospitals types 
respectively remainder section assume types types residents types types hospitals let denote set let denote number single residents type denote total capacity hospitals type number couples resident type second one type lemma let typed instance hrc matching stable none following holds pair types worstm worstm hospital type either worstm pair hospital types assignedm worstm pair hospital types assignedm two hospital types worstm worstm iii worstm hospital type worstm worstm either second worst second worst proof sketch straightforward prove claim using similar approach proof lemma stable matching instance hrc theorem typed max hrc fpt parameterised number different types instance proof lemma gives necessary condition matching realising functions worst second worst assigned stable consider feasible possibilities functions worst second worst assigned turn determine using lemma whether matching realising stable done set candidate functions time trio feasible functions worst second worst assigned give rise stable matching need determine maximum number residents matched matching realises trio rest paper solving instance integer linear programming variables follows pair types variable denotes number single residents type matched hospitals type additionally combination pair resident types pair hospital types may may distinct variable denotes number couples consisting residents types respectively resident type assigned hospital type resident type assigned hospital type objective maximise size matching hence maximise set constraints ensures every agent involved exactly one pair constraint handles single agents second one hospitals last one couples next set constraints ensures function worst complies worst worst worst worst nworst worst worst following set constraints ensures function second worst hospitals complies second worst worst nworst nsecond worst nworst second worst worst last set constraints ensure boolean variables assigned set correctly assigned integer linear 
program variables constraints upper bound absolute value variable take therefore theorem maximisation problem trio candidate functions worst second worst assigned solved time types practice section discuss issues relating applicability models real world matching problems begin section discussing instance relatively types could arise agents derive preference lists small set attributes agents section discuss ideas applied consider relaxation stability requirement realistic setting agents feel strongly small collection attributes may preferences arbitrarily finally section explain identify agents type access agents ordinal preference lists attributes seems believable many settings particularly involving many agents agents derive preferences considering collection attributes agents number attributes considered likely much smaller total number agents example centralised job allocation scheme employers might rank applicants based number criteria application form exam grade interview score etc setting allocation junior doctor might rank hospitals based programs reputation quality programs geographic location etc similar observation made assume set attributes attribute take one distinct possible values perhaps including applicable value subsets attributes relevant agents write attribute profile agent lists value attributes relevant agent note possible attribute agent since agents form preferences solely basis attributes agents every agent necessarily two agents attribute thus express agent preferences ordering possible attribute allowing attribute declared unacceptable implies reordering agents involved ties possible preference lists capture relevant information agent specifying attribute ordering possible attribute therefore obtain typed instance types grouping together agents attribute preferences attribute certain models might reasonable make stronger assumption namely agents preferences also determined attributes example stable roommates problem agents might looking 
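As an illustration of the first constraint family in the integer linear program described above (every single resident matched exactly once, hospital capacities respected), the following minimal Python sketch checks a candidate integer assignment. The couple variables and the worst/second-worst/assigned constraint families are omitted, and all identifiers (`check_assignment`, `n`, `c`, `x`) are hypothetical:

```python
def check_assignment(n, c, x):
    """Check the first constraint family of the sketched ILP for
    typed max HRC (single residents only; couples and the
    worst/second-worst/assigned constraints are omitted).

    n[i]    -- number of single residents of type i
    c[j]    -- total capacity of the hospitals of type j
    x[i][j] -- candidate value: how many single residents of type i
               are matched to hospitals of type j
    """
    k1, k2 = len(n), len(c)
    # every single resident is matched exactly once (the dummy agents
    # described in the text make preference lists effectively complete)
    for i in range(k1):
        if sum(x[i][j] for j in range(k2)) != n[i]:
            return False
    # hospitals of type j receive at most their total capacity
    for j in range(k2):
        if sum(x[i][j] for i in range(k1)) > c[j]:
            return False
    return True
```

In the full ILP these are linear constraints over integer variables with the objective of maximising the matching size; one such program is solved per candidate trio of functions (worst, second-worst, assigned), and since the number of variables depends only on the number of types, each program is solvable in FPT time (e.g. via Lenstra's algorithm).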
partners similar certain ways agents rank others based solely attributes means number types bounded rather relaxation stability requirement attributes may well play important role determining agents preferences situation described section small number attributes taking distinct values relevant might reality many situations section discuss ideas may applied much wider range settings relax notion stability reasonable assume certain amount required agents blocking pair make private arrangement outside matching agents unlikely make small improvement utility say blocking pair dangerous agents make improvement utility means make improvement later goal maximum cardinality matching contains dangerous blocking pairs model potentially many attributes two natural ways quantify relative utility two partners one consider attributes improved switching partners another consider much attributes improved seems reasonable assume agent considers small subset attributes important make private arrangement improve value important attribute possible values attribute split small number groups similar values agent make private arrangement obtain partner whose attribute value comes superior group assume relatively attributes important one agents particular large number agents disjoint sets important attributes simpler problem instance captures coarse information original instance crucially contains enough information determine whether dangerous blocking pairs matching idea formalised follows denote set attributes important least one agent split values groups gisi write maxai coarse attribute profile agent lists group agent value belongs attribute based assumptions know agent make private arrangement new partner whose coarse attribute preferable current partner note possible coarse attribute possible rankings coarse attribute group agents types two agents assigned type coarse attribute preferences coarse attribute number types simplify instance assuming agents fact two agents attribute 
obtain typed instance course matching stable instance need stable original instance however easy check matching weakly stable contains dangerous blocking pair thus willing accept matchings stable contain dangerous blocking pairs use machinery developed deal typed instances general setting identifying types preference lists order make use algorithms developed typed instances need know type agent practice may presented lists ordinal preferences agent section describe compute coarsest partition types meet conditions either models finding types typed instance identify type agent two stages step sort list agents preference lists makes easy identify agents identical preference lits one requirements agents type write agents preference list next consider pair agents pairs yet found evidence type consider preference lists agents list check whether belong tie agent strictly prefers vice versa type otherwise write types taken equivalence classes respect equivalence relation finding types instance process identifying types instances slightly complicated require agents type identical preferences begin sorting preference lists determine pairs call equivalence class respect group partition follows making number passes list preference lists every pass consider preference list turn check group whether members occur consecutively preference list every group case split group maximal subsets occur consecutively preference list considered move onto next preference list new partition continue process complete pass preference lists split group occurs stop current partition point clear obtained partition requirements agents type identical preferences appear consecutively agents lists remains observe complete many passes partition note pass split group must increase number parts partition least one therefore instance split set types satisfying conditions instance possibly make passes summary future work considered setting agents partitioned types type agent determines preferences well 
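The type-identification idea described above (two agents share a type iff they have identical preference lists and every other agent is indifferent between them) can be sketched for a bipartite HRT-like setting. This is a simple quadratic-time illustration rather than the sorted, pass-based refinement routine of the text, and all identifiers are hypothetical:

```python
def find_resident_types(rprefs, hprefs):
    """Partition residents into types (sketch): two residents share a
    type iff they submit identical preference lists over hospitals AND
    every hospital ranks them in the same tie.  Preference lists are
    lists of frozensets (ties): rprefs[r] ranks hospitals, hprefs[h]
    ranks residents."""

    def tie_index(lst, x):
        # position of the tie containing x, or None if x is unacceptable
        for i, tie in enumerate(lst):
            if x in tie:
                return i
        return None

    def same_type(r1, r2):
        if rprefs[r1] != rprefs[r2]:
            return False
        return all(tie_index(hprefs[h], r1) == tie_index(hprefs[h], r2)
                   for h in hprefs)

    groups = []
    for r in rprefs:
        for g in groups:
            if same_type(g[0], r):
                g.append(r)
                break
        else:
            groups.append([r])
    return groups
```

The resulting partition is the coarsest one meeting both conditions, so the number of types fed to the FPT algorithms is as small as possible.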
compared agents agents preferences types may detailed preferences within single type agents type identical preferences showed setting several important stable matching namely max smti max hrt max srti max size min smti max size min srti min srti belong parameterised complexity class fpt parameterised number types agents admit algorithms number types small able prove max smti max hrt max srti solvable agents strict preferences types additionally able show max hrc fpt parameterised number types agents agents agents type would interesting investigate whether approach might yield stable matching problems agents type similar preference lists necessarily identical another intriguing question would understand complexity stable matching problems studied changes agents one side market associated types acknowledgements author supported personal research fellowship royal society edinburgh funded scottish government authors extremely grateful david manlove insightful comments preliminary version manuscript references abraham manlove almost stable matchings roommates problem proceedings waoa workshop approximation online algorithms volume lecture notes computer science pages springer abraham levavi manlove malley stable roommates problem pairs proceedings wine international workshop internet network economics volume lecture notes computer science pages springer abraham levavi manlove malley stable roommates problem pairs internet mathematics preliminary version appeared aziz keijzer complexity coalition structure generation international conference autonomous agents multiagent systems aamas pages bhatnagar greenberg randall sampling stable marriages spouseswapping work proceedings soda symposium discrete algorithms pages irving schlotter stable matching couples empirical study acm journal experimental algorithmics section article pages manlove mittal size versus stability marriage problem technical report university glasgow department computing science manlove mittal size 
versus stability marriage problem theoretical computer science cantala matching markets particular case couples economics bulletin chebolu goldberg martin complexity approximately counting stable matchings proceedings approx international workshop approximation algorithms combinatorial optimization problems volume lecture notes computer science pages springer full version available http chebolu goldberg martin complexity approximately counting stable matchings theoretical computer science preliminary version appeared chebolu goldberg martin complexity approximately counting stable roommate assignments journal computer system sciences choo siow marries journal political economy cygan fomin kowalik lokshtanov marx pilipczuk pilipczuk saurabh parameterized algorithms springer international publishing downey fellows fundamentals parameterized complexity springer london dutta stability matchings individuals preferences colleagues journal economic theory echenique lee shum yenmez revealed preference theory stable extremal stable matchings econometrica flum grohe parameterized complexity theory springer frank tardos application simultaneous diophantine approximation combinatorial optimization combinatorica gale shapley college admissions stability marriage american mathematical monthly irving stable marriage problem structure algorithms mit press hamada iwama miyazaki improved approximation lower bound almost stable maximum matchings information processing letters irving stable problem technical report university glasgow department computing science irving manlove stable roommates problem ties journal algorithms irving manlove scott stable marriage problem master preference lists discrete applied mathematics kannan minkowskis convex body theorem integer programming mathematics operations research klaus klijn nakamura corrigendum stable matchings preferences couples journal economic theory kojima pathak roth matching couples stability incentives large matching markets 
quarterly journal economics lenstra integer programming number variables mathematics operations research lokshtanov parameterized integer quadratic programming variables manlove algorithmics matching preferences world manlove irving iwama miyazaki morita hard variants stable marriage technical report university glasgow school computing science manlove irving iwama miyazaki morita hard variants stable marriage theoretical computer science full version available marx schlotter parameterized complexity local search approaches stable marriage problem ties algorithmica marx schlotter stable assignment couples parameterized complexity local search discrete optimization mcdermid manlove keeping partners together algorithmic results hospitals residents problem couples journal combinatorial optimization malley algorithmic aspects stable matching problems phd thesis university glasgow department computing science ronn stable matching problems journal algorithms roth evolution labor market medical interns residents case study game theory journal political economy shrot aumann kraus agent types coalition formation problems proceedings international conference autonomous agents multiagent systems aamas pages
Control Synthesis for Multi-Agent Systems under Metric Interval Temporal Logic Specifications

Sofie Andersson, Alexandros Nikou, Dimos V. Dimarogonas

ACCESS Linnaeus Center, School of Electrical Engineering and KTH Center for Autonomous Systems, KTH Royal Institute of Technology, Stockholm, Sweden (sofa, anikou, dimos)

Abstract: This paper presents a framework for the automatic synthesis of a control sequence for multi-agent systems governed by continuous linear dynamics under timed constraints. First, the motion of the agents in the workspace is abstracted into individual transition systems. Second, each agent is assigned an individual formula given in Metric Interval Temporal Logic (MITL) and, in parallel, the team of agents is assigned a collaborative team formula. The proposed method is based on a correct-by-construction control synthesis method, and hence guarantees that the resulting system will satisfy the specifications, which capture timed properties. Extended simulations are performed in order to demonstrate the efficiency of the proposed controllers.

Keywords: reachability analysis; verification and abstraction of hybrid systems; multi-agent systems; control design for hybrid systems; modelling and control of hybrid and discrete-event systems; temporal logic.

1. Introduction

Multi-agent systems are composed of a number of agents that interact with their environment. Cooperative control of multi-agent systems allows the agents to collaborate on tasks and to plan more efficiently. In this paper the former is considered, regarding collaborative team specifications which require more than one agent to satisfy a property at the same time. The aim is to construct a framework which starts from an environment and a set of tasks — local ones, specific to an individual agent, and global ones, requiring collaboration between multiple agents — and yields a system that achieves satisfaction of all specifications. For control synthesis, a specification language must be introduced to express the tasks; Linear Temporal Logic (LTL) is widely used (see e.g. Loizou and Kyriakopoulos). The general framework used here is based on the three-step procedure of Kloetzer and Belta: first, the agent dynamics are abstracted into a transition system; second, a discrete plan that meets the high-level task is synthesized; third, the plan is translated into a sequence of continuous controllers for the original system. Control synthesis for multi-agent systems under LTL specifications has been addressed by Kloetzer et al., Guo and Dimarogonas, and Kantaros and Zavlanos. This work was supported by the ERC Starting Grant BUCOPHSYS, the Swedish Research Council, and the Swedish Foundation for Strategic Research
ssf knut och alice wallenberg foundation fact interested imposing timed constraints system aforementioned works directly utilized timed constraints introduced single agent case gol belta raman topcu zhou case karaman frazzoli nikou authors karaman frazzoli addressed vehicle routing problem metric temporal logic mtl specifications corresponding approach rely verification based construction linear inequalities solution resulting linear programming milp problem previous work nikou proposed automatic framework multiagent systems agent satisfies individual formula team agents one global formula approach solution suggested paper follows similar principles nikou however start continuous linear system rather assuming abstraction adding way abstract environment suitable manner transition time taken explicitly account suggested abstraction based work presented gol belta considered time bounds facet reachability single agent system consider systems suggest alternative time estimation provide proof validity furthermore present alternative definitions local bwts product bwts global bwts compared work presented nikou definitions suggested requires smaller number states hence lower computational demand drawback suggested definitions increased risk false negative result required modification applied however effect fact method method entirety implemented simulations demonstrating satisfaction specifications resulting controller positive weight assignment map expression used express weight assigned transition definition timed run wts infinite sequence contribution paper summarized four parts extends method suggested nikou ability define environment directly continuous linear system rather treating abstraction given provides less computationally demanding alternative simulation results support claims included considers linear dynamics contrast already investigated nikou single integrator definition timed word produced timed run infinite sequence pairs timed run definition syntax mitl 
set atomic propositions defined grammar formulas operators negation conjunction respectively extended operators eventually always defined paper structured follows section introduces preliminaries notations applied throughout paper section defines considered problem section presents main result namely solution framework finally simulation result presented section illustrating framework applied simple example conclusions made section preliminaries notation section mathematical notation preliminary definitions formal methods required paper introduced given set denote cardinality set subsets respectively let matrix vector respectively denote element row column matrix similarly denote element vector given set nonnegative rational numbers time sequence defined definition alur dill time sequence infinite sequence time values satisfies following atomic proposition statement system variables either true false definition weighted transition system wts tuple set states set initial states set inputs transition map expression used express transition action set observations atomic propositions observation map given timed run wts semantics satisfaction relation defined definition clock constraint conjunctive formula form clock constant let denote set clock constraints tba first introduced alur dill defined definition timed automaton tba tuple finite set locations set initial locations finite set clocks map labelling state clock constraints set transitions set accepting locations finite set atomic propositions labelling function labelling every state subset atomic propositions state pair location clock valuation satisfies clock constraint initial state pair vector number valuations semantics examples tba definition refer reader nikou shown previous work alur mitl formula algorithmically translated tba language satisfies mitl formula also language produces accepting runs tba tba expresses possibilities satisfaction violation mitl formula timed runs result satisfaction mitl formula 
called accepting definition accepting run run infinitely many run consists infinitely many accepting states movement agent described timed run case movement agents collectively described collective run definition definition nikou collective timed run agents defined follows argmin otherwise problem definition since taking account counterexample nikou section iii following holds proposed solution solution approach involves following steps agent abstract linear system wts describes possible movements agent considering dynamics limitations state space section agent construct local bwts wts tba representing local mitl specification accepting timed runs local bwts satisfy local specification section next construct product bwts local bwtss accepting timed runs product bwts satisfy local specifications section next construct global bwts product bwts tba representing global mitl specification accepting runs global bwts satisfy global specification local specifications section finally determine control input applying algorithm find accepting run global bwts projecting accepting run onto individual wtss section computational complexity proposed approach discussed section system model consider agents performing bounded workspace governed dynamics set containing label agent problem statement problem considered paper consists synthesizing control input sequence agent satisfies local individual mitl formula set atomic propositions apm time team agents satisfy team specification mitl formula set atomic propositions apg apm following terminology presented section problem becomes problem synthesize sequence individual timed runs following holds collective run defined remark initially might seem run satisfies conjunction local formulas found problem solved straightforward centralized way hold constructing wts section consider abstraction environment wts definition wts given section abstraction performed agent resulting number wtss following idea gol belta begin dividing state space 
rectangles defined definition definition rectangle characterized two vectors rectangle given formula satisfied rectangle atomic proposition set apm either true points within rectangle false points within rectangle api api apm set states wts defined set rectangles definition initial state transitions labelling follows directly iff common edge api apm rue set given set control inputs induce transitions particular control input must defined possible transition guarantees transition transition allowed occur edge transition goes must reachable conditions control inputs required ensure synthesized path followed guarantee following time estimation holds suggested controller transition direction based gol belta given max ekl ekl min akj max bkj bkj akj umax bound robustness parameter idea maximize transition speed conditions speed direction negative edge norm direction transition direction finally weights assigned maximum transition times times given according theorem theorem depends assumption matrices dimension respectively assumption corresponds affine theorem maximum time max required transition occur share edge ekl ekl edge located opposite ekl direction transition assuming ekl reachable points within defined max min min akj bkj ekl ekl note ith coordinate initial final positions transition see figure illustration variables theorem dimensions proof theorem max maximum transition time fig illustration variables theorem dimensions system following linear dynamics determined considering minimum transition speed consider dynamics agent projected onto direction transition ith coordinate point edge ekl ith coordinate point edge ekl since system rewritten introducing maximum transition time determined solving equation solved separating constant since constant holds otherwise maximum transition time overestimated considering minimum transition speed min every point determined considering limits namely akj bkj akj min bkj maximum transition time denoted max overestimated 
solution max min min solved yields max yields max remark max maximal time required transition occur otherwise max finally weights wts defined max constructing product bwts product bwts constructed local bwtss definition given follows definition given local bwtss tnp defined definition product bwts tnp qinit qinit apn constructing local bwts next local bwts constructed wts tba representing local mitl specification agent stated section mitl formula represented tba alur approaches translation suggested maler brihaye piterman note considered mitl formulas must form due overapproximation time abstraction local bwts defined definition given weighted transition system timed automaton sinit local bwts defined qinit qinit sinit iff follows construction ltl model checking theory baier katoen possible runs local bwts correspond possible run wts furthermore accepting states local bwts corresponds accepting states tba formalized lemma lemma accepting timed run rkt local bwts projects onto timed run wts produces timed word accepted tba also timed run produces accepting timed word tba accepting timed run local bwts qinit apg defined qinit qinit qinit iff dmax max dpi apg apk cimi cik follows construction accepting collective run product bwts corresponds accepting runs local bwts formally lemma accepting collective run product bwts projects onto accepting timed run rkt local bwts moreover exists accepting timed run every local bwts exists accepting collective run remark note definition allow agents start transitions different times causes overestimation required time increases risk false negative result alternative definition allows mentioned behaviour suggested nikou however definition suggested requires less number states hence less computational time constructing global bwts finally global bwts constructed product bwts tba representing global mitl specification definition given product bwts qinit apg init global tba global bwts apg defined zmg init qinit consists sets first set 
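As a numerical illustration of the worst-case transition-time estimate in the theorem above: projecting the dynamics onto the transition direction gives a scalar system x' = ax + b, whose exact crossing time between two facet coordinates has the closed form below. The sketch overestimates the transition time by maximising over sampled entry coordinates and candidate (a, b) pairs — a rough stand-in for the theorem's closed-form minimisation of a·x + b over the rectangle — and all names are hypothetical:

```python
import math

def transition_time(a, b, x0, x1):
    """Exact time for the scalar projected dynamics  x' = a*x + b  to
    move from entry coordinate x0 to exit coordinate x1, assuming the
    speed a*x + b stays positive along the way (the exit facet is
    reachable)."""
    if a == 0:
        return (x1 - x0) / b
    return math.log((a * x1 + b) / (a * x0 + b)) / a

def worst_case_time(candidates, entry_coords, x1):
    """Overestimate the transition time by maximising over sampled
    entry coordinates and candidate (a, b) pairs, i.e. by assuming the
    minimal guaranteed transition speed."""
    return max(transition_time(a, b, x0, x1)
               for (a, b) in candidates for x0 in entry_coords)
```

The overestimates obtained this way are used as the weights of the WTS, so a synthesized timed run is guaranteed to be realisable by the continuous controllers within the stated deadlines.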
contains ones remaining sets contains ones iff zki zki cik zki zki otherwise otherwise otherwise otherwise follows construction accepting run global bwts corresponds accepting run product bwts well accepting run tba representing global specification formally lemma accepting timed run global bwts projects onto accepting collective run product bwts produces timed word accepted tba representing global specification also exists accepting collective run produces timed word accepted tba accepting timed run global bwts control synthesis controller designed applying modified algorithm modified dijkstra find accepting run global product modification algorithm includes clock valuation considering transition sketch modification given algorithm idea calculate clock valuation clock given predecessors current state valuation satisfy clock constraint transition valid algorithm complete accepting run projected onto wtss following lemma lemma lemma finally set controllers given sequences control inputs induces timed runs turn produce accepted timed words local tbas well global tba algorithm modification evaluate clocks result clock valuation number clocks state successor red red red red isempty break end end end end transition illegal add successor red end complexity framework proposed paper requires number states method suggested nikou requires cmax cmax number states possible clock values integers set cmax cmax local global tba respectively hence number states required proposed framework factor cmax cmax less room room corridor room room table worst case estimation transition times well actual required time actual transition times defined maximum times individual agents require complete transition transitions require agent stay place hence actual time defined time agent requires complete transition numbered order transitions see figure room room position agent agent worst case time estimation actual time fig draft problem described section fig partition constructed matlab script 
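The modification of Dijkstra's algorithm sketched above — advance the clock valuations along each transition using the predecessor's valuation, and declare a transition illegal when the resulting valuation violates its clock constraint — can be illustrated as follows. This is a simplified sketch (clocks only advance and are never reset, whereas the TBA edges of the construction also reset clocks), and all identifiers are hypothetical:

```python
import heapq

def timed_dijkstra(graph, guards, source, accepting, n_clocks):
    """Shortest accepting run in a weighted transition system whose
    edges carry a duration and a clock guard.

    graph[u]       -- list of (v, duration) successors of u
    guards[(u,v)]  -- predicate on the clock valuation tuple; a missing
                      entry means the transition is unguarded
    Search states are pairs (node, clock valuation)."""
    start = (source, (0.0,) * n_clocks)
    heap = [(0.0, start)]
    best = {start: 0.0}
    while heap:
        t, (u, clocks) = heapq.heappop(heap)
        if u in accepting:
            return t
        if t > best.get((u, clocks), float('inf')):
            continue
        for v, d in graph.get(u, []):
            new_clocks = tuple(c + d for c in clocks)
            # transition illegal: clock constraint violated
            if not guards.get((u, v), lambda cl: True)(new_clocks):
                continue
            state = (v, new_clocks)
            if t + d < best.get(state, float('inf')):
                best[state] = t + d
                heapq.heappush(heap, (t + d, state))
    return None
```

On the global BWTS, the returned accepting run is then projected onto the individual WTSs to recover each agent's control sequence.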
circles represents initial states agent simulation result consider agents dynamics form evolving bounded workspace consisting rooms hallway seen figure agent assigned local mitl formula eventually within time units agent must room agent enters room must enter room within time furthermore assigned global mitl formula eventually within time units agent must room agent must room initial positions agent indicated encircled figure remark seen figure walls added environment transitions forbidden handled abstraction since edges walls placed reachable suggested environment abstracted wts states see figure local mitl formula represented tba states results local bwts states notable local bwtss agent identical dynamics identical furthermore problem hand considers local mitl formulas global tasks considered five step procedure described earlier stop case control design performed based accepting runs local bwts since global task considered case product bwts global bwts must constructed product bwts consist states global bwts consist states matlab used simulate problem constructing transition systems applying modified dijkstra algorithm find accepting path well control sequence satisfies specifications projection found accepting run onto wts yielded respective agent result visualized figure shows evolution system given initial positions figure constructed implementing function determined system state initial position equal last position former transition switching controllers performed based position agent namely switching controller uij ujk performed agent entered far enough state far enough defined iterations upon exiting previous state estimated time distances joined transition given table worst case transition times yields agent agent begins respective initial position corridor agent enters room within time units start agent enters room within time units start agent agent enters room within time units respectively entering room agent room agent room within time units start 
clear given path satisfy mitl formulas agent agent fig illustration paths agent example numbers represent end joined transition actual arrival time location well time agent required wait till worst case transition time reached guaranteed agents transitioned noted right figure time agent wait till corresponds worst case estimation required transition time due requirement agents make transitions simultaneously notable agents finish transitions less time worst case estimation hence waiting time cut allowing agents communicate transition done simulation presented section run matlab laptop core ghz processor runtime approximately conclusions future work framework synthesize controller system following continuous linear dynamics local mitl formulas well global mitl formula satisfied presented method supported result simulations matlab environment future work includes communication constraints agents references alur dill theory timed automata theoretical computer science alur feder henzinger benefits relaxing punctuality journal acm jacm baier katoen principles model checking mit press brihaye geeraerts mitl alternating timed automata formal modeling analysis timed systems springer topcu computational methods stochastic control metric interval temporal logic specifications corr gol belta temporal logic control systems nonlinear analysis hybrid systems guo dimarogonas plan reconfiguration local ltl specifications international journal robotics research kantaros zavlanos distributed ltlbased approach intermittent communication mobile robot networks american control conference acc karaman frazzoli vehicle routing problem metric temporal logic specifications ieee conference decision control cdc kloetzer ding belta multirobot deployment ltl specifications reduced communication ieee conference decision control cdc kloetzer belta fully automated framework control linear systems temporal logic specifications automatic control ieee transactions fainekos pappas waldo sensor based 
temporal logic motion planning mag loizou kyriakopoulos automatic synthesis motion tasks based ltl specifications ieee conference decision control cdc maler nickovic pnueli mitl timed automata formal modeling analysis timed systems springer piterman mtl deterministic timed automata springer nikou boskos tumova dimarogonas cooperative planning coupled multiagent systems timed temporal specifications http nikou tumova dimarogonas cooperative task planning systems timed temporal specifications raman sadigh murray seshia reactive synthesis signal temporal logic specifications international conference hybrid systems computation control hscc zhou maity baras timed automata approach motion planning using metric interval temporal logic european control conference ecc
theory aug eduardo blanco abstract paper define new algebraic object show main properties disguisedgroups consequence see coincide regular semigroups prove many results theory groups adapted case unknown results theory groups regular semigroups introduction paper defined developed new algebraic structure new object lives semigroups groups called like many traditional properties groups turn groups new algebraic structure definition coincide monoid fact never monoid see later simultaneously monoid fact group similar known algebraic structure disguisedgroups regular semigroups see one deduce equality algebraic objects definition disguisedgroups sense groupoids binary operation done every couple elements topological point view use theory groups extended algebraic topology use groupoids one reasons groupoids complicated algebraic structures difficult obtain results see point view propose like good tool attack problems algebraic topology shown information theory groups suggest references definition let set binary operation element identity say element left identity say element right identity element inverse element identity eduardo blanco definition let set binary operation call next four conditions hold closed operation every associative operation every every exist right identity left identity every element least one inverse every inverse element right identity left identity looking definition one realize interesting things first one difference group taking one element disguisedgroup identities right left different among identities elements definition one deduces category later describe morphisms include category groups first one empty natural question one find group paper going define seem group though fact second interesting thing one realize definition relation monoids monoid unique identity right left identity unique elements monoid fact refer monoid every element unique right identity unique left identity equal monoid group similar known algebraic structure regular 
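In standard notation, the four defining conditions of a disguised-group stated above can be written as follows (a reconstruction consistent with the surrounding text; the phrasing of the fourth condition via the regularity identity is an assumption, matching the claimed coincidence with regular semigroups):

```latex
% Reconstruction of the definition of a disguised-group.
\begin{definition}
Let $G$ be a set and $*$ a binary operation on $G$. We say that
$(G,*)$ is a \emph{disguised-group} if:
\begin{enumerate}
  \item $G$ is closed under the operation: $g_1 * g_2 \in G$ for all
        $g_1, g_2 \in G$;
  \item the operation is associative: $(g_1 * g_2) * g_3 =
        g_1 * (g_2 * g_3)$ for all $g_1, g_2, g_3 \in G$;
  \item every $g \in G$ has a right identity $\operatorname{id}_r(g)$
        with $g * \operatorname{id}_r(g) = g$ and a left identity
        $\operatorname{id}_l(g)$ with $\operatorname{id}_l(g) * g = g$;
  \item every $g \in G$ has at least one inverse $g^{-1}$, i.e.\ an
        element such that $g * g^{-1}$ is a left identity for $g$ and
        $g^{-1} * g$ is a right identity for $g$ (equivalently,
        $g * g^{-1} * g = g$).
\end{enumerate}
\end{definition}
```

Unlike in a group, the identities in conditions (3) and (4) are attached to individual elements and may differ from element to element, which is exactly the extra freedom that regular semigroups enjoy.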
Definition. Let G be a set with a binary operation *. We call G a disguised-group if the next four conditions hold: (a) G is closed under the operation; (b) the operation is associative; (c) for every element of G there exist a right identity and a left identity; (d) every element has at least one inverse, and for every inverse h of g, g * h is a right identity for g and h * g is a left identity for g.

Looking at the definition one can realize some interesting things. The first one is the difference from a group: taking one element of a disguised-group, its identities, right and left, can be different among themselves and different from the identities of other elements. From the definition one deduces that the category of disguised-groups (later we describe its morphisms) includes the category of groups, and since the first one is not empty, a natural question is whether one can find a disguised-group that is not a group; in this paper we are going to define some that seem not to be groups, though this is subtler than it appears. The second interesting thing one can realize from the definition is the relation with monoids. A monoid has a unique identity, right and left, which is the same for all its elements; a disguised-group is not in general a monoid, and in fact if in a disguised-group every element has a unique right identity and a unique left identity and they are all equal, then the disguised-group is a monoid and, moreover, a group. The most similar known algebraic structure is that of regular semigroups; from the definition one can deduce that disguised-groups are exactly regular semigroups. The difference lies in the definitions: in the definition of regular semigroups the identities of our definition do not appear explicitly, while here the conditions tie the identities to each element.

Let ID(G) denote the set of identities of G, and for g in G let ID_R(g) denote its set of right identities and ID_L(g) its set of left identities. We are going to prove a lemma that helps to state the important inverse identity relations.

Remark. Looking at the definition from the point of view of an inverse element: the theory requires, for every element, that every inverse work on both sides; to prove the inverse statement we use the following lemma.

Lemma (inverse identity relations). Let G be a disguised-group, g in G and h an inverse of g. Then every right identity of g is a left identity of h, and every left identity of g is a right identity of h (the proof of the second relation is analogous to the first). Proof sketch: suppose the relation fails; using the associative property of the definition one obtains, from the choice of identities at the beginning of the proof, a clear contradiction. By this last lemma, the inverse identity relations relate the identities of an element and of its inverses.

Definition. Let g be an element of G. We define the order of g as the minimum positive integer n, if it exists, such that g^n is an identity; otherwise we say that g has infinite order. A disguised-group is said to be cyclic if it is generated by a single element, and abelian if every two of its elements commute.

3. First results. The results of this section are known in the literature on regular semigroups (see Howie's book), excepting one proposition which seems to be new. The next proposition proves properties that arise from the definition. The most surprising one is the last, the cancellative property. We say surprising because in most algebraic structures (see, e.g., the Grothendieck group construction) the cancellative property is imposed to improve the algebraic object; in the case of disguised-groups, if the cancellative property holds, the disguised-group turns into a group.
The inverse identity relations lead to a simpler test: to prove that h is the inverse of g it is not necessary to check both hypotheses, since in each case the other follows, the inverse is unique, and the equality holds. It is also important to note that for every element of finite order, the identities and the order are related as below.

Proposition (basic properties). Let G be a disguised-group. The following properties hold: (a) for every g in G, the right identity obtained from an inverse (and analogously the left one) is unique; moreover, if there exists an element that is an identity for all of G, then G is a group, and in particular such an identity is a two-sided identity and every inverse is unique; (b) for every g the inverse is unique, the inverse of the inverse of g is g itself, and the definition of powers extends to every negative integer via the inverse; (c) for every element g of finite order, the right and left identities of g are equal, the inverse of g is a power of g, and g has a unique right identity and a unique left identity; (d) if G is cancellative then G is a group; the converse is obviously true because every group is cancellative.

Proof sketch. (a) By the inverse identity relations, g * h is a right identity for g for any inverse h; the analogous argument gives the left case. If some element were an identity for the whole of G it would be unique, and G would be a group. (b) Let h and h' be two inverses of g; without loss of generality, using the inverse identity relations and associativity, h = h * (g * h') = (h * g) * h' = h'. By induction (suppose the statement true for natural numbers lower than n and use the associative property and the induction hypothesis) the extension to powers follows. (c) For an element of finite order, some power g^n is an identity; using the equalities of (a) and (b) one concludes that the inverse of g is a power of g and that the right and left identities coincide and are unique. (d) Suppose G is cancellative. Take g with unique right identity; for every a, using the inverse identity relations and cancellation, the right identity of g is a right identity for a; like in the last deduction, the same is valid for the left identity, and one concludes G is a group. One proves the first cancellation case; the second one is analogous. Alternatively, suppose that for every triad the set g * G stays invariant under the left action of every element, so for every a there exists b with g * b = a; fixing g and using the earlier parts of the proposition, one concludes G is a group.

Remark. By this proposition the next identifications hold: disguised-groups are the regular semigroups, and disguised-groups with unique inverses are the inverse semigroups (regular semigroups need not have unique inverses; inverse semigroups do). Looking at the references for the definitions of regular and inverse semigroups, we maintain the name disguised-group for the sake of clearness, on account of the equality proved in this remark, which comes directly from the definition. Till the end we use the following notation: for g in G, let id_R(g) and id_L(g) denote its right and left identities respectively, unique by the proposition above, as is the inverse.

One important difference compared with groups is that the set of identities ID(G) is not necessarily closed under the operation. This fact generates a lot of problems if one wants to prove results as in the theory of groups; however, some concepts and results of group theory can be obtained without big effort, while other traditional results need several surprising theorems. Let us define the concept of disguised-subgroup.

Definition. Let Q be a subset of G. If the following three conditions hold, Q is a disguised-subgroup of G: (a) Q is closed under the operation; (b) for every q in Q, id_R(q) and id_L(q) belong to Q; (c) for every q in Q, its inverse belongs to Q. If, moreover, the identity in Q is unique, we say Q is a subgroup of G. It is not necessary to demand the associative property, because it holds in all of G; condition (b) has no analogue for groups, where every group has among its subsets the trivial subgroups. This is not true in general for disguised-groups: for subsets of one element such as {id_R(g)} or {id_L(g)} it is not always possible to prove that the operation is closed on the subset of identities.

Let us see some examples of disguised-subgroups. Let G be a disguised-group; it is easy to see, using the properties of the proposition above and the inverse identity relations, that if there exists at least one element g with id_R(g) = id_L(g), then the set of its powers together with that identity is a subgroup of G. Another example is the following one: let g be an
element of finite order and let Q be the set of its powers. Let us see that Q is a disguised-subgroup. We prove first condition (a) of the definition: by the basic-properties proposition, the right and left identities of every power of g are equal; using the equalities of that proposition the operation is closed on Q, since by associativity the product of two powers is again a power; moreover, we have proved that the identity elements of the set belong to it. To prove the last condition, let g be of finite order; using the inverse identity relations, the inverse of every power is again a power of g, so Q is a disguised-subgroup.

The next proposition gives a criterion with a shorter definition for a subset to be a disguised-subgroup.

Proposition. Let Q be a subset of G. Then Q is a disguised-subgroup of G if and only if q * r^{-1} belongs to Q for every q, r in Q.

Proof sketch. Suppose Q is a disguised-subgroup: the condition follows from conditions (a) and (c) of the definition. Conversely, suppose the condition holds. Taking q = r gives id_L(q) in Q for every q; using the inverse identity relations, id_R(q) belongs to Q likewise, and then, using the basic-properties proposition, the inverse of every element of Q belongs to Q; finally, closure under the operation is proven remembering that proposition and the earlier remark.

Remark. This remark aims to show an important fact that we are going to use frequently later: properties and results of groups can be extended here with a little bit of work. Let us prove an elementary fact that is obvious for groups but not obvious here, though quite easy to see: the invariance of identities under the cosets defined below. Let q belong to a disguised-subgroup Q and let g be in G; we prove the first of the two equalities (the second one is analogous). One inclusion is obvious; for the other, suppose the equality failed: operating on the right, using associativity and the basic-properties proposition, this is clearly false, and we conclude the invariance.

The next proposition is a specific result of the theory of disguised-groups that does not exist in the theory of groups; the main idea is due to the multiplicity of identities. Despite that multiplicity of identities, let us define, for a disguised-subgroup Q of G and g in G, the subsets gQ = { g * q : q in Q, id_R(g) = id_L(q) } and, analogously, Qg.

Proposition. With the definitions above, the following statements hold: (a) two subsets of the form gQ either coincide or are disjoint; (b) the inverse carries the subsets gQ onto the corresponding subsets Qg^{-1}; (c) the subsets gQ cover G.

Proof sketch. (a) Suppose two such subsets intersect; let us prove they are equal. By the criterion proposition, if g * q = h * r then h^{-1} * g belongs to Q, and closure under the operation together with the basic-properties proposition gives the equality of the two subsets; the identities used belong to Q by definition, thanks to the inverse identity relations. (b) Let us prove that distinct elements have distinct images: if two elements had exactly the same image, then, using the basic-properties proposition and the definition, none of the elements involved would be identities, and as identities of an element are unique, the two subsets would coincide, contradicting (a) in view of the invariance of identities. (c) Every g belongs to the subset it defines, by the identities of the elements of Q and the definition.
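Since the disguised-group axioms are finitely checkable, small candidate structures can be tested mechanically. The sketch below (the checker function and the left-zero example are illustrative additions, not part of the paper) brute-forces the four conditions on a finite set; the two-element left-zero semigroup x * y = x is a regular semigroup, hence a disguised-group, that is not a group:

```python
from itertools import product

def is_disguised_group(elems, op):
    """Brute-force check of the four disguised-group axioms on a finite set."""
    elems = set(elems)
    # (a) closure under the operation
    if any(op(a, b) not in elems for a, b in product(elems, repeat=2)):
        return False
    # (b) associativity
    if any(op(op(a, b), c) != op(a, op(b, c))
           for a, b, c in product(elems, repeat=3)):
        return False
    for g in elems:
        left_ids = [e for e in elems if op(e, g) == g]
        right_ids = [e for e in elems if op(g, e) == g]
        # (c) at least one left and one right identity for g
        if not left_ids or not right_ids:
            return False
        # (d) an inverse h such that g*h is a right identity for g
        #     and h*g is a left identity for g
        if not any(op(g, h) in right_ids and op(h, g) in left_ids
                   for h in elems):
            return False
    return True

left_zero = lambda a, b: a   # x*y = x: regular semigroup, not a group
print(is_disguised_group({0, 1}, left_zero))   # → True
```

In the left-zero example each element is its own left identity and unique inverse, while every element is a right identity for every other, illustrating the multiplicity of identities discussed above.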
4. Normal disguised-subgroups. The concepts and results exposed up to this moment are not new with respect to the theory of regular semigroups. From this moment on we are going to adapt a useful concept from the theory of groups that is going to be important for the theory of disguised-groups.

Definition. A disguised-subgroup Q of G is normal if gQ = Qg for every g in G. It is not possible, in general, to find normal disguised-subgroups, and the trivial examples from group theory do not work here, due to the multiplicity of identities; however, the existence of a normal disguised-subgroup produces strong effects, as we will see. We are going to prove a criterion that gives a shorter and more useful characterization of a normal disguised-subgroup.

Lemma. Let Q be a normal disguised-subgroup of G and let g be in G. Suppose, without loss of generality by the inverse identity relations, that there exists q in Q with g * q = r * g for some r. Then the corresponding identities agree. Proof sketch: the element on the right-hand side has a right identity and, by the basic-properties proposition, so does the element on the left-hand side; as both sides are equal and the right identity of every element is unique by that proposition, the two right identities are equal.

Proposition. Let Q be a disguised-subgroup of G. The following conditions are equivalent: (a) Q is normal; (b) the conjugates of Q lie in Q; (c) for every g in G and q in Q there exists r in Q with g * q = r * g; (d) for every g in G and q in Q there exists r in Q with q * g = g * r.

Proof sketch. We prove the cycle of implications (a) implies (b) implies (c) implies (d) implies (a). Suppose Q normal; by the lemma, for every g and every q the needed identities id_R and id_L exist, which gives (b). Suppose (b) and let us prove (c): if the required element did not exist, then some identity would, by the basic-properties proposition, be simultaneously a left and right identity of the disguised-subgroup, a contradiction with the hypotheses; in the remaining case, by associativity, there exists an element with the required right identity, and by the invariance of identities, taking that identity, the claimed element of Q exists. Supposing (c), for every g and q, applying the hypothesis to the inverse yields the element required in (d); applying the hypothesis once more, with the corresponding left identities, (a) is proven.

The next theorem explains the good name given to the new algebraic object of this theory.

Theorem. Let G be a disguised-group. (a) If there exists an element of G that commutes with every element, then G is a group; particularly, every abelian disguised-group is a group. (b) Every cyclic disguised-group is a group. (c) If G contains a normal disguised-subgroup, then G is a group.

Proof sketch. (a) Suppose there exists h that commutes with every element. It is only necessary to prove that there is a unique identity. The element h commutes, in particular, with its inverse, so its right and left identities are equal. Let g be any element, with right and left identities id_R(g) and id_L(g); by the basic-properties proposition, the commuting element forces each element's right identity to be a left identity and conversely, so for every element of the disguised-group the right and left identities coincide and are unique by that proposition, and they all agree with the identity of h; as proved before, G is then a group. (b) Suppose G is cyclic, generated by g: for every a there exists n with a = g^n; writing the identities of the powers and using the basic-properties proposition, we conclude that every right identity is a left identity, and remembering that proposition, G is a group. (c) Suppose G contains a normal disguised-subgroup Q. By condition (c) of the equivalence proposition, for every g and every q in Q there exists r in Q with g * q = r * g; fixing suitable elements and arguing as in the last case, one concludes that the right and left identities of every element agree, and using the propositions above, G is a group. After this theorem one realizes clearly the reason for the name disguised-group: a surprising theorem.

The next definitions are crucial for our purposes later. Definition: let Q be a disguised-subgroup of G; for g in G consider the cosets gQ defined before.
Define the operation (gQ) * (hQ) = (g * h)Q, and define the quotient G/Q as the set that contains the subsets of this kind with this binary operation. The next surprising theorem we are going to call the fundamental theorem of disguised-groups.

Theorem (fundamental theorem of disguised-groups). Let Q be a disguised-subgroup of G for which the quotient is defined. Then G/Q is a group.

Proof sketch. From the definition it is obvious that the operation is closed and associative. Let us prove that there is a unique identity: define the coset of an identity and, by the earlier remark on the invariance of identities, take any coset gQ; using that remark twice, and by the same reasoning with the identities, the identity coset is an identity for every coset, and we conclude there is a unique identity element. The last step that requires proof is that every coset has an inverse: taking the coset of the inverse, it is enough to see that their product is the identity coset.

Corollary. Let Q be a disguised-subgroup of G that contains the needed identities; then G/Q is a group. Proof: Q contains, particularly, a subgroup; the fact follows by applying the fundamental theorem.

Corollary. Let Q be normal in G and consider the following relation: g is related to h whenever g * h^{-1} belongs to Q; in that case we say g and h are congruent modulo Q. Let the quotient be the set of the classes of this relation, with the binary operation defined as above; then it is a group. Proof: if Q is normal, then G/Q is a group by the fundamental theorem; by traditional group theory, knowing that G/Q is a group, it is an exercise to prove that the relation is an equivalence relation and that the classes form the same group. Indeed, let us prove that the classes are the cosets: if g is congruent to h then, as Q is normal, there exists q with g = q * h, so gQ = hQ; conversely, if gQ = hQ then there exists q with g * q = h, and by normality g * h^{-1} belongs to Q.

5. Disguised-homomorphisms and the isomorphy theorems. Up to this moment we have developed and proved the basic properties and results of disguised-groups. We are now going to define the morphisms; one could consider a category different from the natural one to associate to disguised-groups, but we are going to consider the category whose morphisms are those of groups. In fact this happens for a good reason: the isomorphy theorems proved below for this category would not be provable considering more general morphisms. The main reason is that the binary operation need not be closed on the set of identities.

Definition. Let G be a disguised-group and H a group. A map from G to H respecting the operation is said to be a disguised-homomorphism; it is said to be a disguised-monomorphism if injective, a disguised-epimorphism if surjective, and two disguised-groups are said to be isomorphic if there exists a bijective disguised-homomorphism between a disguised-group and a group related to them.

Proposition. Let G be a disguised-group, H a group with identity e, and t a disguised-homomorphism from G to H. Then t maps every identity of G to e, and t(g^{-1}) = t(g)^{-1} for every g.

Proof. Let g be in G; using the definition, t(g) = t(g * id_R(g)) = t(g) * t(id_R(g)); using the cancellative property of the group H we conclude t(id_R(g)) = e. This proceeding can be done for every right identity, and by the inverse identity relations and an analogous deduction for the left identities, every identity of G is mapped to e. Operating with the inverse, which is unique: e = t(g * g^{-1}) = t(g) * t(g^{-1}), so t(g^{-1}) = t(g)^{-1}. This also shows that every disguised-homomorphism onto a group is compatible with the quotient construction.
Definition. Let G be a disguised-group, H a group with identity e, and t a disguised-homomorphism. Define the kernel as the set ker(t) of elements mapped to e, and the image im(t) as the set of values of t.

Proposition. Let t be a disguised-homomorphism from the disguised-group G to the group H with identity e. Then ker(t) is a normal disguised-subgroup of G. Proof: let us prove first that ker(t) is a disguised-subgroup. Take g, h in ker(t); using the proposition above, t(g * h^{-1}) = e, so g * h^{-1} belongs to ker(t), and by the criterion for disguised-subgroups we are done. Let us see that ker(t) is normal: take q in ker(t) and g in G; using that t is a disguised-homomorphism and the proposition above, t(g * q * g^{-1}) = t(g) * e * t(g)^{-1} = e, so the conjugates of ker(t) lie in ker(t); remembering the equivalence proposition for normality, the proof is finished.

Corollary. Let G be a disguised-group, H a group with identity e, and t a disguised-homomorphism; then there exists the group G/ker(t). Proof: it is only necessary to use the propositions above.

Till the end of this section we are going to declare results concerning disguised-homomorphisms without proof as they appear, on the condition that the quotients involved exist; using the last corollary, every such result turns into one of the traditional theory of groups (for information about the theory of groups see the great and complete book of Lang).

Proposition. Let G be a disguised-group, H a group with identity e, and t a disguised-homomorphism. Let Q be a disguised-subgroup of G and K a subgroup of H. Then t(Q) is a subgroup of H; the preimage of K is a disguised-subgroup of G; if K is normal then its preimage is a normal disguised-subgroup; and if t is a disguised-epimorphism and Q is normal, then t(Q) is a normal subgroup of H.

The next proposition is obtained as a consequence of the definition of isomorphic disguised-groups. Proposition: let two disguised-groups be isomorphic; then for every disguised-subgroup of one there exists a corresponding one in the other, and the quotient groups are isomorphic.

We are now going to declare the statements called, in the theory of groups, the isomorphy theorems.

Theorem (first isomorphy theorem). Let t be a disguised-homomorphism from the disguised-group G to a group; then G/ker(t) is isomorphic to im(t). Proof: a direct consequence of the corollary above and the first isomorphy theorem of groups.

Let us use the usual notation to express isomorphy in the second isomorphy theorem. Theorem (second isomorphy theorem): let Q and R be disguised-subgroups of G with R normal; then, provided the quotients exist as groups, (Q * R)/R is isomorphic to Q/(Q intersected with R). Proof: a direct consequence of the propositions above and the second isomorphy theorem of groups.

Theorem (third isomorphy theorem). Let Q and R be normal disguised-subgroups of G with R contained in Q; then Q/R is a normal subgroup of G/R and, furthermore, (G/R)/(Q/R) is isomorphic to G/Q. Proof: a direct consequence of the propositions above and the third isomorphy theorem of groups.

References
E. Blanco. Homotopy groups of symmetric products. Preprint.
R. Brown. Topology and Groupoids. BookSurge LLC.
A. H. Clifford and G. B. Preston. The Algebraic Theory of Semigroups, Volume I. AMS, Providence.
J. Dorronsoro. Grupos y anillos.
P. Dubreil. Grupos.
J. M. Howie. Fundamentals of Semigroup Theory. Clarendon Press.
S. Lang. Algebra. Springer.
Coalescing random walks and voting on connected graphs

Colin Cooper, Robert Elsässer, Tomasz Radzik, Hirotaka Ono

December

Abstract. In a coalescing random walk, a set of particles make independent random walks on a graph. Whenever one or more particles meet at a vertex, they unite to form a single particle, which then continues a random walk through the graph. Let G be an undirected, connected graph with n vertices and m edges. The coalescence time, C(n), is the expected time for all n particles to coalesce, when initially one particle is located at each vertex. We study the problem of bounding the coalescence time for general connected graphs and prove that C(n) = O((1/(1-L2)) (log^4 n + n/v)), where L2 is the second eigenvalue of the transition matrix of the random walk. To avoid problems arising from a lack of coalescence on bipartite graphs, we assume the random walk is made lazy.

(Partially supported by the Royal Society International Joint Project grant "Random walks, interacting particles and faster network exploration" and by an EPSRC grant "Random walks on computer networks". A preliminary version of the results of this paper was presented in the Proceedings of PODC. Affiliations: Department of Informatics, King's College London; Department of Computer Science, University of Salzburg, Austria; Department of Economic Engineering, University of Kyushu, Fukuoka, Japan; Department of Informatics, King's College London.)

The required value of v is given as follows: with d(v) the degree of vertex v and d* the average degree, the parameter v is an indicator of the variability of the vertex degrees; v = 1 for regular graphs, and in general v >= 1 for connected graphs. The bound holds for all connected graphs and implies, for example, that for graphs whose expansion is parameterized by the eigenvalue gap, explicit bounds are given for classes of graphs with skewed degree distributions.

In the voter model, initially each vertex has a distinct opinion, and at each step each vertex changes its opinion to that of a random neighbour. Let E be the expected time for voting to be complete, i.e., for a unique opinion to emerge. A system of coalescing particles in which initially one particle is located at each vertex corresponds to the voter model; thus the result stated above also gives general bounds on E.

1. Introduction

In a coalescing random walk, a set of particles make independent discrete-time random walks on an undirected connected graph G. Whenever two or more particles meet at a vertex, they unite to form a single particle, which continues to make a random walk through the graph. Let G have n vertices and m edges. The coalescence time C(n) is the expected time for all particles to coalesce, when initially one particle is located at each vertex of the graph. We study the problem of bounding the coalescence time for general connected graphs.
Given a graph G, we denote by C(n) the coalescence time of the n-particle system. In order to bound C(n) we study the coalescence time of a system of k <= n particles, i.e., the expected time for the k particles to coalesce into a single particle; this depends on the initial positions of the particles. Let C(k, n) be the coalescence time when the k particles start from distinct vertices, in the worst case over starting positions (the maximum of the expected coalescence time). The special case of two particles is naturally referred to as the worst-case expected meeting time of two random walks.

A system of n coalescing particles, in which initially one particle is located at each vertex, corresponds to another classical problem, the voter model, defined as follows. Initially each vertex has a distinct opinion, and at each step each vertex changes its opinion to that of a random neighbour. Let E be the number of steps until voting is completed, i.e., a unique opinion emerges; its expected value is called the expected completion time of voting. This random variable has the same distribution as the coalescence time of n coalescing particles with one particle initially located at each vertex (see the references), and hence the same expected value. Thus any bound on the coalescence time applies equally to the voting time; as the coalescence time is easier to estimate, we focus on that quantity henceforth.

The coalescing random walk is a key ingredient of the mutual exclusion algorithm of Israeli and Jalfon. Initially each vertex emits a token, which makes a random walk; on meeting at a vertex, tokens coalesce. Provided the graph is connected and not bipartite, eventually only one token will remain. The vertex with the token has exclusive access to some resource, and as the token makes a random walk it will, in the long run, visit the vertices in proportion to the stationary distribution.

Previous work on coalescing random walks. We summarize known results for coalescing random walks in the two distinct models of transition times of random walks on finite graphs. In the discrete-time model, the walks make transitions synchronously at integer steps; in the continuous-time model, each walk waits a random time between transitions, independently of the other walks, where each wait time is an independent exponential random variable of rate 1. Let H(u, v) denote the hitting time of vertex v starting from vertex u: this random variable gives the time taken by a random walk starting at u to reach v. Let hmax be the maximum over pairs of vertices of the expected hitting time.

Aldous considers the meeting time of two random walks in the continuous-time model and shows bounds in terms of hmax and the maximum degree. The resulting upper and lower bounds can be far apart: for the star graph with loops, the meeting time is O(1) whereas hmax is of order n. The meeting-time bound implies C(n) = O(hmax log n), since the number of particles halves in O(hmax) expected time. Aldous conjectured that actually C(n) = O(hmax).
Earlier results of Cox for the continuous-time model imply that C(n) is of order hmax, up to constants depending on the dimension, for tori and grids, which are regular graphs. Aldous and Fill show C(n) = O(hmax log n) for connected graphs and C(n) = O(hmax) for complete graphs. Cooper et al. confirmed that the conjecture C(n) = O(hmax) holds for random walks on random regular graphs, and a similar result follows for random graphs above the connectivity threshold, with high probability. (We use the notation "with high probability", whp, to mean with probability tending to 1 as n tends to infinity.)

Simple bounds on hmax can be obtained from the commute time of a pair of vertices (see the corollary in Lovász's survey): for a graph with n vertices, m edges and minimum degree delta, the commute-time bound gives hmax in terms of m, delta and the average degree; it follows that hmax = O(n^3) for any connected graph, and upper bounds for coalescing walks on arbitrary graphs follow from the general results above.

In this article we study the problem of bounding the coalescence time of a connected graph G. As the graphs we consider may be bipartite, and to avoid the resulting problems with a lack of coalescence, we assume the random walk is made lazy, i.e., it pauses with probability 1/2 at each step; equivalently, in the voting process, each vertex may keep its own opinion with probability 1/2. Our main result is stated formally in terms of the second eigenvalue L2 of the transition matrix of the lazy random walk and a parameter v which measures the variability of the degree sequence. Let d(u) be the degree of vertex u and let d* = 2m/n be the average degree. The parameter v is the ratio of the average squared degree to the square of the average degree, i.e.,

v = ( (1/n) Sum_u d(u)^2 ) / (d*)^2,

which can also be written as v = n Sum_u d(u)^2 / (Sum_u d(u))^2. The parameter v ranges from 1 for regular graphs to order n for the star graph. We prove the following general theorem.

Theorem 1. Let G be a connected graph with n vertices and m edges, let L2 be the second eigenvalue of the transition matrix of a lazy random walk on G, and let v be as above. The expected coalescence time C(n) of a system of n particles making lazy random walks, where originally one particle starts at each vertex, satisfies

C(n) = O( (1/(1-L2)) ( log^4 n + n/v ) ).

By the equivalence of coalescence and voting, the expected time to complete voting satisfies the same upper bound.

Although Theorem 1 is a general statement, the bound can be improved in extremal cases, as established in later sections. The star graph, for example, has v of order n, so the theorem gives a polylogarithmic bound, whereas the correct value is of order log n (since the star is a bipartite graph, we consider the lazy walk). Hassin and Peleg showed that voting, and hence also coalescence, is completed in expected O(n^3 log n) time on any connected graph. Our bound, parameterized by the eigenvalue gap, offers a refinement of the Hassin-Peleg bound: for any connected regular graph, coalescence is completed in expected O(n/(1-L2)) time. For example, the coalescence time of the graph given by two cliques of size n/2 joined by a path, and of lollipop graphs, matches this, indicating that the bound is tight in such cases. The parameter v is related to the second moment of the degree distribution and measures the variability of the degree
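The coalescence process bounded by Theorem 1 is easy to simulate directly. The following sketch (purely illustrative; it is not part of the paper, and the cycle graph is chosen only for simplicity) runs lazy coalescing random walks on an n-cycle, one particle per vertex, and returns the step at which a single particle remains:

```python
import random

def coalescence_time_cycle(n, rng=random):
    """Simulate lazy coalescing random walks on the n-cycle and
    return the number of steps until one particle remains."""
    positions = list(range(n))          # one particle per vertex
    t = 0
    while len(positions) > 1:
        t += 1
        landed = set()
        for v in positions:
            if rng.random() < 0.5:      # lazy step: stay put w.p. 1/2
                w = v
            else:                       # otherwise move to a random neighbour
                w = (v + rng.choice((-1, 1))) % n
            landed.add(w)               # particles meeting at w coalesce
        positions = list(landed)
    return t
```

Averaging `coalescence_time_cycle(n)` over many runs gives an empirical estimate of C(n); for the cycle, 1/(1-L2) grows quadratically in n, consistent with the quadratic growth such simulations exhibit.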
sequence, alongside the maximum degree. For near-regular graphs, in which the ratio of the largest to the smallest vertex degree is bounded by a constant, v is constant and the bound becomes O(n/(1-L2)); in particular, for an expander in the classic sense (regular, with constant eigenvalue gap), we obtain C(n) = O(n). In parallel work, Oliveira recently proved the conjecture C(n) = O(hmax) for continuous-time random walks; this result implies an analogous linear bound for continuous-time random walks on expanders. Note that our bound is qualitatively different from O(hmax), in that the graph structure is made explicit through the parameters L2 and v rather than through hmax. Our bound can improve on hmax; this occurs, for example, when v is large, and also, since hmax is at least of order n for any graph, examples follow from graphs with power-law (heavy-tailed) degree distributions, for which Theorem 1 gives sublinear bounds on the coalescence and voting times. The following example shows this. Mihail et al. prove that a suitable random graph on n vertices with minimum degree of order log n has constant eigenvalue gap; for such a class of power-law graphs Theorem 1 implies a sublinear voting time, whereas for any graph hmax is at least linear. There are also examples of graphs where our bound is asymptotically better than hmax: consider the graph consisting of a (log n)-degree expander with an additional vertex attached by one edge to one of the vertices of the expander; for this graph, for some positive constant c, hmax is superlinear.

It would be interesting to have a general lower bound that incorporates the graph structure in a way similar to our upper bounds, but it is not clear what form such a bound might take. A weaker conjecture is that the bound is tight for the path on n vertices: indeed, the cover time of this graph by a particle starting at the left vertex, and the expected hitting time of the central vertex by particles starting at the left and right vertices, give lower bounds on the structure of the coalescence process.

Structure of the paper. The analysis of the coalescence process, i.e., the proof of Theorem 1, is divided into two phases. In the first phase, the number of particles decreases from n to an initial threshold value k*. This phase is analysed by showing that, for a suitably chosen number of steps t, with high probability there do not exist k particles without a single meeting within the first t steps; this implies that, with high probability, the number of particles after step t is less than k. In the second phase, the number of particles decreases from k* to 1. This phase is analysed by bounding the expected time to wait for the first meeting of particles. At the time of the first meeting, the number of particles decreases; with relatively small probability, the first meeting could involve more than two particles, reducing the number of particles by more than one. The analysis of the second phase is based on the following theorem, bounding the expected time to the first meeting among k particles.

Theorem 2 (informal statement). There is a threshold k*, given by a max-min expression in n, v, log n and the maximum degree, such that the following holds: given 2 <= k <= k* particles starting at arbitrary vertices, let T(k) be the time to the first meeting; then, up to polylogarithmic factors, E[T(k)] = O( (1/(1-L2)) n/(v k^2) ).
The expression for the threshold value k* is not transparent, but seems necessary to deal with the generality of the degree sequences of connected graphs. Provided the maximum degree of the graph is suitably bounded in terms of m and log n, the extra terms cover extremal cases such as star graphs, and the condition ensures that at least the stated number of particles coalesce in the first phase.

Section 2 gives background material on random walks. Section 3 replaces the multiple random walks by a single walk on a suitably defined product graph. Theorem 2 is proven next, and the proof of Theorem 1 is concluded in the final section.

2. Random walk properties. Let G = (V, E) denote a connected undirected graph, and let d(v) denote the degree of vertex v. A simple random walk on G is a Markov chain modeled by a particle moving from vertex to vertex according to the following rule: the probability of a transition from vertex u to vertex w is 1/d(u) if w is a neighbour of u, and 0 otherwise. If the walk starts at vertex u, we denote by W_u(t) the vertex reached at step t. If we assume G is connected, the random walk is ergodic with stationary distribution pi(v) = d(v)/(2m), except in the case where G is bipartite; the walk is then made ergodic by making it lazy. A lazy random walk moves to one of its neighbours with probability 1/2 and stays at the current vertex with probability 1/2. Let P be the matrix of transition probabilities of the walk, and let its eigenvalues be real and ordered, with the largest equal to 1. If the walk is ergodic, the second eigenvalue L2 is strictly less than 1. The rate of convergence of the walk is governed by the largest eigenvalue in absolute value below 1 (for a proof see, for example, the theorem in Lovász's survey). We assume henceforth that the walk is lazy, which is the standard way to ensure that this quantity equals L2 and is nonnegative.

We use the following definition of the mixing time T = T(G) of a graph with n vertices (for convenience we assume, where necessary, that log n is an even integer): T is the least t such that the distribution of the walk at step t is within an inverse-polynomial distance of stationarity, uniformly over starting vertices. Let E_pi(H_v) denote the expected hitting time of vertex v when the walk starts from the stationary distribution. This quantity can be expressed (see, for example, the chapter in Aldous and Fill) as

E_pi(H_v) = Z_vv / pi_v, where Z_vv = Sum over t >= 0 of ( P^t(v, v) - pi_v ).

Let A_v(t) denote the event that the walk does not visit vertex v in steps 0, ..., t. The following lemma gives a bound on the probability of this event in terms of the mixing time of the walk.

Lemma 1. Let T be the mixing time of a random walk satisfying the definition above. Then Pr(A_v(t)) decays geometrically in t at a rate governed by E_pi(H_v) + T. Proof sketch: let the walk run for T steps, so that its distribution is close to pi by the fact that the graph is connected; let the walk then attempt to hit v, starting from near-stationarity; by Markov's inequality each such block succeeds with probability bounded below, and by restarting the process block by block we obtain the geometric decay.

3. Multiple random walks. We consider the coalescence of k >= 2 independent random walks on G, and replace the multiple walks by a single walk on a product graph, as follows. Let H be the graph with vertex set V^k; thus a vertex of H is a k-tuple of vertices of G (repeats allowed), and two vertices of H are adjacent if their coordinates are joined by edges of G. There is a direct equivalence between k random walks W_{u_i} on G with starting positions u_1, ..., u_k and a single random walk on H with starting position (u_1, ..., u_k). For the walks on G, let M_k be the time of the first meeting. Let the diagonal set D of vertices of H be defined as the set of tuples in which some two coordinates are equal. When the random walk on H visits the set D, two particles occupy the same vertex of the underlying
graph G, and for coalescing walks a meeting occurs. The number of visits to the set D by the random walk on H is a quantity that can be readily manipulated; an easier approach still is to contract D to a single vertex, thus replacing H by the contraction graph, in which all edges of the contraction, including loops, are retained. Thus, in the contraction, the degree of a vertex is its degree in H, the degree of the contracted set is the sum of the degrees of the vertices of D, and moreover the total degree of the contraction equals the total degree of H. Let pi_H and pi_Gamma be the stationary distributions of the random walks on H and on the contraction respectively; it follows that the stationary probability of the contracted vertex equals the stationary probability of D, and the mixing times and hitting times from stationarity are related accordingly.

Since we replaced the individual walks by a single walk, we need to relate the mixing times of the product and contraction directly to the given mixing time T(G) of a single random walk on the underlying graph G. We need this in two places, e.g., to bound the probability of not visiting the contracted vertex by applying Lemma 1 to the contraction graph.

Lemma 2. The random walks on the product graph H and its contraction have mixing times which are O(k T(G) log n). Proof sketch: the bound for H is well known (see, for example, Sinclair); for the contraction, the bound follows by observing a constant-factor upper bound on the relevant stationary probabilities.

To derive these bounds we also need to know the eigenvalues of H in terms of the eigenvalues of G, which follows from established results. We next explain the notation. In the theory of Markov processes, the random walk on H is known as a tensor product chain, and its eigenvalues are products of the eigenvalues of the factor chains; thus, assuming lazy walks, the second eigenvalue of H equals that of G (see the cited texts for details of the notation). For the random walk on the contraction, i.e., the random walk on H with the subset D collapsed to a single vertex, it was proved (see the corollary in Aldous and Fill) that when a subset of vertices is collapsed to a single vertex, the second eigenvalue of the transition matrix does not increase. Thus we get bounds on the eigenvalue gap by a factor transfer from G. For the bounds on mixing times we also need the number of vertices of the graphs, which is at most n^k. For reference, we record the salient facts about the three graphs in a table: the number of vertices, the stationary distribution, the mixing time, and the bound of Lemma 1 established via the hitting time from stationarity.

4. Proof of Theorem 2. The proof of Theorem 2 is based on the inequality expressing a good upper bound on the expected hitting time of a vertex when the random walk starts from the stationary distribution. We obtain this bound by deriving an upper bound on Z_vv and a lower bound on the stationary probability of the contracted vertex.

Lemma 3. Let G be a graph with eigenvalue gap 1 - L2. Then Z_vv <= 1/(1-L2); in particular, for any vertex v,

E_pi(H_v) = Z_vv / pi_v <= 1 / ( (1-L2) pi_v ).

Proof. Z_vv is the sum over t of P^t(v, v) - pi_v; using the spectral decomposition of the reversible transition matrix gives P^t(v, v) - pi_v <= L2^t, and thus Z_vv <= Sum over t of L2^t = 1/(1-L2).

The proof of the next lemma establishes the lower bound on the stationary probability of the contracted vertex.

Lemma 4. Let G be a connected graph with n vertices and m edges, with maximum degree bounded as in the threshold condition, and let k <= k* be an integer, with the contraction as above. Then the stationary probability of the contracted vertex is at least of order k^2 v / n. Proof sketch: by definition, the stationary probability of D equals the sum of pi_H over tuples with a repeated coordinate. Define the subsets of D in which exactly one pair of coordinates coincides; the factor "k choose 2" occurs as the number of ways to choose the coinciding pair, the sum over vertices of pi(v)^2 contributes the factor v/n, and the remaining terms, counting the ways to partition the k coordinates into sets in which more than a single
pair intersects, are of lower order; the bound follows by noting the upper bound on these correction terms.

Proof of Theorem 2. Let T(k) be the time of the first meeting among the k particles, i.e., the hitting time of the contracted vertex by the walk on the contraction of H with the diagonal set D collapsed. Using the lemmas above for the contraction graph (the hitting time from stationarity, the eigenvalue gap, and the mixing time O(k T(G) log n), referring to the table of parameters), the expected value of T(k) is bounded by the sum of the mixing time and the stationary hitting-time bound, which gives the claimed bound of Theorem 2.

5. Proof of Theorem 1. Let C be the time for the particles to coalesce; we use Theorem 2 in the proof. We first note the upper bound on pairwise meeting times that follows directly from Theorem 2 with k = 2, noting the constant factors.

Consider the case of n coalescing particles, one particle initially located at each distinct vertex of the graph. The purpose of this section is to conclude the proof that, for a connected graph, C(n) = O((1/(1-L2))(log^4 n + n/v)). To establish this result, we first prove that with probability 1 - o(1) there do not exist k >= k* particles without a single meeting within the first t steps, for a value of t of order (1/(1-L2)) log^3 n; this value of t upper-bounds the expected time for the number of particles to fall below k*. The expected time for the remaining particles to coalesce is dealt with separately.

Let S be a set of k particles starting at given vertices. The probability that no two particles of S meet by time t is the probability that the random walk on the contraction, starting at the corresponding vertex, does not visit the contracted vertex by time t; we apply Lemma 1 to the contraction graph and the vertex obtained by collapsing the diagonal, so a meeting among the particles of S occurs within O(T log^3 n) steps with high probability.

In the coalescence process, we assume that when two or more particles meet at a vertex, the lowest-indexed particle survives and continues its random walk, and the other particles die. Thus, if at least k particles survive after t steps, there exists a set S of k particles no two of which have met within the first t steps; by the above, this last event holds with probability o(1). This bound implies that the expected number of steps until fewer than k* particles remain is O(t). Therefore, using the value of t above together with Lemma 1, and the bound of Theorem 2 for the second phase, we obtain the bound

C(n) = O( (1/(1-L2)) log^4 n ) + Sum over k from 2 to k* of E[T(k)]
     = O( (1/(1-L2)) ( log^4 n + Sum over k of n/(v k^2) ) )
     = O( (1/(1-L2)) ( log^4 n + n/v ) ).

The last bound is obvious when n/v = O(log^4 n). Indeed, by the definition of k*, either the second term of the sum is O(log^4 n), in which case the claim is clear, or n/v dominates; in the former case the second term of the sum is clearly absorbed, and in the latter we observe that the sum of n/(v k^2) over k is O(n/v). Thus we conclude, noting that since v >= 1, the resulting bound is never worse than the log^4 n term plus n/v.

References
D. Aldous. Meeting times for independent Markov chains. Stochastic Processes and their Applications, August.
D. Aldous and J. Fill. Reversible Markov Chains and Random Walks on Graphs. Monograph, available online.
C. Cooper, A. Frieze, and T. Radzik. Multiple random walks in random regular graphs. SIAM J. Discrete Math.
C. Cooper, R. Elsässer, H. Ono, and T. Radzik. Coalescing random walks and voting on graphs. In PODC: Proceedings of the ACM Symposium on Principles of Distributed Computing, July.
J. T. Cox. Coalescing random walks and voter model consensus times on the torus. Annals of Probability, October.
C. Gkantsidis, M. Mihail, and A. Saberi.
Conductance and congestion in power law graphs. In SIGMETRICS: Proceedings of the ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems, New York, USA.
Y. Hassin and D. Peleg. Distributed probabilistic polling and applications to proportionate agreement. Information and Computation, December.
A. Israeli and M. Jalfon. Token management schemes and random walks yield self-stabilizing mutual exclusion. In PODC: Proceedings of the Annual ACM Symposium on Principles of Distributed Computing, Quebec City, Quebec, Canada, August.
D. A. Levin, Y. Peres, and E. L. Wilmer. Markov Chains and Mixing Times. American Mathematical Society.
L. Lovász. Random walks on graphs: a survey. In Bolyai Society Mathematical Studies, Combinatorics: Paul Erdős is Eighty, Keszthely, Hungary.
R. I. Oliveira. On the coalescence time of reversible random walks. Trans. Amer. Math. Soc.
A. Sinclair. Improved bounds for mixing rates of Markov chains and multicommodity flow. Combinatorics, Probability and Computing, December.
An asymptotically optimal online algorithm for weighted random sampling with replacement

Piotr Startek
University of Warsaw, Faculty of Mathematics, Informatics and Mechanics

November

Abstract. This paper presents a novel algorithm solving the classic problem of generating a random sample of size k from a population of size n with non-uniform probabilities, where the sampling is done with replacement. The algorithm requires constant additional memory and works in linear time, even in the case k >> n. The algorithm produces a list containing, for every population member, the number of times it was selected into the sample. The algorithm works online, processing streams. In addition, a novel method of sampling a discrete distribution using this algorithm is presented.

1. Introduction

Assume we are given a population of n elements (at first; the problem for infinite populations is elaborated on later), along with a sequence of probabilities: the probability of each element, denoted by a single number, with the probabilities summing to 1. The task is to compute a random sample of size k from the population: each element of the sample is one of the elements of the population, drawn with the corresponding probability. Note that, without loss of generality, we may assume all the probabilities are positive. The algorithm assumes the availability of implementations of procedures for sampling single random numbers from the uniform, beta (the easy case, with one of the parameters an integer) and binomial distributions.

(Author's address: Faculty of Mathematics, Informatics and Mechanics, University of Warsaw, Banacha, Warsaw, Poland.)

Algorithm 1: naive sampling algorithm
  Input: sequence of probabilities p_1, ..., p_n; desired sample size k
  Output: multiset containing the random sample
  compute the array C of cumulative sums of the probabilities
  R := new empty multiset
  repeat k times:
      randomize X from U(0, 1)
      find the smallest i such that C_i >= X, using binary search
      add i to R
  R contains the result

(For now we set aside numerical errors; consideration of the mitigating of the effects of numerical inaccuracies is given in later sections.) The algorithm is best presented, the author feels, by starting with the naive algorithm and iteratively refining it until the desired time and memory complexity is reached.

2. The naive algorithm

The naive algorithm, despite its costs, is in practice reasonably efficient, and is used (in its second variant, for example) in the numpy numeric library for Python. It is based on a crucial idea used also in the novel version presented here. The idea rests on a geometrical intuition: the interval [0, 1) is divided into n parts of lengths equal to the probabilities; sampling a random number from the uniform distribution and picking the subinterval into which it falls, with the corresponding element, results in a choice of a single element with the desired
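A minimal Python sketch of the naive algorithm follows (the function name is mine, not the paper's; multiplying by the running total rescales the draw to guard against probabilities not summing to exactly 1):

```python
import bisect
import random

def naive_sample(probs, k, rng=random):
    """Weighted random sample of size k, with replacement (Algorithm 1).

    Precomputes the cumulative-sum table once, then performs one
    uniform draw and one binary search per requested sample element.
    Returns, for each population member, its multiplicity in the sample.
    """
    cumulative = []
    total = 0.0
    for p in probs:
        total += p
        cumulative.append(total)
    counts = [0] * len(probs)
    for _ in range(k):
        x = rng.random() * total            # uniform point on the segment
        i = bisect.bisect_left(cumulative, x)  # smallest i with C_i >= x
        counts[i] += 1
    return counts
```

The O(n) initialization and O(k log n) search cost match the complexity discussion below.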
probability distribution. Efficient finding of the selected subinterval is facilitated by precomputing the array of cumulative sums of the probabilities and performing binary search. The algorithm consumes O(n) time for initialization, O(k log n) time for the actual sampling, and O(n) memory for additional data structures, not counting the result.

3. Omitting the computation of the cumulative sums table

The first modification of the algorithm makes it possible to skip the necessity of precomputing the array of cumulative sums. Instead, one samples all the necessary random numbers up front, sorts the array, and processes the sorted i.i.d. sequence in a single pass, in a fashion similar to the merge step of the mergesort algorithm.

Algorithm 2: sampling algorithm without the cumulative sums table
  Input: sequence of probabilities p_1, ..., p_n; desired sample size k
  Output: multiset containing the random sample
  R := new empty multiset
  idx := 1; cumulativeProbSum := p_1
  randomize X_1, ..., X_k from U(0, 1)
  sort X in ascending order
  foreach X_j:
      while cumulativeProbSum < X_j:
          idx := idx + 1
          cumulativeProbSum := cumulativeProbSum + p_idx
      add idx to R
  R contains the result

The algorithm runs in O(n + k log k) time, an improvement on the previous version when k < n: the sorting takes O(k log k), the outer loop runs k times, and the inner loop runs at most n times in total, as the variable idx is bounded by n. The algorithm uses O(k) memory, not counting the result and the in-place sorting.

4. An online algorithm

The algorithm might be improved further if the table X could be generated already in sorted order. This is in fact possible: min(X_1, ..., X_k) has the Beta(1, k) distribution, so the first element of the sought sorted table of i.i.d. uniform variables, since the variables are independent, can be sampled directly as this minimum. It is easy to see that the remaining variables, conditioned on being no less than the minimum, are distributed as i.i.d. uniform variables on the remaining interval, so the minimum of those variables might again be sampled by the same method after rescaling. This fact allows us to drop the step of precomputing the table altogether and to compute the consecutive variables one at a time, making the algorithm capable of online operation as well as improving its runtime.

Algorithm 3: online algorithm
  Input: sequence of probabilities p_1, ..., p_n; desired sample size k
  Output: multiset containing the random sample
  R := new empty multiset
  idx := 1; cumulativeProbSum := p_1; currentX := 0
  for j := k downto 1:
      currentX := currentX + (1 - currentX) * Beta(1, j)
      while cumulativeProbSum < currentX:
          idx := idx + 1
          cumulativeProbSum := cumulativeProbSum + p_idx
      add idx to R
  R contains the result

The algorithm runs in O(n + k) time and requires constant additional memory when working online; in that case every intermediate result is immediately provided to the calling procedure for consumption and possibly immediately discarded, instead of being explicitly stored.
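The online variant can be sketched as follows (names are mine; `betavariate(1, j)` draws the Beta(1, j) minimum, and the index guard on the inner loop is a defensive addition against floating-point drift at the right end of the segment):

```python
import random

def online_sample(probs, k, rng=random):
    """Online weighted sampling with replacement (Algorithm 3 sketch).

    Generates the k sorted uniforms lazily: the minimum of j remaining
    i.i.d. U(0,1) variables is Beta(1, j)-distributed, and conditioned
    on it the rest are uniform on the remaining interval.
    """
    counts = [0] * len(probs)
    total = sum(probs)      # rescale in case the weights do not sum to 1
    cum = probs[0]          # cumulative probability up to member idx
    idx = 0
    x = 0.0
    for j in range(k, 0, -1):
        # next order statistic of the k uniforms, scaled to (x, total)
        x = x + (total - x) * rng.betavariate(1, j)
        while idx < len(probs) - 1 and cum < x:
            idx += 1
            cum += probs[idx]
        counts[idx] += 1
    return counts
```

Replacing `counts` by a yield of `idx` per draw gives the constant-memory streaming form described in the text.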
In this case the practical speed of the algorithm is constrained by the speed of sampling the beta distribution, as the remaining operations are trivial in comparison. This provides an opportunity for optimization when the population is small with respect to the number of samples required: the need to sample the beta distribution many times per population member can be avoided by changing the reasoning. Instead of asking "where does the next X fall?", we ask "how many Xs are we going to encounter before going past the current population member?". The answer has a binomial distribution, Binom(r, q), conditioned on the number of Xs already placed and on the probability mass already consumed: r is the number of samples still to be drawn and q is the current member's probability divided by the mass not yet consumed. This is, in fact, the standard algorithm for sampling the multinomial distribution, which is exactly our problem of random sampling with replacement (the former terminology is often used in some contexts, the latter otherwise).

Algorithm 4: binomial online algorithm
  Input: sequence of probabilities p_1, ..., p_n; desired sample size k
  Output: multiset containing the random sample
  R := new empty multiset
  cumulativeProbSum := 0
  for idx := 1 to n:
      c := Binom(k - |R|, p_idx / (1 - cumulativeProbSum))
      add idx to R with multiplicity c
      cumulativeProbSum := cumulativeProbSum + p_idx
  R contains the result

Provided that R behaves like a counter, i.e., increasing the count of a given item is done in constant time, the algorithm runs in O(n) time, uses constant additional memory, and is capable of working online. It should be noted that although it achieves the optimal theoretical asymptotic runtime, in the case of small samples a practical implementation is inefficient: even if a sample consisting of one element is desired, it must perform n expensive operations of sampling the binomial distribution.

5. A practical algorithm

The previous two algorithms are opposites in terms of their practical and pessimistic behaviour: the one using the beta distribution performs one randomization per requested sample member, and so runs fast for small samples but slowly for large ones, while the one using the binomial distribution performs one randomization for every member of the population, and is efficient in practice only for large values of k and small n. It turns out to be actually possible to create a hybrid algorithm that combines the strengths of both. Instead of first examining the data and choosing one of the previous versions to run, it adapts on the fly, being capable of switching back and forth between the two modes at runtime as needed; there is no need to examine the data in advance, which keeps it compatible with online operation.

Let us recall the metaphor of the segment divided into fragments corresponding to population members, with lengths equal to their probabilities. The algorithm may be imagined as walking along the segment and picking up the sample along the way. It can make two kinds of steps. The first, the beta step, has constant average length and may pass over multiple small population elements; it results in adding to the sample the population member on which it ends, with multiplicity one. Its disadvantage is that when a large population member is encountered, it may take multiple beta steps to pass it. The other kind of step, the binomial step, immediately travels forward to the end of the current population member, adding it to the sample with multiplicity according to the result of a binomial randomization. Obviously it makes sense to use this type of step when traversing population members with large probabilities. This is achieved by the mode-selection condition: the expected number of samples to be randomized within the current member of the population is compared to a constant, and the result of the comparison is used to determine whether to proceed in beta or in binomial mode. Any positive constant is good enough to achieve the theoretical bounds; however, in practice it is best chosen based on the relative costs of sampling the beta and binomial distributions. The algorithm is still online: although presented in this form for readability, it works in constant memory when operating online, with results immediately consumed by the caller instead of being stored. A careful reader might notice that the algorithm, as presented, may exceed the intended bound in the pessimistic case.
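The binomial-mode reasoning can be sketched as below (names are mine; the Bernoulli-loop binomial sampler is a naive stand-in for a library routine, and treating the last member specially is a defensive choice of mine against floating-point drift, not part of the paper):

```python
import random

def binomial_draw(n, p, rng=random):
    """Naive Binomial(n, p) sampler via n Bernoulli trials; O(n) per call,
    adequate for a sketch only."""
    return sum(1 for _ in range(n) if rng.random() < p)

def multinomial_sample(probs, k, rng=random):
    """Algorithm 4 sketch: one conditional binomial draw per population
    member, the standard way to sample a multinomial distribution."""
    counts = [0] * len(probs)
    remaining_k = k
    remaining_p = sum(probs)
    for i, p in enumerate(probs):
        if i == len(probs) - 1:
            c = remaining_k          # last member takes all remaining draws
        else:
            # Conditioned on earlier draws, the count for member i is
            # Binomial(remaining_k, p / remaining probability mass).
            c = binomial_draw(remaining_k, min(1.0, p / remaining_p), rng)
        counts[i] = c
        remaining_k -= c
        remaining_p -= p
    return counts
```

The O(n) cost per call regardless of k, noted in the text, is visible here: the loop always visits every population member.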
A beta step results in adding to the sample each population member at whose fragment a step ends, with multiplicity one. Its disadvantage is that when a large population member is encountered, it may take multiple beta steps to pass it. The other kind of step, a binomial step, immediately travels forward to the end of the current population member, adding it to the sample with a multiplicity given by the result of a binomial randomization. Obviously, it makes sense to use this type of step when traversing population members with large probabilities. This is achieved by the mode-selection condition: the expected number of samples to be randomized within the current member of the population is compared to a constant, and the result of the comparison is used to determine whether to proceed in beta or in binomial mode. Any positive constant is good enough to achieve the theoretical bounds; in practice, however, it is best chosen based on the relative costs of sampling from the beta and the binomial distributions.

The algorithm is still online: although presented in the following form for readability, it works in constant memory if online results are immediately consumed by the caller instead of being stored.

Algorithm 5 (final algorithm). Input: a sequence of probabilities; the desired sample size k. Output: a multiset R containing the random sample.
  R := new empty multiset; idx := 1; cumulativeProbSum := p_1; currentPosition := 0
  while samples remain to be drawn:
    if the expected number of samples within the current member is below the threshold:
      (beta mode)
      currentPosition := currentPosition + (1 - currentPosition) * Beta(1, remaining)
      while currentPosition > cumulativeProbSum:
        idx := idx + 1; cumulativeProbSum := cumulativeProbSum + p_idx
        if idx > n: terminate
      increase the multiplicity of idx in R by one
    else:
      (binomial mode)
      c := randomize Binom(remaining, (cumulativeProbSum - currentPosition) / (1 - currentPosition))
      increase the multiplicity of idx in R by c; currentPosition := cumulativeProbSum
  R contains the result.

The careful reader might notice that the algorithm as presented does not run in pessimistic O(n + k) time. The pessimistic time is achieved when the algorithm encounters an element of the population whose probability is small enough that it decides to use beta mode, but then, due to bad luck, proceeds to draw many infinitesimal samples from the beta distribution without leaving the element and proceeding forward. This may easily be avoided by adding a hard condition that forces binomial mode after a constant number of consecutive beta samples; it is omitted from the main code presented above for readability and is also of no concern in practical application. With such a hard limit, the algorithm can perform at most O(n) binomial samples (each binomial sample increases the idx variable, which is bounded by n), and with the hard limit it is possible to perform at most O(k) beta samples; therefore
the total runtime is bounded by O(n + k).

Practical notes. The algorithms presented depend heavily on good implementations of the functions sampling from the binomial and the beta distributions. In particular, anyone undertaking an implementation of the algorithm is advised to write a custom version of the beta sampler: the algorithm only ever samples from Beta(1, j), in which case the distribution has an explicit, invertible CDF, and a custom sampler using the inverse-CDF method will always be faster than a sampler from a scientific library that must handle the general case. Regarding the binomial distribution, the sampling function implemented in the GNU project's standard library on Unix systems proved inadequate for the task, as it seems to have unfavorable complexity with regard to its parameters; in the tests performed we used the function implemented in the Boost library, which seems to work faster and also runs in constant time.

Regarding numerical stability: the algorithm computes a cumulative sum of the encountered probabilities, which can of course be tricky. If precise numerical correctness is required, the summation should be done using the Kahan or even the Shewchuk summation algorithm; for most practical purposes imaginable, however, this should not be necessary. Some special care must be taken regardless: sometimes, due to numerical errors, the last sampling from the binomial distribution might be performed with a probability of success slightly greater than 1, and the programmer must ensure that the sampling function assumes a probability of 1 in that case instead of crashing.

As a side note, the algorithm would benefit slightly if the input data were sorted (whether in ascending or descending order does not matter), as this causes the algorithm to perform fewer switches between beta and binomial mode and minimises the number of elements dealt with partially in binomial and partially in beta mode. The slight speed benefit does not justify spending computational time on sorting the data, especially at the loss of asymptotic optimality and of the ability to work online where needed; it does mean, however, that in order to avoid bias, the input data for the runtime tests described below was randomly shuffled.

Comparison of sampling methods: runtime tests. The algorithms presented in this paper were implemented for the purposes of testing and speed comparison. They were compared against implementations of the standard Walker alias method: both a full implementation of a sampling function, which examines the data and heuristically chooses between the Walker algorithm and the naive algorithm, and, as an alternative, a standalone implementation of the Walker algorithm.
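The compensated (Kahan) summation mentioned for the cumulative sums can be sketched in a few lines. This is a standard technique, shown here as an illustrative sketch rather than the paper's code:

```python
def kahan_sum(values):
    """Compensated (Kahan) summation.

    Keeps a running compensation term for the low-order bits lost when
    adding a small probability to a large cumulative sum, bounding the
    error independently of the number of summands.
    """
    total, comp = 0.0, 0.0
    for v in values:
        y = v - comp          # subtract the error carried so far
        t = total + y         # low-order bits of y may be lost here
        comp = (t - total) - y  # recover exactly what was lost
        total = t
    return total
```

For the cumulative-sums use case in this paper, plain summation usually suffices; the compensated version is the remedy when the small per-step errors in the conditional probabilities would otherwise accumulate.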
[Figure: Comparison of runtimes of the various random choice algorithms on gaussian, uniform, and geometric populations; each panel plots elapsed runtime in seconds, once against sample size with the population size fixed, and once against population size with the sample size fixed. The versions of the novel algorithm proposed in this work are marked with large dots, the example algorithms with small dots, and the competing algorithms from the various scientific libraries (the Walker alias method, ransampl, numpy) with crosses.]

[Table: Comparison of the properties of the sampling algorithms: pessimistic runtime, number of calls to the RNG, ability to work online, and additional memory used.]

The competing library implementations were the ransampl library's Walker alias sampler and the numpy implementation, which follows Algorithm 1. The novel algorithm proposed in this work was tested in two versions: one that produces an array of size k containing the sample with repetitions, and one that produces a multiset, i.e. integer counts instead of repetitions. The multiset may be implemented either as a hashtable, using a standard data structure, or trivially as an array of size n; the implementation using the former and the one using the latter are distinguished in the plots. Similarly, the earlier algorithms were implemented and tested outputting an array with repetitions; their multiset-based variants (whether array- or hashtable-backed) could be produced as well, but were skipped in an effort to avoid complicating the plots. A summary of the theoretical properties of the algorithms presented is given in the table above.

The functions were written in C code; the numpy code was reused, slightly modified to remove dependencies on numpy internals. Although written in Python, the tested functions were compiled to native code using Cython and run at speeds comparable to the rest of the tested algorithms, without suffering overhead due to Python being an interpreted language.

The algorithms were tested on three different random populations: one drawn from a uniform distribution, representing a population with mostly equal probabilities; one drawn from a geometric sequence, representing a population with highly skewed probabilities; and a third type of
population generated by applying the gaussian pdf to evenly spaced points within a few standard deviations of the mean. The gaussian population is meant to simulate the usual application of such a sampling function in modelling population genetics, which was in fact the inspiration for this research: in population genetics models, selection and reproduction are often modelled precisely by randomly sampling, with replacement, the organisms that reproduce and pass their offspring to the next generation of the population, where the probability of a given organism being chosen to reproduce is proportional to its fitness function, which is often gaussian. In each case (uniform, geometric, gaussian) the probabilities were rescaled so as to sum to 1 and then randomly shuffled.

The results of the tests are presented in the figure above. As is evident from the results, the proposed algorithm is not only asymptotically optimal, unlike some of the algorithms it was compared against, but also efficient in practice, outperforming the competing methods in most scenarios, by as much as several orders of magnitude in some cases. The single pessimistic case is when the distribution of probabilities in the population is close to uniform; although the algorithm then runs slightly slower, it still remains competitive. Moreover, the difference in runtime grows smaller as the sample size grows, until the proposed algorithm overtakes the Walker method (data not shown). The algorithm is able to adapt to the input data and use any skew away from the uniform distribution to its advantage, with no increase in runtime, as evidenced by the tests on the gaussian and especially the geometric populations. Unlike the popular algorithms, it works in constant additional memory and is capable of online operation. An implementation of the algorithm in several programming languages is available for download.

Application: mass sampling from a discrete distribution. As the proposed algorithm is online, it may accept an infinite sequence of states of the population and still be expected to produce a sample of finite size in finite time, without exhausting the whole sequence. One application of this is immediately obvious: mass sampling of iid variates from a discrete distribution. One needs only to exhaustively walk the configuration space of the distribution, preferably (though not necessarily) in order of decreasing probability mass function (pmf), and feed the resulting sequence to the proposed algorithm; the result will be a sample of the desired size from the input distribution. The advantage of the proposed solution is that the input distribution need not have an easily invertible cdf, only a computable pmf, and the runtime is usually sublinear with respect to the sample size, though it depends on the exact properties of the distribution being sampled: distributions with light tails are faster to sample from than heavy-tailed ones.
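The mass-sampling application can be sketched for a distribution with integer support, walking the support in increasing order and consuming the pmf with binomial draws. This is an illustrative sketch under our own naming, with a naive Bernoulli-sum binomial sampler standing in for a library one:

```python
import math
import random

def poisson_pmf(lam, n):
    return math.exp(-lam) * lam**n / math.factorial(n)

def mass_poisson_sample(lam, k):
    """Draw k iid Poisson(lam) variates by walking the support 0, 1, 2, ...
    and, at each value, drawing how many of the remaining variates stop
    there (binomial with the conditional probability of that value)."""
    counts = {}
    remaining, mass_left, n = k, 1.0, 0
    while remaining > 0:
        p = poisson_pmf(lam, n)
        # guard against rounding pushing the conditional probability past 1
        q = 1.0 if mass_left <= p else p / mass_left
        c = sum(random.random() < q for _ in range(remaining))
        if c:
            counts[n] = c
        remaining -= c
        mass_left -= p
        n += 1
    return counts
```

Only a computable pmf is needed, no invertible cdf, and the sample comes out grouped by value; as noted below, it is therefore effectively sorted, and a shuffle is needed if order matters.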
As an example, generating a large sample from a Poisson distribution using the R programming language's rpois function takes noticeably longer than using the scheme proposed here (both runtimes are on the order of seconds). The algorithm consumes constant memory, plus whatever memory is needed for the data structures used to walk the configuration space: trivially constant in the case of distributions with integer support, and at worst linear in the number of visited states (a hashtable plus a priority queue) if the configuration space is more complicated and needs to be traversed in a best-first fashion. The algorithm works online, meaning that the generated part of the sample is immediately available for consumption while computations proceed to generate the rest. It could be used to provide an alternative implementation of the sampling functions of many programming languages, which accept an argument denoting the sample size and then proceed to generate even a very large sample in a naive, iterative fashion. One point worth noting, however, is that the algorithm as presented returns the sample in sorted order, namely the order in which the configuration space is traversed; if this is undesirable, a shuffle may be performed on the resulting stream, at the cost of the loss of the online property.

Acknowledgments. I would like to thank Anna Gambin, Miasojedow (PhD) and Mateusz (MSc) for their helpful comments. This research was funded by a grant from the Polish National Science Centre (Polonium grant: mathematical and computational modelling of the evolution of mobile genetic elements).

References

David, H. A. and Nagaraja, H. N. Order Statistics. Wiley Online Library.
Dijkstra, E. W. A note on two problems in connexion with graphs. Numerische Mathematik.
Fisher, R. A. and Yates, F. Statistical Tables for Biological, Agricultural and Medical Research. Longman.
Kahan, W. Pracniques: further remarks on reducing truncation errors. Commun. ACM.
Shewchuk, J. R. Adaptive precision floating-point arithmetic and fast robust geometric predicates. Discrete and Computational Geometry.
Walker, A. J. An efficient method for generating discrete random variables with general distributions. ACM Trans. Math. Softw.
Time-Staging Enhancement of Hybrid System Falsification

Gidon Ernst, Ichiro Hasuo, Zhenya Zhang (National Institute of Informatics, Tokyo, Japan; gidon, hasuo, zhangzy) and Sean Sedwards (University of Waterloo, Waterloo, Canada)

Abstract. Falsification employs stochastic optimization algorithms to search for error inputs of hybrid systems. In this paper we introduce a simple idea to enhance falsification, namely time staging, that allows the time-causal structure of input signals to be exploited by the optimizers. Time staging consists of running a falsification solver multiple times, one time interval after another, incrementally constructing an input signal candidate. Our experiments show that time staging can dramatically increase performance in some realistic examples. We also present theoretical results that suggest the kinds of models and specifications for which time staging is likely to be effective.

Introduction

Hybrid systems and their quality assurance. Quality assurance of cyber-physical systems (CPS) is recognized as an important challenge. Many CPS are hybrid systems that combine the discrete dynamics of computers and the continuous dynamics of physical components. Unfortunately, the analysis of hybrid systems poses unique challenges, such as the limited applicability of formal verification. In formal verification, one aims to give a mathematical proof of a system's correctness; this is much harder for hybrid systems than for computer systems, because the presence of continuous dynamics makes many problems very complex or even undecidable, such as reachability in hybrid automata.

Falsification. These difficulties have caused an increasing number of researchers to turn to falsification as a quality assurance measure: testing, a quantitative method, rather than formal verification. The problem is formalized as follows: given a model (which takes an input signal and yields an output signal) and a specification (a temporal formula), find an error input, that is, an input signal whose corresponding output violates the specification.

[Figure: Boolean semantics versus robust semantics. Under the Boolean semantics an input is mapped simply to true or false; under the robust semantics a quantity is attached, so an optimizer can climb from more robustly true, through less robustly true, towards false.]

In the falsification approach, the falsification problem is turned into an optimization problem. This is possible thanks to the robust semantics of temporal formulas: instead of the Boolean satisfaction relation, the robust semantics assigns a quantity that tells not only
optimization see fig iteratively generate input signals direction decreasing robustness hoping eventually hit negative robustness falsification subclass testing adaptively chooses test cases input signals based previous observations one use stochastic algorithms optimization simulated annealing turn much scalable many model checking algorithms rely exhaustive search note also system model black box enough know correspondence input output error input concrete evidence system need improvement thus great appeal practitioners approach falsification initiated actively pursued ever since mature tools breach work simulink models contribution introduce simple idea time staging enhancement falsification time staging consists running falsification solver repeatedly one input segment another incrementally constructing input signal general solving concrete problem metaheuristic stochastic optimization evolutionary computation etc key success communicate much information possible translation let exploit structures unique idea time staging follows philosophy specifically via time staging communicate structure structure present instances falsification problem optimization problems stochastic optimization solvers implementation falsification based breach show simple idea dramatically enhance performance examples also present theoretical considerations kinds problem instances time staging likely work results aid implementation time staging structure paper informally outline falsification illustrate idea time staging turn formal developments review existing falsification works present algorithm augmented theoretical results aid implementation devoted theoretical consideration two specific settings time staging guaranteed work well settings serve useful rules thumb practical applications discuss implementation experimental results conclude schematic overview falsification time staging illustrate falsification time staging informally example example setting take system model simple 
automotive powertrain whose input signal is the throttle and whose output signal is the vehicle speed. We assume the model exhibits the following natural behavior: the larger the throttle, the quicker the speed grows. Let the specification state that the speed always stays below a fixed threshold; to falsify it, the vehicle speed must exceed the threshold at some point. By the assumption on the behavior of the model, we expect a falsifying input signal to have large throttle values. Note that this is a simplified version of one of the experiments reported later.

[Figure: Conventional falsification, without time staging: sampled throttle inputs over time, the resulting vehicle speed over time, and the choice of the next input by optimization. Figure: Falsification with time staging: a first and a second stage; in each stage throttle segments are sampled and the vehicle speed observed, the best prefix is chosen, and optimization continues. Figure: Hill-climbing optimization of the robustness over the input signal. Figure: Optimization over a two-dimensional input space (a square) with the unknown score function depicted by contour lines; figures adapted from Wikipedia.]

Falsification. The first figure illustrates how a conventional falsification procedure works. In the first sampling, one tries an input signal. Following the falsification literature we focus on piecewise constant input signals; thus a signal is represented by a sequence of real numbers (see the top left of the figure). The corresponding output signal is shown next to it; since it does not reach the threshold, we move on. In the second sampling we try a new input signal, whose choice is made by an optimization algorithm. Specifically, the optimization algorithm observes the results of the previously sampled input signals, namely the robustness value that each input achieves; in the current setting the robustness value is simply the difference between the threshold and the peak vehicle speed. The optimization algorithm tries to derive a general tendency and uses it to increase the probability that the next input signal makes the robustness smaller, i.e. the peak vehicle speed higher.

Hill climbing, a prototype of the optimization algorithms used in falsification, is illustrated in the third figure. The actual curve of the robustness values (depicted gray and dashed for clarity) is unknown; still, the previous observations suggest the right climbing direction, and the next candidate is picked accordingly, moving towards negative robustness. Another optimization algorithm is the Nelder-Mead algorithm, illustrated in the fourth figure, where the input space is two-dimensional and the robustness function unknown.
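The sampling-and-optimization loop described above can be sketched end to end on a toy model. Everything below is illustrative and ours, not the paper's: the "powertrain" is a simple leaky integrator of the throttle, the specification is "the speed always stays below 120", the robustness is the threshold minus the peak speed, and the optimizer is plain hill climbing by random perturbation.

```python
import random

def simulate(throttle):
    """Toy powertrain: speed integrates throttle with some drag.
    Gain and drag constants are made up for the sketch."""
    speed, trace = 0.0, []
    for u in throttle:
        speed += 0.5 * u - 0.05 * speed
        trace.append(speed)
    return trace

def robustness(trace, c=120.0):
    """Robustness of 'always speed < c': the smallest vertical margin."""
    return min(c - s for s in trace)

def falsify(n_samples=2000, n_points=10, seed=0):
    """Hill-climbing falsification: perturb the best input so far and
    keep the perturbation whenever it lowers the robustness."""
    rng = random.Random(seed)
    best_u = [0.0] * n_points
    best_r = robustness(simulate(best_u))
    for _ in range(n_samples):
        u = [min(100.0, max(0.0, x + rng.gauss(0, 10))) for x in best_u]
        r = robustness(simulate(u))
        if r < best_r:
            best_u, best_r = u, r
        if best_r < 0:          # negative robustness: falsified
            break
    return best_u, best_r
```

Because the toy model is monotone in the throttle, the hill climber reliably drives the robustness below zero; the paper's point is precisely that such structure is not always communicated to the optimizer.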
In that figure the unknown robustness function is depicted by contour lines. If a new input signal leads to an output signal that reduces the robustness value, achieving a higher peak speed, we continue this way, hoping to eventually reach a falsifying input signal.

Absence of structural information. A closer look at the conventional procedure reveals room for improvement. The new input signal indeed achieves a smaller overall robustness; however, its initial segment is smaller than that of the first one, and consequently the vehicle speed is smaller for the first few seconds. Keeping the initial segment of the first input would have achieved an even greater peak speed. The problem is that a structure inherent in the problem is not explicitly communicated to the optimization algorithm. The relevant structure is, specifically, time monotonicity: an input prefix that achieves smaller robustness (greater peak speed) is more likely to extend to a full falsifying input signal. Although it is possible that a stochastic optimization algorithm somehow learns time monotonicity, it is not guaranteed to: the structure of the input spaces (the horizontal axis of the hill-climbing figure and the square of the Nelder-Mead figure) does not explicitly reflect structures, such as time monotonicity, that are shared by instances of falsification problems. We find that many realistic instances approximately satisfy this property; we discuss time monotonicity further in the context of our experiments.

Falsification with time staging. Our proposal, time staging, consists of incrementally synthesizing a candidate input signal. In the first stage, we run a falsification algorithm to try to find an initial input segment that achieves low robustness, i.e. high peak speed. The first stage comprises running samplings as in the conventional procedure; in this process we gradually improve candidates for the initial input segment. Let us assume that the last candidate is the tentative best, achieving the smallest robustness. In the second stage, we continue by synthesizing the second input segment, running the falsification algorithm again; note that each stage is the whole iterated process of conventional falsification conducted anew. This way we continue, always starting each stage from the input segment that performed best in the previous stage, thus exploiting the time-causal structure.

Time staging is not difficult to implement; the challenge is in using it effectively. An immediate question is whether choosing the single best input segment per stage is the optimal approach. The current strategy favors exploitation over
exploration, and might miss a falsifying signal whose robustness must decrease slowly in the earlier segments and quickly in the latter segments. Indeed, we are working on an evolutionary variant of the algorithm in which multiple segments are passed from one stage to another, in order to maintain diversity and conduct exploration. That said, even with the current simple strategy of picking the single best segment, we observe significant performance enhancement in several falsification problems (see the experiments below).

We summarize the effect in terms of the sizes of the search spaces. Let I be the set of candidates for input segments and K the number of stages, so that the size of the set of whole input signals is |I| to the power K. Choosing one input segment per stage, the staged algorithm searches a space of size |I| in each stage, in contrast to the overall search space of size |I|^K. The reduction comes with the risk of missing falsifying input signals; our experimental results suggest the risk is worth taking. Moreover, we present theoretical conditions for the absence of this risk; they help users decide for which practical applications time staging is effective.

Falsification, formally

In this section we turn to a formal description and analysis of our algorithm; a review of existing works on falsification is given in the related work section.

System models. Let us formalize system models.

Definition (signal). Let T be a positive real. A signal with time horizon T is a function from [0, T] into a Euclidean space. Given signals w and w' with time horizons T and T', their concatenation w . w' is the signal with time horizon T + T' that follows w on [0, T] and then follows w', shifted by T, afterwards. Given 0 <= T1 < T2 <= T, the restriction of w to the interval [T1, T2] is the signal of time horizon T2 - T1 obtained by shifting that portion of w to start at time 0.

Definition (system model). A system model M is a function that takes an input signal and returns an output signal with the same time horizon; the common time horizon is arbitrary. Furthermore, we impose the following causality condition: for input signals that agree up to some time t, we require that the corresponding outputs also agree up to time t. Note that M(u . u') does not, in general, decompose into M(u) followed by something independent of u: feeding u can change the internal state of M. This motivates the following definition.

Definition (continuation). Let M be a system model and u a signal with time horizon T. The continuation of M after u is the system model that maps an input signal u' (of horizon T') to the restriction of M(u . u') to the interval [T, T + T'].

Signal temporal logic and its robust semantics. We review signal temporal logic (STL) and its robust semantics. Let Var be a set of variables; the variables stand for physical quantities, control modes, and so on.

Definition (syntax of STL). Atomic propositions are of the form f(x1, ..., xn) > 0, where f is a real-valued function on valuations of the variables. Formulas are atomic propositions, falsity, negations, conjunctions, and until formulas indexed by a closed interval. We omit the interval subscript of temporal operators when it is the whole nonnegative line. Other common connectives, as well as always and eventually, are introduced as abbreviations; atomic formulas such as f(x) bounded by a constant are also accommodated, using negation and a shifted function.

Definition (robust semantics).
Let w be an unbounded signal, i.e. a function from the nonnegative reals to valuations of the variables, and let w shifted by t denote the signal that starts at time t. Given an STL formula, we define its robustness on w, a value among the reals extended with plus and minus infinity, by induction on the formula, where infimums and supremums of real numbers are taken: the robustness of an atomic proposition f(x1, ..., xn) > 0 is the value of f at w(0); the robustness of falsity is minus infinity; negation flips the sign; conjunction takes the infimum of the robustness of the conjuncts; and the robustness of an until formula with interval I is the supremum, over t in I, of the infimum of the robustness of the second argument on the shift by t and the robustness of the first argument on all earlier shifts.

Intuitions and consequences. The robustness of an atomic formula stands for the vertical margin of the signal at that time; a negative robustness value indicates how far the formula is from being true. The robustness of the eventually modality, applied for instance to x > 0, is computed as the supremum over time of the value of x. The original semantics of STL is Boolean, given as a binary satisfaction relation between signals and formulas; the robust semantics refines the Boolean one in the sense that strictly positive robustness implies satisfaction and strictly negative robustness implies violation. Falsification via robust semantics hinges on this refinement. Although the definitions so far are stated for unbounded signals, the robust semantics, as well as Boolean satisfaction, allows a straightforward adaptation to bounded signals; see the appendix.

Falsification solvers. In the next definition, the prototype of a score function is, given an STL specification, the robustness with respect to it; the generality of allowing other score functions is needed later.

Definition (falsification solver). A falsification solver is a stochastic algorithm falsify that takes as input a system model M, a score function S that takes an output signal and returns a score, and a time horizon, and that returns an input signal of that horizon. An invocation of the solver is called a falsification trial; it is successful if the returned signal u satisfies S(M(u)) < 0. Note that the returned signal can differ in every trial, since falsify is a stochastic algorithm.

Algorithm 1 (internal structure of a falsification solver falsify). Require: a system model M; a score function S.
  L := the empty list (it collects candidates)
  until InitialSamplingDone:
    u is sampled following some fixed recipe; L := cons(u, L)
  until OptimizationSamplingDone:
    u is sampled so that S(M(u)) becomes small, based on the previous samples in L; L := cons(u, L)
  return the candidate in L with the smallest score; the trial is successful if that score is negative.

We assume that the internal structure of the solver falsify follows the scheme of Algorithm 1. It consists of two phases: in the first, the initial sampling phase, candidates are collected regardless of the system model and the score function; in the second, the optimization sampling phase, a stochastic optimization algorithm is employed to sample candidates that are likely to make the score small. Implementations of falsification solvers such as Breach take Simulink models as system models. As input signal candidates the tools focus on piecewise constant signals, represented by sequences of real numbers; the number of control points of such a signal corresponds, in our staged algorithm, to the number of stages.
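The inductive robust semantics above can be sketched as a small evaluator on discrete-time signals. This is a minimal sketch of a fragment (atoms, negation, conjunction, always, eventually, over finite sampled signals), with our own encoding of formulas as nested tuples, not the paper's implementation:

```python
import math

def horizon(w):
    """Number of sampled time points of a discrete-time signal
    (a dict mapping variable names to equal-length sample lists)."""
    return len(next(iter(w.values())))

def rho(w, phi, t=0):
    """Robustness of formula phi on signal w at time t, mirroring the
    inductive definition: min for conjunction/always, max for eventually."""
    op = phi[0]
    if op == "atom":                       # ("atom", f), f: state -> R
        _, f = phi
        return f({x: w[x][t] for x in w})
    if op == "not":
        return -rho(w, phi[1], t)
    if op == "and":
        return min(rho(w, phi[1], t), rho(w, phi[2], t))
    if op in ("always", "eventually"):     # (op, (a, b), psi)
        _, (a, b), psi = phi
        ts = range(t + a, min(t + b, horizon(w) - 1) + 1)
        vals = (rho(w, psi, s) for s in ts)
        return min(vals, default=math.inf) if op == "always" \
            else max(vals, default=-math.inf)
    raise ValueError(op)
```

A negative result witnesses violation, e.g. the specification "always speed < 120" on a trace whose speed peaks at 130 evaluates to -10, the margin by which the ceiling is exceeded.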
The tools offer multiple stochastic optimization algorithms for the optimization sampling phase, including global Nelder-Mead and simulated annealing. The initial sampling phase is mostly random sampling; additionally, in Breach's global Nelder-Mead, corner samples are added to the candidate list, and the number of corner samples grows exponentially as the number of control points grows.

Time staging for falsification

Definition (staged deployment of a falsification solver). Let M be a system model, phi an STL formula, T a time horizon, and K a parameter, the number of time stages. The staged deployment of a falsification solver falsify is the procedure of Algorithm 2, where the model passed to falsify is the continuation of M after the input prefix obtained so far, and the score function is induced by phi and that prefix. The whole procedure is stochastic, since falsify is. An invocation is again called a falsification trial, and it is successful if the returned signal u makes the robustness of M(u) with respect to phi negative.

Algorithm 2 (staged deployment of a falsification solver). Require: a falsification solver falsify; a system model M; an STL formula phi.
  u := the empty signal (the input prefix obtained so far; we start with it empty)
  for each of the K stages, synthesizing one input segment:
    u_i := falsify(continuation of M after u, score function induced by phi and u, segment length)
    u := u . u_i (concatenate)
  return u; the falsification trial is successful if the robustness of M(u) with respect to phi is negative.

A falsification trial, i.e. an invocation of Algorithm 2, is an iterative process of sampling; to be likely to obtain a falsifying input signal, one runs multiple falsification trials. Within one trial of K stages, an important question is how to distribute the available time among the different stages. A simple strategy is to fix the number of samples in each phase of Algorithm 1: the predicates InitialSamplingDone and OptimizationSamplingDone compare the sample counts of the two phases against constants, bounded by a total budget. An adaptive strategy is also implemented for the optimization sampling phase: continue while not stuck, and stop when stuck, i.e. stop sampling when we stop seeing progress, where being stuck means exceeding a fixed maximum number of consecutive samplings without reducing the robustness. A similar strategy of adaptively choosing the number of samples has been introduced before. Random sampling is used in the initial sampling phase.

Towards an efficient implementation. The key to speeding up Algorithm 2 is in how the previous input prefix u is handled, both in the model and in the score function; we discuss the two directions in turn. Note that the suggested enhancements are not currently used in our implementation, for performance reasons explained below.

Continuation of models. Falsification has a wide application domain, since it does not require the model in a concrete form: the model can vary over programs, Simulink models, or even systems with hardware components (hardware-in-the-loop simulation).
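The staged deployment of Algorithm 2 can be sketched on a toy problem. This is an illustrative sketch with our own names and constants, not the paper's code: the model is a leaky integrator of the throttle, the specification is "always speed < 120", and each stage runs a trivial random-search "solver" over one two-point segment, keeping the best prefix.

```python
import random

def simulate(u):
    """Toy model: speed integrates throttle with drag (constants ours)."""
    speed, trace = 0.0, []
    for x in u:
        speed += 0.5 * x - 0.05 * speed
        trace.append(speed)
    return trace

def score(u):
    """Robustness of 'always speed < 120' on the trace of input u."""
    return min(120.0 - s for s in simulate(u)) if u else 120.0

def staged_falsify(stages=5, seg_len=2, samples_per_stage=50, seed=0):
    """One falsification run per stage: sample candidate segments,
    score the whole prefix plus segment, keep the best, and move on."""
    rng = random.Random(seed)
    prefix = []
    for _ in range(stages):
        best_seg, best_r = None, float("inf")
        for _ in range(samples_per_stage):
            seg = [rng.uniform(0.0, 100.0) for _ in range(seg_len)]
            r = score(prefix + seg)        # score of the continuation
            if r < best_r:
                best_seg, best_r = seg, r
        prefix += best_seg                 # exploit the best prefix
    return prefix, score(prefix)
```

In this toy setting the model is time-monotone, so greedily keeping the best prefix per stage loses nothing; the paper's sufficient conditions in the next section delimit when that is guaranteed in general.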
For big models, the bottleneck of falsification usually lies in simulation, i.e. computing the output for a given input signal. Therefore, while using the definition of the continuation directly is in principle a good strategy, it requires simulation over the whole prefix; this can be avoided if one can simulate the continuation directly. In Simulink this is possible by saving a snapshot of the model state after simulating the prefix, via the SaveFinalState model configuration parameter. In our implementation, though, the overhead of saving and loading snapshots is currently greater than the cost of simulating the prefix; the balance may become different once we figure out a less expensive way to use snapshots, or once we study more complex models.

Derivative of formulas. The situation is similar for the score function: using its definition directly requires scanning the prefix repeatedly. Desired is a syntactic presentation, a formula derived from the given STL formula, that would allow one to utilize the available algorithms for computing robustness values.

Definition (derivative of flat STL formulas). Let w be a signal and phi an STL formula that is flat, in the sense of having no nested temporal operators. The derivative of phi by w is defined inductively. For a temporal subformula with interval I, the interval is replaced by the one obtained by shifting its endpoints by the horizon of w, and a component of the form c_v is added, where the notation c_v abbreviates an atomic formula that can be thought of as the constant function with value v. We use the fact that flat formulas split: the evaluation of a formula on the concatenation of a signal prefix and a continuation decomposes into a component that depends only on the prefix, in which the constant injects the robustness seen so far, and a residual formula evaluated on the continuation; recall, for instance, that the always operator takes an infimum over time. It follows:

Proposition. Let w be a signal and phi a flat STL formula. Then, for every continuation, the robustness of phi on the concatenated signal equals the robustness of the derivative of phi by w on the continuation. The proof is in the appendix.

We use derivatives of timed specifications; similar constructions are also found in settings different from ours, though for the Boolean semantics rather than the quantitative one. The restriction to flat formulas comes mainly from this difference; lifting the flatness restriction seems hard.

Sufficient conditions for time staging

We present theoretical analyses of the performance of time staging that indicate the class of systems to which the approach applies: we give sufficient conditions under which the approach is guaranteed to work. It should be noted, however, that it is not necessary for a concrete system to satisfy the conditions strictly, as they are rather restrictive. Nevertheless, we believe users with expert domain knowledge can judge whether their models satisfy the conditions approximately; in this way our results provide users with rules of thumb, as discussed at the end of this section. The potential performance advantage of time staging comes from the reduction of search spaces.
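The splitting used in the derivative construction can be checked concretely on discrete-time signals for a flat "always" formula: the robustness on a concatenated signal equals the minimum of a constant (the robustness already accumulated on the prefix) and the robustness of the residual on the continuation. A minimal sketch, with function names ours, not the paper's:

```python
def rho_always_lt(sig, c):
    """Robustness of 'always (y < c)' on a list of samples of y."""
    return min(c - y for y in sig)

def derivative_always_lt(prefix, c):
    """Residual evaluator for 'always (y < c)' after a prefix: scores
    a continuation without rescanning the prefix, by remembering only
    the constant robustness accumulated so far."""
    seen = rho_always_lt(prefix, c)      # the injected constant
    return lambda cont: min(seen, rho_always_lt(cont, c))
```

Since the always operator takes an infimum, the minimum over the whole concatenation factors exactly through the stored constant, which is what lets each stage of Algorithm 2 score continuations cheaply.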
Choosing from the set I of potential input segments at each of the K stages, the staged search space is proportional to K times |I|, instead of |I| to the power K; the advantage comes with the risk of missing error input signals. The following basic condition, which we call incremental falsification, ensures the absence of this risk. Precisely, we decompose the best input signal into its first-stage choice and the remainder: the entire falsification problem (the left-hand side below) is solved by greedy optimization of the initial segment (the inner argmin) followed by optimization of the continuation (the outer min), where the choices range over input segments:

  min over segments u1, ..., uK of the robustness of M(u1 . ... . uK)
    = min over segments u2, ..., uK of the robustness of M(u* . u2 . ... . uK),
  where u* is a first segment achieving the inner minimum.

Algorithm 2 repeatedly unfolds this equality, picking a constant time horizon per stage, with K being the number of stages. The rest of this section is devoted to the search for concrete sufficient conditions.

Monotone systems and ceiling specifications. We formalize the time monotonicity property of the overview section; that it implies incremental falsification is easily proved.

Definition (time-monotone falsification problem). A system model M and an STL formula are said to constitute a time-monotone falsification problem if, for input prefixes of equal horizon, whenever one prefix achieves robustness at most that of another, the best robustness achievable by extending the first is at most the best achievable by extending the second.

We investigate yet more concrete conditions that ensure time monotonicity. The following condition on system models is satisfied by the overview example.

Definition (monotone system; ceiling specification). Let y be a variable of the output of a system model M. The model M is said to be monotone in y if a pointwise greater input signal implies a pointwise greater output in y. An STL formula stating that y always stays below a constant, for y a variable and the bound a constant, is called a ceiling formula.

One might speculate that a monotone system and a ceiling specification, like those of the overview example, constitute a time-monotone falsification problem. The speculation is, unfortunately, not true: a counterexample is easily constructed using a model whose output signal is not increasing over time. We instead show the following weaker property.

Definition (truncated time monotonicity). A system model M and an STL formula constitute a truncated time-monotone falsification problem if, whenever an input prefix achieves robustness at most that of a second prefix, then for every extension of the second prefix there exists an extension of the first whose overall robustness is at most that of the former.

Proposition. Let the model M be monotone in y. Then M and the ceiling formula over y constitute a truncated time-monotone falsification problem. The proof, in the appendix, constructs a concrete choice of the witness in the definition: specifically, the instant at which the robustness attains its minimum, which in our scenario is the instant at which the vehicle speed peaks.

Note that truncated time monotonicity does not guarantee incremental falsification per se; it implies that our current, rigid time staging is not necessarily optimal. These theoretical considerations suggest a potential improvement of the staged procedure, namely an adaptive choice of stages, which is left as future work.

Stateless systems and
reachability formula system model said stateless input signals stl formula called reachability formula note stateless sufficient necessary condition statelessness requires insensitivity previous input prefixes stateless system still sensitive time proposition let stateless system reachability specification satisfy incremental falsification property proof easy typical situation would appeal prop specification hard falsify close system already stable state behavior depend much happened transient phase experiments demonstrate time staging drastically improve performance settings ernst hasuo zhang sedwards table experimental results column shows many falsification trials succeeded average runtime vmin vmax vmin vmax easy vmin vmax hard vmin easy vmin mid vmin hard specification abstract fuel control model ref init stable starred numbers indicate gnm deterministic trials yield result model automatic transmission spec easy hard easy algorithm time time time time time gnm mid time abst fuel ctrl hard init stable time time time corner samples global reduction search spaces analogue number corner samples breach global lines algorithm see last paragraph originally number corner samples number control points number input values introducing time stages total number corner samples reduced experiments compare success rate time consumption proposed method benchmarks use automotive simulink models commonly used falsification literature specifications chosen taking deliberations account namely ceiling specifications def including example reachability specification def combination thereof base line algorithm implemented breach methods proposed implemented top breach algorithm adaptive strategy one described def three algorithms plain combined different optimization solvers simulated annealing global gnm obtaining total nine configurations results table indicate success rate runtime performance significantly improved time staging often finding counterexamples breach fails columns 
for the hard and init specifications). Furthermore, we see that the adaptive algorithm does not necessarily lead to a higher success rate in comparison with the plain staged one; where it does, it gives yet another runtime performance improvement. However, as discussed in detail below, there is no overall best algorithm: time staging affects the optimization algorithms differently, depending on the problem.

Benchmarks. The automatic transmission Simulink model is a proposed benchmark for falsification. Its input values are the throttle and the brake; its outputs are the car speed, the engine rotation, and the selected gear. For this model we consider five specifications. The first two are ceiling ones. The first specification is the one of the overview example: it states that the speed always stays below a threshold; the property is easily falsified by a large throttle. The second states that it is not possible to drive slowly in a high gear: a falsifying trajectory must first let the speed reach the level at which the high gear is engaged and subsequently roll out until the speed falls below a threshold, and this latter part of the trajectory can be seen as falsifying a ceiling specification. Note that the property is interesting in that the robustness provides no guidance unless the high gear has been entered by the system. The third specification is a reachability problem with bounds vmin and vmax: it encodes the search for a trajectory that keeps the speed between a lower and an upper bound. The falsification problem consists of two parts, namely hitting the speed interval precisely after an initial acceleration within the simulated time, and then maintaining the correct speed until the time horizon; this suggests a natural decomposition of the problem in time, which is indeed achieved by separating the two aspects. The fourth specification, with a bound vmin, expresses that the speed reaches vmin without the engine rotation exceeding a threshold; unlike the specification just mentioned, to falsify it a trajectory must be found that reaches the speed early while the engine rotation stays low, and the difficulty increases with a higher vmin and a lower rotation threshold, respectively. The fifth specification represents a mixture of ceiling and reachability specifications.

The second system model, abstract fuel control, takes two input values, the pedal angle and the engine speed, and outputs the air-fuel ratio, which influences the fuel efficiency and performance of the car; the value is expected to stay close to a reference value ref. According to the setting that corresponds to the normal mode, a constant ref is used. The specification states that the ratio must not deviate from an acceptable range around ref for longer than a given number of seconds. We evaluated the specification with two parameter sets: the initial period with a larger error margin, and the stable period with a smaller margin (see the table for the parameter values).

Experimental setup and results.
For the automatic transmission model, the input signals are piecewise constant with a fixed number of control points over the time horizon. The parameters are outlined as follows: a maximum number of samplings per plain falsification trial, divided between initial and optimization samplings; in the combined, staged trials we make the number of stages coincide with the number of control points, and analogously set the sampling budget per stage so that the stages result in the same overall number of samplings; the adaptive algorithm uses a stall threshold per stage. In the experiments on the abstract fuel control model we ran with a stall threshold N_max over the full time horizon of the model, used three and five stages respectively for the initial and stable specifications (coinciding with the numbers of control points, see the table), and had all algorithms conduct the same number of samplings per stage. The experiments ran Breach on MATLAB, on an Amazon instance with multi-core Intel Xeon CPUs and ample main memory; we did not, however, use the opportunity to parallelize, and the times reported are of the same order of magnitude as on a modern desktop workstation.

The results are shown in the table, grouped by the underlying stochastic optimization algorithm: CMA-ES, simulated annealing, and global Nelder-Mead (GNM). Within each group we compare the plain, unstaged Breach algorithm with the staged and adaptive ones. We compare average runtimes (lower is better) and success rates (higher is better), aggregated over the falsification trials per configuration; good results that deserve attention are highlighted in bold. Note that the implementation of the global Nelder-Mead algorithm in Breach uses a deterministic source (Halton sequences), which implies that whether GNM finds a counterexample is consistent across trials; such entries are marked with an asterisk.

Discussion. Focusing on the automatic transmission model first, we see that staging works well on the first ceiling specification, although plain GNM performs even better, supposedly because it uses corner samples; time staging introduces overhead for GNM, as each stage is optimized individually. In contrast, simulated annealing benefits from time staging on the two ceiling specifications; we presume that, since it emphasizes exploration, it benefits from the exploitation added by time staging. The second specification is slightly more complex: until the high gear is reached there is no guidance, as the robustness semantics masks the quantitative information, and hence falsifying the property needs some luck in the collection of initial samples, which the CMA-ES algorithm apparently exploits (see the top column group of the table). Considering all the algorithms, a benefit of time staging, GNM included, is that the exploitation of time causality prevents good trajectory prefixes from being discarded accidentally before the
required gear is reached (see the figure). The results were evaluated for two different choices of parameters. The harder instance proved difficult for the algorithms, which is likely attributable to the flattening of the search space by its size; it is evidently harder than the previous ones, and time staging improves performance here, a general tendency in the hard cases.

Results for the abstract fuel control model. The last two columns of the table show that the staged algorithms boost the ability to falsify rare events. The specification requiring that the initial stage is still unstable (init) can be considered a rare event, since three of the algorithms failed to falsify it; only GNM managed to find error inputs. In the last column, the stable period, both the success rate and the run time of GNM are remarkably and significantly improved. The overall performance of all algorithms suffers from tightening the bounds, but the staged versions are able to find falsifying trajectories with good success rates while exhibiting significantly shorter runtimes.

Related work. Falsification is a special case of testing, where considerable research effort has been made towards coverage. The benefit of coverage for falsification guarantees is twofold. Firstly, it can indicate confidence in correctness in case no counterexamples are found: paired with sound robustness estimates of simulations, one can cover an infinite parameter space with finitely many simulations, and a recent tool computes approximations of reachable states using this approach. Secondly, coverage can be utilized to better balance exploration and exploitation: stochastic optimization algorithms are called in an interleaved manner where coverage guides exploration. An approach based on rapidly exploring random trees puts the emphasis on exploration, achieving high coverage of the state space; in that algorithm, optimization plays only a supplementary role. Compared with these works, our current results are in an orthogonal direction, utilizing time causality to enhance exploitation, and our approach is an enhancement of hybrid system falsification rather than a new search strategy. Falsification by multiple shooting can be seen as a generalization of RRTs: it consists of an upper layer that searches for an abstract error trace given by a succession of cells, and a lower layer in which the abstract error trace is concretized to an actual error trace by picking points in the cells. This approach can discover falsifying traces backwards from the goal region, but it needs to concatenate partial traces and can fail when there are potential gaps between them; furthermore, it is unclear how to extend it to general STL specifications. A survey of simulation-based approaches is given by
Kapinski et al. Monotonicity is exploited in different ways in falsification. Robust neighborhood descent (RED) searches for trajectories incrementally, restarting the search at points of low robustness; the descent computation of RED assumes explicit derivatives of the dynamics and can guarantee convergence to local minima. In principle it relies on the same idea as the one underlying our proposition, and our experiments indicate that this principle is useful for optimization; RED is paired with simulated annealing to combine local and global search and thereby account for exploration, whereas in the present work this remains to be done as future work. Hoxha et al. mine parameters of specifications that are satisfied or falsified by a system; they show that the robust semantics of formulas is monotone in such parameters and use this fact to tighten them, which is orthogonal to our use of monotonicity of the system. Kim et al. use an idea similar to our partition of specifications into upper bounds and lower ceilings; however, instead of optimization, they use exhaustive exploration of the input space, which in turn requires the system dynamics to be monotone in the choice of input at each time point. This is different from our definition of time monotonicity, which aims at incrementally composing good partial choices. Recent work introduces a compositional falsification framework focusing on systems that include components performing tasks such as image recognition. Our current work aims in an orthogonal direction, finding rare counterexamples, and we are interested in a combination of the two sets of results, given the increasingly important roles of learning algorithms in CPS.

Conclusions and future work. We introduced and evaluated the idea of time staging to enhance the falsification of hybrid systems. The proposed method emphasizes exploitation over exploration as part of stochastic optimization. No single algorithm fits every problem, a consequence of the no-free-lunch theorem; the variety of methods at our disposal permits the user of a system to choose the one suitable for the problem at hand. We have shown that the proposed approach is a good fit for problems that exhibit suitable structures, where it significantly outperforms the other algorithms. Two obvious directions for future work have been pointed out already. First, instead of picking the best trajectory at each stage, it might be beneficial to retain several, potentially diverse, ones in the spirit of evolutionary algorithms; for example, it would be interesting to explore the space between our work on the one hand and rapidly exploring random trees on the other. Another idea is to discover the time stages adaptively: in the discussion of our proposition and in the experiments presented, we chose to set uniformly fixed stages, which runs the risk of being either too coarse-grained, missing a falsifying input, or too fine-grained, wasting analysis time. Finally, another future direction is to explore variations of the robust semantics that mitigate the effect of discrete propositions like the gear example, for instance by using averaging modalities so that the semantics preserves information from the different subformulas.

Acknowledgement. This work is supported by the ERATO HASUO Metamathematics for Systems Design Project, Japan Science and Technology Agency.

References
- Abbas, Winn, Fainekos, Julius. Functional gradient descent method for metric temporal logic specifications. In American Control Conference, ACC. IEEE.
- Abbas. Falsification and conformance testing of systems. Arizona State University.
- Adimoolam, Dang, Kapinski, Jin. Classification and falsification for embedded control systems. In Majumdar, Kuncak (eds.), Computer Aided Verification, Int. CAV, LNCS. Springer.
- Akazaki, Hasuo. Time robustness in MTL and expressivity in hybrid system falsification. In Kroening, Pasareanu (eds.), Computer Aided Verification, Int. CAV, LNCS. Springer.
- Annpureddy, Liu, Fainekos, Sankaranarayanan. A tool for temporal logic falsification of hybrid systems. In Abdulla, Leino (eds.), Tools and Algorithms for the Construction and Analysis of Systems, Int. TACAS, LNCS. Springer.
- Auger, Hansen. A restart CMA evolution strategy with increasing population size. In Proceedings of the IEEE Congress on Evolutionary Computation, CEC. IEEE.
- Deshmukh, Jin, Kapinski, Maler. Stochastic local search for falsification of hybrid systems. In Finkbeiner, Zhang (eds.), Automated Technology for Verification and Analysis, Int. ATVA, LNCS. Springer.
- Donzé. Breach, a toolbox for verification and parameter synthesis of hybrid systems. In Touili, Cook, Jackson (eds.), Computer Aided Verification, Int. CAV, LNCS. Springer.
- Donzé, Ferrère, Maler. Efficient robust monitoring for STL. In Sharygina, Veith (eds.), Computer Aided Verification, Int. CAV, LNCS. Springer.
- Donzé, Maler. Robust satisfaction of temporal logic over real-valued signals. In Chatterjee, Henzinger (eds.), Formal Modeling and Analysis of Timed Systems, Int. FORMATS, LNCS. Springer.
- Dreossi, Dang, Kapinski, Jin, Deshmukh. Efficient guiding strategies for testing of
temporal properties of hybrid systems. In Havelund, Holzmann, Joshi (eds.), NASA Formal Methods, Int. NFM, LNCS. Springer.
- Dreossi, Donzé, Seshia. Compositional falsification of systems with machine learning components. In Barrett, Davies, Kahsai (eds.), NASA Formal Methods, Int. NFM, LNCS.
- Duggirala, Mitra, Viswanathan, Potok. A verification tool for Stateflow models. In Baier, Tinelli (eds.), Tools and Algorithms for the Construction and Analysis of Systems, Int. TACAS, LNCS. Springer.
- Fainekos, Pappas. Robustness of temporal logic specifications for continuous-time signals. Theor. Comput. Sci.
- Hoxha, Abbas, Fainekos. Benchmarks for temporal logic requirements for automotive systems. In Frehse, Althoff (eds.), Int. Workshops on Applied Verification for Continuous and Hybrid Systems, ARCH@CPSWeek, EPiC Series in Computing. EasyChair.
- Hoxha, Dokhanchi, Fainekos. Mining parametric temporal logic properties in model-based design for cyber-physical systems. STTT.
- Jin, Deshmukh, Kapinski, Ueda, Butts. Powertrain control verification benchmark. In Fränzle, Lygeros (eds.), International Conference on Hybrid Systems: Computation and Control (part of CPS Week), HSCC, Berlin, Germany, April. ACM.
- Kapinski, Deshmukh, Jin, Ito, Butts. Simulation-based approaches for verification of embedded control systems: an overview of traditional and advanced modeling, testing, and verification techniques. IEEE Control Systems.
- Kim, Arcak, Seshia. Directed specifications and assumption mining for monotone dynamical systems. In Abate, Fainekos (eds.), Proceedings of the International Conference on Hybrid Systems: Computation and Control, HSCC, Vienna, Austria, April. ACM.
- Kuřátko, Ratschan. Combined global and local search for the falsification of hybrid systems. In Legay, Bozga (eds.), Formal Modeling and Analysis of Timed Systems, Int. FORMATS, LNCS. Springer.
- Maler, Nickovic. Monitoring temporal properties of continuous signals. In Lakhnech, Yovine (eds.), Formal Techniques, Modelling and Analysis of Timed Systems, Joint Int. Confs. FORMATS/FTRTFT, LNCS. Springer.
- Ulus, Ferrère, Asarin, Maler. Online timed pattern matching using derivatives. In Chechik, Raskin (eds.), Tools and
Algorithms for the Construction and Analysis of Systems, Int. TACAS, LNCS. Springer.
- Wolpert, Macready. No free lunch theorems for optimization. IEEE Trans. Evolutionary Computation.
- Zutshi, Deshmukh, Sankaranarayanan, Kapinski. Multiple shooting falsification for hybrid systems. In Mitra, Reineke (eds.), International Conference on Embedded Software, EMSOFT, New Delhi, India, October. ACM.

Appendix: STL semantics on signals.

A. Definition of robust semantics on signals. Let w be a time-bounded signal and φ an STL formula. We define the robustness ⟦w, φ⟧^T of w with respect to φ by the following induction, where T is the time horizon and the superscript annotation w^t designates the signal w shifted by t:

  ⟦w, f(x) > 0⟧^T = f(w(0))
  ⟦w, ¬φ⟧^T = −⟦w, φ⟧^T
  ⟦w, φ₁ ∧ φ₂⟧^T = min(⟦w, φ₁⟧^T, ⟦w, φ₂⟧^T)
  ⟦w, φ₁ U_I φ₂⟧^T = sup_{t ∈ I ∩ [0, T]} min(⟦w^t, φ₂⟧, inf_{t' ∈ [0, t)} ⟦w^{t'}, φ₁⟧)

The Boolean semantics, found in the literature, allows a similar adaptation to time-bounded signals.

B. Brzozowski derivatives of flat STL formulas. In a falsification procedure we often encounter the following situation: an STL formula φ and a signal v are fixed, and we must compute the robustness ⟦v · w, φ⟧ for a number of different signals w. To aid this computation, a natural idea is to use a syntactic construct, the Brzozowski derivative, made compatible with the robust semantics in the sense of reducing the computation of the left-hand side ⟦v · w, φ⟧ to the right-hand side ⟦w, ∂_v φ⟧. Similar uses of derivatives are found in other settings, with the difference that there the Boolean semantics is used, while we use the quantitative robust semantics. In fact, our definition of derivatives in this section focuses on flat formulas, those free of nested modalities; the restriction is mandated by the quantitative semantics, as the proof later suggests. The definitions and results in this section are new, to the best of our knowledge. We need the following extension of STL syntax.

Definition (extended STL). We extend the syntax of STL with atomic propositions c for real numbers c; the robust semantics is extended accordingly. Intuitively, the atomic proposition c constantly returns the robustness value c.

Definition (derivative). Let v be a signal and φ an extended STL formula. We define the derivative ∂_v φ of φ by induction on φ: subformulas that are evaluated entirely over v are replaced by constant atomic propositions recording their robustness over v, and in the temporal clause the interval I − |v| is obtained by shifting both endpoints of I earlier by the length of v.

Definition (flat STL formula). An STL formula is flat if it has no nested temporal modal operators; that is, in every subformula φ₁ U_I φ₂, neither φ₁ nor φ₂ contains U.

Proposition. Let v be a signal and φ a flat STL formula. Then ⟦v · w, φ⟧ = ⟦w, ∂_v φ⟧.

Proof. By induction on the construction of φ. Most of the equalities follow from the definitions; the nontrivial case is that of U, where we use the following facts: firstly, the subformula in question is a formula without temporal operators; secondly, the domain over which the suprema and infima range splits at |v|. Note that the flatness
assumption is crucially used in this proof step; modifying the definition in order to accommodate nested modalities seems hard, as an analysis of this proof step shows.

C. Omitted proofs. Proof of the proposition. By the definitions applied to the input signal, and therefore by the assumption, the claim expands so that the first infimum is taken over a compact domain; therefore there exists a point that achieves the infimum. Let ε be a real number; the following is then obvious, and another immediate consequence is derived using causality (Def.). Our goal is to show the stated inequality, for which causality (Def.) is heavily used: for example, causality is used in deriving one step, and in another step we applied the monotonicity of signals; note that monotonicity is what allows this final step.
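The robust semantics above can be made concrete with a small executable sketch. The following Python snippet is ours, not the paper's code: it evaluates the robustness of the flat "globally" and "eventually" modalities over a discretely sampled signal, with the sup/inf of the continuous definition becoming max/min over samples.

```python
# Sketch of quantitative (robust) STL semantics over a sampled signal.
# Negative robustness means the property is falsified; names are illustrative.

def rho_atomic(f, w, t):
    """Robustness of the atomic proposition f(x) > 0 at sample index t."""
    return f(w[t])

def rho_eventually(f, w, t=0):
    """Robustness of F(f(x) > 0): supremum over future samples."""
    return max(rho_atomic(f, w, u) for u in range(t, len(w)))

def rho_globally(f, w, t=0):
    """Robustness of G(f(x) > 0): infimum over future samples."""
    return min(rho_atomic(f, w, u) for u in range(t, len(w)))

# A signal with one spike above 120 falsifies G(x < 120):
w = [100, 110, 125, 90]
f = lambda x: 120 - x            # f(x) > 0  iff  x < 120
assert rho_globally(f, w) == -5  # negative robustness: falsified
assert rho_eventually(f, w) == 30
```

The falsification task is then to drive `rho_globally` below zero by choosing the input that produced `w`.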
Vector Clocks in Coq: An Experience Report
Christopher Meiklejohn
Basho Technologies
cmeiklejohn@basho.com

Abstract. Authoring code in one language and performing extraction to another language is problematic due to the required encoding of type information in the constructors of exported objects. This report documents the process of implementing vector clocks in the Coq proof assistant and extracting them for use in the distributed data store Riak. The report focuses on the technical challenges of using Core Erlang code extracted from the proof assistant in an Erlang application, as opposed to the verification of the model: the required use of a subset of the source language due to inherent differences between the implementation and destination languages, for example extracting functions that rely on currying to a language that does not support currying, and the required use of an adapter layer to perform translations between the native data structures of the destination language and the exported data structures of the source language.

Categories and subject descriptors: programming techniques, concurrent programming; programming techniques, language constructs and features, abstract data types, patterns, control structures. General terms: algorithms, design, reliability, verification.

The main contributions of this work are the following: a formal Coq module providing an implementation of the vector clocks used by the Dynamo-inspired Erlang data store Riak; an extracted Erlang module providing that implementation of vector clocks, authored in Core Erlang and extracted from the Coq proof assistant; and an Erlang module of glue code that provides helpers for using it.

Introduction. In a distributed system where data structures are replicated and shared between multiple communicating nodes, it is highly desirable to have a method of asserting that certain invariants are preserved as the objects are manipulated. In industry, one of the major approaches is the use of testing utilities such as QuickCheck to verify that certain properties hold true as multiple operations are applied to the data structures. However, given that the number of possible executions grows as the number of nodes and processes grows, exhausting the state space becomes difficult. An alternative approach to this type of verification is the use of an interactive theorem prover. We explore vvclocks, an Erlang library that aims to provide a formally verified implementation of vector clocks for use in Erlang applications. We describe the process of building the implementation described in the Coq proof assistant.
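As an illustration of the semantics such a library must preserve, the following is a minimal Python sketch of a vector clock as a list of (actor, count, timestamp) triples, with increment and the descends (ancestry) check. The names mirror the riak_core API, but the code is ours, not the library's.

```python
# Illustrative model of vector-clock semantics: a clock is a triple
# (actor, count, timestamp); a vector clock is a list of such triples.

def fresh():
    """An empty vector clock."""
    return []

def increment(actor, vclock, now=0):
    """Bump the count for `actor`, adding it if absent."""
    rest = [c for c in vclock if c[0] != actor]
    for (a, count, _) in vclock:
        if a == actor:
            return [(actor, count + 1, now)] + rest
    return [(actor, 1, now)] + rest

def descends(va, vb):
    """True iff va dominates vb: each counter in vb is <= its counter in va."""
    counts_a = {a: c for (a, c, _) in va}
    return all(counts_a.get(a, 0) >= c for (a, c, _) in vb)

v1 = increment('a', fresh())
v2 = increment('b', v1)
assert descends(v2, v1)      # v2 causally follows v1
assert not descends(v1, v2)  # but not vice versa: the writes would conflict
```

Two clocks where neither descends from the other represent concurrent writes, the case in which Riak must keep both values.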
We walk through the process of using Coq's code extraction utilities to generate Core Erlang code. vvclocks leverages the Verlang project, an extension to Coq providing extraction to Core Erlang, written by Tim Carstens. Given the differences between the source and target languages, we outline the process of adapting the generated Core Erlang and the Coq source so that the result properly executes and compiles in an Erlang environment. To verify the applicability of the implementation, we replace the vector clock implementation of the open source data store Riak: we explore the process of writing an adapter to translate between base Erlang data structures and the data structures of the extracted implementation, adapting the existing test suite so that it passes, as well as running the Riak data store with the newly extracted implementation of vector clocks. Based on this experience, we identified a series of issues that makes this approach not immediately viable for use in production systems. These issues are the following: having to perform type conversion between native Erlang types and Coq data structures, and the details of the extraction process, which are difficult and require manual modifications of the extracted source and of the Coq implementations.

Goals and scope. We do not explore the verification of the vector clock model itself, specifically proofs, which are tied closely to the implementation; the data structures changed extensively during development. This research also does not focus on using an efficient implementation of the data structures: we found that a simple representation, although inefficient, is much easier to debug.

Background. The verified vector clock implementation discussed here is authored in the Coq proof assistant; Coq's code extraction utilities are used to generate Core Erlang code using a supporting library called Verlang; and the Core Erlang code is compiled and used to replace one of the modules of the data store Riak. The following subsections provide background information on Coq, Core Erlang, Verlang, and vector clocks.

Coq. Coq is an interactive theorem prover that implements a dependently-typed functional programming language called Gallina. In addition, Coq provides a vernacular for stating function definitions and theorems, and a tactic language allowing the user to define custom proof strategies.

(* Less-or-equal comparison of natural numbers. *)
Fixpoint ble_nat (n m : nat) {struct n} : bool :=
  match n with
  | O => true
  | S n' => match m with
            | O => false
            | S m' => ble_nat n' m'
            end
  end.

(* Type definitions for actors, counts and timestamps. *)
Definition actor := nat.
Definition count := nat.
Definition timestamp := nat.

(* Model clocks as triples. *)
Definition clock :=
prod actor (prod count timestamp).

(* Model vector clocks as a list of clock triples. *)
Definition vclock := list clock.

[Figure: a fixpoint that computes less-or-equal on two natural numbers; the example is taken from Benjamin Pierce's Software Foundations.]

(* Create vector clocks. *)
Definition fresh : vclock := nil.

[Figure: types and specifications for vector clocks in Coq.]

The generated Core Erlang function computing less-or-equal on two natural numbers, modeled using Peano numbers (note the fully qualified recursive call):

'ble_nat'/2 =
  fun (_n, _m) ->
    case _n of
      {'O'} -> {'True'}
      {'S', _n1} ->
        case _m of
          {'O'} -> {'False'}
          {'S', _m1} -> call 'vvclock':'ble_nat'(_n1, _m1)
        end
    end

In summary: currying is not supported, so Core Erlang code that relies on it produces code that does not execute correctly; the extraction differentiates between intra- and inter-module calls, and the extracted code treats calls to functions as inter-module calls, fully qualifying them with the module name; and there is currently no known good way to handle the Erlang receive primitive.

Vector clocks. Vector clocks provide a mechanism for reasoning about the relationship between events in a distributed system, specifically for differentiating causal and concurrent events. Vector clocks are traditionally modeled as a list of pairs, each pair composed of an actor and an operation count. As events occur in the system, the entire vector clock is shipped with the event, and the actor responsible for an event increments its count. This allows a partial ordering to be established between the events in the system: by comparing the actors and counts of two vector clocks, we are able to reason about whether events happened concurrently or causally, calculating whether the vector clocks descend from or diverge from one another. In Riak, objects are replicated across a series of nodes, and each object is stored with a vector clock representing the actors that have modified it. When dealing with divergent values across replicas, the vector clocks are used to reason about whether one value is a descendant of the other, in which case it effectively replaces the other, or whether the objects were written concurrently, in which case both values need to be preserved.

In addition to its proof methods, Coq also provides the ability to perform extraction of certified programs to Scheme, Haskell and OCaml code via the vernacular; the figure above provides an example of a fixed point computation authored in Coq.

Core Erlang. Core Erlang is an intermediate representation of the Erlang programming language, designed so that programs are easy to operate on. The major design goals of Core Erlang are providing a simple grammar that can be converted to normal Erlang programs, as well as providing a
regular and concise structure that allows tools to operate on programs translated to Core Erlang. The semantics of Core Erlang are not the focus of this paper; a sample Core Erlang function appears in the figure above.

Verlang. Verlang is an experimental project by Tim Carstens that adds a Core Erlang extraction target to Coq, in an attempt to enable formally verified Erlang development with the Coq proof assistant. Verlang provides a mapping from the MiniML language, which serves as an intermediary translation step during extraction, to Core Erlang. A number of interesting problems had to be addressed in this translation, as outlined by Tim Carstens in the Verlang code repository.

The following subsections deal with the implementation of vector clocks in Coq, the problems with the extracted Core Erlang code, and the writing of the adapter layer in Erlang.

Vector clocks in Coq. The riak_core package inside Riak provides the Erlang vector clock implementation used by Riak; in our implementation we attempt to stick as closely as possible to the existing API. For simplicity, we model a clock as a triple of natural numbers representing an actor, the current count, and a timestamp, as seen in the figure above. While timestamps are used only for pruning and are not critical to the semantics of the vector clock, they are modeled as monotonically advancing natural numbers, similar to a Unix timestamp. We model vector clocks as a list of clocks. Coq supports module nesting, a concept Core Erlang does not have; in the extracted code, nested modules are supported through name mangling.

(* Increment a vector clock. *)
Definition increment (actor : actor) (vclock : vclock) : vclock :=
  match find (fun clock => match clock with
                           | pair x _ => beq_nat actor x
                           end) vclock with
  | None =>
      cons (pair actor (pair init_count init_timestamp)) vclock
  | Some (pair x (pair count timestamp)) =>
      cons (pair x (pair (incr_count count) (incr_timestamp timestamp)))
           (filter (fun clock => match clock with
                                 | pair y _ => negb (beq_nat actor y)
                                 end) vclock)
  end.

[Figure: the increment function over vector clocks.]

The API we provide mimics the API exposed by riak_core and includes the following functions: fresh (a data constructor whose extraction required a fix, and whose extracted descends helper passes a function to fold_left with a missing arity), used to generate a new, empty vector clock; increment (see the figure), used to increment the vector clock for a particular actor; equal, used to
test the equality of two vector clocks; descends, used to determine whether one vector clock is an ancestor of another; merge, used to merge two vector clocks; get_counter, used to extract the count for a particular actor; get_timestamp, used to extract the timestamp for a particular actor; nodes, used to extract the actors that have incremented the vector clock; and prune, which, given a series of timestamps representing the bounds for pruning, prunes the vector clock and returns the result. Not all of these functions are included in the paper; all are available in the provided source code.

Code extraction to Core Erlang. When performing code extraction, we are required to extract not only the code of the vector clock implementation but also the supporting Coq libraries used by the implementation. For example, this includes the modules supporting the core Coq datatypes, specifically Peano numbers and the List module, and the module providing arithmetic and equality over natural numbers. In extracting the vector clock library to Core Erlang, we encountered numerous problems, in both our implementation and the extraction of the core libraries. The following subsections provide details of example issues and the workaround identified for each.

Missing data constructors. The first problem we experienced was data constructors that did not export correctly; the figure shows the addition of code to resolve the problem of the missing data constructor Fresh. The figure showing the increment function destructures the vector clock and attempts to locate the appropriate actor; depending on whether it exists, it either adds a new actor or increments the existing actor's count. The important thing to note in this example is the incorrectly qualified calls.

Incorrectly qualified function calls. We also ran into problems related to calls being incorrectly qualified, which we manually modified. Specifically, we ran into cases where files containing modules were overqualified, with the filename repeating the module name, and cases of function calls missing their arity, resulting in failed function calls at runtime. The workaround we identified for dealing with both issues was to manually modify the extracted code, fixing the locations of missing arities and incorrectly qualified functions.

The abstract increment function operates over timestamps; importantly, Coq has no notion of generating an Erlang Unix timestamp, so we patch the generated Core Erlang code to call a timestamp-aware replacement.

Currying. The two anonymous functions passed as arguments to the find and filter functions, respectively, exhibit the lack of currying. The initial version of the library abstracted
out a function used by many of the API functions; because it relies on partial application, such functions had to be inlined to ensure the extracted code compiled correctly. The figure provides an example of using currying in Coq: the method returns a closure that returns true given an actor associated with a clock. The extracted Core Erlang code relying on the closure does not execute correctly; the call to the returned function has to be turned into a call that immediately computes the final result.

Source Coq function definition:

Definition find' (actor : actor) :=
  fun clock => match clock with
               | pair x _ => negb (beq_nat actor x)
               end.

Generated Core Erlang function:

'find'/1 =
  fun (_actor) ->
    fun (_clock) ->
      case _clock of
        {pair, _x, _} -> call 'beq_nat'(_actor, _x)
      end

[Figure: wrapper functions converting the Coq boolean type to the Erlang representation of booleans as atoms, used when dealing with equalities and inequalities; without them, the extracted Coq code generates incorrect Erlang behavior.]

%% @doc Compare equality of two vclocks.
equal(VClock1, VClock2) ->
    case vvclock:equal(VClock1, VClock2) of
        {'True'}  -> true;
        {'False'} -> false
    end.

%% @doc Determine if one vector clock is an ancestor of another.
descends(VClock1, VClock2) ->
    case vvclock:descends(VClock1, VClock2) of
        {'True'}  -> true;
        {'False'} -> false
    end.

[Figure: conversion between the Erlang representation of Unix timestamps and Peano numbers, alongside wrappers converting natural numbers to Peano numbers and back; these wrapper functions are used to convert back and forth between the data types.]

%% @doc Return a natural timestamp.
timestamp() ->
    term_to_peano(calendar:datetime_to_gregorian_seconds(
                    erlang:universaltime())).

Next we look at booleans, implemented as part of the Datatypes module in Coq. Coq booleans are modeled as an abstract data type with two constructors, True and False, while Erlang booleans are modeled using atoms, specifically the true and false immutable constants. We write a series of small wrapper functions, as seen in the figure, that call the exported module and pattern match on the return value. The workaround identified for the currying issue was to manually inline the function definitions, which in addition leads to a large amount of code duplication.

Implementation of the Erlang adapter between Riak and vvclocks. Next we look at the adapter layer required to convert between the data structures exported from Core Erlang and the data structures provided by Erlang.
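The pattern-matching conversion performed by the boolean wrappers can be sketched outside Erlang as well. The following Python analogue uses tuples to stand in for the extracted constructor terms; it is illustrative and not the report's code.

```python
# Coq booleans extract to single-constructor tuples; the adapter pattern
# matches on the constructor name and returns the host language's native bool.

def from_coq_bool(term):
    """Convert an extracted Coq bool term to a native boolean."""
    if term == ('True',):
        return True
    if term == ('False',):
        return False
    raise ValueError("not a Coq bool: %r" % (term,))

def to_coq_bool(b):
    """Convert a native boolean back to the extracted representation."""
    return ('True',) if b else ('False',)

assert from_coq_bool(to_coq_bool(True)) is True
assert from_coq_bool(to_coq_bool(False)) is False
```

Every boundary crossing needs such a pair of conversions, which is one source of the duplication and testing burden discussed later.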
The following subsections look at the conversion of Peano numbers, boolean types, Unix timestamps, and data located in the application environment.

Timestamps. Timestamps are another area where we provide wrapper functions for performing type conversions, as Coq has no notion of a Unix timestamp. The original implementation of the prune method models timestamps as monotonically advancing natural numbers; to account for this, we provide helper functions used to convert the Erlang notion of a timestamp to a Peano number usable by the functions exported from Coq. The figure provides an example of this conversion.

Type conversion. Verlang models the abstract data types provided by Coq using n-sized tuples, with the first position of the tuple representing the constructor name and the remaining slots of the tuple the constructor's arguments; nested function applications are used for recursive, inductively defined data types, which are therefore modeled as nested tuples. We see two examples of this when working with booleans and natural numbers. First, we look at the modeling of naturals in Coq using the Peano numbers provided by the Peano module. Concretely, numbers are modeled using an inductive data type with two constructors: one base case, which takes no arguments, and one inductive case, which takes one argument, a Peano number. For example, to represent an Arabic numeral, that many nested applications of the inductive constructor around the base case would be used, which is then translated to the corresponding nesting of Erlang tuples.

Actors. The original increment function's arguments, as provided by the riak_core vector clock implementation, include an actor, typically an atom or an arbitrary Erlang term. In the Coq implementation we model this as a string, an inductively defined data type of ASCII characters, themselves defined inductively from eight bits (see the figure for an example). Type specifications in the form of annotations, as well as debugging symbols in the compiled byte code, are supported by neither.
vvclock prune vclock small large young old bugs verlang clear issues experienced extraction vector clocks implementation related bugs verlang extraction process specifically refer following issues figure wrapper function dealing information must come parts system example get property retrieving information riak bucket could easily also accessing something stored runtime environment function term peano used convert erlang timestamp format peano number leveraging datetime gregorian seconds function incorrectly qualified function calls section missing data constructors section appears bugs addressed extraction process environment variables applications original prune function provided riak core vector clock implementation takes three arguments vector clock timestamp object provides series settings apply vector clock specifically referred bucket properties simple data structure stores properties given operate data structure coq break apart certif ied prune function implemented coq wrapper function extracts values environment directly passes formal arguments figure provides example original motivation work take partially verified implementations two convergent replicated data types specifically described shapiro extract use riak system performing extraction code ran similar problems discussed however rather issues occurring vector clocks implementation authored occurred extraction support libraries data structures provided coq made working debugging implementation difficult lack control internal implementation coq data structures beacuse decided simplify model focus implementation vector clock library using basic list data structures provided coq provides similar semantics evaluation evaluating vvclock package riak data store immediately ran problems related inductive data structures modeled erlang explore one particular example related timestamps example get current time erlang convert unix timestamp takes anywhere microseconds however timestamp year peano number takes 
much longer given nested tuple encoding memory allocated stack tuples matter minutes able exhaust memory attempting encode timestamp like process terminated ran virtual memory allow continue testing rest library modified timestamp encoding function store timestamp much smaller value easier encode completed able successfully validate existing test suite exported module able successfully validate unit tests passed still use library production riak instance problems data structure encoding addition even though vvclock module may verified vclock module provides adapter layer vvclock module rest system must still fully tested verify correct behavior likely would done using erlang existing unit testing utility eunit aforementioned quickcheck finally debugging modules compiled core erlang still difficult tooling dialyzer proper rely adapter layer highly desirable eliminate adapter layer several reasons erf ormance process converting coq data structures erlang data structures requires encoding constructor calls order retain typing information language without abstract data types overhead performing conversion high especially dealing large complex data structures timestamps esting erlang functions must made perform sions data structures coq representations problematic requires additional level testing bugs creation objects introduced however problematic consider case naturals modeled peano numbers axioms defined theorems proven functions executing types guaranteed way data structure modeled specifically inductive types certain functions guaranteed terminate properties held inductive hypothesis case naturals modeled using erlang built integer type none properties regarding subtraction would hold passed zero regardless feel worth exploring alternate means extraction attempts leverage erlang types efficiently sacrificing safety performance example function operating coq string would exported core erlang code operated erlang binary instead tuple encoded coq structure uppsala 
university core erlang initiative http quickcheck property generation finally feel ability generate quickcheck models instead generating source code data structures might viable approach based series axioms defined regarding erlang system given theorems defined proof assistant could generate series properties guide development data structures preserve invariants code availability code discussed available github apache license http acknowledgments thanks andy gross andrew stone ryan zezeski scott lystig fritchie reid draper providing feedback research development work references arts castro hughes testing erlang data types quviq quickcheck proceedings acm sigplan workshop erlang erlang pages new york usa acm isbn url http carlsson gustavsson johansson lindgren pettersson virding core erlang language specification technical report department information technology uppsala university carstens verlang source code repository https fidge timestamps systems preserve partial ordering proceedings australian computer science conference url http inria coq proof assistant homepage http inria library http lamport time clocks ordering events distributed system commun acm july issn url http meiklejohn source code repository https meiklejohn vvclocks source code https repository pierce software foundations http sagonas detecting defects erlang programs using static analysis proceedings acm sigplan international conference principles practice declarative programming ppdp pages new york usa acm isbn url http sagonas kostis proper source code repository https shapiro baquero zawirski comprehensive study convergent commutative replicated data types rapport recherche inria url http